Who decides what AI tells you? Campbell Brown, once Meta’s news chief, has thoughts

by Amelia Forsyth


Campbell Brown has spent her career chasing accurate information, first as a renowned TV journalist, then as Facebook’s first, and only, dedicated news chief. Now, watching AI reshape how people consume information, she sees history threatening to repeat itself. This time, she’s not waiting for someone else to fix it.

Her company, Forum AI — which she discussed recently with TechCrunch’s Tim Fernholz at a StrictlyVC evening in San Francisco — evaluates how foundation models perform on what she calls “high-stakes topics”: geopolitics, mental health, finance, hiring. These are subjects where “there are no clear yes-or-no answers, where it’s murky and nuanced and complex.”

The idea is to find the world’s foremost experts, have them architect benchmarks, then train AI judges to evaluate models at scale. For Forum AI’s geopolitics work, Brown has recruited Niall Ferguson, Fareed Zakaria, former Secretary of State Tony Blinken, former House Speaker Kevin McCarthy, and Anne Neuberger, who led cybersecurity in the Obama administration. The goal is to get AI judges to roughly 90% consensus with those human experts, a threshold she says Forum AI has been able to reach.

Brown traces the origin of Forum AI, founded 17 months ago in New York, to a specific moment. “I was at Meta when ChatGPT was first released publicly,” she recalled, “and I remember really shortly after realizing this is going to be the funnel through which all information flows. And it’s not very good.” The implications for her own children made the moment feel almost existential. “My kids are going to be really dumb if we don’t figure out how to fix this,” she recalled thinking.

What frustrated her most was that accuracy didn’t seem to be anyone’s priority. Foundation model companies, she said, are “extremely focused on coding and math,” whereas news and information are harder. But harder, she argued, doesn’t mean optional.

Indeed, when Forum AI began evaluating the leading models, the findings weren’t exactly encouraging. She cited Gemini pulling from Chinese Communist Party websites “for stories that have nothing to do with China,” and noted a left-leaning political bias across nearly all models. Subtler failures abound too, she said, including missing context, missing perspectives, and arguments straw-manned without acknowledgment. “There’s a long way to go,” she said. “But I also think that there are some very easy fixes that would vastly improve the outcomes.”

Brown spent years at Facebook watching what happens when a platform optimizes for the wrong thing. “We failed at a lot of the things we tried,” she told Fernholz. The fact-checking program she built no longer exists. The lesson, even if social media has turned a blind eye to it, is that optimizing for engagement has been lousy for society and left many less informed.

Her hope is that AI can break that cycle. “Right now it could go either way,” she said; companies could give users what they want, or they could “give people what’s real and what’s honest and what’s truthful.” She acknowledged the idealistic version of that — AI optimizing for truth — might sound naive. But she thinks enterprise may be the unlikely ally here. Businesses using AI for credit decisions, lending, insurance, and hiring care about liability, and “they’re going to want you to optimize for getting it right.”

That enterprise demand is also what Forum AI is betting its business on, though turning compliance interest into consistent revenue remains a challenge, particularly given that much of the current market is still satisfied with checkbox audits and standardized benchmarks that Brown considers inadequate.

The compliance landscape, she said, is “a joke.” When New York City passed the first hiring bias law requiring AI audits, the state comptroller found that more than half had violations that went undetected. Real evaluation, she said, requires domain expertise to work through not just known scenarios but edge cases that “can get you into trouble that people don’t think about.” And that work takes time. “Smart generalists aren’t going to cut it.”

Brown — whose company last fall raised $3 million led by Lerer Hippeau — is uniquely positioned to describe the disconnect between the AI industry’s self-image and the reality for most users. “You hear from the leaders of the big tech companies, ‘This technology is going to change the world,’ ‘it’s going to put you out of work,’ ‘it’s going to cure cancer,'” she said. “But then to a normal person who’s just using a chatbot to ask basic questions, they’re still getting a lot of slop and wrong answers.”

Trust in AI sits at extraordinarily low levels, and she thinks that skepticism is, in many cases, justified. “The conversation is sort of happening in Silicon Valley around one thing, and a totally different conversation is happening among consumers.”

