After months of experimenting with artificial intelligence to make their work more efficient, some newsrooms are now dipping their toes in more treacherous waters — trying to harness AI to detect bias or inaccuracies in their work.
Why it matters: Confidence in the news media is at an all-time low, pressuring news leaders to look for new ways to win back trust. But today’s AI, which carries biases of its own and fabricates facts, is an unlikely savior.
While AI is tech’s hottest sector right now, the generative AI products like ChatGPT that are driving the trend are known for being fact-challenged.
Driving the news: The Messenger, a new digital media company, said Wednesday that it plans to partner with a company called Seekr to ensure its editorial content “consistently aligns with journalism standards” using AI.
The Messenger’s president, Richard Beckman, said in a statement announcing the partnership that “we believe Seekr’s responsible AI technology will help hold our newsroom accountable to our core mission,” which is “to deliver the news — not shape it.”
Meanwhile, the CEO of Axel Springer, the parent company of Politico and Insider, told CNN Tuesday that the firm will use AI for “fact-checking,” without specifying how.
How it works: Seekr analyzes individual articles using factors like “title exaggeration,” “subjectivity,” “clickbait” and “personal attack” as well as purported political leaning.
The promise is that a neutral AI will somehow arrive at purely objective ratings — but AI itself is trained on human data, and that data is full of its own biases.
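Seekr hasn’t published how its model works, so the sketch below is a hypothetical toy version of the general shape such a system might take: score an article on each factor named above, then fold the scores into one rating. The keyword lists, the title-loudness heuristic and the equal weights are all assumptions standing in for trained classifiers; none of it comes from Seekr.

```python
# Hypothetical sketch of a multi-factor article scorer. Factor names come
# from the article; every heuristic below is a toy stand-in for what a real
# system would implement as a trained classifier.

from dataclasses import dataclass


@dataclass
class Article:
    title: str
    body: str


# Toy word lists standing in for learned features (assumption, not Seekr's).
CLICKBAIT_PHRASES = {"you won't believe", "shocking", "destroys", "slams"}
SUBJECTIVE_WORDS = {"outrageous", "disastrous", "brilliant", "pathetic"}
ATTACK_WORDS = {"idiot", "liar", "corrupt", "crooked"}


def title_exaggeration(article: Article) -> float:
    """Fraction of title characters that are upper-case or '!' (toy proxy)."""
    title = article.title
    if not title:
        return 0.0
    loud = sum(1 for c in title if c.isupper() or c == "!")
    return loud / len(title)


def keyword_score(text: str, vocab: set[str]) -> float:
    """Share of vocab phrases that appear in the text (toy proxy)."""
    lowered = text.lower()
    return sum(1 for phrase in vocab if phrase in lowered) / len(vocab)


def reliability(article: Article) -> float:
    """Combine factor penalties into a 0-1 rating (higher = more reliable).
    Equal weights are an assumption; a real system would learn them."""
    penalties = [
        title_exaggeration(article),                      # "title exaggeration"
        keyword_score(article.body, SUBJECTIVE_WORDS),    # "subjectivity"
        keyword_score(article.title, CLICKBAIT_PHRASES),  # "clickbait"
        keyword_score(article.body, ATTACK_WORDS),        # "personal attack"
    ]
    return max(0.0, 1.0 - sum(penalties) / len(penalties))


if __name__ == "__main__":
    story = Article(
        title="Late-night hosts joke about McCarthy's ouster",
        body="Colbert called the vote 'chaos'; Kimmel joked it was pathetic.",
    )
    print(f"reliability: {reliability(story):.2f}")
```

Even in this toy version, a quoted joke trips the “subjectivity” heuristic exactly as sincere editorializing would, previewing the failure mode described below.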
Reality check: Taking humans out of the loop introduces other problems, and automating judgments by algorithm opens the door to many unpredictable failures.
It took less than a minute to find, for instance, that Seekr gave a “very low” rating to a harmless Messenger story rounding up late-night comedy hosts’ shtick about Kevin McCarthy’s ouster.
The story was a compilation of jokes from Stephen Colbert and Jimmy Kimmel, which the program apparently scored high on “subjectivity” and “personal attack.”
The big picture: Several companies have launched in recent years with the goal of evaluating news accuracy and bias. Most rely on human judgment to assess whether a particular outlet or article is credible by analyzing factors like funding transparency and original sourcing.
Critics argue that relying on human review opens companies such as Ad Fontes and NewsGuard to their reviewers’ own biases. Some firms guard against this, for instance by having politically balanced panels evaluate the same material.
Between the lines: Experts see some value in using AI to fact-check very large datasets — for instance, to track the spread of a falsehood identified by a human across multiple stories and media outlets.
Google, for example, says it uses AI to spot where claims already debunked by fact-checkers are being repeated across a wide set of information sources.
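The article doesn’t say how Google’s system works internally, so the sketch below is a hypothetical minimal version of the idea: take one claim a human fact-checker has already debunked and scan a corpus for sentences that resemble it. Token-overlap similarity and the 0.4 threshold are toy assumptions; a production system would use learned embeddings.

```python
# Hypothetical sketch of claim tracking at scale: given one falsehood a human
# has already debunked, flag stories that appear to repeat it. Jaccard
# token overlap is a stand-in for the embedding models a real system would use.

import re


def tokens(text: str) -> set[str]:
    """Lower-cased word set for a quick-and-dirty similarity measure."""
    return set(re.findall(r"[a-z']+", text.lower()))


def jaccard(a: set[str], b: set[str]) -> float:
    """Intersection over union of two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0


def find_repeats(debunked_claim: str, stories: list[str],
                 threshold: float = 0.4) -> list[int]:
    """Return indexes of stories with a sentence resembling the claim.
    The threshold is illustrative, not tuned."""
    claim_tokens = tokens(debunked_claim)
    hits = []
    for i, story in enumerate(stories):
        for sentence in re.split(r"(?<=[.!?])\s+", story):
            if jaccard(claim_tokens, tokens(sentence)) >= threshold:
                hits.append(i)
                break
    return hits


if __name__ == "__main__":
    claim = "The election was decided by millions of votes cast by dead people."
    corpus = [
        "Officials certified the result after routine audits.",
        "A viral post says millions of votes were cast by dead people.",
    ]
    print(find_repeats(claim, corpus))  # -> [1]
```

Note that the human stays in the loop here: the AI only fans out a judgment a fact-checker already made, which is the division of labor experts describe.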
Our thought bubble: Whatever systems publishers and editors impose, AI will probably enter newsroom workflows informally, as time-pressed journalists turn to tools like ChatGPT to answer questions fast — even if they’re advised not to.
Long ago, using Google or Wikipedia to confirm facts was also verboten in many old-school newsrooms, but now most journalists know how to use those tools with appropriate cautions.
Talking with knowledgeable sources is always the best road to truth — and no algorithm can tell how many calls a reporter made.