Newsrooms grapple with rules for AI
Leading media organizations are issuing guidance on using artificial intelligence in the newsroom even as they strike licensing deals letting AI firms train models on their content.
Why it matters: The sudden arrival of publicly and commercially available generative AI tools has forced a new set of ethical choices on media companies struggling to protect public trust while still experimenting with the technology and preserving their legal rights.
Driving the news: Most news companies are allowing some use of AI under the editorial supervision of humans, but many of the new guidelines prohibit using AI to write articles and apply extra scrutiny to AI-generated images and video.
- The Associated Press last week issued a list of standards for using generative AI in its news report, writing, “Any output from a generative AI tool should be treated as unvetted source material.” The AP will not use AI to alter any elements of its photos, video or audio, but will publish generative AI images if they are the subject of a news story, with clear labels.
- The Guardian in June said it will only use AI in its news products “with clear evidence of a specific benefit, human oversight, and the explicit permission of a senior editor,” labeling generative AI as “exciting but unreliable.”
- Insider earlier this year told its newsroom that while it can experiment with generative AI, “ChatGPT is not a journalist. You are responsible for the accuracy, fairness, originality, and quality of every word in your stories.”
- Reuters has given itself room to maneuver by adopting AI principles that promise trust and accountability while leaving flexibility in adapting to advances in the technology.
Be smart: The AP last month became the first major news company to strike a licensing deal with OpenAI that will allow the firm to use AP’s content to train its AI models.
- Because of that partnership, and its history as an early adopter of automation, the AP’s editorial guidance will likely weigh heavily with other news organizations.
Yes, but: The AP’s commercial agreement with OpenAI may not serve as a blueprint for other media companies weighing efforts to protect their intellectual property interests.
- Last week, NPR reported that the New York Times is considering legal action against OpenAI for unauthorized use of Times stories as training data. The Times updated its terms of service on Aug. 3 to forbid using its content in “training a machine learning or artificial intelligence (AI) system.”
- News Corp. CEO Robert Thomson told investors on a recent earnings call that the firm is “already in active negotiations to establish a value for our unique content sets and IP that will play a crucial role in the future of AI.”
- Other major media companies are forming a coalition to collectively negotiate with Big Tech firms over the use of their content in training AI algorithms.
Between the lines: One area of almost uniform agreement seems to be disclosure.
- An embarrassing publishing experiment from CNET earlier this year has prompted more media companies to ensure their standards include disclosures of the use of AI in editorial products.
- Still, it’s unclear whether or how the fractious industry could adopt a uniform approach to disclosing the use of AI in news products.
Our thought bubble: As news publishers weigh different AI standards, some level of consistency will be necessary to develop broad reader trust.
- News media in the U.S. have long struggled to retain credibility as readers doubt they are drawing clear boundaries between opinion and fact.
Of note: Axios does not use generative AI to create content, except where the point is to show readers what the technology can or can’t do, in which case the AI-generated material is clearly labeled.