📖 Read more: AI Drug Discovery: New Drugs in 18 Months
AI Enters the Newsroom
According to a survey by the Reuters Institute for the Study of Journalism covering more than 300 media executives from 56 countries, artificial intelligence isn't merely threatening journalism. It's reshaping it. 56% of respondents consider back-end automation — transcription, tagging, copyediting — to be the most important use of AI, followed by recommender systems (37%), content creation with human oversight (28%), and commercial applications (27%).
What AI Already Does in Media
AI doesn't just write articles. It's already being deployed across dozens of newsroom functions, from the mundane to the surprising:
- Article summaries: Sweden's Aftonbladet adds AI-generated bullet-point summaries (“Snabbversions”) at the top of articles. The result: increased engagement, especially among younger readers
- Translation: Le Monde uses AI to translate approximately 30 articles per day into English — far more than human staff could handle. The system has been trained on the newspaper's own style guide
- CMS assistants: Finland's Helsingin Sanomat has “Hennibot” embedded directly in its CMS, suggesting language improvements and relevant links in real time
- Voice cloning: Radio Expres in Slovakia cloned the voice of popular presenter Bára Hacsi. The AI replica “Hacsiko” now covers the night shift, commenting on music and news
- Full AI reporting: Klara Indernach at Express.de writes articles across multiple topics. Human editors choose the stories and review every text, but the writing itself is handled by AI
NewsGPT is an experimental 24-hour television channel where all news stories and all presenters are generated by AI, with zero human intervention. Channel.1 AI is developing a personalized newscast that learns what each viewer wants to see.
When AI Fails
The failure cases are equally instructive. CNET had AI write articles without adequate oversight — the texts were found to be riddled with errors and poorly labelled. Sports Illustrated published product reviews from a third party that were alleged to be at least partly AI-generated, without proper disclosure.
📖 Read more: Biological Computers: Neurons Instead of Transistors
In the Reuters Institute survey, 56% of respondents consider content creation the biggest reputational risk, followed by newsgathering (28%). By contrast, back-end automation (11%) is considered low risk — which explains why newsrooms are moving fastest in that direction.
The Copyright Battle
The legal battles have only just begun. The New York Times' lawsuit against OpenAI, filed in December 2023, rests on a fundamental claim: millions of its articles were used without permission to train ChatGPT, which now reproduces “verbatim excerpts” and competes with the newspaper as a trusted source.
The Axel Springer – OpenAI deal, reportedly worth tens of millions of euros annually for content from Bild, Politico, and Business Insider, offers one model: a flat fee for historical data, an annual fee for new content, plus extra payments for frequently used material. Yet 48% of publishers surveyed believe that ultimately there will be very little money for any publisher.
📖 Read more: Do You Trust AI? Why We Get It Wrong
Deepfakes, Elections, and Trust
AI's political weaponization poses a deeper threat still. In Slovakia, fake audio recordings of a candidate discussing election rigging circulated days before a tight vote. In Argentina, AI-generated images were used to defame both major candidates. In Britain, a fabricated audio clip of opposition leader Keir Starmer swearing at staffers went viral on X with millions of views, even after it was exposed as fake.
A US poll found that 58% of adults believe AI tools will increase the spread of false information. As Hannah Arendt warned, the real danger isn't that people believe lies — it's that they stop believing anything. We're approaching that cliff.
The Future of the Newsroom
Europol predicts that by 2026 the vast majority of internet content will be synthetically produced. Newsrooms are already adapting. The Reuters Institute survey shows that forward-thinking organizations are focusing on content that can't be easily replicated by AI: live news curation, deep analysis, human stories that build connection, and audio and video formats that are more defensible than text.
The New York Times appointed Zach Seward as its first editorial director of AI Initiatives. Reporters Without Borders created an AI ethics charter for journalism with 16 organizations. The direction is clear: AI as a tool, not a replacement. The human stays at the wheel — for now, at least.
Source: Reuters Institute for the Study of Journalism – Journalism, Media, and Technology Trends and Predictions 2024
