Podcast Summary:
Business Daily (BBC World Service)
Episode: "Could AI Ever Replace the News?"
Date: September 17, 2025
Host: Sam Gruet
Overview
This episode of Business Daily explores the rapidly evolving role of artificial intelligence in the news industry. From AI-generated anchors and news summaries to the threats of misinformation and newsroom automation, the discussion probes whether AI could ever truly replace traditional news — and whether audiences would trust it if it did. The episode features expert guests including Adam Mosam (Channel One), Chris Stokel-Walker (journalist and author), Dylan Jacks (Telegraph Media Group), and Akintunde Babatunde (Centre for Journalism Innovation and Development).
Key Discussion Points & Insights
1. The Arrival of AI Anchors and News Content
- Introduction to AI-Generated Anchors:
- In December 2023, an AI-produced news broadcast went viral, garnering over 5 million views in 24 hours ([01:22], [04:16]).
- India Today introduced its AI anchor, Sana, in the same year ([01:39]).
- The show’s host, Sam Gruet, uses ChatGPT and ElevenLabs to generate and present a news script, demonstrating the speed and improving quality of current AI tools ([02:47]-[03:18]).
- Channel One’s Vision:
- Channel One briefly launched an AI news service with content available in 30 languages, aiming for efficiency and regional relevance ([03:53]-[04:16]).
- Adam Mosam: “I think we're operating on the leading edge of AI, and it was probably the first time a lot of people saw some type of digital avatar with that much precision or that lifelike.” ([04:49])
2. AI in the Newsroom: Help or Hindrance?
- Efficiency over Replacement:
- News organizations face budgetary pressures, e.g., Business Insider laying off over 20% of its staff while moving toward more automated content ([05:03]).
- Adam Mosam: “Our product essentially acts as an assistant to the creatives…helping them get a lot more done in the same amount of time.” ([05:21])
- Channel One shifted focus to shorter, mobile-friendly content, revealing both technological possibility and generational viewing habits ([05:59]).
- Current Uses:
- Chris Stokel-Walker:
- AI is already summarizing articles at leading news outlets, e.g., Bloomberg offers bullet-point AI summaries for readers ([07:27]).
- “We are seeing AI being used on the front lines of journalism at the point at which the audience consumes that bit of information.” ([07:27])
3. Legal & Quality Challenges in AI-Generated News
- Copyright Disputes:
- The New York Times is suing Microsoft and OpenAI for allegedly using its journalism to train large language models ([08:01]).
- Chris Stokel-Walker: “The outcome…could significantly affect the future of AI development. Because if the judge rules against OpenAI and Microsoft, then…large parts of what we believe might be in its knowledge base will have to be erased.” ([08:46])
- Quality and Accuracy Issues:
- Even major companies like Apple suspended AI news summaries after persistent factual errors ([09:07]).
- Chris Stokel-Walker: “AI can be convincing even when it's wrong.” ([09:26])
- Newsroom Standards:
- Many organizations (BBC, Guardian, AP) now have formal guidelines to oversee AI’s use in journalism ([11:17]).
4. Human-AI Collaboration at Telegraph Media Group
- Dylan Jacks:
- The Telegraph has leveraged AI for years, more recently focusing on generative AI’s potential to enhance, not replace, journalistic output ([12:06]).
- Novel project: using AI to translate and recreate podcasts (“Ukraine: The Latest”) in other languages while preserving presenters’ voices ([12:35]).
- “To be clear, this is AI helping to present our journalism, not produce it.” ([12:47])
- Strict editorial standards remain: “Our policy is really that…ultimately the person that's publishing that and their name's against that…they need to review it.” ([13:30])
- Audience Attitudes & Trust:
- Trust and truth remain paramount; technology cannot outweigh the product’s integrity ([14:20]).
- “There is only so much an AI can do. And being on the frontline of a conflict like Ukraine…is a big part of the value and how we communicate that.” ([14:20])
5. Risks: AI Disinformation and Manipulation
- Generative AI as Weapon:
- Chris Stokel-Walker:
- Early fears about deepfakes have given way to concerns about data poisoning — manipulating training data to bias AI outputs ([15:36]).
- “You can poison these models and therefore their outputs at source.” ([15:36])
- Example: A Russian-backed network creates fake news sites to influence AI chatbot outputs ([16:11]).
- “You can shift the balance of these AI models in a way that serves your goals rather than the goals of truth, reality and impartiality.” ([16:24])
6. Countering Disinformation: Fighting Fire with Fire
- Akintunde Babatunde (Centre for Journalism Innovation and Development/Dubawa):
- His organization has debunked over 27 AI-generated fakes in 2025 alone ([17:12]).
- Common examples: Viral deepfakes of world leaders, altered celebrity images, and entirely fabricated incidents ([17:36]).
- Disinformation can profoundly affect both health choices and democratic processes ([18:04]).
- Innovative Tools:
- An AI model transcribes and flags questionable audio from radio broadcasts, helping fact-checkers ([18:30]).
- A free WhatsApp chatbot allows anyone to verify rumors in real time ([18:30]).
- “You cannot fight fire without understanding how fire bites and how fire works.” ([19:20])
Notable Quotes and Memorable Moments
- On the initial wow factor of AI news avatars:
- Adam Mosam ([04:49]):
“It was probably the first time a lot of people saw some type of digital avatar with that much precision or that lifelike.”
- On why AI should amplify, not replace:
- Adam Mosam ([05:21]):
“Our product essentially acts as an assistant to the creatives. Right. It's really bolstering your existing staff and helping them get a lot more done in the same amount of time.”
- On the quality imperative:
- Dylan Jacks ([13:30]):
“Our policy is really that it's fine to use tools to help accelerate and draw together things, but ultimately the person that's publishing that and their name's against that, that is a representation of their work. They need to be confident and comfortable with that and they, they need to review it.”
- On data poisoning as the real disinformation risk:
- Chris Stokel-Walker ([15:36]):
“Actually the real risk from disinformation through generative AI has been a much more subtle but no less pernicious concern, which is you can poison these models and therefore their outputs at source.”
- On using AI to combat AI-created fakes:
- Akintunde Babatunde ([19:20]):
“You cannot fight fire without understanding how fire bites and how fire works.”
- On trust, truth, and journalism’s value:
- Dylan Jacks ([14:20]):
"...in news in particular, trust and truth...that is just paramount and the raison d’être to why we're here..."
Timestamps for Key Segments
- AI Anchors Go Viral — [01:22] to [04:24]
- Channel One’s Approach: Human-free News — [04:16] to [06:33]
- AI’s True Role: Assistant, Not Replacement — [05:21] to [07:15]
- How AI Is Integrated in Major Outlets — [07:27] to [08:42]
- Legal Battles & Factual Errors in AI Summaries — [08:01] to [09:26]
- BBC, Guardian, AP: Guidelines for AI Use — [11:17]
- Telegraph Media Group’s Innovations & Boundaries — [12:06] to [14:20]
- Public Attitudes: The Trust Factor — [14:20] to [15:08]
- Disinformation Threats & Data Poisoning — [15:23] to [16:57]
- Fighting AI Lies with AI Tools — [17:04] to [19:26]
Tone & Style
The episode maintains a balanced, forward-looking tone, alternating between curiosity about technological advances and cautious skepticism about AI’s pitfalls.
Conclusion
This episode offers an in-depth look at the potential, limitations, and risks of AI-generated news. While technologies are rapidly advancing, widespread adoption will hinge on legal clarity, editorial guidance, and above all, audience trust. Both industry leaders and fact-checkers agree: AI is a valuable tool in the newsroom — but not a replacement for human judgment, ethics, and storytelling.
