The Poynter Institute’s new programme aims to rebuild trust in AI-driven journalism amid audience scepticism.
As newsrooms rush to adopt artificial intelligence to streamline workflows and enhance reporting, many of their audiences remain deeply uneasy about the technology’s growing role. That disconnect presents a serious challenge for trust in journalism and has prompted the Poynter Institute to develop a new resource aimed at bridging the gap.
The Talking About AI: Newsroom Toolkit is designed to help journalists speak more clearly and confidently about how they use AI in their work. Backed by Microsoft and developed with The Associated Press, the toolkit offers templates, scripts and examples for explaining AI to readers and viewers, including on controversial or sensitive beats such as politics.
Research cited by Poynter found that more than half of respondents in the US and UK were uncomfortable with AI-generated news: 52% of Americans said they were uneasy with AI journalism, and in the UK the figure rose to 63%. “If audiences aren’t able to understand how we are using AI, we can very quickly lose trust and credibility,” said Sean Marcus, interactive learning designer at MediaWise, Poynter’s media literacy initiative.
The toolkit is structured to fit easily into existing newsroom routines. It also includes videos aimed at independent creators, with contributions from TikTok and YouTube personalities such as V Spehar and Dave Jorgenson, who argue that demystifying AI is both doable and essential. “Your audience will appreciate knowing how the sausage is made, even if the sausage is part robot,” said Jorgenson.
The resource reflects a wider push to embed ethics and transparency into AI adoption. “Understanding AI is not just about understanding a new technology,” said Alex Mahadevan, director of MediaWise. “It’s also key to preserving the values of journalism.”
Audience wariness of AI is often tied to concerns about misinformation and fake news. Previous AI ethics summits hosted by Poynter revealed that many publishers lacked frameworks for discussing the issue publicly, despite using AI in tasks ranging from transcription to headline writing. The new toolkit is based on feedback from those sessions and aims to strike a balance between technical explanation and audience engagement.
“It’s a fine line between how much and how deep you go into talking about AI versus how effectively you explained it,” said Marcus.
Source: Noah Wire Services
- https://www.poynter.org/from-the-institute/2025/journalists-are-using-ai-they-should-be-talking-to-their-audience-about-it-microsoft-associated-press/ – Please view link – unable to access data
- https://www.reuters.com/technology/artificial-intelligence/global-audiences-suspicious-ai-powered-newsrooms-report-finds-2024-06-16/ – A Reuters report highlights growing global concerns about AI in news production, with 52% of U.S. and 63% of UK respondents uncomfortable with AI-generated news, especially on sensitive topics like politics. Despite AI tools from companies like Google and OpenAI, doubts about content reliability persist. The report also notes a rise in concerns over fake news, particularly in countries like South Africa and the U.S., and a general reluctance to pay for news subscriptions, with only 17% of respondents paying for online news.
- https://www.poynter.org/ethics-trust/2024/poynter-when-it-comes-to-using-ai-in-journalism-put-audience-and-ethics-first/ – Poynter’s article discusses the importance of audience engagement and ethical considerations when integrating AI into journalism. It emphasizes the need for transparency, with audiences expressing a desire for disclosure about AI usage in news production. The piece also highlights concerns about AI leading to societal isolation and the potential erosion of journalistic standards, urging newsrooms to balance technological advancements with ethical practices.
- https://www.poynter.org/commentary/2023/jouranlism-artificial-intelligence-ethical-uses/ – This Poynter commentary explores the necessity for new standards and disclosures as AI becomes more prevalent in journalism. It highlights a Trusting News post that disclosed the use of AI in writing, underscoring the importance of transparency in AI applications. The article advocates for clear communication about AI’s role in news production to maintain credibility and trust with audiences.
- https://www.poynter.org/ethics-trust/2024/artificial-intelligence-principles-journalism/ – Amy Mitchell’s commentary in Poynter calls for a rearticulation of journalistic principles in the age of AI. She argues that technology is deeply embedded in journalism, necessitating updated principles that address verification, authenticity, and transparency. The piece stresses the need for journalists to lead discussions on policy structures that safeguard an independent press while embracing technological advancements.
- https://trustingnews.org/trusting-news-artificial-intelligence-ai-research-newsroom-cohort/ – Trusting News discusses research on journalists’ use of AI, emphasizing the importance of transparency and audience engagement. The article suggests that by openly discussing AI applications and incorporating audience feedback, journalists can demystify the technology and build trust. It advocates for responsible and ethical AI use, highlighting the opportunity to model best practices and educate the public.
- https://www.poynter.org/ethics-trust/2025/news-audience-feelings-artificial-intelligence-data/ – Poynter’s research reveals that nearly half of Americans are uncomfortable with AI-generated news, with 20% believing publishers shouldn’t use AI at all. The study indicates a lack of understanding about AI’s role in news production and a demand for disclosure and ethical guidelines. It underscores the need for newsrooms to address audience concerns and maintain trust in the age of AI.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 10
Notes: The narrative is recent, published on May 30, 2025, and does not appear to be recycled or republished content. The Poynter Institute’s ‘Talking About AI: Newsroom Toolkit’ was launched in April 2025, indicating the report’s timeliness. ([poynter.org](https://www.poynter.org/mediawise/programs/talking-about-ai-newsroom-toolkit/?utm_source=openai))
Quotes check
Score: 10
Notes: The quotes from Sean Marcus and Dave Jorgenson are unique to this report, with no earlier matches found online. This suggests original or exclusive content.
Source reliability
Score: 10
Notes: The narrative originates from the Poynter Institute, a reputable journalism think tank, and references collaborations with Microsoft and The Associated Press, both established organisations.
Plausibility check
Score: 10
Notes: The claims about audience apprehension regarding AI in journalism are supported by recent studies, including one from Poynter and the University of Minnesota, which found that nearly half of Americans are sceptical about AI-generated news. ([poynter.org](https://www.poynter.org/ethics-trust/2025/news-audience-feelings-artificial-intelligence-data/?utm_source=openai))
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The report is recent, original, and originates from reputable sources. The claims made are plausible and supported by recent studies, indicating a high level of credibility.