The press regulator has launched comprehensive ethical guidance to help journalists and publishers navigate the complexities of AI use in news production.
The UK’s independent press regulator Impress has released guidance to help newsrooms use artificial intelligence without undermining journalistic standards or public trust.
Unveiled during a recent webinar, the guidance lays out ethical safeguards for publishers as AI becomes increasingly integrated into news production. It stresses the need for transparency, rigorous fact-checking and clear human oversight when using AI tools.
Andrea Wills, chair of Impress’s Code Committee, said the guidelines aim to give publishers “the confidence to adopt and use AI tools in an ethical and responsible way.” While headlines around AI come and go, she said, “it’s the unethical uses of generative AI models that concern us most.”
The document calls on news organisations to label AI-generated content clearly, avoid misleading representations of real people or events, and ensure personal data is protected—particularly in sensitive reporting environments such as war zones or investigations. It also warns that content fed into AI tools may be used to train future models, raising legal and privacy concerns.
Impress developed the framework following a six-week public consultation and input from AI experts, legal specialists and newsrooms already experimenting with generative tools. Though designed primarily for Impress-regulated publishers, the regulator said the guidance provides a “robust ethical foundation” for the wider industry.
As more newsrooms adopt AI for tasks ranging from text generation to data analysis, the regulator’s intervention highlights the growing need for industry standards. The guidance is available now on the Impress website.
Source: Noah Wire Services
- https://www.impress.press/standards/impress-standards-code/our-standards-code/ – Impress has launched its new Standards Code, adding revisions that hold publishers to stricter standards on discrimination and prepare for the rollout of artificial intelligence in newsrooms. The new code was published on Thursday morning following a two-year consultation period.
- https://www.poynter.org/ethics-trust/2024/how-to-create-newsroom-artificial-intelligence-ethics-policy/ – In order to effectively use this AI ethics policy, newsrooms will need to create an AI committee and designate an editor or senior journalist to lead the ongoing effort. This step is critical because the technology is going to evolve, the tools are going to multiply and the policy will not keep up unless it is routinely revised.
- https://www.reuters.com/info-pages/reuters-and-ai/ – Reuters uses generative artificial intelligence (AI) in various aspects of its news process, including reporting, writing, editing, production, and publishing. When news content is primarily or solely created using AI, Reuters transparently discloses this fact and provides context about the AI’s role.
- https://www.axios.com/2023/08/22/ai-rules-newsrooms-training-data – Media organizations are navigating the integration of artificial intelligence (AI) in newsrooms, emphasizing ethical considerations and maintaining public trust. While allowing some AI usage under human supervision, most organizations prohibit AI from writing articles and scrutinize AI-generated content.
- https://www.desirableai.com/journalism-toolkit-ethics – A collection of guidelines and practical resources for ethical communication about AI and for implementing AI in news settings. This includes a checklist of eighteen pitfalls in AI journalism and guidance on how to report better on artificial intelligence.
- https://www.restack.io/p/ethical-ai-answer-journalism-cat-ai – Balancing Efficiency and Ethical Risks: While AI can enhance efficiency by automating mundane tasks, it also raises ethical concerns such as the potential for inaccurate information and data misuse. Newsrooms must navigate these tensions carefully.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: Narrative references a recent webinar and guidance derived from a six-week public consultation, indicating recent development. No evidence of recycled content found.
Quotes check
Score: 9
Notes: Direct quotes from Andrea Wills (chair of Impress’s Code Committee) are specific to the guidance’s rationale, with no prior identical phrasing found online. Likely original statements.
Source reliability
Score: 9
Notes: Narrative involves credible entities (BBC, Microsoft, Byline Times) and Impress, a recognised UK press regulator, suggesting high reliability.
Plausibility check
Score: 10
Notes: Guidance aligns with current industry debates about AI ethics, and recommendations (e.g., human oversight, transparency) are consistent with established journalistic standards.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: Guidance is recent, original, and plausible, backed by credible sources and consistent with industry norms. No red flags detected.