5:20 pm - October 28, 2025

Thousands of global news organisations unite under the ‘News Integrity in the Age of AI’ initiative, setting five core principles to promote responsible AI use and protect truth in journalism amid rising misinformation concerns.

Thousands of news organisations around the world have backed a new global initiative to promote responsible use of artificial intelligence, setting out five principles aimed at protecting journalism and restoring trust in news.

Launched by the European Broadcasting Union (EBU) and the World Association of News Publishers (WAN-IFRA) at the World News Media Congress in Krakow, the “News Integrity in the Age of AI” initiative calls on both publishers and technology companies to commit to shared standards as generative AI becomes increasingly central to how news is created and distributed.

Ladina Heimgartner, president of WAN-IFRA and CEO of Ringier Media, said the time for action was now. “Organisations and institutions that see truth and facts as the core of democracy must come together to shape the next era,” she said.

The five principles are:

News content must only be used in generative AI models and tools with the authorisation of the originator.

The value of up-to-date, high-quality news content must be fairly recognised when it’s used to benefit third parties.

Accuracy and attribution matter. The original news source underlying AI-generated material must be apparent and accessible to citizens.

Harnessing the plurality of the news media will deliver significant benefits for AI-driven tools.

We invite technology companies to enter a formal dialogue with news organisations to develop standards of safety, accuracy and transparency.

The initiative has been endorsed by regional and global associations including the North American Broadcasters Association and the Asia-Pacific Broadcasting Union, bringing together public and private news providers to confront a shared threat to editorial independence and public trust.

Jean-Paul Philippot, president of the EBU, said the misuse of AI could severely damage the public’s ability to access reliable information. “The integrity of news has never been so important in keeping people informed and democracies healthy,” he said.

The project comes amid growing concerns that generative AI systems are using publisher content without consent, credit or compensation – and that the resulting information is often misleading or difficult to verify. It also reflects a wider push among media companies to establish fair commercial terms with tech platforms that use their journalism to train or power AI tools.

Heimgartner said the principles were designed to ensure that media outlets have a voice in how these tools evolve. “A functional media space that contributes value to society is a common good. It must be supported and encouraged,” she said.

The full statement and list of signatories are available from WAN-IFRA.

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score: 9

Notes: The content references recent global media initiatives and mentions a 2025 event, indicating it is current news. However, we could not verify whether similar initiatives had been reported earlier, which slightly reduces the score.

Quotes check

Score: 8

Notes: The quotes from Ladina Heimgartner and Jean-Paul Philippot appear to be from a recent event, the World News Media Congress 2025. Without further online records, it is challenging to confirm whether these are original.

Source reliability

Score: 8

Notes: The narrative originates from a press release via PressAT, a platform for news distribution. While PressAT is used by reputable organisations, the reliability depends on the credibility of WAN-IFRA and the EBU, which are well-established media associations.

Plausibility check

Score: 9

Notes: The initiative to protect news integrity aligns with current concerns about AI and misinformation. The involvement of prominent media associations supports the plausibility of this effort.

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary: The narrative appears to be fresh, focusing on contemporary issues in media and AI. The quotes seem original but lack online confirmation. The reliability of the information is supported by the involvement of established media associations. Overall, the initiative aligns with current challenges faced by the media industry.
