It would be so easy to stand in line, nod solemnly and pretend this new set of synthetic media guidelines is flawless. But someone has to push back on behalf of the industry, so here I am.
The report in question comes from the Partnership on AI, a major global AI and media ethics group, and is supported by organisations with genuinely good intentions. It lays out thoughtful, well-structured principles for how publishers should handle synthetic media, including the need for traceable sourcing, contextual information, human oversight and heightened safeguards around elections, child safety and manipulated visuals.
I agree with all of it – except one bit.
They suggest that every time AI is used, in any capacity, it should be declared. Loudly. In every article. As if we’re all walking around with “Assisted by AI” badges pinned to our chests.
To me, that’s like asking every accountant to declare whether they used a calculator, or demanding that every car be preceded by a man waving a red flag. The vast majority of content already has some sort of AI touchpoint, whether it’s summarising, fact-checking, proofreading or just fixing a dodgy sentence.
This isn’t some niche corner of publishing any more. Within a year or two, everything – every newsroom, every publication, every tool we use – will have AI running through it. It won’t be flagged; it’ll just be quietly there, like spellcheck or the internet itself.
We should absolutely build smart, ethical guidelines. Humans must stay accountable, sourcing must be clear, and high-risk content needs extra care.
But insisting that all AI use be labelled is not only impractical, it’s outdated before it’s even begun. It imagines a world where AI is still exceptional. In reality, it’s already everywhere.
And while publishers debate labelling policies, others are out there building integrated tools that will outpace them.
I think this report is important. It gets almost everything right. But on this point, we need to call it out.
Trust comes from responsibility and clarity, not disclaimers.
Ivan Massow is co-founder of Tomorrow’s Publisher and founder and CEO of NoahWire