1:58 am - October 29, 2025

A recent study reveals concerning implications of AI search tools for news publishers.

Recent research conducted by the Tow Center for Digital Journalism reveals that nearly 25% of Americans now use artificial intelligence (AI) search tools instead of traditional search engines, highlighting a shift in how information is accessed online.

The study examined eight AI chatbots, including ChatGPT and Perplexity, to assess their ability to retrieve and correctly cite news content from various publishers. The researchers ran 1,600 queries, providing short excerpts from randomly selected articles and expecting each chatbot to return the article’s title, original publisher, publication date and a URL.

The study findings indicate a troubling trend: over 60% of the responses provided by these chatbots were incorrect, revealing a marked deficiency in accurately sourcing original articles. For example, the AI tool Grok 3 had a particularly high error rate, incorrectly answering 94% of the queries assessed.

One of the core issues identified was that premium chatbots, which may seem more credible because of their cost, often delivered confidently incorrect responses instead of acknowledging uncertainty. This poses a risk, as users may struggle to separate correct information from authoritative-sounding inaccuracies. Many chatbots exhibited this pattern of unwarranted confidence: ChatGPT, for example, answered incorrectly 134 times but expressed uncertainty on only 15 occasions.

Moreover, several AI chatbots were observed to ignore publishers’ preferences set out in the Robots Exclusion Protocol, which is intended to prevent crawler access to certain content. Perplexity identified excerpts from paywalled National Geographic articles even though the publisher had disallowed access. This raises pressing questions about publishers’ control over their content and potential misrepresentation in AI-generated summaries.
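For readers unfamiliar with the protocol: publishers state these preferences in a robots.txt file at the root of their site. The following is a hypothetical illustration only — the crawler name and paths are example values, not any publisher’s actual configuration:

```text
# Hypothetical robots.txt — example crawler name and paths only
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /premium/
```

Compliance with these directives is voluntary on the crawler’s part, which is why the study’s finding that some chatbots surfaced disallowed content is significant.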

Chatbots also demonstrated a propensity to falsely attribute content: in one instance, DeepSeek misattributed material 115 times across 200 queries. Incorrect linking practices compounded the problem, as chatbots frequently fabricated URLs or directed users to syndicated versions of articles rather than the original sources, undermining publishers who rely on referral traffic.

Despite existing partnerships between AI companies such as OpenAI and Perplexity and news organisations, intended to foster mutual benefit, the data suggests these relationships have not yet translated into greater citation accuracy. Time magazine and the San Francisco Chronicle both have partnerships with such AI companies, yet the accuracy rate for correctly identifying their content remained disappointingly low.

Source: Noah Wire Services

More on this

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score:
8

Notes:
The narrative discusses recent research and ongoing issues with AI search tools, indicating it is relatively current. However, specific dates or recent updates are not provided.

Quotes check

Score:
6

Notes:
The quote from Danielle Coffey, President of the News Media Alliance, is included but lacks an earliest known online source. It appears original to this context.

Source reliability

Score:
9

Notes:
The narrative originates from the Columbia Journalism Review (CJR), a reputable source in journalism studies, enhancing its credibility.

Plausibility check

Score:
8

Notes:
The claims about AI search tools’ inaccuracies in citing news sources are plausible and align with known challenges in AI-generated content.

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): HIGH

Summary:
The narrative is well-supported by a reputable source and discusses timely issues with AI search tools. While some quotes lack early online references, the overall credibility and plausibility of the claims are strong.

© 2025 Tomorrow’s Publisher. All Rights Reserved. Powered By Noah Wire Services. Created By Sawah Solutions.