1:00 pm - July 3, 2025

Research highlights readers' diminishing trust in AI-influenced journalism and the need for clearer communication about AI's role in content creation.

Research from the University of Kansas suggests that when readers are made aware that AI played a role in news production, their trust in the credibility of that content tends to diminish.

This research was undertaken by Alyssa Appelman and Steve Bien-Aimé, both associate professors at the William Allen White School of Journalism and Mass Communications, who conducted experimental studies to assess how different bylines — specifically those indicating AI involvement — affect readers’ perceptions. The studies focused on responses to an article concerning the safety of the artificial sweetener aspartame.

Participants in the study were randomly assigned to read articles attributed to various bylines, such as “written by staff writer” and “written by staff writer with artificial intelligence tool.” Despite each byline being linked to the same content, the researchers observed variations in reader perception. The findings revealed that when AI was mentioned, readers reported lower confidence in the credibility of the news source and the author.

Appelman explained that while many readers understand that AI can assist with tasks such as research and drafting, their interpretation of the extent of AI’s involvement appears to be skewed. “People have a lot of different ideas on what AI can mean, and when we are not clear on what it did, people will fill in the gaps on what they thought it did,” she said.

A companion study examined how readers' perceptions of human contribution shape their credibility judgments when AI is disclosed in bylines. The analysis showed that participants attributed greater authorship to human journalists than to AI. The findings suggest that even a brief reference to AI can undermine the perceived credibility of news articles and foster distrust.

Together, the studies point to a clear imperative for journalism. Bien-Aimé emphasised the need for transparency, stating, “This shows we need to be clear. We think journalists have a lot of assumptions that we make in our field that consumers know what we do. They often do not.” This has particular relevance in light of recent controversies, such as the allegations against Sports Illustrated for publishing AI-generated articles while implying human authorship.

The researchers advocate for greater clarity in disclosing AI’s role in journalism, noting that simply mentioning AI’s involvement might not suffice. They argue that without an explicit explanation of the nature and extent of AI contributions, readers may draw incorrect conclusions that impact their trust in the medium.

Both studies were published in reputable academic journals: Communication Reports and Computers in Human Behavior: Artificial Humans. They suggest a pressing need for ongoing research into public understanding and perceptions of AI in media and the ethical frameworks that govern its use.

Source: Noah Wire Services


© 2025 Tomorrow’s Publisher. All Rights Reserved. Powered By Noah Wire Services. Created By Sawah Solutions.