Research highlights readers' diminishing trust in AI-influenced journalism and the need for clearer communication about the role of AI in content creation.
Research from the University of Kansas suggests that when readers are made aware that AI played a role in producing a news story, they tend to judge that content as less credible.
The research was conducted by Alyssa Appelman and Steve Bien-Aimé, both associate professors at the William Allen White School of Journalism and Mass Communications, who ran experimental studies to assess how different bylines, specifically those indicating AI involvement, affect readers' perceptions. The studies focused on responses to an article concerning the safety of the artificial sweetener aspartame.
Participants were randomly assigned to read the same article under one of several bylines, such as “written by staff writer” and “written by staff writer with artificial intelligence tool.” Although the content was identical in every condition, reader perceptions varied: when AI was mentioned, readers reported lower confidence in the credibility of both the news source and the author.
Appelman explained that while many readers understand that AI can assist with tasks such as research and drafting, their interpretation of the extent of AI’s involvement appears to be skewed. “People have a lot of different ideas on what AI can mean, and when we are not clear on what it did, people will fill in the gaps on what they thought it did,” she said.
A companion study examined how readers' perceptions of human contribution shape credibility judgments when AI is disclosed in a byline. The analysis showed that participants tended to attribute more of the authorship to the human journalist than to the AI. Even so, the findings suggest that a passing reference to AI can be enough to damage an article's perceived credibility and foster distrust.
Together, the studies point to a clear lesson for journalism. Bien-Aimé emphasised the need for transparency, stating, “This shows we need to be clear. We think journalists have a lot of assumptions that we make in our field that consumers know what we do. They often do not.” The point has particular relevance in light of recent controversies, such as the allegations that Sports Illustrated published AI-generated articles while implying human authorship.
The researchers advocate greater clarity in disclosing AI's role in journalism, noting that simply mentioning AI's involvement may not suffice. They argue that without an explicit explanation of the nature and extent of AI's contribution, readers may draw incorrect conclusions that undermine their trust in the outlet.
Both studies were published in peer-reviewed journals: *Communication Reports* and *Computers in Human Behavior: Artificial Humans*. The authors suggest a pressing need for ongoing research into public understanding and perceptions of AI in media, and into the ethical frameworks that govern its use.
Source: Noah Wire Services
- https://news.ku.edu/news/article/study-finds-readers-trust-news-less-when-ai-is-involved-even-when-they-dont-understand-to-what-extent – Corroborates the University of Kansas research on how readers' trust in news credibility diminishes when AI involvement is mentioned; details the experimental byline studies by Appelman and Bien-Aimé and the companion study on perceived human authorship; and notes the Sports Illustrated controversy, the need for clearer disclosure of AI's role, the gap between journalists' and consumers' understanding of newsroom practices, and the studies' publication in *Communication Reports* and *Computers in Human Behavior: Artificial Humans*.
- https://red.library.usd.edu/diss-thesis/255/ – Supports the broader context of declining trust in media, the lower credibility readers perceive in AI-generated news articles compared with human-written ones, and the importance of transparency in automated journalism.
- https://wit-ie.libguides.com/c.php?g=648995&p=4551538 – Provides general guidelines on evaluating online sources, including transparency, authority, and objectivity, which are relevant to the trust issues raised by AI-generated content.