5:34 am - February 23, 2025

The paper has approved AI tools for editorial tasks, setting guidelines and emphasising human oversight

The New York Times has approved the use of artificial intelligence tools within its newsroom, permitting staff to use them for tasks such as editing, summarising, coding and writing. The decision was communicated to employees through an internal email and was first reported by Semafor.

As part of this initiative, product and editorial staff will receive training on AI, with a focus on a new internal tool named Echo. This tool is designed to summarise articles, briefings and other company activities. Alongside Echo, additional editorial guidelines have been circulated, detailing how and when AI tools may be employed by staff. These include using AI to propose edits and enhancements to their work, as well as for generating summaries, promotional text for social media and SEO-optimised headlines.

A training video shared with employees outlines further potential applications of AI in the newsroom. These applications could encompass creating news quizzes, generating quote cards, formulating FAQs and even advising journalists on questions to pose during interviews.

However, the Times has set clear restrictions regarding the use of AI: the technology cannot be employed to draft or significantly alter articles, bypass paywalls, utilise copyrighted third-party materials, or publish AI-generated images or videos without appropriate labelling.

It remains uncertain to what extent the Times will permit AI-edited content to appear in its published articles. In a memo last year, the outlet asserted that journalism will remain the domain of its experienced journalists, a commitment reiterated in subsequent communications. The organisation said: “Times journalism will always be reported, written and edited by our expert journalists.”

The principles regarding generative AI, adopted in May 2024, emphasise the necessity of human oversight in all AI-assisted processes. “Generative AI can sometimes help with parts of our process, but the work should always be managed by and accountable to journalists,” the principles state. They underscore that any information generated by AI must originate from fact-checked data vetted by journalists and must undergo review by editors.

In tandem with Echo, The New York Times has authorised the use of several other AI tools, including GitHub Copilot for programming assistance, Google Vertex AI for product development, NotebookLM, the NYT’s ChatExplorer, OpenAI’s non-ChatGPT API and select Amazon AI products.

Source: Noah Wire Services

More on this

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score:
9

Notes:
The narrative references recent developments, such as the adoption of generative AI principles in May 2024 and the newsroom rollout first reported in February 2025, indicating it is relatively fresh. However, the absence of specific dates for some events prevents a perfect score.

Quotes check

Score:
5

Notes:
The direct quotes attributed to The New York Times’s memo and generative AI principles could not be verified against earlier published sources, which limits the score.

Source reliability

Score:
9

Notes:
The narrative originates from The Verge, a reputable technology news outlet. It also references Semafor, another known publication, which adds to the reliability.

Plausibility check

Score:
8

Notes:
The claims about AI integration in newsrooms are plausible given the current trend in the journalism sector. However, some details, such as the extent of AI-edited content in published articles, remain uncertain.

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): HIGH

Summary:
The narrative is generally fresh, reliable, and plausible. It lacks direct quotes for verification but is supported by reputable sources. The integration of AI tools in newsrooms is a current trend, making the claims plausible.

© 2025 Tomorrow’s Publisher. All Rights Reserved. Powered By Noah Wire Services. Created By Sawah Solutions.