The paper has approved AI tools for editorial tasks, setting guidelines and emphasising human oversight
The New York Times has approved the use of artificial intelligence tools within its newsroom, permitting staff to utilise these technologies for tasks such as editing, summarising, coding and writing. This decision was communicated to employees through an internal email and was first reported by Semafor.
As part of this initiative, product and editorial staff will receive training on AI, with a focus on a new internal tool named Echo, which is designed to summarise articles, briefings and other company activities. Alongside Echo, additional editorial guidelines have been circulated, detailing how and when staff may employ AI tools. Permitted uses include asking AI to propose edits and enhancements to a journalist's own work, as well as generating summaries, promotional text for social media and SEO-optimised headlines.
A training video shared with employees outlines further potential applications of AI in the newsroom. These applications could encompass creating news quizzes, generating quote cards, formulating FAQs and even advising journalists on questions to pose during interviews.
However, the Times has set clear restrictions on the use of AI: the technology cannot be employed to draft or significantly alter articles, bypass paywalls or ingest copyrighted third-party materials, and AI-generated images or videos may not be published without appropriate labelling.
It remains uncertain to what extent the Times will permit AI-edited content to appear in its published articles. In a memo last year, the outlet asserted that journalism will remain the domain of its experienced journalists, a commitment reiterated in subsequent communications. The organisation said: “Times journalism will always be reported, written and edited by our expert journalists.”
The principles regarding generative AI, adopted in May 2024, emphasise the necessity of human oversight in all AI-assisted processes. “Generative AI can sometimes help with parts of our process, but the work should always be managed by and accountable to journalists,” the principles state. They underscore that any information generated by AI must originate from fact-checked data vetted by journalists and must undergo review by editors.
In tandem with Echo, The New York Times has authorised the use of several other AI tools, including GitHub Copilot for programming assistance, Google Vertex AI for product development, NotebookLM, the NYT’s ChatExplorer, OpenAI’s non-ChatGPT API, alongside select Amazon AI products.
Source: Noah Wire Services
- https://bestofai.com/article/the-new-york-times-adopts-ai-tools-in-the-newsroom – This article supports the claim that The New York Times has approved the use of AI tools for tasks such as editing, summarizing, coding, and writing, and highlights the introduction of tools like Echo and other AI technologies.
- https://readwrite.com/new-york-times-announces-ai-introduction-to-the-newsroom/ – This source corroborates the use of AI tools by The New York Times, including GitHub Copilot, Google Vertex AI, and OpenAI’s non-ChatGPT API, and discusses the role of Echo in summarizing articles.
- https://www.theverge.com/2025/2/17/21367051/new-york-times-ai-tools-newsroom – This article provides details about The New York Times’ adoption of AI tools, emphasising the role of human oversight and the restrictions on AI use, such as the ban on drafting or significantly altering articles and the labelling requirement for AI-generated images and videos.
- https://www.semafor.com/article/02/17/2025/new-york-times-ai-tools – This source mentions the introduction of AI tools in The New York Times newsroom, including Echo, and notes the mixed response from staff regarding the integration of AI.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score:
9
Notes:
The narrative references recent developments, such as the adoption of AI principles in May 2024 and the newly announced newsroom tools, indicating it is relatively fresh. However, the absence of specific dates for some events prevents a perfect score.
Quotes check
Score:
5
Notes:
The narrative contains two direct quotes attributed to The New York Times’ internal memo and AI principles, but these could not be verified against earlier published sources.
Source reliability
Score:
9
Notes:
The narrative originates from The Verge, a reputable technology news outlet. It also references Semafor, another known publication, which adds to the reliability.
Plausibility check
Score:
8
Notes:
The claims about AI integration in newsrooms are plausible given the current trend in the journalism sector. However, some details, such as the extent of AI-edited content in published articles, remain uncertain.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
The narrative is generally fresh, reliable, and plausible. It lacks direct quotes for verification but is supported by reputable sources. The integration of AI tools in newsrooms is a current trend, making the claims plausible.