- Fake Facebook pages impersonate New Zealand news outlets, misleading the public with fabricated visuals
- Synthetic images linked to disasters and political campaigns raise legal and ethical concerns
- Experts warn about the increasing sophistication and reach of AI-generated misinformation across social media
A growing network of Facebook pages masquerading as New Zealand news outlets is flooding social media with AI-generated images and videos that distort genuine reporting, according to an investigation. The accounts lift copy from established mastheads and attach computer-generated visuals or lightly rewritten text.
While the examples uncovered are New Zealand-specific, researchers and fact-checkers say the same pattern is playing out across news markets worldwide, largely unnoticed by readers until harm is done.
Reporting by Australian Associated Press found one page, operating under the name NZ News Hub, repeatedly republished stories from RNZ, the New Zealand Herald, Stuff and others. The material was overlaid with AI-produced images and short videos and presented as original content.
The page’s biography promises “latest New Zealand news, breaking stories, politics, business, sport, and community updates”. It has attracted nearly 5,000 followers and steady engagement despite producing no journalism of its own.
The practice has been particularly cruel in its treatment of the Mount Maunganui landslide that killed six people. A still photograph supplied by police of a 15-year-old victim, Sharon Maccanico, was animated to make it appear she was dancing. RNZ confirmed no such video was recorded by its crews.
Fact checks by AAP and others show multiple images linked to the disaster contain geographic errors, implausible details or digital markers indicating they were generated by AI rather than captured on scene.
Experts say the incentive is straightforward. Andrew Lensen, a senior lecturer in AI and programme director at Victoria University of Wellington, told AAP: “These pages want to get as much engagement (reactions, comments, shares) as possible, in order to build their following/exposure and potential ad revenue.”
The ready availability of generative tools has lowered the barrier to creating what look like news operations, he said, and some synthetic images even carry watermarks such as Google’s SynthID that most users would not recognise.
Other outlets have documented similar behaviour. A 1News analysis identified at least 10 Facebook pages that repurpose local reporting, run it through generative systems and publish it with fabricated visuals. A review of one such page counted 209 posts in January alone. Separate AAP fact checks detail repeated cases in which purported footage of politicians, police responses or grieving families was fabricated or manipulated.
The spread is not confined to Facebook. Fact-checking organisations report false images and clips appearing on TikTok, Instagram and X within minutes of breaking events. Transparency data for Facebook pages shows many of the accounts are administered from overseas, including operators in Vietnam and Malaysia, complicating questions of intent and accountability. Even when platforms act, moderators say near-identical clones often reappear quickly.
The legal position offers limited comfort. New Zealand’s Classification Office says the law treats AI-generated material the same as other content under the Films, Videos, and Publications Classification Act 1993: what matters is what is depicted, not how it was created. Civil defence agencies and community groups have warned the public about synthetic posts during emergencies, citing the real-world harm misinformation can cause.
Mainstream outlets are responding cautiously. RNZ has published AI principles stating it will generally not knowingly disseminate output created by generative systems. Some industry observers argue trusted media may gain renewed authority as sources of verified information. Others warn that any reliance on AI by legacy organisations risks further blurring the line between fact and fabrication.
Source: Noah Wire Services
- https://www.rnz.co.nz/news/national/586298/how-fake-nz-news-pages-are-swamping-facebook-with-ai-slop – An investigation reveals that numerous Facebook pages, such as ‘NZ News Hub’, are disseminating AI-generated content that misrepresents actual events. These pages often take legitimate news reports from sources like RNZ, the New Zealand Herald, and Stuff, and add misleading AI-generated images or videos. For example, a video was posted that grotesquely animates a still photo of a 15-year-old Mount Maunganui landslide victim, making her appear to dance. Despite lacking original reporting, these pages amass significant engagement, with nearly 5,000 followers and numerous interactions. Experts express concern over the proliferation of such content and its potential to erode trust in legitimate news sources. Attempts to contact ‘NZ News Hub’ for comment went unanswered. The ease of creating AI-generated content has led to a surge in ‘fake news’ factories, with little moderation by tech giants to curb the spread. Many fake images bear a ‘SynthID’ watermark, indicating the use of Google’s AI tools, but detecting this watermark requires specific knowledge. The Mount Maunganui disaster, which resulted in six fatalities, has been particularly exploited, with numerous false images and videos circulating online. False information about the victims has also been disseminated. For instance, a still photo provided by NZ Police of victim Sharon Maccanico, 15, was animated by ‘NZ News Hub’ to make it appear as if she was dancing, a video that was not taken by RNZ. Such practices highlight the challenges posed by AI-generated content in maintaining the integrity of news reporting.
- https://www.aap.com.au/factcheck/nz-media-outlet-misrepresents-news-with-ai-images-and-video/ – AAP FactCheck’s report on the ‘NZ News Hub’ Facebook page, which republishes legitimate stories from outlets such as RNZ, the New Zealand Herald and Stuff with AI-generated images and videos, including the animation of a police-supplied photo of 15-year-old landslide victim Sharon Maccanico. Despite producing no original reporting, the page has amassed nearly 5,000 followers; attempts to contact it for comment went unanswered. Many of the fake images carry a SynthID watermark indicating the use of Google’s AI tools, but detecting it requires specific knowledge.
- https://www.aap.com.au/factcheck/facebook-pages-peddle-ai-images-of-nz-landslide-disaster/ – Fake images and stories about a New Zealand landslide that killed six people are being used to drive engagement on Facebook. The images do not contain artificial intelligence (AI) labels but all feature geographic impossibilities, hallucinated details or digital watermarks indicating they were AI-generated or show other unrelated disasters. The stories shared online about the actions and purported videos of victims have not been reported by any credible news outlets. Six people are presumed dead after a massive landslide at one of NZ’s most popular beach campsites at Mt Maunganui on the North Island.
- https://www.1news.co.nz/2026/02/09/ai-generated-news-pages-on-social-media-misleading-thousands-of-kiwis/ – Thousands of New Zealanders are liking, commenting on and sharing ‘news’ on social media they may not realise has been written by artificial intelligence and paired with fabricated imagery that is unlabelled and inaccurate, a 1News investigation has found. Experts say the popularity and proliferation of these accounts blur the line between real reporting and fabricated content and may contribute to Kiwis’ already low trust in news, while civil defence groups have issued public warnings about the pages. 1News has identified at least 10 Facebook pages that take existing New Zealand news stories, run them through artificial intelligence to rewrite them, and publish them on Facebook with synthetic images. A review of one of these social media ‘news’ pages, named NZ News Hub with thousands of likes, comments and shares, looked at 209 posts made in the month of January. The page’s name was similar to national outlet Newshub, which closed in 2024. Its bio read, ‘NZ News Hub brings you the latest New Zealand news, breaking stories, politics, business, sport, and community updates’, but the page does not appear to contain any original reporting.
- https://www.classificationoffice.govt.nz/classification-info/what-we-classify/ai-generated-content/ – Artificial intelligence (AI) tools can now create images, videos, audio, and written material that may look highly realistic or completely synthetic. It appears across social media, memes, advertising, gaming, entertainment, and everyday online interactions. Some AI might look like it involves real people – even if it doesn’t. Apps, including deepfake or so‑called ‘nudify’ tools, make it easy for anyone to generate content that can be harmful, abusive, or illegal. A common misunderstanding is that AI‑generated or ‘fake’ content cannot be illegal because it isn’t real. In New Zealand, it can. Under the Films, Videos, and Publications Classification Act 1993, AI‑generated content is treated the same as any other content. What matters is what the content shows, not how it was made. Even if something is fictional, computer‑generated, or created as a joke, it can still be illegal. This page explains how the classification law applies to AI‑generated content, what types of AI content can be illegal, why this matters, and what to do if you come across it.
- https://www.theguardian.com/world/2023/may/24/new-zealand-national-party-admits-using-ai-generated-people-in-ads – The New Zealand National Party has admitted to using AI-generated images in its political advertising. The images, which depicted a woman with exaggerated features and individuals with unrealistic skin textures, raised suspicions about their authenticity. Initially, party leader Christopher Luxon was uncertain about the use of AI in the ads but later confirmed that AI was used to create some stock images. The party described this as an ‘innovative way to drive our social media’ and stated its commitment to using AI responsibly. This admission highlights the growing use of AI in political campaigns and the potential for misinformation.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The article was published on 9 February 2026, which is recent. However, similar reports appeared in the preceding week, such as the AAP fact check published on 5 February 2026. ([aap.com.au](https://www.aap.com.au/factcheck/nz-media-outlet-misrepresents-news-with-ai-images-and-video/)) This suggests the topic is under active investigation, but the specific content may not be entirely original.
Quotes check
Score: 7
Notes: The article includes direct quotes from experts like Andrew Lensen, a senior lecturer in AI at Victoria University of Wellington. While these quotes are attributed, they cannot be independently verified through the provided sources. The lack of direct links to the original statements raises concerns about the authenticity and context of the quotes.
Source reliability
Score: 9
Notes: The article is published by RNZ, a reputable New Zealand news organisation. However, the content heavily relies on information from the Australian Associated Press (AAP), which may affect the independence of the reporting. The AAP’s article is also cited as a source, indicating a reliance on external reporting.
Plausibility check
Score: 8
Notes: The claims about AI-generated content on Facebook are plausible and align with recent reports. However, the article’s reliance on a single source (AAP) for specific details about the ‘NZ News Hub’ page raises questions about the comprehensiveness of the investigation. The absence of direct evidence or examples from RNZ’s own findings is a concern.
Overall assessment
Verdict (FAIL, OPEN, PASS): OPEN
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The article addresses a timely and plausible issue regarding AI-generated content on Facebook. However, it heavily relies on information from AAP, lacks direct verification from RNZ’s own reporting, and includes unverifiable quotes. These factors raise concerns about the originality, independence, and verification of the content.