Consumers, creators and policymakers are pushing AI makers to build stronger legal guardrails around the use of name, image and likeness, because when a tool lets you clone a face or voice by default, things get messy fast. Here’s what developers, talent and lawmakers are doing, and why an opt‑in approach matters.
Essential Takeaways
- Backlash was immediate: OpenAI’s Sora 2 drew rapid criticism for defaulting to allow use of real people’s likenesses, prompting policy changes and pledges to support federal rules.
- Federal fix is coming: The NO FAKES Act, reintroduced as a bipartisan bill, would create a national right of publicity for voice and visual likeness, reducing the patchwork of state laws.
- Practical guardrails: Prompt filtering, consent systems, context analysis and opt‑in defaults reduce misuse and help defend developers from secondary liability.
- Who needs to act: Developers should loop in IP and tech counsel early; performers, estates and creators should seek advice to protect their likenesses and monetise responsibly.
- Risk signals: Public figures, talent agencies and unions have voiced concrete harms (reputational, commercial and privacy‑related) that policy and product design must address.
Why Sora 2 became the test case for likeness rights
When OpenAI launched Sora 2, the visual and voice‑replication features looked slick, and then celebrities and estates started spotting unauthorised recreations of their faces and voices. AP News detailed swift alarm among public figures, and talent agencies like Creative Artists Agency called the rollout risky for creators’ rights. The sensory jolt of seeing a convincing fake of someone you know in a short clip made the issue visceral, not abstract. That public outrage pushed OpenAI to backtrack from an opt‑out model to opt‑in controls, which is exactly the kind of product pivot lawyers recommend before regulators weigh in.
What the NO FAKES Act would change (and why it matters)
Legislators reintroduced the NO FAKES Act as a bipartisan solution to this problem, aiming to set a federal baseline for likeness protections and potentially pre‑empt some state laws. The Senate and House sponsors argue the bill balances innovation with creator control, by recognising a federal right of publicity for voice and visual likeness. For developers, that means a single, nationwide standard could replace a confusing patchwork , and for talent, it could give clearer avenues to stop and monetise digital replicas. The bill’s progress is worth watching because it will shape what “responsible defaults” actually look like in code.
Product fixes that actually reduce misuse (and are lawyer‑friendly)
There are clear technical and policy levers teams can flip today. Prompt filtering flags requests that target identifiable people; consent gates prevent use without explicit permission; context analysis separates newsworthy or educational uses from commercial ads; and opt‑in defaults put control in people’s hands. Industry lawyers tell developers these measures not only protect individuals but also create a stronger defence against secondary liability if someone abuses a tool. In short: build the safety net before the headline storm hits.
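To make those levers concrete, here is a minimal sketch of how a consent‑gated generation check might compose them in code. Every name in it (the opt‑in registry, the request fields, the context labels) is a hypothetical assumption for this example, not any vendor’s real API.

```python
# A minimal sketch of a consent-gated generation check, assuming a hypothetical
# opt-in registry and request shape; none of these names belong to a real product.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LikenessRequest:
    prompt: str
    referenced_person: Optional[str]  # filled in upstream by a prompt filter / NER pass
    context: str                      # e.g. "news", "education", "commercial"

# Opt-in default: only people with an explicit consent record appear here.
CONSENTING_PEOPLE = {"creator_who_opted_in"}

# Contexts that need a separate licence check even when consent exists.
RESTRICTED_CONTEXTS = {"commercial"}

def allow_generation(req: LikenessRequest) -> bool:
    """Apply the guardrails in order: prompt filter, consent gate, context check."""
    if req.referenced_person is None:
        return True                   # no identifiable person targeted
    if req.referenced_person not in CONSENTING_PEOPLE:
        return False                  # no consent record, so the opt-in default refuses
    if req.context in RESTRICTED_CONTEXTS:
        return False                  # commercial use routes to a licensing workflow
    return True

# A request naming someone who never opted in is refused outright.
print(allow_generation(LikenessRequest(
    prompt="clip of a famous actor endorsing my product",
    referenced_person="famous_actor",
    context="commercial",
)))  # False
```

The ordering matters as much legally as technically: refusing by default whenever there is no consent record is what turns “opt‑in” from a setting into a defence.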
Industry reaction: creators, agencies and countries aren’t waiting
Hollywood unions and agencies have been loud: SAG‑AFTRA and major agencies warned of mass misappropriation without guardrails. Meanwhile, international pressure complicated the picture: reports showed creators abroad raising alarms about racist or harmful AI clones, and Japanese rights holders pushing back on OpenAI’s opt‑out approach. That global mix means compliance teams must think across jurisdictions, not just US states. For creators, the takeaway is simple: monitor where your likeness is used, opt out or license proactively, and get counsel who understands both IP and reputational risk.
How to choose the right approach for your product or portfolio
If you’re a developer shipping a generative tool, start with legal input during design sprints: pick opt‑in as the safer default, layer in prompt and content filters, and provide granular controls for IP owners. If you’re a creator, catalogue what’s unique about your brand (voice, mannerisms, signature looks) and consult an IP lawyer about contracts and potential statutory remedies. For both camps, transparency is key: clear labelling of synthetic content and straightforward takedown or licensing pathways cut down on harm and build trust.
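By way of illustration only, a settings sketch like the one below shows what opt‑in defaults and granular IP‑owner controls could look like in practice; the keys are assumptions invented for this example, not drawn from any shipping product.

```python
# Hypothetical defaults for a generative tool; every key here is illustrative only.
DEFAULT_LIKENESS_SETTINGS = {
    "likeness_generation": "opt_in",          # nobody is generatable until they consent
    "prompt_filtering": True,                 # flag prompts naming identifiable people
    "label_synthetic_content": True,          # mark every generated clip as AI-made
    "ip_owner_controls": {
        "allow_cameos": False,                # per-person toggle, off by default
        "allowed_contexts": ["education"],    # e.g. exclude commercial use entirely
        "takedown_contact": "rights@example.com",
    },
}
```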
Opt‑in is a small change in settings, but it can make every generated clip safer and more respectful.
Source Reference Map
Story idea inspired by: [1]
Sources by paragraph:
- https://news.bloomberglaw.com/legal-exchange-insights-and-commentary/ai-tool-developers-must-make-systems-with-strong-legal-guardrails – Please view link – unable to access data
- https://www.apnews.com/article/214d578d048f39c9c7b327f870dc6df8 – OpenAI’s Sora 2, an AI video generator, was released to the public through its premium ChatGPT platform, enabling users to create high-quality videos from text prompts. To prevent misuse, OpenAI imposed strict limitations on depicting people, allowing only a select group of testers to generate human likenesses while evaluating potential abuse. The company also prohibited content involving nudity or sexual exploitation. The release saw significant demand, leading to a temporary halt on new account creation. OpenAI consulted with artists and policymakers before the public release and has not disclosed the datasets used to train Sora. Sora joins a growing field of text-to-video tools that promise cost-saving in media production but also raise ethical and legal concerns.
- https://www.coons.senate.gov/news/press-releases/senators-coons-blackburn-reps-salazar-dean-colleagues-reintroduce-no-fakes-act-to-protect-individuals-and-creators-from-digital-replicas – U.S. Senators Chris Coons (D-Del.), Marsha Blackburn (R-Tenn.), Thom Tillis (R-N.C.), and Amy Klobuchar (D-Minn.), along with U.S. Representatives Maria Salazar (R-Fla.) and Madeleine Dean (D-Pa.), introduced the bipartisan Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. This legislation aims to protect the voice and visual likenesses of individuals and creators from the proliferation of digital replicas created without their consent. The Act seeks to establish a federal right of publicity, granting individuals greater control over the creation and use of digital replicas of their likenesses, including AI-generated content.
- https://www.apnews.com/article/741a6e525e81e5e3d8843aac20de8615 – President Donald Trump signed the Take It Down Act into law on April 29, 2025. This bipartisan legislation, co-sponsored by Senators Ted Cruz and Amy Klobuchar, aims to combat the spread of non-consensual intimate imagery, including AI-generated deepfakes. The law criminalizes the publishing or threatening to publish such content without consent and mandates online platforms to remove the material within 48 hours of notification from a victim. It also requires the removal of duplicate content across platforms. The bill has garnered broad bipartisan support and backing from First Lady Melania Trump, major tech firms like Meta (Facebook/Instagram), and advocacy groups. Supporters hail it as a crucial step in protecting victims from online abuse and enforcing accountability on perpetrators and tech platforms.
- https://www.forbes.com/sites/legalentertainment/2025/10/17/sora-2-does-a-copyright-somersault-upon-launch/ – OpenAI’s rollout of its Sora 2 video app faced backlash from the entertainment industry due to its ability to allow users to generate videos containing copyrighted content and upload it across the internet. Sora 2 launched with a questionable third-party rights model, inviting intellectual property owners to opt out of the app, effectively permitting users to access and manipulate copyrighted material, voices, and likenesses until the rightsholder requests they stop. Within 72 hours of the launch, OpenAI CEO Sam Altman released a statement stating he wanted to give rightsholders ‘more granular control’ over their intellectual property and switched the program to an opt-in model.
- https://www.windowscentral.com/artificial-intelligence/from-studio-ghibli-to-square-enix-japans-stand-against-openai – Japanese IP holders, represented by the Content Overseas Distribution Association (CODA), have formally challenged OpenAI over its use of copyrighted material in training its AI model, Sora 2. Released in 2025, Sora 2 gained attention for generating viral videos featuring characters from popular franchises like Dragon Ball and Mario. However, this triggered backlash from Japanese media companies, including Studio Ghibli, Bandai Namco, and Square Enix, due to potential copyright infringements. CODA sent a letter to OpenAI on October 28, insisting the company stop using Japanese content unless prior consent is given—as required under Japanese law. Although OpenAI has updated its opt-out policy, critics argue it’s insufficient, as Japan emphasizes pre-authorization rather than post-facto exclusion. The controversy highlights growing global tensions over AI training data usage and may have broad legal implications, not only in Japan but globally, with similar concerns recently raised by authors like George R.R. Martin. CODA’s demands emphasize the need for more stringent practices around AI training methodologies involving copyrighted works.
- https://www.lemonde.fr/en/pixels/article/2025/11/08/top-french-youtuber-caught-in-flood-of-ai-generated-racist-videos-on-sora_6747253_13.html – A controversy has emerged involving French YouTuber Tibo InShape, whose likeness has been used in a wave of AI-generated videos on the new social app Sora, many of which contain racist content. Sora, launched by OpenAI, allows users to create videos using AI and insert public figures with permission through its ‘cameo’ feature. Tibo InShape had consented to use his image, aiming to expand his visibility and global reach. However, users began generating highly offensive content, including videos where he appears to shout slurs and perform discriminatory actions. Initially addressing the issue by condemning the realism and racist nature of the videos, Tibo later offered a more ambiguous message, calling some of the clips ‘dark humor’ and noting their popularity online. Despite criticism, he has not removed the problematic videos and continues to benefit from the visibility, using the attention to promote his new book. The incident highlights growing concerns over platform moderation on apps like Sora and TikTok, especially among youth. OpenAI has not responded publicly, though it claims to prohibit hateful content. The case further intensifies debates around deepfake technology, consent, and platform accountability.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.
Freshness check
Score:
10
Notes:
The narrative is current, with the latest developments in AI likeness rights and legal guardrails being reported in October 2025. ([theguardian.com](https://www.theguardian.com/technology/2025/oct/21/bryan-cranston-sora-2-openai?utm_source=openai))
Quotes check
Score:
10
Notes:
Direct quotes from Bryan Cranston and other stakeholders are unique to this report, with no earlier matches found. ([theguardian.com](https://www.theguardian.com/technology/2025/oct/21/bryan-cranston-sora-2-openai?utm_source=openai))
Source reliability
Score:
10
Notes:
The narrative originates from Bloomberg Law, a reputable organisation known for its legal reporting.
Plausibility check
Score:
10
Notes:
The claims about OpenAI’s Sora 2 and the NO FAKES Act are consistent with other reputable sources, including The Guardian and Investing.com. ([theguardian.com](https://www.theguardian.com/technology/2025/oct/21/bryan-cranston-sora-2-openai?utm_source=openai))
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
The narrative is current, originates from a reputable source, and presents unique quotes and consistent claims, with no signs of recycled content or disinformation.