Prominent public figures, including Sir David Attenborough and Scarlett Johansson, have expressed their growing unease over AI-generated voice cloning. American websites are increasingly using advanced AI technology to clone the voices of well-known personalities, stirring up public debate over identity and privacy rights.
Sir David Attenborough said he was deeply concerned to discover that partisan news outlets in the United States have been replicating his voice without permission. These AI-generated imitations have been used to narrate news reports on contentious topics such as Donald Trump's election campaign and the war in Ukraine. The Intellectualist, a website at the centre of the controversy, has published videos on YouTube that impersonate Attenborough's voice.
Speaking to the BBC, Sir David described the practice as akin to “identity theft”. He stressed the gulf between the reputation for truthfulness he has built over a long career and the views now being misattributed to him through voice cloning. “I am profoundly disturbed,” Attenborough said.
Hollywood actress Scarlett Johansson confronted a similar issue when ChatGPT introduced a voice called Sky that bore an uncanny resemblance to her own. Despite never having granted permission to any AI company, Johansson found her voice at the centre of an unexpectedly personal encounter with the technology.
Dr Jennifer Williams, an expert in electronics and computer science at the University of Southampton, said preventative measures lag behind the accelerating capabilities of AI, especially where the safeguarding of personal identities is concerned. She raised specific worries about how these advances could be exploited in fraud, such as scammers mimicking financial institutions in telephone calls.
The implications for public figures are significant, and Dr Williams elaborated on the reputational risks. “Sir David is renowned worldwide,” she explained, noting the threat that unauthorised AI usage poses to his legacy. Left unchecked, she suggested, the technology could do lasting damage to both personal and professional reputations.
Attempts to curb the unauthorised use of AI voice cloning have seen limited success. While some public figures, like Johansson, have sought to protect their vocal likeness, the absence of rigorous controls or legislative frameworks leaves the door open for further exploitation.
As the technology becomes more accessible, the publishing industry finds itself on the frontline of an emerging debate about AI's role in content creation. Professionals in the sector are watching developments closely, recognising both the potential and the pitfalls such tools hold for news dissemination and subscriber engagement.
The Intellectualist, identified as utilising Sir David’s voice, was unavailable for comment.
Source: Noah Wire Services
- https://www.corporatecomplianceinsights.com/ai-voice-cloning-extortion-vishing-scams/ – Corroborates the use of AI voice cloning in extortion and vishing scams, including high-value CEO fraud and virtual kidnapping, the ease with which voices can be cloned, and the challenge for corporate security teams in defending against attacks that exploit trust in familiar voices.
- https://www.cbsnews.com/newyork/news/ai-voice-clone-scam/ – Supports the ease and accessibility of AI voice cloning, including the example of a journalist cloning a voice for just $5, the growing threat of business imposter scams, the regulatory case for digital watermarking of AI-generated content, and the use of secret code words to verify identities.
- https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/11/preventing-harms-ai-enabled-voice-cloning – Details the FTC's efforts to address the harms of AI-enabled voice cloning, including the Voice Cloning Challenge, the consideration of new regulations, and the multidisciplinary approach of enforcement, rulemaking, and public challenges, while noting the technology's potential benefits in medical applications.
- https://elm.umaryland.edu/elm-stories/2024/Phantom-Voices-Defend-Against-Voice-Cloning-Attacks.php – Provides an example of a voice cloning scam, explains how cybercriminals use AI to clone voices, and offers tips for protecting oneself against such attacks.