California’s recent enactment of 17 bills addressing generative AI aims to enhance data transparency and tackle misinformation during elections, marking a significant regulatory effort in the evolving AI landscape.
California Enacts Stringent AI Legislation Amid Concerns Over Data Transparency and Election Security
In a significant legislative move, California has enacted 17 new bills focused on generative AI, addressing issues such as deepfakes, AI watermarking, child safety, election misinformation, and the rights of performers regarding their digital likenesses. This legislative package marks a comprehensive attempt to regulate the burgeoning AI landscape within the state.
One of the standout pieces of legislation, Assembly Bill 2013, requires AI firms to provide detailed summaries of the datasets utilised in training their systems. This legislation, signed into law by Governor Gavin Newsom, obliges AI companies to disclose the sources and ownership of the data, as well as whether it was licensed or purchased. The bill, which applies to systems made publicly available on or after January 1, 2022, also mandates transparency about whether personal information is included in the training data and the time frame of its collection.
The bill potentially addresses ongoing legal challenges faced by AI companies, such as the lawsuits against OpenAI and Microsoft by major publishers like The New York Times, who claim that copyrighted content is being improperly used to train AI models. While these companies argue for fair use, OpenAI has acknowledged the need to possibly pay content creators, having already signed licensing agreements with several publishers.
AI’s Influence on Upcoming Elections Raises Concerns
As the US gears up for its elections, concerns about AI’s role in spreading misinformation are mounting. OpenAI released a report indicating that over 20 operations attempted to misuse its AI models for deceptive purposes in 2024. These operations ranged from creating misleading articles to fabricating social media posts shared by fake personas.
Despite the potential for AI-generated misinformation, OpenAI said it has not observed any significant advancement in threat actors' ability to produce novel malware or build viral audiences. It noted that the most widely noticed piece of deceptive social media content was itself a hoax claiming AI had been used, rather than material actually generated by the technology.
A September report by the Pew Research Center revealed widespread apprehension among Americans about AI’s potential misuse in influencing elections. Both Democratic and Republican respondents expressed concerns about AI being employed to disseminate fake information. Additionally, while many believe tech companies should prevent this misuse, only a small fraction are confident that these companies can adequately safeguard against it.
AI Video Tools Usher in a New Era for Creative Content
Simultaneously, AI video technology is gaining traction, with major tech companies like Adobe, OpenAI, Meta, and Microsoft developing tools for creating videos from text or images. Adobe has introduced the Adobe Firefly Video Model in public beta, stating that it was trained only on licensed content. This measure seeks to assuage concerns that such tools may undermine traditional creative processes.
Following Adobe’s lead, Meta is collaborating with industry creatives to refine its AI video creation tools, while Microsoft explores live image generation through a novel patent application.
ChatGPT’s Rising Popularity
In line with AI’s expanding influence, ChatGPT has become one of the most visited websites globally, with a record-breaking 3.1 billion visits in September. This surge in popularity positions ChatGPT above major platforms like Amazon in terms of web traffic.
OpenAI’s recent capital raise of $6.6 billion, propelling the company’s valuation to $157 billion, highlights the robust investor confidence in AI’s potential.
AI Applications in Other Sectors
Beyond tech development, AI is proving beneficial in diverse domains. For instance, the US government's use of machine learning techniques helped it recover $1 billion lost to check fraud, demonstrating AI's effective application in financial crime prevention.
Innovative financial solutions are also being explored, with CNET documenting the utilisation of AI chatbots to aid individuals in managing personal finances. Additionally, tech giants Amazon and Google have unveiled plans to incorporate nuclear energy into their operations, catering to the increased energy demands driven by AI.
These developments underscore the dual focus on enhancing AI capabilities while addressing the challenges and ethical considerations associated with their deployment.
Source: Noah Wire Services