California's Landmark AI Safety Law
Source: sd11.senate.ca.gov
California Enacts AI Safety Measures
Governor Newsom has signed Senate Bill 53, authored by Senator Scott Wiener, into law. The legislation establishes the nation's first transparency requirements for the safety practices of the most advanced AI models. It also creates a public cloud compute cluster, “CalCompute,” to broaden access to AI innovation, and it provides whistleblower protections for employees at leading AI labs.
SB 53 received bipartisan support in the Legislature and is based on recommendations from a group of leading AI experts assembled by Governor Newsom. Adhering to the principle of “trust, but verify,” SB 53 mandates that the largest AI firms make their safety and security protocols publicly available. They must also report significant safety incidents and safeguard whistleblowers. The CalCompute initiative aims to foster AI industrial growth by delivering AI infrastructure to both startups and researchers.
Senator Wiener stated that with such transformative technology, there is a responsibility to encourage innovation while implementing sensible safeguards to understand and mitigate potential risks. He expressed gratitude to the Governor for his leadership in establishing the Joint California AI Policy Working Group and for his collaboration in refining and enacting the bill.
Key Aspects of SB 53
- Requires AI companies to disclose safety plans.
- Mandates reporting of safety incidents.
- Protects AI whistleblowers.
- Creates CalCompute for public AI infrastructure.
SB 53's transparency requirements follow a recent U.S. Senate vote affirming states' authority to regulate AI. CalCompute builds on Senator Wiener's earlier legislation to stimulate semiconductor and advanced manufacturing in California, as well as his work to guarantee open internet access through net neutrality law.
Addressing AI Risks and Benefits
Advances in AI have delivered significant benefits, including faster drug development, better medical diagnostics, improved climate modeling, and wildfire prediction. AI is also transforming education, increasing agricultural output, and helping to solve complex scientific problems. However, leading AI companies and researchers acknowledge that as AI models grow more powerful, the potential for catastrophic risk grows with them.
The Working Group report highlighted the growing evidence that foundation models could contribute to risks associated with chemical, biological, radiological, and nuclear weapons. It also noted concerns about the loss of control, with AI companies themselves reporting significant capability leaps across various threat categories.
To counter these risks, AI developers, including Meta, Google, OpenAI, and Anthropic, have made voluntary pledges to perform safety testing and establish robust safety and security measures. SB 53 formalizes these voluntary commitments to promote fairness and ensure greater accountability within the AI sector.
Background on the Working Group
In September 2024, Governor Newsom formed the Joint California Policy Working Group on AI Frontier Models, following his veto of Senator Wiener’s SB 1047. The group was tasked with developing practical guidelines for deploying GenAI, grounded in empirical analysis of frontier models and their risks. The Working Group, whose members included Dr. Fei-Fei Li, Dr. Mariano-Florentino Cuéllar, and Dr. Jennifer Tour Chayes, released its final report on June 17, advocating a “trust, but verify” approach that balances AI’s risks against its benefits.
SB 53 draws on the Working Group report’s recommendations. The Attorney General is authorized to impose civil penalties for violations of the act, but SB 53 creates no new liability for damages caused by AI systems.
Global Implications
While the EU AI Act mandates that companies share their safety and security plans, those disclosures are made privately to government bodies. SB 53 requires public disclosure, albeit at a higher level of generality, to ensure greater accountability. Furthermore, SB 53 includes a first-of-its-kind requirement that companies report safety incidents involving deceptive behavior by autonomous AI systems. For example, if an AI system lies about the effectiveness of its controls during testing, increasing the risk of significant harm, the developer must report the incident to the Office of Emergency Services.
SB 53 is backed by Encode AI, Economic Security California Action, and the Secure AI Project. According to Nathan Calvin of Encode AI, California is setting common-sense safeguards to protect the public. Teri Olle, Director of Economic Security California Action, added that CalCompute will broaden access to AI infrastructure. Thomas Woodside of Secure AI Project noted that California has taken a significant stride toward addressing the risks of frontier AI.