California AI bill veto opens door to bank-unfriendly revisions

Gavin Newsom
The vetoed bill would have imposed several restrictions on the largest AI model providers, including a requirement that they be able to shut their models down instantly.

California Governor Gavin Newsom has vetoed a bill that would have required foundation or "frontier" model providers like OpenAI to test and audit their software, made them liable for harms their models cause and forced them to create "kill switches" that could immediately halt a model's work.

Instead, Newsom said he has signed 17 bills over the past 30 days that cover the deployment and regulation of generative AI technology. 

"This was all a balancing act for Governor Newsom," said John Cunningham, a corporate compliance and investigations partner at Dickinson Wright, in an interview. "This is about the cost-benefit issue and the balance between continued innovation in the AI area and reasonable regulatory oversight. If we can put reasonable reins around what we do with AI, this is going to be good for everybody."

In explaining why he vetoed the bill, Newsom said its focus on the largest providers of AI models was misplaced. "SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," he said.

Some of the most critical decisions and most sensitive data are in financial services. Banks' use of AI in lending and hiring decisions has been heavily scrutinized by their regulators, but it has not been subject to new laws. A revised bill that focuses on riskier use cases could have banks in its crosshairs.

California is one of several states trying to put guardrails around advanced AI in the absence of national laws. California, Pennsylvania, Massachusetts, New Jersey and the District of Columbia have had AI laws on the books for some time. Another five states – Colorado, Illinois, Maryland, New York and Utah – enacted AI legislation this year, according to a state law tracker maintained by law firm Husch Blackwell. And eight states tried but failed to pass AI laws this year.

National AI laws have been floated in the U.S. Congress. For instance, an AI Civil Rights Act was introduced in the Senate last week that would outlaw discrimination by corporate algorithms, require independent testing of AI models and ensure consumers have the option to have a decision made by a human being instead of AI.

The vetoed bill

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, would have been the toughest state AI legislation in the country. California has long been known for strict consumer protections. For instance, it enacted the nation's first comprehensive state data privacy law, the California Consumer Privacy Act, whose provisions were taken up by many other states.

"California, like New York, is often on the vanguard of regulation," Cunningham said. "So a lot of folks will look to them to say, hey, how do we start to get our hands around the regulatory piece here before AI gets too far, from a regulatory perspective? A lot of folks rely on states like California and New York to be the deeper thinkers on that."

SB 1047 would have required developers of large artificial intelligence models, like OpenAI, Anthropic, Google and Meta, to put safeguards and policies in place to prevent catastrophic harm. For instance, they would have had to provide a "kill switch" that could shut down their systems, along with safety plans and audit reports. The bill would also have provided protections for whistleblowers and established a state entity, the Board of Frontier Models, to oversee the development of these models.

More than 125 Hollywood celebrities, including Mark Hamill, Jane Fonda and Alec Baldwin, signed a letter urging Newsom to sign the bill.

Many of the companies that would have been affected by the bill, including OpenAI, Meta, Google and Anthropic, are based in California. In a letter to California state Sen. Scott Wiener, the Democrat who proposed the bill, OpenAI chief strategy officer Jason Kwon said the bill would hinder innovation and that AI regulation should be left to the federal government.

Newsom said the bill's focus on only the most expensive and large-scale models could give the public a false sense of security, whereas smaller, specialized models could be just as dangerous. 

Newsom said his decision was guided by several AI experts, including Fei-Fei Li, a professor of computer science at Stanford University; Tino Cuéllar, a member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research; and Jennifer Tour Chayes, dean of the College of Computing, Data Science, and Society at UC Berkeley. He has asked these advisors to help develop responsible guardrails for the deployment of generative AI.

Banking experts say the shift from policing the foundational model providers to smaller, more specific laws governing generative AI makes sense.

"The regulation as written was too general and risked pushing innovative companies out of California with little concrete consumer protection to show for such potential impact," said Ian Watson, research director at Celent. "Pushing it out to smaller teams of experts not only allows California more time to let a more national consensus form but it sets up the possibility to draft a series of more targeted industry specific regulations that address tangible pain points for state politicians' constituents."

Some thought the California bill's focus on the existential risks of AI was misguided. 

"AI can be very dangerous, but I strongly feel the immediate dangers are to consumers, via predatory practices and surveillance, and to our democratic institutions, via misinformation and surveillance, and not to humanity's survival," said Patrick Hall, assistant professor at George Washington University.

It's too early to say what a new law might look like. 

"Newsom's messaging sounds like he would like a tougher and better bill, but that doesn't mean he will get one," Hall said. "My research and experience leads me to believe that regulating use cases, and the people around those use cases — such as establishing chief model risk officers — is much more effective than directly regulating a technology."

Hall liked some aspects of the vetoed California bill, such as the kill switch requirement. 

"I have argued for this for years as it's clear that AI systems can malfunction sometimes and turning them off quickly is a good option in some cases," he said.

The 17 bills California has passed

The 17 AI bills Newsom has signed are intended to crack down on deepfakes, require AI watermarking, protect children and workers, and combat AI-generated misinformation. 

Several apply to foundation model developers and companies, like banks, that use those models. 

For instance, one of the bills (AB 1008) clarifies that the California Consumer Privacy Act applies to personally identifiable information stored by AI systems. Another bill (AB 1836) prohibits anyone from producing, distributing or making available a digital replica of a deceased personality's voice or likeness without prior consent.

A third, AB 2013, requires AI developers to post information on the data used to train the AI system or service on their websites. A fourth, SB 942, requires the developers of covered generative AI systems to include provenance disclosures in the original content their systems produce and make tools available to identify generative AI content produced by their systems. 
