Virginia's governor is the latest to veto an AI bill. What's next?

Glenn Youngkin, governor of Virginia, spoke at President Donald Trump's inauguration in January. (Photo: Victor J. Blue/Bloomberg)

Proposed state laws that would create rules for the use of artificial intelligence have been falling like dominoes in the wake of President Trump's executive order calling for U.S. leadership and innovation in AI — an about-face from his predecessor's focus on consumer protection and safety. But experts say AI regulation could still emerge, from Democratic-leaning states and perhaps the federal government itself.

This week, Virginia Gov. Glenn Youngkin killed the High-Risk Artificial Intelligence Developer and Deployer Act, a bill that aimed to prevent algorithmic discrimination and other potential harms from AI models. HB 2094, which passed the Virginia legislature in February, would have applied to businesses that use AI for "consequential decisions," including for financial or lending services. It would have required developers and deployers to avoid algorithmic discrimination, establish risk management policies and programs, and provide disclosures.

The experts interviewed for this article agreed that Youngkin was reacting to the president's executive order.

"Governor Youngkin's veto aligns with the Trump administration's AI focus on reducing regulatory burdens on AI rather than erecting safeguards around AI," said Michele Alt, partner at Klaros Group. "I think we are likely to see a red state-blue state divide on this issue, with red states aligning on the deregulatory approach and blue states advancing new safeguards. My guess is that California and New York are likely to lead the blue states on this issue."

In September, California Gov. Gavin Newsom vetoed an AI bill that would have forced foundation model providers like OpenAI to test and audit their software, made them liable for harms their models cause and required them to create "kill switches" that could immediately halt a model's work. In explaining why he vetoed the bill, Newsom said its focus on the largest providers of AI models was misplaced. "SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," he said.

Newsom has signed 17 AI bills that crack down on deepfakes, require AI watermarking, protect children and workers, combat AI-generated misinformation and more.

In October, the New York State Department of Financial Services added guidance to its existing cybersecurity regulation that focused on the security risks of AI. "It is obviously one of those risks that keeps everybody up at night," the department's superintendent, Adrienne Harris, said at an American Banker event at the time.

Utah, Colorado, Illinois, Maryland, the District of Columbia, Ohio, Pennsylvania and New Jersey have laws governing AI and/or automated decision-making tools, according to a state law tracker maintained by law firm Husch Blackwell. 

Under the Colorado law, if a developer or deployer of generative AI determines that the technology discriminates, or is likely to discriminate, against consumers, it must self-report that to the attorney general. This may be a difficult provision for businesses to comply with; the law may still be amended.

Other states may step in to fill a perceived void in regulatory enforcement at the federal level.

"I think right now we are at a crossroads in terms of how states and the federal government want to regulate AI, if at all, and what that looks like," said Jennifer Everett, partner at the law firm Alston & Bird.

Meanwhile, banking regulators have repeatedly reminded banks of the need for explainability, transparency, fairness, lack of bias and lack of disparate impact in their AI models, especially those used for lending decisions. They have warned banks that their AI models must comply with all existing regulation, including the Equal Credit Opportunity Act and disparate impact rules. They have also issued third-party risk management guidance that applies to banks' AI technology providers as well as to all their other suppliers.

It's early days for the Trump executive order on AI — the comment period closed March 15 and generated more than 8,700 submissions.

"There's a long way to go before the high-level principles are translated into more concrete rules," said Dan Latimore, chief research officer at The Financial Revolutionist. "All else being equal, avoiding regulatory fragmentation is a worthy goal. If, however, some blue states see those federal rules as lacking sufficient consumer protection, particularly with respect to anti-discrimination provisions, then they're likely to enact stronger rules."

Developing sound regulations takes time and typically lags the fast-paced development of new technology, Latimore said. 

"Tech companies abhor regulatory uncertainty, so any steps to give some clarity around AI regulation, particularly at the federal level, will be welcome," he said.
