Hsu says banks and AI companies should share responsibility for model errors


WASHINGTON — Acting Comptroller of the Currency Michael Hsu on Thursday said artificial intelligence providers and end-users like banks should develop a framework of shared responsibility for errors that stem from adoption of AI models. 

"In the cloud computing context, the 'shared responsibility model' allocates operations, maintenance, and security responsibilities to customers and cloud service providers depending on the service a customer selects," he said. "A similar framework could be developed for AI."

Hsu said the U.S. Artificial Intelligence Safety Institute — a department of the National Institute of Standards and Technology that was created late last year — may be the appropriate agency to take up the task of hammering out the details of such a shared-responsibility framework.

The statements came in a speech before the 2024 Conference on Artificial Intelligence and Financial Stability, hosted jointly by the Financial Stability Oversight Council and the Brookings Institution. Hsu's remarks came only hours after the Treasury Department issued a request for information from the public on the risks and potential benefits of AI. The agency says it is hoping to better understand how AI's risks can be mitigated while harnessing its potential to streamline processes and promote economic inclusion.

As the financial industry's interest in artificial intelligence and machine learning grows, banking regulators have been trying to better understand the potential risks and benefits of the technology.

Hsu likened AI's adoption in the financial sector to the trajectory of electronic trading 20 years ago, in which the technology begins as a novelty, then becomes a more trusted tool before emerging as an agent in its own right. In AI's case, Hsu said, the technology will first provide information to firms, then assist them in making operations faster and more efficient, before ultimately graduating to autonomously executing tasks.

He said that banks need to be wary of the steep escalation of risk as they progress through these stages. Establishing safeguards — or "gates" — between each stage will be essential to ensure that firms can pause and evaluate AI's role before that role is expanded.

"Before opening a gate and pursuing the next phase of development, banks should ensure that proper controls are in place and accountability is clearly established," Hsu said. "We expect banks to use controls commensurate with a bank's risk exposures, its business activities and the complexity and extent of its model use."

Hsu also touched on the financial stability risks of AI, saying the emerging technology is significantly increasing the ability of bad actors to implement increasingly sophisticated attacks and scams. And while firms may be impatient to adopt a technology that promises enhanced efficiency and profitability, Hsu noted that consumers appear willing to endure some inefficiency in the name of safety and security.

"Say an AI agent … concludes that to maximize stock returns, it should take short positions in a set of banks and spread information to prompt runs and destabilize them," he said. "This financial scenario seems uncomfortably plausible given the state of today's markets and technology."

Hsu said firms should also be wary of AI's potential to expand their liabilities as well as their efficiency, citing a case in Canada in which a chatbot gave a customer incorrect information about how to obtain a bereavement fare and the airline employing the chatbot was ultimately held liable for the error.

AI systems are harder to hold accountable than company websites or staff, Hsu said, since AI's complex and often opaque nature makes it difficult to pinpoint responsibility and fix errors. The same principle applies to using AI to assist in credit underwriting: consumers denied by AI might question the fairness of such decisions, Hsu said.

"With AI, it is easier to disclaim responsibility for bad outcomes than with any other technology in recent memory," he said. "Trust not only sits at the heart of banking, it is likely the limiting factor to AI adoption and use more generally."
