BankThink

Bank regulators have concepts of a plan to deal with AI


Many years ago, when I was in college, I worked on a charter fishing boat for a summer. I learned a great deal over those months — how to tie a line and net a fish, how to cut bait and weigh anchor, the different kinds of fish in the Chesapeake Bay, when they run and which ones have sharp teeth. But the two most important lessons I learned were, first, that I am not capable of earning a living through manual labor, and second, that confident obfuscation is a suitable alternative to expertise. Or, as the first mate who trained me put it, "If you can't dazzle them with brilliance, baffle them with [nonsense]."

Last week, seemingly every member of the current and soon-to-be-outgoing class of federal bank regulators offered thoughts about the future of artificial intelligence in financial services, occasioned in part by a conference on the subject hosted by the Washington, D.C., think tank FinRegLab. The subject is timely and important — the public debut of generative AI two years ago left an exciting and occasionally frightening impression on the world, much the way fire, the wheel and the telephone did when they made their debuts. But how this breakthrough will change financial services, and how regulators will approach this new technology, is still being worked out.

The potential use cases for AI are manifold. Banks have already been using machine learning and artificial intelligence for years in low-risk applications like customer service chatbots and fraud screening, but have not yet incorporated the technology into more business-critical applications like loan underwriting or anti-money-laundering compliance. The benefits of using AI are easy to understand: If a machine can do a better or faster job of detecting money laundering operations or approving creditworthy borrowers, that's a good thing. The drawbacks are equally easy to understand: If AI ends up steering all banks into making the same prudential mistakes, that's a bad thing. To that end, there are risks for regulators in being too cautious about allowing banks to try AI in more important roles — banks in other countries can gain a competitive advantage — and equal, countervailing risks in being too permissive.

The paragraph above represents something like the consensus among regulators and lawmakers about the importance of addressing AI in banking, and it seemed to me that there is also something like a consensus on where to go from here. Sen. Mike Rounds, R-S.D., touted his regulatory sandbox bill last week, arguing for a framework whereby banks and other financial services companies can try new ways of applying AI to their offerings without fear of enforcement reprisal if things don't go according to plan.

"The oversight coming from the agencies is one that says, 'OK, here's what we want to pursue,' so it's a joint venture, so to speak, and everybody wins," Rounds said. "You want them to offer these products and services to you, but if they're restricted because of regulatory oversight — this is not allowed, we don't know how to do that — then we'll never get ahead, and other people in other countries will do this to us."

Acting Comptroller of the Currency Michael Hsu — whose panel immediately followed Rounds' comments — said more or less the same thing, but emphasized that any regulatory "sandbox" should not dispense with bank liability altogether.

"[We need] supervised learning as regulators, where we're in a space where we're co-learning as these things are being developed, rather than giving permission for a [sandbox] and then if something bad happens, someone says, 'No, you told me it would be OK,'" Hsu said. "I think this requires a bit more of an interactive engagement for this to work."

Federal Reserve Vice Chair for Supervision Michael Barr emphasized that the biggest risk AI poses is concentration: If a limited number of models that all work more or less the same way take on more and more functions — and perform those functions quickly and automatically — then a small blip could cause a cascading failure in the financial system that might not have occurred if humans were at the helm.

"You might worry about generative AI models that are very concentrated and correlated with each other, and if you have a handful of generative AI models … and ubiquity of use of those models, you'll have real risks that you'll see herding behavior in the market, and generative AI might facilitate contagion in the market," Barr said. "If you combine that ubiquity with speed, which is already present in the market, and automaticity — that is, taking the human out of the loop, which is not in the market — that can be a real recipe for instability in the financial system."

Federal Reserve Gov. Michelle Bowman — a Trump appointee and dissenting voice on the Fed board — made her own speech on AI last Friday at a separate event, emphasizing that regulating AI as a category makes less supervisory sense than considering the risks and benefits of each use case individually.

"We must have an openness to the adoption of AI," Bowman said. "We should avoid fixating on the technology and instead focus on the risks presented by different use cases. These risks may be influenced by a number of factors, including the scope and consequences of the use case, the underlying data relied on, and the capability of a firm to appropriately manage these risks."

What is striking to me about these comments taken together is that they aren't really incompatible — everyone wants to avoid catastrophe, nobody wants to be left behind, we can't just ban AI and we can't just let banks do whatever they want and escape liability. But the problem is that none of these individuals is really empowered to articulate how the regulatory apparatus will proceed so much as they are empowered to emphasize one or another aspect of the challenge.

Much of what is scary about AI is that it is new, and new things can be scary because we don't yet know how they work — people had similar fears about electricity when it first became commonplace. New things can also inspire unfounded optimism that they can solve all our problems lickety-split — an optimism once attached to radiation when it was first discovered, with disastrous results.

But ready or not, AI is coming to the banking industry, and while we may look to our regulators and lawmakers to tell us how it's going to be, we simply don't know, because, for the most part, we haven't tried it yet. Banks are waiting for a green light from regulators to try new things without planting the seeds of some future enforcement action, while regulators want to give that green light just as soon as they're sure they're not planting the seeds of some future economic disaster. But sooner or later we are going to take the leap. What everyone is looking for is a little push.
