Many years ago when I was in college I worked on a charter fishing boat for a summer. I learned a great deal over those months — how to tie a line and net a fish, how to cut bait and weigh an anchor, the different kinds of fish in the Chesapeake Bay, when they run and which ones have sharp teeth. But the two most important lessons I learned were, first, that I am not capable of earning a living through manual labor, and second, that confident obfuscation is a suitable alternative to expertise. Or, as the first mate who trained me put it, "If you can't dazzle them with brilliance, baffle them with [nonsense]."
Last week it seemed all of the current and now-outgoing class of federal bank regulators offered their thoughts about the future of artificial intelligence in financial services, occasioned in part by a conference on the subject from DC think tank FinRegLab. The subject is timely and important — the public debut of generative AI two years ago left an impression on banks and regulators alike, and both are still working out what the technology means for the industry.
The potential use cases for AI are manifold. Banks have already been using machine learning and artificial intelligence for years in areas like fraud detection, underwriting and compliance, and generative AI promises to push that footprint further into the business.
The paragraph above represents something like the consensus among regulators and lawmakers about the importance of addressing AI in banking, and it seemed to me that there is also something like a consensus on where to go from here. Sen. Mike Rounds, R-S.D., touted his proposal to let banks and their regulators test AI applications together in a sandbox-style arrangement rather than leaving the industry to guess what is permissible.
"The oversight coming from the agencies is one that says, 'OK, here's what we want to pursue,' so it's a joint venture, so to speak, and everybody wins," Rounds said. "You want them to offer these products and services to you, but if they're restricted because of regulatory oversight — this is not allowed, we don't know how to do that — then we'll never get ahead, and other people in other countries will do this to us."
Acting Comptroller of the Currency Michael Hsu — whose panel immediately followed Rounds' comments — took a somewhat different view, arguing that regulators need to learn alongside banks as the technology develops rather than simply grant blanket permission to experiment.
"[We need] supervised learning as regulators, where we're in a space where we're co-learning as these things are being developed, rather than giving permission for a [sandbox] and then if something bad happens, someone says, 'No, you told me it would be OK,'" Hsu said. "I think this requires a bit more of an interactive engagement for this to work."
Federal Reserve vice chair for supervision Michael Barr emphasized that the biggest risk AI poses is concentration: If a limited number of models that all work more or less the same way take on more and more functions — and perform those functions quickly and automatically — then a small blip could cause a cascading failure in the financial system that might not have occurred if humans were at the helm.
"You might worry about generative AI models that are very concentrated and correlated with each other, and if you have a handful of generative AI models … and ubiquity of use of those models, you'll have real risks that you'll see herding behavior in the market, and generative AI might facilitate contagion in the market," Barr said. "If you combine that ubiquity with speed, which is already present in the market, and automaticity — that is, taking the human out of the loop, which is not in the market — that can be a real recipe for instability in the financial system."
Federal Reserve Gov. Michelle Bowman — a Trump appointee and dissenting voice on the Fed board — made the case for regulatory openness, urging supervisors to focus on the risks of particular use cases rather than on the technology itself.
"We must have an openness to the adoption of AI," Bowman said. "We should avoid fixating on the technology and instead focus on the risks presented by different use cases. These risks may be influenced by a number of factors, including the scope and consequences of the use case, the underlying data relied on, and the capability of a firm to appropriately manage these risks."
What is striking to me about these comments taken together is that they aren't really incompatible — everyone wants to avoid catastrophe, nobody wants to be left behind, we can't just ban AI and we can't just let banks do whatever they want and escape liability. But the problem is that none of these individuals is really empowered to articulate how the regulatory apparatus will proceed so much as to emphasize one or another aspect of the challenge.
Much of what is scary about AI is that it is new, and new things can be scary because we don't yet know how they work — people had similar misgivings about earlier innovations that now seem unremarkable.
But ready or not, AI is coming to the banking industry, and while we may look to our regulators and lawmakers to tell us how it's going to be, we simply don't know because for the most part, we haven't tried yet. Banks are waiting for a green light from regulators to try new things without planting the seeds of some future enforcement action, while regulators want to give that green light just as soon as they're not planting the seeds of some future economic disaster. But sooner or later we are going to take the leap. What everyone is looking for is a little push.