Fed's Barr: Core providers need to step up cybersecurity

Federal Reserve Gov. Michael Barr.
Anna Rose Layden/Bloomberg

A Federal Reserve governor called on core banking system providers to do more to safeguard community banks against emerging cyber threats. 

Fed Gov. Michael Barr said core providers have not invested sufficiently in fraud-detection systems to keep pace with the rapid evolution of new and more sophisticated methods of attack.

"It's not clear to me that we're making progress in relation to the risk. The risks are growing faster than the progress that we've seen, and in my experience in other areas, the core service providers are pretty slow to bring new technology to bear in these ways," Barr said. "It's an ongoing risk, an important risk to monitor, and one where we're likely to see significant problems for quite some time. I know that small banks would like to be in a much better place than they are."

Barr said smaller banks are still struggling to deal with check fraud and rudimentary scams. When it comes to more sophisticated schemes, including so-called "deepfakes" — voices and videos produced by generative artificial intelligence programs, or gen AI — these institutions and their systems providers are woefully behind.

He added that the vulnerabilities created by this lack of progress are particularly dire for individual small banks.

"The financial system is only as secure as its weakest link," Barr said. "It is often the case that even though smaller institutions may be a smaller target, they also tend to have less robust cyber practices, and that can create enormous risk for them as individual institutions, for their customers and for the broader financial system."

For core providers, he said, the objective is not just to create better protection systems, but also to deliver them at scale and at a price point that makes them affordable.

"It's really important that the core service providers who they work with for so many of their systems invest sufficiently in cyber defense and bring down the cost of that cyber defense so that smaller institutions can actually afford to deploy it," Barr said. "We have a very long way to go in that regard."

Barr's comments came during a Thursday conference on cybersecurity hosted by the Federal Reserve Bank of New York. 

In prepared remarks, the former Fed vice chair for supervision called for banks and regulators alike to get better acquainted with deepfake technology to understand the risks it poses to the banking system and how those threats might be mitigated. 

Barr noted that many banks rely on voice detection as a form of identity verification and warned that such systems could be vulnerable to deepfake gen AI attacks. Because of this, he urged banks and service providers to invest in more advanced authentication processes, including "facial recognition, voice analysis and behavioral biometrics."

He also flagged concerns about the presence of gen AI in investment platforms, noting that trading systems that rely on the technology could be vulnerable to collusion and "herding" practices that ultimately prove harmful to markets. 

Barr acknowledged that his latest comments on gen AI were "darker" than some of his previous discussions of the technology. He added that the innovation holds great promise not only in the financial sector but also the medical field, the energy sector and other critical parts of the economy. 

Barr said regulators should not stifle the use of AI innovations, but rather harness them as a means for keeping up with bad actors or simply tracking traditional banking risks and vulnerabilities more efficiently.

"Regulators should consider how we could leverage AI technologies ourselves, including to enhance our ability to monitor and detect patterns of fraudulent activity at regulated institutions in real time," he said. "This could help provide early warnings to affected institutions and broader industry participants, as well as to protect our own systems."

Barr also called for regulators to team up with law enforcement entities to better catch fraudsters who rely on gen AI technology and to enforce punishments that serve as sufficient deterrents.

"This includes targeting the upstream organizations that benefit from illegal action and strengthening anti-money-laundering laws to disrupt illicit fund flows and freeze assets related to cybercrime," he said. "The fear of severe legal consequences could help to deter bad actors from pursuing AI-driven fraud schemes in the first place."
