WASHINGTON — The Federal Reserve is paying close attention to how its responsibilities as a regulator relate to artificial intelligence, and it has already identified several risks that advanced technology might pose to banks, according to Federal Reserve Gov. Lael Brainard.
Banks are increasingly interested in using machine-learning technology for a range of projects, and AI innovation is advancing far more quickly than expected, said Brainard, speaking Tuesday at a fintech conference in Philadelphia.
“The potential breadth and power of these new AI applications inevitably raise questions about potential risks to bank safety and soundness, consumer protection or the financial system,” she said. “It is incumbent on regulators to review the potential consequences of AI, including the possible risks, and take a balanced view about its use by supervised firms.”
However, as the technology rapidly develops, the Fed has identified several challenges that AI poses to banking, particularly around transparency and banks' ability to explain the technology to customers, or even to their own employees.
“The challenge of explainability can translate into a higher level of uncertainty about the suitability of an AI approach, all else equal,” Brainard said. “So how does, or even can, a firm assess the use of an approach it might not fully understand?”
The wide adoption of AI will also present new opportunities for fraudsters, and phishing attempts could become highly targeted, mass-produced and harder to detect. As a result, defensive tools will need to be kept under wraps, Brainard said.
“Supervised institutions will likely need tools that are just as powerful and adaptable as the threats that they are designed to face, which likely entails some degree of opacity,” she said. “In cases where … AI tools may be used for malevolent purposes, it may be that AI is the best tool to fight AI.”
To offset the opacity of defensive technology, banks will need to subject AI tools to appropriate controls, Brainard stressed.
“We would expect firms to apply robust analysis and prudent risk management and controls to AI tools, as they do in other areas, as well as to monitor potential changes and ongoing developments,” she said.
Additionally, the lack of transparency around AI tools could make it difficult for financial institutions to explain credit decisions to consumers, leaving consumers with little insight into how changing their behavior might improve their credit, she said. But AI developers have recognized this issue and are already working on "explainable AI tools with a focus on expanding consumer access to credit."
And while AI could facilitate financial inclusion by providing wide access to a range of services, that doesn't mean the technology is free from bias, said Brainard.
“Algorithms and models reflect the goals and perspectives of those who develop them as well as the data that trains them and, as a result, AI tools can reflect or ‘learn’ the biases of the society in which they were created,” she said.
Studies have shown that the image recognition error rate for some AI systems is now lower than the human error rate, but that doesn't mean the technology is infallible, underscoring the need for enhanced supervision and analysis, Brainard warned.
“There are plenty of examples of AI approaches not functioning as expected — a reminder that things can go wrong,” she said. “It is important for firms to recognize the possible pitfalls and employ sound controls now to prevent and mitigate possible future problems.”