Banks and other financial service providers are rolling out AI-driven products at a rapid pace.
These tools span an array of areas, including humanlike chatbots that converse with customers, enhanced surveillance and monitoring tools that counter cybersecurity and other safety threats, and increasingly sophisticated algorithms that make credit decisions.
These new tools promise increased efficiency across many important areas, but they also carry potential legal and regulatory risks. Financial service providers will need to strike the right balance as new technologies continue to emerge: Should they jump in headfirst, or sit on the sidelines waiting for the rules of the road to take shape?
In emerging areas, technology tends to outpace regulators' ability to set the ground rules. Companies are often left to their own judgment about how to interpret general regulatory guidance, agency statements or principles. Many would prefer to tread cautiously but may not see enough guidance from regulators to feel comfortable that they are on the right side of the line.
Consider a recent hot technology topic: cryptocurrency. That area was, and still is, the subject of vigorous debate over whether regulators have provided sufficient guidance.
Banks can certainly claim to be among the most heavily regulated industries, and they often have the benefit of precise regulations, guidance, recent enforcement actions or other rules of the road. When it comes to AI, regulators have for several years focused on the potential for discrimination and unfairness in the use of AI tools in financial services. Legal commentators, including at my law firm, began predicting years ago that government enforcement authorities would increasingly focus on this area to combat perceived unfairness, discrimination and a lack of transparency in consumer-facing decision making.
With the onset of the generative AI wave, the pace of guidance has only accelerated. The last few months have seen a flurry of guidance from across the government on issues relating to the use of artificial intelligence in business and banking. In April 2023, the Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission, the Department of Justice's Civil Rights Division and the Federal Trade Commission issued a joint statement outlining their commitment to preventing bias in AI.
The statement reiterated that existing enforcement powers apply to the use of AI just as they apply to other practices. This was followed by a landmark executive order from the Biden administration in October 2023, which established guidelines for AI safety and encouraged regulators to conduct their own AI risk assessments. In January, the FTC held a summit on AI, at which Samuel Levine, director of the FTC's Bureau of Consumer Protection, signaled the agency's intention to strengthen consumer protection enforcement where AI tools are discriminatory or otherwise cause harm to consumers. At the very least, this indicates that companies will increasingly be expected to hold themselves accountable for protecting consumers when they deploy AI.
In making sense of these pronouncements, reading the rules and consulting counsel are key. But one common-sense rule may stand above the rest: Understand your technology and how it is applied. The Office of the Comptroller of the Currency has identified the failure to understand an AI process or outcome, such as a lack of explainability, as a key risk associated with AI.
And more recent comments echo that concept. For example, at the FTC Tech Summit earlier this year, Atur Desai, the CFPB's deputy chief technologist for law and strategy, underscored that companies should not employ "black box" technologies to make consumer-facing decisions. As he put it: "If a company can't comply with laws like federal consumer financial laws because their technology is too complex or otherwise? Then they really shouldn't be using that technology."
Desai added that the CFPB is committed to developing knowledge of the markets that it oversees and has made integrating technological expertise into its supervision and enforcement teams a key area of focus.
Banks will need to be able to explain their AI tools well enough to defend them, particularly when those tools are used to make credit decisions. Recent CFPB guidance on adverse action notifications under the Equal Credit Opportunity Act reiterated this point, noting that the "specific" reasons for a credit denial must be provided even where complex "black box" algorithms have been used. Relying on the generic language in existing forms may not be sufficient.
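To make the explainability point concrete, here is a minimal, hypothetical sketch of how "specific" reasons might be derived from an algorithmic credit score. It assumes a simple linear scoring model so that each feature's contribution to an individual decision can be computed directly; the feature names, weights and threshold are invented for illustration, and a genuinely complex "black box" model would require more sophisticated attribution techniques. This is a sketch of the underlying idea, not legal guidance or any regulator's prescribed method.

```python
# Hypothetical sketch: deriving adverse action reasons from a linear credit
# scoring model. All feature names, weights and thresholds are invented.
import numpy as np

FEATURES = ["credit_utilization", "delinquencies_24mo",
            "account_age_years", "income_to_debt_ratio"]
WEIGHTS = np.array([-1.8, -2.4, 0.9, 1.5])  # hypothetical model weights
INTERCEPT = 0.2
APPROVAL_THRESHOLD = 0.5

def score(x: np.ndarray) -> float:
    """Logistic score in [0, 1] for one applicant's standardized features."""
    return 1.0 / (1.0 + np.exp(-(WEIGHTS @ x + INTERCEPT)))

def adverse_action_reasons(x: np.ndarray, top_n: int = 2) -> list[str]:
    """Return the features that pulled this applicant's score down the most.

    For a linear model, each feature's contribution is just weight * value,
    so the most score-lowering factors can populate the "specific reasons"
    on an adverse action notice.
    """
    contributions = WEIGHTS * x              # per-feature effect on the score
    worst = np.argsort(contributions)[:top_n]  # most negative first
    return [FEATURES[i] for i in worst if contributions[i] < 0]

applicant = np.array([1.4, 2.0, -0.5, -0.3])  # standardized feature values
if score(applicant) < APPROVAL_THRESHOLD:
    print("Denied; principal reasons:", adverse_action_reasons(applicant))
```

For a model complex enough that contributions cannot be read off directly, a bank pairing it with post-hoc explanation tools would still need to validate that the generated reasons accurately reflect the model's actual behavior, which is precisely the "understand your technology" obligation the guidance describes.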
Michaela Croft, Sol Gelsomino and Pejman Yousefzadeh of Jenner & Block assisted in the preparation of this article.