"Regulators must make clear that mere acknowledgement of a less-discriminatory [consumer lending] model is not, alone, evidence of past wrongdoing," Yolanda D. McGill of Zest AI writes as part of her call for policies that promote continuous improvement of underwriting systems.
On March 30, 2023, Patrice Ficklin, head of the Consumer Financial Protection Bureau's Office of Fair Lending, publicly clarified for the first time that consumer lenders have an affirmative duty to monitor, refine and update lending models to ensure that no less-discriminatory alternatives are available. This statement is critical because the pursuit of less-discriminatory alternative (LDA) underwriting models does not happen consistently enough, for a variety of reasons, including that LDA searches have historically been cumbersome to pursue and may result in less accurate models. Fortunately for the millions of Americans historically underserved by our financial system, new artificial intelligence and machine learning tools can facilitate more effective searches that quickly and efficiently yield multiple less-discriminatory and equally accurate alternative models.
Against this backdrop, Ficklin's clarification seems like a simple and clear affirmation of the Equal Credit Opportunity Act and its implementing regulation, Regulation B. Taken in conjunction with the bureau's warning to lenders against using technologies in ways that hamper compliance, the bureau's fair lending clarification could ultimately prove to be a watershed moment in advancing the use of AI in consumer finance to enhance fairness and financial inclusion. For this moment to be realized, however, regulators must take additional bold action to ensure that American consumers benefit from proper application of a law intended to increase fairness, inclusion and, ultimately, access to credit.
First, the bureau and other regulators should explicitly recognize that LDA search using AI tools is an advantageous application of the technology for financial services, given AI's ability to rapidly compare multiple models in searching for alternatives that are fairer and less discriminatory. Under the Equal Credit Opportunity Act, all lenders are required to assess whether current lending models have a discriminatory impact on protected classes, then ascertain whether LDAs are available that would satisfy their legitimate business objectives. Advances in fair lending analytics are making these searches more accessible and efficient for all lenders, with significant benefits for consumers.
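In broad strokes, an LDA search compares candidate underwriting models on two axes: predictive accuracy and disparity in outcomes between protected and control groups. The sketch below illustrates that idea only; the function names, the use of the adverse impact ratio as the disparity measure, and the accuracy tolerance are illustrative assumptions, not regulatory standards or any lender's actual methodology.

```python
# Hypothetical sketch of a less-discriminatory alternative (LDA) search:
# among candidate models whose accuracy is comparable to the baseline,
# prefer the one with the least disparity in approval rates.
# All names, metrics and thresholds are illustrative assumptions.

def adverse_impact_ratio(approvals_protected, apps_protected,
                         approvals_control, apps_control):
    """Protected-class approval rate divided by control-group approval rate.
    Values near 1.0 mean similar approval rates; lower values mean
    greater disparity."""
    rate_protected = approvals_protected / apps_protected
    rate_control = approvals_control / apps_control
    return rate_protected / rate_control

def search_lda(candidates, baseline_accuracy, accuracy_tolerance=0.01):
    """Return the candidate with the highest adverse impact ratio among
    those within `accuracy_tolerance` of the baseline accuracy, or None."""
    viable = [c for c in candidates
              if c["accuracy"] >= baseline_accuracy - accuracy_tolerance]
    if not viable:
        return None
    return max(viable, key=lambda c: c["air"])

# Illustrative candidates: accuracy and a pre-computed adverse impact ratio.
candidates = [
    {"name": "legacy", "accuracy": 0.820, "air": 0.74},
    {"name": "alt_a",  "accuracy": 0.818, "air": 0.88},
    {"name": "alt_b",  "accuracy": 0.790, "air": 0.95},  # accuracy too low
]
best = search_lda(candidates, baseline_accuracy=0.820)
print(best["name"])  # alt_a: comparable accuracy, less disparity
```

The design choice worth noting is the tolerance band: rather than demanding a strictly more accurate model, the search accepts candidates that satisfy the lender's legitimate business objectives while reducing disparity, which is the tradeoff the article's LDA framework contemplates.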
Recent research published by the nonprofit FinRegLab highlighted the potential advantages of using AI tools to comply with LDA search requirements (as well as the risks of using AI without adequate attention to fairness). Advanced, explainable AI technologies for credit underwriting that include robust LDA searches as part of their fair lending testing foster fairness and inclusion in financial services.
Second, as my colleague argued in these pages last year, regulators must make clear that mere acknowledgement of a less-discriminatory model is not, alone, evidence of past wrongdoing. Today, whether due to a lack of sophistication in developing and testing alternative models, inertia or apathy, or fear that acknowledging an LDA may somehow indicate wrongdoing with respect to legacy models, many lenders fail to pursue robust LDA searches. Instead, lenders should be encouraged to perform robust LDA searches and improve models rather than stick with the status quo out of fear of incurring liability.
And finally, as we explained in our December 2020 comment letter, the bureau should issue public guidance regarding LDA regulatory expectations, including how the bureau assesses the robustness of LDA search techniques and methodologies. Clarity about the metrics and factors lenders should consider in LDA search and deployment processes, and about options for balancing fairness with accuracy, would accelerate alignment with the bureau's express expectations. Coherent, compliant application of AI technology holds real promise for American consumers and the financial services providers who serve them.