"Regulators must make clear that mere acknowledgement of a less-discriminatory [consumer lending] model is not, alone, evidence of past wrongdoing," Yolanda D. McGill of Zest AI writes as part of her call for policies that promote continuous improvement of underwriting systems.
On March 30, 2023, Patrice Ficklin, head of the Consumer Financial Protection Bureau's Office of Fair Lending, publicly clarified for the first time that consumer lenders have an affirmative duty to monitor, refine and update lending models in order to ensure that there are no less-discriminatory models available. This statement is critical because pursuit of less-discriminatory alternative (LDA) underwriting models does not happen consistently enough for a variety of reasons, including that LDA searches have historically been cumbersome to pursue and may result in less accurate models. Fortunately for millions of Americans historically underserved by our financial system, new artificial intelligence and machine learning tools can facilitate more effective searches that yield multiple less-discriminatory and equally accurate alternative models quickly and efficiently.
Against this backdrop, Ficklin's clarification seems like a simple and clear affirmation of the Equal Credit Opportunity Act and its implementing regulation, Regulation B. Taken in conjunction with the bureau's warning to lenders against using technologies in ways that hamper compliance, the fair lending clarification could ultimately prove to be a watershed moment in advancing the use of AI in consumer finance to enhance fairness and financial inclusion. For this moment to be realized, however, regulators must take additional bold action to ensure that American consumers benefit from proper application of a law intended to increase fairness, inclusion and ultimately access to credit.
First, the bureau and other regulators should explicitly recognize that LDA search using AI tools is an advantageous application of the technology for financial services, given AI's ability to rapidly compare multiple models in searching for alternatives that are more fair and less discriminatory. Under the Equal Credit Opportunity Act, all lenders are required to assess whether current lending models have a discriminatory impact on protected classes, then ascertain whether there are LDAs available that would satisfy their legitimate business objectives. Advances in fair lending analytics are making these searches more accessible and efficient for all lenders, with significant benefits for consumers.
Recent research published by the nonprofit FinRegLab highlighted the potential advantages of using AI tools in complying with LDA search requirements, as well as the risks of using AI without adequate attention to fairness. Advanced, explainable AI technologies for credit underwriting that include robust searches for LDAs as part of a model's fair lending testing foster fairness and inclusion in financial services.
Second, as my colleague argued in these pages last year, regulators must make clear that mere acknowledgement of a less-discriminatory model is not, alone, evidence of past wrongdoing. Today, whether due to a lack of sophistication in developing and testing alternative models, inertia or apathy, or fear that acknowledging an LDA may somehow indicate wrongdoing with respect to legacy models, many lenders fail to pursue robust LDA searches. Instead, lenders should be encouraged to perform robust LDA searches and improve models rather than stick with the status quo out of fear of incurring liability.
And finally, as we explained in our December 2020 comment letter, the bureau should issue public guidance regarding LDA regulatory expectations, including how the bureau assesses the robustness of LDA search techniques and methodologies. Clarity as to the material metrics or factors that lenders should consider in LDA search and deployment processes, and options for balancing fairness with accuracy would accelerate alignment with the bureau's express expectations. Coherent, compliant application of AI technology holds real promise for American consumers and the financial services providers who serve them.
Yolanda D. McGill joined Zest AI in 2023 as its Vice President, Policy and Government Affairs, bringing her extensive experience at the intersection...