With the Consumer Financial Protection Bureau examining artificial-intelligence-based lending for signs of bias, credit unions and banks are under pressure to add transparency to decision-making.
Amid the scrutiny, an Iowa credit union contends AI can help reach underserved consumers. GreenState Credit Union in North Liberty, Iowa, will shortly launch a collaboration with the financial technology firm Zest AI that will use machine-learning-powered decisions to broaden its lending base.
“Our primary reason for the partnership with Zest is to go deeper with lending to women and people of color” in the communities GreenState serves, said Amy Henderson, chief consumer services officer for the $9 billion-asset credit union.
The use of AI for loan underwriting has sparked controversy.
GreenState examined its recent history and determined it could approve more loans within underserved communities. “We will be able to use Zest's model to go deeper within credit scoring while simultaneously mitigating the risks posed to the credit union,” Henderson said. Despite the pressure from the CFPB, she added, the credit union is confident in its decision to integrate a machine-learning-developed algorithm into its underwriting rather than relying solely on FICO scores.
The CFPB's stance on AI comes as credit unions increasingly forge partnerships with fintech firms.
Artificial intelligence and machine learning can expand the data points used in underwriting from about 20 to as many as 400, contends the Financial Technology Association, a Washington, D.C.-based trade organization that lobbies for fintechs.
“It's capturing more of a holistic viewpoint of the credit risk that this person might show or not show, demonstrating a full history of their finance and a fuller picture of their financial health,” said Penny Lee, chief executive of the FTA. “In turn, this allows credit unions and other institutions the ability to expand into more diverse populations or populations that have been underserved by traditional institutions.”
While many of the CFPB’s concerns about bias within newer lending algorithms are valid, the same concerns could be applied to legacy credit-scoring models, said Teddy Flo, chief legal counsel for the Los Angeles-based Zest. “AI models can have bias in them and they can produce unfairly biased lending outcomes, which is not something that someone could fairly dispute. … At the same time, legacy scoring methods like FICO and VantageScore also embed bias [within] them,” Flo said.
When developing its AI models, Zest uses a process called adversarial debiasing to vet them for potential prejudice: one algorithm consumes test lending data and produces scores, while a second analyzes the results to detect unjust correlations between scores and race. The second model then "teaches" the first to create scores that are accurate but less correlated with race. “Before the model is ever used to underwrite a loan, we’ve run this rigorous search for a more fair alternative so the financial institution can be confident that the algorithm it’s using is as [neutral] as possible under the circumstances,” Flo said.
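The two-model setup described above can be sketched in a few lines of code. This is a minimal illustration on synthetic data, not Zest's actual system: a logistic scorer predicts repayment, a logistic adversary tries to predict the protected group from the score alone, and the scorer's update reverses the adversary's gradient so the score becomes less informative about the group. All names, data, and parameters are illustrative assumptions.

```python
import numpy as np

def train_scorer(lam, seed=0, steps=1000, lr=0.1):
    """Train a logistic scorer on synthetic data; lam > 0 turns on the
    adversarial fairness penalty (a hypothetical, simplified setup)."""
    rng = np.random.default_rng(seed)
    n = 4000
    group = rng.integers(0, 2, n).astype(float)  # protected attribute (0/1)
    x1 = rng.normal(0.0, 1.0, n)                 # legitimate credit signal
    x2 = group + rng.normal(0.0, 0.5, n)         # proxy feature that leaks the group
    X = np.column_stack([x1, x2])
    # Repayment outcome depends slightly on the proxy, tempting the scorer to use it.
    y = (x1 + 0.5 * x2 + rng.normal(0.0, 0.5, n) > 0.5).astype(float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w = np.zeros(2)  # scorer weights
    a = 0.0          # adversary's weight on the score

    for _ in range(steps):
        s = X @ w
        p = sigmoid(s)          # scorer: predicted repayment probability
        g = sigmoid(a * s)      # adversary: guess of the protected group
        # Adversary step: learn to predict the group from the score alone.
        a -= lr * np.mean((g - group) * s)
        # Scorer step: fit repayment while reversing the adversary's gradient,
        # pushing the score to carry less information about the group.
        grad_task = X.T @ (p - y) / n
        grad_adv = X.T @ ((g - group) * a) / n
        w -= lr * (grad_task - lam * grad_adv)

    score = X @ w
    leakage = abs(np.corrcoef(score, group)[0, 1])  # score/group correlation
    accuracy = np.mean((sigmoid(score) > 0.5) == y)
    return w, leakage, accuracy
```

Training once with `lam=0` and once with `lam>0` and comparing the score/group correlation shows the penalty at work: the debiased scorer leans less on the proxy feature, typically at a small cost in raw accuracy.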
Rohit Chopra, the director of the Consumer Financial Protection Bureau, has cautioned banks, credit unions and fintechs about fair-lending violations that may stem from reliance on artificial intelligence. His comments threaten to discourage financial firms from using the technology to crunch nontraditional data about borrowers, experts say.
Zest uses its race-prediction tool to accurately assess fair-lending outcomes.
But some lenders that already use AI-powered tools in their underwriting fear that incorporating new data to adjust a model will lead consumers to conclude that past decisions were biased.
“A lot of times lenders are nervous about making changes to their models to improve their fairness because they're concerned that if they make a change to their model, it may be considered a reflection that they had an unfair model before,” when in fact they are simply improving its accuracy, said Brad Blower, general counsel for the National Community Reinvestment Coalition, a nonprofit that advocates for fairness in lending to underserved communities.
By conducting fair-lending analyses and thoroughly vetting potential third-party firms, financial institutions can work to assuage the CFPB’s doubts and increase the transparency afforded to regulators and consumers, according to Stephen F. Hayes, a partner at the civil rights law firm Relman Colfax in Washington and a former senior counsel for the CFPB.
“If you're gonna use a model for underwriting or pricing or some high-risk case like that, it's not enough to get a certification saying, ‘We're not relying on protected classes in the model.’ You need to have some substantive work and transparency” to show consumers and regulators that the model in place is the one that is the most fair, Hayes said.