Banks have been in the business of deciding who is eligible for credit for centuries. But in the age of artificial intelligence (AI), machine learning (ML) and big data, digital technologies have the potential to transform credit allocation in positive as well as negative directions.
Given the mix of possible societal ramifications, policymakers must consider what practices are and are not permissible and what legal and regulatory structures are necessary to protect consumers against unfair or discriminatory lending practices. The country’s lending laws will have to be updated to keep pace with these technological developments, as they are adopted more widely by banks and other financial companies.
When artificial intelligence uses machine learning algorithms to mine big data sets, it can find empirical relationships between new factors and consumer behavior. AI coupled with ML and big data thus allows a far wider range of data to be factored into a credit calculation.
Many of these factors are statistically significant predictors of whether a borrower is likely to pay back a loan. A recent Federal Deposit Insurance Corp. working paper illustrates the point.
The researchers identified five key variables: the borrower's type of computer (Mac or PC), type of device (phone, tablet or PC), the time of day the application was made (borrowing at 3 a.m. is not a good sign), the applicant's email domain (Gmail is a better risk than Hotmail) and whether the applicant's name is part of their email address (names are a good sign). Crucially, all of these signals are simple, available immediately and free to the lender, unlike, say, pulling a credit score.
An AI algorithm could easily replicate these findings, and ML could probably add to them. But each of these variables is correlated with one or more protected classes. Using any of them would probably be illegal for a bank in the U.S., or if not clearly illegal, then certainly in a gray area.
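To make the mechanics concrete, here is a minimal sketch of how such "digital footprint" variables could be folded into a default-risk model alongside a traditional credit score. It is not the FDIC researchers' actual model; the feature names, the synthetic data and the repayment labels are all invented for illustration.

```python
# Illustrative only: a simple pipeline that mixes traditional and
# digital-footprint features into one repayment model. All data is synthetic.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
n = 5_000
applications = pd.DataFrame({
    "credit_score": rng.normal(680, 60, n),            # traditional variable
    "device_type": rng.choice(["phone", "tablet", "pc"], n),
    "computer_os": rng.choice(["mac", "windows"], n),
    "email_domain": rng.choice(["gmail", "hotmail", "other"], n),
    "application_hour": rng.integers(0, 24, n),         # 3 a.m. applications
    "name_in_email": rng.integers(0, 2, n),             # name appears in address
})
repaid = (rng.random(n) < 0.9).astype(int)              # made-up repayment label

model = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(),
         ["credit_score", "application_hour", "name_in_email"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"),
         ["device_type", "computer_os", "email_domain"]),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(applications, repaid)
print(model.predict_proba(applications.head(3))[:, 1])  # estimated repayment odds
```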
Incorporating new data raises a range of ethical questions. Should a bank be able to lend at a lower interest rate to a Mac user, if, in general, Mac users are better credit risks than PC users, even controlling for other factors like income and age?
Answering these questions requires human judgment as well as legal expertise on what constitutes acceptable disparate impact. A machine devoid of the history of race or of the agreed-upon exceptions would never be able to independently recreate the current system, which allows credit scores, themselves correlated with race, to be used while barring other variables with similar correlations.
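One rough way analysts flag potential disparate impact is to compare approval rates across groups against the "four-fifths" benchmark borrowed from employment law. The sketch below shows only that heuristic; real fair-lending analysis is far more involved, and the group labels and decisions here are hypothetical.

```python
# Heuristic check: approval rate of each group relative to the most-favored
# group, flagged when the ratio falls below 0.8. Data is hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b", "b"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   1,   0],
})

rates = decisions.groupby("group")["approved"].mean()
impact_ratios = rates / rates.max()           # each group vs. the most-favored group
flagged = impact_ratios[impact_ratios < 0.8]  # below the four-fifths threshold

print(rates.round(2).to_dict())
print("potential disparate impact:", flagged.round(2).to_dict())
```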
Policymakers need to rethink our existing anti-discriminatory framework to incorporate the new challenges of AI, ML and big data. A critical element is transparency: borrowers and lenders alike need to understand how the AI operates. In fact, the existing system already has a safeguard in place that this technology will test: the right to know why you were denied credit.
When you are denied credit, federal law requires a lender to tell you why. This is a reasonable policy on several fronts. First, it gives the consumer the information needed to try to improve their chances of receiving credit in the future. Second, it creates a record of the decision that helps guard against illegal discrimination. If a lender were systematically denying people of a certain race or gender on a false pretext, requiring it to state that pretext gives regulators, consumers and consumer advocates the information they need to pursue legal action and stop the discrimination.
But this legal requirement creates two serious problems for financial AI applications. First, the AI has to be able to provide an explanation. Some machine learning algorithms can arrive at decisions without leaving a trail as to why. Simply programming a binary yes/no credit decision is insufficient. In order for the algorithm to be compliant, it must be able to identify the precise reason or reasons that a credit decision was made. This is an added level of complexity for AI that might delay adoption.
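One simplified way to picture what "identifying the precise reason" might look like is to rank each feature's contribution to a model's score and report the most damaging ones, in the spirit of adverse action reason codes. The sketch below uses a plain logistic regression on synthetic data; production systems rely on more sophisticated attribution methods, and every feature name here is invented.

```python
# Illustrative reason-code extraction for a denied applicant: rank each
# feature's push on the log-odds and report the most negative contributors.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["credit_score", "debt_to_income", "years_at_job"]
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(1_000, 3))                # standardized, synthetic applicants
y = (X @ np.array([1.5, -1.2, 0.6]) + rng.normal(0, 1, 1_000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([-1.1, 1.4, -0.3])                  # one hypothetical denied applicant
contributions = model.coef_[0] * applicant                # per-feature effect on the score
order = np.argsort(contributions)                         # most harmful contributions first
reasons = [features[i] for i in order if contributions[i] < 0]
print("principal reasons for denial:", reasons[:2])
```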
The second problem is what happens when the rationale for the decision is unusual. Suppose, for example, that an algorithm infers from a borrower's spending patterns that the borrower is likely having an affair, and that this inference drives the denial.
Is it acceptable for a bank to deny an application for credit because a machine suspects infidelity? If so, the next question is whether it is right for the bank to tell the consumer directly that this is the reason. Imagine if the bank sent a letter to the consumer's home with that finding. The use of AI to determine credit, coupled with the requirement that written notice be given with the rationale for denial, raises a host of privacy concerns.
If it is not acceptable, then who determines what acceptable grounds are? While marital status is a protected basis under the Equal Credit Opportunity Act, suspected infidelity is not, and it is far from clear where that line should be drawn.
The core principle of risk-based lending will be challenged as machines uncover new features of risk. Some of these are predictive in ways that are hard to imagine. Some are predictive in ways that are difficult to disclose. And some simply echo a world in which bias and discrimination persist. Unlocking the benefits of this data revolution could help us escape that cycle, using credit as the powerful tool for opportunity that it is. However, the existing 1970s-era legal framework is going to need a reboot to allow us to get there.