What if you had to choose between keeping your Facebook friends and getting a home equity line of credit? What if your student loans could not be refinanced until you cut certain relatives out of your digital life?
Such decision-making scenarios could happen as fintech companies increasingly crunch alternative data to help determine a would-be borrower's creditworthiness.
While not everyone is aware of this emerging scoring mechanism, social credit has been growing in recent years. In some respects, little about it is new: lenders have always treated reputation as an indicator of creditworthiness, and this is simply the digital equivalent on steroids. Would-be creditors are collecting and analyzing vast amounts of data to generate FICO alternatives. Even as we move toward a world of big data and social networks and away from a world of FICO scores, the same basic variables are in play, and there is an upside.
Unfortunately, such credit scoring methods, especially those focused on social information, also have a dark side. Their effectiveness as an exclusive gauge of creditworthiness is questionable. They involve collecting and analyzing information about people who have not consented to, or do not understand, the terms. Often, people do not even know it is happening.
Even though innovative underwriting criteria can help more people get credit, the risks and potential harms of mining social media data may simply be too great.
Further, credit scoring methods based on social information use algorithmic decision-making, which shelters the underwriting process under a veil of secrecy and makes it hard to monitor or criticize.
Most important, and perhaps most frightening, credit scoring mechanisms based on social information give financially responsible people an incentive to perfect their online images. Such artificial fine-tuning includes cleaning up posts about how "wasted" they were last night or tweets about how upset they are about getting laid off. It also includes deleting all records of affiliation with "bad" acquaintances.
Alternative credit-scoring algorithms do not care whether you once volunteered in a poor community, merely lived near a "bad" neighborhood, or count a recently bankrupt social network contact as your best friend from preschool. On the surface, such affiliations can be read as financially damaging, which in effect forces people to choose between social ties and better credit.
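To make that trade-off concrete, consider a minimal sketch of how an affiliation-sensitive score could work. This is a hypothetical toy model: the feature names, weights and 300-to-850 rescaling are invented for illustration and do not describe any real lender's algorithm.

```python
# Hypothetical toy model: every feature name and weight below is invented
# for illustration and does not describe any real lender's scoring system.

# Each feature is assumed to be pre-normalized to the 0.0-1.0 range.
WEIGHTS = {
    "contacts_repayment_history": 0.40,   # how reliably your connections repay
    "profile_stability": 0.25,            # account age, consistent identity
    "spending_signal": 0.20,              # inferred from posts and check-ins
    "risky_affiliations": -0.15,          # ties to recently defaulted contacts
}

def toy_social_score(features):
    """Map weighted social features onto a familiar 300-850 style scale."""
    raw = sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
    raw = max(0.0, min(1.0, raw))          # clamp before rescaling
    return round(300 + raw * 550)

# The same applicant, before and after cutting ties with a recently
# bankrupt preschool friend. Only the affiliation feature changes.
keeps_friend = {
    "contacts_repayment_history": 0.8,
    "profile_stability": 0.6,
    "spending_signal": 0.5,
    "risky_affiliations": 0.7,
}
drops_friend = dict(keeps_friend, risky_affiliations=0.2)

print(toy_social_score(keeps_friend))   # 556
print(toy_social_score(drops_friend))   # 597
```

In a model like this, pruning the "risky" contact is worth dozens of points, which is precisely the incentive to trade social ties for credit described above.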
Because of these risks, it would be wise for regulators to limit the use of certain types of data for scoring purposes. Such an approach wouldn't be a first, either. Take medical information: while a terminal illness could significantly influence a person's ability to repay a loan, laws restrict the use of specific medical information for credit scoring purposes. A similar view should guide our response to credit scoring methods based on social information.
Even if a social credit model based on "tell me who your friends are and I'll tell you who you are" proves accurate for scoring purposes, its effectiveness alone cannot justify the privacy and social harms it generates.
Nizan Geslevich Packin is an assistant professor of law at the Zicklin School of Business, Baruch College, City University of New York. Yafit Lev-Aretz is a research fellow at the Information Law Institute at NYU.