More and more, algorithms are managing our lives.
Sometimes we are not even aware of it. When we use Facebook, an algorithm curates the news that is served up to us. When we choose what to watch next on Netflix, an algorithm decides which shows are highlighted for us to choose from.
Business decisions, too, are now being influenced by algorithms: the insurance premium we are asked to pay, for instance, or the résumés an employer sees after posting a new job opening to an online job board.
So many things that we now do are shaped by the results of computer algorithms. Part of the reason for this increasing reliance on computers to crunch the data and make decisions for us is the belief that it will reduce the influence of human bias in any given decision. However, the latest research suggests that rather than reining in our biases, algorithms may be worsening their impact.
To take a step back, when has bias caused problems in the past? Take insurance premiums as an example. Insurance companies have historically charged African Americans higher premiums for life insurance. Why? Statistically, African Americans have on average lived shorter lives, so an early payout was estimated to be more likely than for other groups. But is this a reasonable outcome for any given middle-class African American family? Hardly. The question now is whether data science can produce a better and more objective reflection of reality.
Data scientist Cathy O’Neil has noted that, in principle, an algorithm screening job candidates could be more even-handed than a human recruiter.
Yet as O’Neil discusses, 20 years of hiring data from Fox News would probably tell you that neither women nor African Americans have been very successful in obtaining positions at the company. An algorithm trained on that data to identify promising candidates would likely screen out both women and African Americans from the pool. O’Neil sees computer science of this sort as a technology rather than a science: a tool for improving accuracy and efficiency, not for telling the truth.
This problem has also come up in the field of facial recognition, where commercial software has proved far better at identifying the faces of white men than those of darker-skinned women.
These disparate results, calculated by Joy Buolamwini, a researcher at the M.I.T. Media Lab, show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition. In modern artificial intelligence, data rules. A.I. software is only as smart as the data used to train it. If there are many more white men than black women in the training data, it will be worse at identifying the black women. Researchers at the Georgetown Law School have likewise found that African Americans are disproportionately represented in the facial recognition databases used by law enforcement, compounding the consequences of such errors.
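The training-data mechanism described here is easy to reproduce. The sketch below is a minimal illustration using entirely synthetic data and a deliberately simple scikit-learn model; it shows how a classifier trained mostly on one group tends to make more mistakes on the group it rarely saw.

```python
# Purely synthetic illustration: a model trained mostly on one group
# tends to be less accurate on the group that is under-represented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; the relationship between features and label differs
    # slightly between groups (controlled by `shift`).
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 1.5 * shift).astype(int)
    return X, y

# Group A is over-represented 9:1 in the training pool.
Xa, ya = make_group(9000, shift=0.0)
Xb, yb = make_group(1000, shift=1.0)

X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])
group = np.array(["A"] * len(ya) + ["B"] * len(yb))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group)

model = LogisticRegression().fit(X_tr, y_tr)

# Overall accuracy can look fine while the under-represented group does worse.
print("overall accuracy:", round(model.score(X_te, y_te), 3))
for g in ("A", "B"):
    mask = g_te == g
    print(f"group {g} accuracy:", round(model.score(X_te[mask], y_te[mask]), 3))
```

On data like this, the headline accuracy can look respectable even while the under-represented group's error rate is several times higher, which is the same pattern the facial recognition research documented.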
As banks increasingly move toward using algorithms and artificial intelligence in many parts of their business — from the front office to risk and compliance desks — examining bias in the data is becoming more important. And regulators are not going to be satisfied with the output of an algorithm if they cannot understand what is underlying it.
Banks have recently been pushing into AI for trade surveillance and financial-crimes compliance, driven by two key factors. First, there is the opportunity to reduce the time analysts spend combing through false or benign alerts in both areas. Second, banks are using the technology to identify risks proactively through predictive analytics.
A bank that is reducing the number of suspicious activity alerts that its analysts must investigate will need to convince a regulator that any steep drop in the number of alerts is well-founded.
Who is teaching the machine which types of alerts to ignore and set aside? A human analyst reviews alerts, judges them false or benign based on experience and insight, and “teaches” the algorithm to recognize and set aside those types of cases going forward. The bank will need to be able to explain this process to regulators and to demonstrate that it is based on facts and actionable insights identified by the algorithm acting in place of a human being. Banks cannot assume that regulators will trust the technology on its face, so they would be well advised to provide examples of the AI's training, encourage a deep dive into its workings, and offer real-time demonstrations of the software and how it works.
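What that teaching loop might look like in practice is sketched below, assuming a set of alerts that analysts have already dispositioned. The data, feature names and the 0.05 auto-close threshold are all invented for illustration; an interpretable model is used so that the bank has coefficients it can actually walk a regulator through.

```python
# Illustrative sketch of learning from analyst-labeled alerts.
# All data, feature names and thresholds here are invented.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 5000

# Synthetic historical alerts, each with the analyst's disposition attached.
alerts = pd.DataFrame({
    "txn_amount":       rng.lognormal(mean=8, sigma=1, size=n),
    "txns_last_30d":    rng.poisson(lam=20, size=n),
    "new_counterparty": rng.integers(0, 2, size=n),
    "cross_border":     rng.integers(0, 2, size=n),
})
# 1 = analyst escalated the alert, 0 = closed as benign (synthetic rule plus noise).
risk = (0.0002 * alerts["txn_amount"] + 0.8 * alerts["new_counterparty"]
        + 0.6 * alerts["cross_border"] + rng.normal(0, 0.5, size=n))
alerts["escalated"] = (risk > 1.5).astype(int)

X = alerts.drop(columns="escalated")
y = alerts["escalated"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# An interpretable model gives the bank something concrete to show a regulator.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)

coefs = pd.Series(model.named_steps["logisticregression"].coef_[0], index=X.columns)
print("what pushes an alert toward escalation:")
print(coefs.sort_values(ascending=False))
print("held-out accuracy:", round(model.score(X_te, y_te), 3))

# Only auto-close alerts the model is very confident are benign; everything
# else still goes to a human analyst, the control regulators will ask about.
p_escalate = model.predict_proba(X_te)[:, 1]
auto_closed = p_escalate < 0.05
print("share of alerts auto-closed:", round(auto_closed.mean(), 3))
```

The design point is less about the particular model than about keeping an auditable trail: which analyst decisions the algorithm learned from, which features drive suppression, and how conservatively alerts are set aside.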
Potentially more concerning is the case of a bank proactively identifying a risky customer based on patterns of activity seen in similar customers who have committed fraud or other misconduct. When a bank decides to curtail transactions with a certain type of customer, it is a little like arresting a terrorist before they blow up a building: the suspicion rests on a prediction of how likely the customer is to take an action, such as committing fraud against the bank. While this may be a significant improvement over decisions made through pure human subjectivity, banks still need to guard against the programmer's own bias shaping the algorithm's suspicions.
The predictions made by a machine learning algorithm should be subject to human review to ensure fairness and to protect against biased decisions. This may seem odd when the whole point of the algorithm is to reduce human intervention, but a sense of fairness and equity is not something an algorithm can necessarily be trained in.
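A concrete version of that review might look like the sketch below: before acting on the model's "risky customer" flags, compare flag rates and false-positive rates across groups and route any large gap to a person. The data is synthetic, and the 1.25 tolerance is an arbitrary illustration rather than any legal or regulatory standard.

```python
# Illustrative fairness check on a model's "risky customer" flags.
# Data is synthetic; the disparity tolerance is an arbitrary example.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 10_000

group = rng.choice(["group_1", "group_2"], size=n, p=[0.7, 0.3])
actually_fraud = rng.random(n) < 0.05

# Simulate a model that over-flags group_2 even at the same fraud rate,
# the kind of pattern this review is meant to catch.
p_flag = np.where(actually_fraud, 0.80, np.where(group == "group_2", 0.15, 0.08))
flagged = rng.random(n) < p_flag

review = pd.DataFrame({"group": group,
                       "actually_fraud": actually_fraud,
                       "flagged": flagged})

flag_rate = review.groupby("group")["flagged"].mean()
false_positive_rate = (review[~review["actually_fraud"]]
                       .groupby("group")["flagged"].mean())
summary = pd.DataFrame({"flag_rate": flag_rate,
                        "false_positive_rate": false_positive_rate})
print(summary)

# If one group is flagged far more often than another, hold the model's
# output for human review rather than acting on it automatically.
ratio = flag_rate.max() / flag_rate.min()
if ratio > 1.25:  # illustrative tolerance only
    print(f"flag-rate ratio {ratio:.2f} exceeds tolerance; escalate to human review")
else:
    print("no large flag-rate disparity detected")
```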
AI is a powerful tool that can potentially play a useful role in bank decision-making and analysis. However, let’s not make the mistake of thinking that this will remove bias from the equation. Behind every algorithm there is a human being, whose insights, expertise and bias lay the groundwork for the computer’s decision-making framework.
Banks need to make sure they build safety provisions into the use of AI so that the technology is used to limit bias, not accentuate it.