Bank of America spends $3 billion developing and buying technology every year, and about three times that on keeping its existing IT infrastructure going, says David Reilly, global banking and markets technology chief information officer.
As you might expect, some of that goes to artificial intelligence technology. The bank does not disclose how much.
Speaking on Thursday at the bank's NYC Technology Summit, Reilly and other executives shared some of their experiences with AI and some of their concerns about it.
Top uses: Chatbot, fraud detection, trading
Bank of America’s chatbot, erica, which is currently available in 10 states, uses two forms of AI: natural language processing to understand speech, text and intent, and machine learning to glean insights from customer data that can be turned into advice and recommendations.
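Conceptually, that pairing can be sketched as below. This is purely illustrative Python, not erica's actual design: the utterances, the tiny training set and the rule-based recommendation step are all invented for brevity.

```python
# Illustrative sketch only -- not Bank of America's code. It pairs the two
# pieces described above: an NLP intent classifier and a recommendation step
# driven by customer data. All names and data are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1) Natural language processing: map free text to an intent.
utterances = [
    "what's my checking balance", "how much money do I have",
    "send 50 dollars to my landlord", "pay my credit card bill",
    "why was I charged this fee", "dispute a charge on my card",
]
intents = ["balance", "balance", "payment", "payment", "dispute", "dispute"]
intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_model.fit(utterances, intents)

# 2) Insight from customer data turned into advice (shown as a simple rule
#    here; in practice this is where the machine learning would sit).
def recommend(intent, customer):
    if intent == "balance" and customer["balance"] < customer["upcoming_bills"]:
        return "Your balance may not cover upcoming bills -- consider moving funds."
    if intent == "dispute":
        return "I can open a dispute and pull the related transactions."
    return "Here is the information you asked for."

customer = {"balance": 220.0, "upcoming_bills": 380.0}
intent = intent_model.predict(["how much money do I have"])[0]
print(intent, "->", recommend(intent, customer))
```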
Overall, "in many areas where we traditionally leveraged things like analytics, we're trying to get a handle on how can machine learning and AI help,” said Hari Gopalkrishnan, client-facing platforms technology executive at BofA. “Think of fraud. Fraud management is all about understanding customer behavior, understanding what's normal and what’s not, so we save ourselves from losses and drive customer experience."
An old-school fraud analytics program might see a customer using a card in a place they have never used a card before and block the transaction.
AI can do better.
“Armed with more insight about channel behavior, we're able to run much better algorithms, to understand what is true fraud,” Gopalkrishnan said.
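The difference between the two approaches can be sketched roughly as follows. The features, toy history and threshold are invented for illustration; they are not the bank's fraud models.

```python
# Illustrative contrast, not the bank's fraud system. A hard-coded rule blocks
# any card use in a new location; a model trained on richer channel-behavior
# features (amount, hour, online vs. in-store, new location) scores how
# unusual the whole transaction looks for this customer instead.
from sklearn.ensemble import IsolationForest

def rule_based(txn):
    # Old-school analytics: new location => block, regardless of context.
    return "block" if txn["new_location"] else "allow"

# Toy history of one customer's normal behavior: [amount, hour, online, new_location]
history = [
    [42.0, 12, 0, 0], [18.5, 9, 0, 0], [63.0, 19, 0, 0],
    [25.0, 13, 1, 0], [30.0, 20, 1, 0], [55.0, 18, 0, 1],
]
model = IsolationForest(random_state=0).fit(history)

def model_based(txn):
    features = [[txn["amount"], txn["hour"], txn["online"], txn["new_location"]]]
    # score_samples: higher means more normal; the cutoff here is an arbitrary
    # example value, not a calibrated threshold.
    return "allow" if model.score_samples(features)[0] > -0.55 else "review"

# A modest in-store purchase while traveling: the rule blocks it outright;
# the model weighs it against everything else it knows about the customer.
txn = {"amount": 35.0, "hour": 13, "online": 0, "new_location": 1}
print("rule:", rule_based(txn), "| model:", model_based(txn))
```

The point, per Gopalkrishnan, is to judge the whole transaction against the customer's behavior rather than react to a single attribute.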
The same type of technology can be applied to billing disputes. Rather than an associate spending six hours gathering data about a dispute, an AI engine could find and analyze the data quickly to render a verdict.
“We're looking at opportunities like that,” Gopalkrishnan said.
Reilly pointed out that AI holds great promise in analyzing trade data.
For instance, the European Union’s revised Markets in Financial Instruments Directive, known as MiFID II, which took effect on Jan. 3, calls for additional reporting on trades.
“There are now 2 million more events we're recording than we used to get before,” Reilly said. “Those 2 million events tell you where we won and where we lost. Those events are available to everybody we compete with. We see all the places JPMorgan [Chase] and Goldman [Sachs] won and lost. All that information you could use, if you could get that insight. You could refine your offerings to your clients far more. Analytics in real time, drawing insight from all those data flows, is untapped for us. That's the area — inline, not post-event analytics and insight — where there’s the biggest opportunity.”
Note of caution
In hiring, the bank wants to use AI to help source the right candidates. But executives have reservations.
“There's a chance AI models will be biased,” said Caroline Arnold, BofA's head of enterprise technology (which includes HR tech). “You might say, who's going to be successful at this company? An AI engine could find that people who golf are going to be successful at the company. On the other hand, using those same techniques can remove bias if you have the model ignore some of these things that are indicators of different groups but go on to the meat of the profile of the person and understand it in a deeper way.”
Arnold believes an AI engine can never be the final say in who gets hired.
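The feature-exclusion idea Arnold describes can be illustrated with a toy screening model; every field name and data point below is hypothetical, and real bias mitigation involves much more than dropping columns.

```python
# Hypothetical sketch of "have the model ignore indicators of different groups
# and go on to the meat of the profile": proxy fields never reach the model,
# and its output is treated as advisory, not a hiring decision.
from sklearn.linear_model import LogisticRegression

PROXY_FIELDS = {"plays_golf", "zip_code"}                 # deliberately ignored
SKILL_FIELDS = ["years_experience", "certifications", "assessment_score"]
assert not PROXY_FIELDS & set(SKILL_FIELDS)               # proxies stay out

candidates = [
    {"years_experience": 6, "certifications": 2, "assessment_score": 88, "plays_golf": 1, "zip_code": 10001, "hired": 1},
    {"years_experience": 1, "certifications": 0, "assessment_score": 54, "plays_golf": 1, "zip_code": 10001, "hired": 0},
    {"years_experience": 5, "certifications": 1, "assessment_score": 81, "plays_golf": 0, "zip_code": 60601, "hired": 1},
    {"years_experience": 2, "certifications": 0, "assessment_score": 60, "plays_golf": 0, "zip_code": 60601, "hired": 0},
]

X = [[c[f] for f in SKILL_FIELDS] for c in candidates]    # only skill fields
y = [c["hired"] for c in candidates]
model = LogisticRegression().fit(X, y)

applicant = {"years_experience": 4, "certifications": 1, "assessment_score": 77}
score = model.predict_proba([[applicant[f] for f in SKILL_FIELDS]])[0][1]
print(f"screening score (advisory only, never the final say): {score:.2f}")
```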
Mehul Patel, CEO of Hired, a technology company whose software uses AI to match people to jobs, agreed that AI and humans have biases.
“The good news about AI is, you can fix the bias,” he said. “We will boost underrepresented groups. The trouble with humans is they can't unwire their bias easily. Human bias far outweighs algorithmic bias. That's because we humans make quick decisions on people that aren't founded on what you're looking for in the job.”
Aditya Bhasin, head of consumer and wealth management technology at Bank of America, also shared reasons the bank is careful implementing AI.
"We're early in the Gartner hype cycle on machine learning and robotics,” he said. “Everybody that was a big data company a few years ago is now a machine learning company.”
AI is not always the right solution, he observed.
"Technology is only useful for us when it's in the service of meeting a client need to enable them to live a better financial life,” he said. “That's what we constantly work with developers on.”
Automating a loan process with robotic process automation so the bank can deliver the loan to a client faster would be great, he said. But using AI or robotic process automation as a shortcut to data integration might not make sense, he suggested.
With the digital mortgage Bank of America recently launched, for instance, "we could have done a whole bunch of robotics to go and pull data from different places and prepopulate the mortgage application, [but] it probably would have been fraught with error," Bhasin said.
"Instead, we did the hard work to integrate data feeds from multiple different sources so when we're creating that digital mortgage application, there might be 170 fields and only 10 or 15 of them have to be filled out by the client, because all the other stuff we already know," he said.
"So we're thinking about not just brute force [in] applying technology because it's available, but understanding what the client’s need is and then applying technology the client needs,” Bhasin said.
"If Google Duplex is talking to erica, we need to understand, what is one robot saying to the other robot? And what decisions are they making without us?"