Utah bank uses gen AI to watch for emerging problems at fintech partners

An AI-generated picture by DALL•E 2 of a robot reviewing loan applications to try to detect fraud. AI is steadily mastering both photo generation and fraud controls.
Carter Pape, via DALL•E 2

First Electronic Bank is using generative AI technology from Spring Labs to analyze its fintech partners' customer communications and identify problems before they blow up.

The Salt Lake City-based, online-only, $429 million-asset institution has several large, national fintech partners with millions of customers. Like all banking-as-a-service banks, it's under pressure from regulators to make sure its fintech partners are not running afoul of any laws and are keeping customers happy. Over the past year, several banks have received consent orders reprimanding them for their fintech partners' compliance shortcomings, including Green Dot Bank, Cross River Bank and Evolve Bank & Trust.

"We've got to figure out when there are issues, faster, so we can deal with them," said Derek Higginbotham, the bank's CEO, in an interview. "If we don't, they're going to pop up, they're going to grow, and then they're going to pop in a worse way for everybody."

The bank has deployed Spring Labs' Zanko ComplianceAssist to find signals in customer communications that indicate something is off. 

Understanding customer complaints received by fintech partners is "kind of a huge deal for sponsor banks and their fintechs these days, as regulators are looking at issues of, does the sponsor bank actually exercise enough effective control over their fintechs for this [banking as a service] model to work?" said John Sun, CEO and co-founder of Spring Labs, in an interview. "A lot of times, customer engagement is the first window into exactly what's happening between the customer and the fintech, and obviously sponsor banks want to see an accurate view of that."

First Electronic Bank's fintech partners gather all their customer communications — transcripts of phone calls, emails, text messages and other messages — from customer relationship management and case management software and convert them into data files that they share with the bank through application programming interfaces and file transfers. This data is then fed into Spring Labs' generative AI model.
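The first step in a pipeline like the one described above is typically normalizing records from different channels and partners into one schema before any model sees them. The sketch below is purely illustrative — the field names and the `normalize` helper are assumptions, not Spring Labs' or the bank's actual format.

```python
# Illustrative normalization step: each fintech partner exports customer
# communications (call transcripts, emails, texts) from its CRM, and the
# records are flattened into one common row shape before analysis.
import json
from datetime import datetime, timezone

def normalize(record: dict, channel: str, partner: str) -> dict:
    """Map a partner's raw export into a common row for downstream analysis."""
    return {
        "partner": partner,
        "channel": channel,  # e.g. "call", "email", "sms"
        "customer_id": record.get("customer_id"),
        # fall back to the ingest time if the partner omitted a timestamp
        "timestamp": record.get("timestamp")
            or datetime.now(timezone.utc).isoformat(),
        "text": record.get("transcript") or record.get("body") or "",
    }

raw_email = {"customer_id": "c-123",
             "body": "My card was charged twice for the same purchase.",
             "timestamp": "2024-05-01T14:03:00Z"}
row = normalize(raw_email, channel="email", partner="example-fintech")
print(json.dumps(row))
```

Once every partner's exports land in the same shape, the bank can feed one uniform stream into the AI model rather than reconciling each vendor's format separately.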

"It is hard for a human to know everything that's in there," Higginbotham said. "We had to figure out how to synthesize the information so that the human agent could be smarter."

The Spring Labs software first categorizes complaints for First Electronic Bank's human reviewers. 

This is the kind of chore that sounds simple, but when it's being done by multiple customer service agents at different companies, "they are going to each interpret things slightly differently," Higginbotham said. Having humans do all the complaint tagging forced the bank to limit itself to a couple dozen categories.

"It's a problem that most people don't really think about — we all use categorized data without even thinking about it," Higginbotham said. "When you actually have to be the custodian of that data and create the tags, it's really tricky to get depth and consistency."

AI can categorize to a much deeper level of fidelity, he said. The bank gives the system specific tags to use but also lets it identify trends and generate its own labels.  
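One common way to implement the setup Higginbotham describes — fixed bank-supplied tags plus room for the model to propose its own — is to bake both into the classification prompt. The tag list, wording, and `build_tagging_prompt` helper below are hypothetical, shown only to make the idea concrete.

```python
# Sketch of a complaint-tagging prompt: the bank supplies preferred tags,
# but the model is allowed to invent a new label when none of them fit.
BANK_TAGS = ["billing dispute", "fee complaint", "account access",
             "fraud claim", "disclosure question"]

def build_tagging_prompt(complaint: str) -> str:
    tag_list = ", ".join(BANK_TAGS)
    return (
        "Classify the customer complaint below.\n"
        f"Preferred tags: {tag_list}.\n"
        "If no preferred tag fits, invent a concise new tag and mark it NEW.\n"
        f"Complaint: {complaint}\n"
        "Answer with the tag only."
    )

prompt = build_tagging_prompt("I was charged a fee nobody ever told me about.")
print(prompt)
```

Because the model can mint new labels, emerging complaint types surface on their own instead of being forced into the nearest existing bucket — the consistency-versus-depth tradeoff Higginbotham describes.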

Large language models are better at tagging complaints than humans, Sun said. 

In an analysis of data from more than 100 fintechs, his team found that customer-service agents are able to identify complaints and the regulatory risks associated with them with about 60% accuracy, "which is quite low because it's hard to train every single frontline customer service agent to be a compliance professional," Sun said. 

Compliance professionals identify compliance issues in customer complaints at 70% to 80% accuracy, he said, whereas Spring Labs' software is accurate 90% to 95% of the time. 

At First Electronic Bank, once all the complaints have been tagged, the AI model looks for patterns and trends. 

"You can look temporally at how things are changing," Higginbotham said. "If there are certain types of things popping up, you can see the relative values between how things are behaving."
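In practice, the temporal view Higginbotham describes can be as simple as counting tagged complaints per period and watching how categories move relative to each other. The data and tags below are made up for illustration.

```python
# Illustrative trend view: count (week, tag) pairs and compare how a
# category's volume changes week over week.
from collections import Counter

tagged = [  # (ISO week, tag) pairs produced by the tagging step
    ("2024-W18", "fee complaint"), ("2024-W18", "account access"),
    ("2024-W19", "fee complaint"), ("2024-W19", "fee complaint"),
    ("2024-W19", "account access"),
]

by_week = Counter(tagged)  # counts each (week, tag) pair
fee_trend = [by_week[("2024-W18", "fee complaint")],
             by_week[("2024-W19", "fee complaint")]]
print(fee_trend)  # fee complaints doubled week over week: [1, 2]
```

A spike in one tag while the others hold steady is exactly the kind of early signal a reviewer would want escalated before it becomes a regulatory problem.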

The generative AI system also generates alerts and reports on specific insights drawn from customer complaints. 

In the future, the system may be used to route the most important or most time-sensitive complaints to experts. 

So far, this system isn't replacing any staff, Higginbotham said.

"We have human agents who are already reviewing complaints and know the joint ventures really well," he said. "So this is just giving them added insights to what's going on in the programs." 

If the system reveals an issue, a human supervisor logs it and makes sure that customer's needs are tended to by one of First Electronic Bank's customer service providers. 

Higginbotham sees this technology deployment as an effort to protect consumers and to address enterprise risk. 

"The biggest driver for me would be making sure that the consumer protection regs, including the principles around unfair, deceptive or abusive acts or practices, are met," Higginbotham said. 

The bank chose Spring Labs for its team's depth of both technology skill and consumer finance business knowledge, he said.

Generative AI models are good at tasks that require a strong understanding of language but don't require a ton of further logical deductions or fact finding or deep pattern recognition, Sun said. 

"It's a lot of language processing, it's a lot of reading complaints or reading news articles or reading regulations and trying to apply them in various directions," Sun said.

Spring Labs' software leans on small language models to guardrail certain processes, he said, and uses large language models for some processing and generative capabilities. It can work with any model, he said.

Older, keyword-based compliance systems are more likely to trigger false positives and false negatives, Sun said. Such systems can't understand context or code words, for instance.

"If somebody says the word 'Asian' in a totally benign context, that could get picked up as a potential fair lending violation," Sun said, in an example of a false positive. 
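Sun's example is easy to reproduce with a toy keyword filter: a naive matcher flags any mention of a protected-class term regardless of context, which is precisely what context-aware models avoid. The watchlist below is illustrative, not a real compliance list.

```python
# Toy keyword-based screen of the sort Sun criticizes: it fires on the
# word itself, with no understanding of how the word is being used.
KEYWORDS = {"asian"}  # illustrative, not an actual compliance watchlist

def keyword_flag(text: str) -> bool:
    """Flag a message if any watchlist word appears, context-blind."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & KEYWORDS)

benign = "I ordered from the Asian fusion restaurant next to your branch."
print(keyword_flag(benign))  # True: flagged despite being entirely benign
```

A language model reading the same sentence can see the word refers to a restaurant, not a lending decision, which is the gap between keyword systems' false-positive rates and the accuracy figures Sun cites.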


In addition to categorizing and flagging customer complaints, Spring Labs' system can be used to assign workflows to customer complaints, and some clients already use it this way, Sun said.

Some experts agree this use case for generative AI makes sense.

"I think that the concept and the approach is valid," said Marcia Tal, founder of PositivityTech, a company that helps companies understand customer complaints, in an interview. "All the banks are trying to firm up their sophistication, accountability and responsibilities in this [banking as a service] area."

But she noted that a generative AI system should never be the sole watcher of customer interactions — humans with domain expertise need to be involved. And as data is handed off between entities and systems, customer privacy has to be protected.

"The richness of those conversations that take place [in customer service interactions], it's real," Tal said. "People are sometimes telling you stories about what's going on with them. Why would you want that to end up someplace else? Or why would an institution not care for that data as any other data asset that it has?"
