Practical uses for generative AI in banking abound: summarizing call center interactions, giving employees co-pilots, checking loans against compliance rules and rewriting legacy core systems, according to Accenture's global banking lead, Michael Abbott.
At a time when banks' interest in generative AI is high (49% of financial services firms are implementing the technology and 19% are already benefiting from it, according to a KPMG survey), questions remain about where to start, how to make money and how to prevent unintended consequences like hallucinations, data privacy violations or answers based on outdated information.
In an interview, Abbott shared his perspective on where he thinks banks can deploy generative AI most fruitfully and how they can do so ethically and responsibly.
In a recent report from Accenture, you said that generative AI has the potential to boost banks' productivity by about 20% to 30%. How do you think that might happen? What are some examples of operational efficiencies that banks might get through generative AI?
MICHAEL ABBOTT: One of the things I have learned from actually implementing a number of different generative AI efforts is that in many cases, banks are choosing to do what I would describe as take the waste out and put value in. For example, we've seen banks up and down the spectrum, from the largest to many regional and smaller banks, implement generative AI for post-call summaries. At the end of a phone call, a call center rep typically has to summarize the conversation with that customer. That might take four to five minutes.
Generative AI can now do that in a matter of seconds, and the call center rep just has to confirm the summary is correct before dropping it in. That's four minutes off the call right there. We've also seen mortgage loan origination providers start using it to pull in a loan along with all of the Fannie Mae requirements, check the loan against those requirements and quickly surface the red flags, much faster than having to read through everything piece by piece. Those are just two simple examples, but when you apply that across all of the operational components inside of banking, you realize you could probably get 20% to 30% efficiency on the operational side, though I don't think all of it is going to go to the bottom line.
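Both examples share the same shape: the model drafts, a person verifies. Here is a minimal sketch of the post-call piece, assuming a hypothetical llm() helper standing in for whatever approved model a bank uses; nothing here reflects a specific vendor API.

```python
# A minimal sketch of the post-call summary flow. llm() is a hypothetical
# wrapper around whatever model the bank has approved; no real provider
# API is implied.

def llm(prompt: str) -> str:
    """Placeholder: send the prompt to an approved model, return its reply."""
    raise NotImplementedError("wire this to your model provider")

SUMMARY_PROMPT = (
    "Summarize this call center transcript in three or four sentences for the "
    "CRM log. Note the customer's issue, the resolution, and any follow-up "
    "promised.\n\nTranscript:\n{transcript}"
)

def draft_call_summary(transcript: str) -> str:
    """Draft in seconds what used to take a rep four to five minutes."""
    return llm(SUMMARY_PROMPT.format(transcript=transcript))

def log_call(transcript: str, rep_confirms) -> str | None:
    """The rep reviews the draft before it is dropped into the record."""
    draft = draft_call_summary(transcript)
    return draft if rep_confirms(draft) else None  # human stays in the loop
```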
Do you see layoffs or jobs being lost because of these kinds of efficiencies?
I get that question all the time. Will there be layoffs? I was talking to one major bank chief operating officer, and this person summed it up pretty well for me. They said, look, what I want to do is take the waste out and put value in. I don't need to get rid of any more people. I just don't want them to waste time on non-value-added work. I want to free up their time to have conversations with my customers about cross-selling and upselling. Whether or not a bank takes it to the bottom line, we'll see. But many that I talk to are taking more of a waste-out, value-in approach. If you can take out cost that is not value-added and turn it into opportunities to create more income, that's much more valuable to the top line of a bank. And I think people are starting to understand that quite well. One of the things we put in the report, too, is that we believe the revenue opportunity from generative AI far outweighs the cost savings.
What are some of those revenue increasing opportunities that you see?
I've seen banks around the world already use generative AI to develop customized save scripts (the talking points a rep uses to retain a customer who is about to leave) and to decide exactly what rate they need to offer customers. Deposit beta is perhaps one of the hottest topics out there right now. Imagine being able to figure out exactly what the rate needs to be; if you can optimize that by just a few basis points, the opportunity is enormous. So I've seen banks go from having two or three or a dozen save scripts to being able to develop a thousand save scripts tuned to the behavioral economics of that particular customer, and get to the answer faster. That's just one example of a revenue opportunity out there.
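A hedged sketch of what such a save-script generator might look like; the profile fields, prompt wording and llm() helper are all illustrative assumptions, not any bank's actual rate model.

```python
# Illustrative only: the profile fields and prompt are assumptions, and
# llm() is a hypothetical wrapper, not a real library call.

from dataclasses import dataclass

def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

@dataclass
class CustomerProfile:
    tenure_years: int
    balance: float
    rate_sensitivity: str  # e.g. "high" if this customer shops rates

def save_script(profile: CustomerProfile, offer_bps: int) -> str:
    """Generate one retention script tuned to one customer's behavior."""
    prompt = (
        f"Write a four-sentence retention script for a deposit customer with "
        f"{profile.tenure_years} years of tenure, a ${profile.balance:,.0f} "
        f"balance, and {profile.rate_sensitivity} rate sensitivity. The "
        f"approved counteroffer is an extra {offer_bps} basis points."
    )
    return llm(prompt)

# A thousand scripts instead of a dozen: generate one per customer profile.
```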
With all this potential for generative AI, there are risks out there. There's obviously the potential for bias, there's a potential for hallucination, there's a potential for copyright infringement. How do you think that banks and other companies need to think through these risks?
There's a big question here around what I would describe as responsible AI. Every bank already has a responsible AI framework; the question is how you project that onto this class of generative AI models, which, to your point, can hallucinate. So far, I have not seen any bank allow these models to go directly out to customers. They're all using what I would describe as a human-in-the-loop approach, meaning they're using generative AI to augment, not necessarily automate. Until we know exactly what all the risks are, that's a really good way to leverage these models safely and ensure you're doing the right thing for the customer.
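A minimal sketch of that human-in-the-loop gate, again assuming a hypothetical llm() helper; the point is only that no model output reaches a customer without an employee's sign-off.

```python
# Augment, don't automate: a reviewer gates every outbound answer.
# llm() is a hypothetical wrapper, not a real provider API.

from typing import Callable, Optional

def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

def draft_reply(customer_message: str) -> str:
    return llm(f"Draft a reply to this customer message:\n{customer_message}")

def respond(customer_message: str,
            reviewer: Callable[[str], Optional[str]]) -> Optional[str]:
    """The reviewer can approve, edit, or reject the draft."""
    draft = draft_reply(customer_message)
    reviewed = reviewer(draft)  # returns the (possibly edited) text, or None
    return reviewed             # None means the draft never leaves the bank
```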
I wrote recently about banks giving employees generative AI co-pilots. Do you see that becoming the most common use case?
I think the co-pilot use case is probably going to be one of the most common ones out there. I think of it like playing chess with Kasparov whispering in your ear, telling you, this is the right move to make next. You still have to make the move. It's still your decision. You still have to look at it and ask, is that the right thing to do? But it is nice to have somebody who really knows what they're doing take a second look and tell you, you might want to look at it this way. So I do think the co-pilot approach is going to be the norm right now more than anything else.
So I'm seeing, and I think you've alluded to this already, that a lot of banks are taking a fairly cautious approach. They're experimenting with large language models, they're making these models available to staff for internal purposes, but they're taking their time actually pushing this out to customers, or as you said, completely automating things. When you look at these approaches, do you think this is right? Do you think banks are moving too fast or too slow?
I would describe it as a cautiously aggressive approach. What I mean by that is they're absolutely taking their time in terms of how this is going to impact customers. But I'm also seeing banks be very aggressive in their internal adoption and experimentation. They're looking at how they're going to scale out these platforms and the various models that are out there, and they're being very aggressive in understanding the potential of what they can do. But they're being cautious to make sure that when they do put it into production, it's going to meet all the standards and requirements they have.
What about banks that have very old core systems? Are they going to be able to do any of this? Are they going to be left behind? Are there ways to replace an old core system with generative AI eventually?
At the macro level, generative AI is going to impact every single part of the bank, from enterprise functions like legal, risk and compliance, to operations like the call center we talked about, all the way up through marketing, content generation and relationship management. The important point is that, unlike digital, where you could hire a chief digital officer and he or she could develop a mobile app or an online banking site, with generative AI it's going to be diffused throughout the entire organization. So when you look across your supply chain, many parts will adopt generative AI very fast. But to your point, the core system is the backbone of the bank. And what we're seeing is generative AI actually being applied to it right now.
And this is, I think, perhaps one of the most transformative ideas I've seen this early in the generative AI lifecycle: the ability to reverse engineer 30 or 40 years of legacy COBOL code. Is it perfect? No, not yet. But we're seeing 80% to 85% accuracy in reverse engineering legacy code into requirements. Engineers then modify and restructure those requirements, and you can forward engineer them back into a modern architecture. So in many ways, yes, a legacy core might be a constraint for generative AI. But generative AI is going to change that itself and make it possible to unlock that core. It's a fascinating turn of events.
So you're letting the large language model ingest all of the code in a legacy core and reverse engineer it, and then it's going to build a new core?
It's not as turnkey as pushing a button. But yes, we've already been using it to reverse engineer legacy COBOL code. Not all at once; you have to break it down into its components and reverse engineer that legacy COBOL code into its original specifications. Once you have those original specifications, you can modify them, modernize them, architect them the way you want, and then use generative AI to create the next generation of code. Again, is it perfect? Absolutely not. It still requires people who understand what they're doing, but it takes an enormous amount of time out of the effort.
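A rough sketch of that reverse-then-forward loop. The chunking strategy, prompts, target language and llm() helper are all assumptions; the one detail taken from the interview is that engineers review the specifications in the middle, since accuracy today is around 80% to 85%.

```python
# Sketch of the modernization loop: COBOL -> specs -> human review -> new code.
# llm() is a hypothetical wrapper; chunking and prompts are illustrative.

from typing import Callable

def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

def split_into_components(cobol_source: str) -> list[str]:
    """You can't do it all at once: break the codebase into components."""
    return [c for c in cobol_source.split("\n\n") if c.strip()]  # naive split

def reverse_engineer(chunk: str) -> str:
    """COBOL -> plain-language requirements (roughly 80-85% accurate today)."""
    return llm(f"Describe the business rules this COBOL implements:\n{chunk}")

def forward_engineer(spec: str) -> str:
    """Reviewed specification -> code for the modern target architecture."""
    return llm(f"Implement this specification as an idiomatic Java service:\n{spec}")

def modernize(cobol_source: str,
              engineer_review: Callable[[str], str]) -> list[str]:
    """Engineers modify and restructure each spec before regeneration."""
    specs = [reverse_engineer(c) for c in split_into_components(cobol_source)]
    return [forward_engineer(engineer_review(s)) for s in specs]
```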