Podcast

Some banks are seeing 8% gains from gen AI: Kartik Ramakrishnan

"People have started to record value they're getting from generative AI, and we've seen benefits in productivity in the 5% to 8% range in some of our financial services clients, an increase in operational efficiency and a variety of other factors," said Kartik Ramakrishnan, Capgemini's deputy CEO of financial services and head of banking and capital markets.

Transcription:

Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.

Penny Crosman (00:04):

Welcome to the American Banker Podcast. I'm Penny Crosman. U.S. banks have changed their thinking, their attitude, and their investment in traditional AI and generative AI quite a bit over the past year. Capgemini has been tracking these changes and recently released a report that analyzed some of this from 2023 to 2024. Kartik Ramakrishnan, Capgemini's deputy CEO of financial services and head of banking and capital markets, is with us today to talk through this. Welcome, Kartik.

Kartik Ramakrishnan (00:37):

Hello, Penny. It's a pleasure to be here.

Penny Crosman (00:39):

Thank you for coming. So one finding of the survey that you did was that 98% of respondents from financial services organizations said generative AI is a top agenda item in boardrooms. From your perspective, why is this so important to bank boards? Is this a fear of missing out? Is it a genuine feeling that the technology will help these banks and make them more profitable?

Kartik Ramakrishnan (01:07):

Penny, I think it's a combination of several factors, and we don't believe it is necessarily a fear of missing out. What we have seen over the past year is that banks have an increased degree of maturity in their thinking about AI, specifically generative AI, and have increased investments in that area as they've started to see some of the benefits of their early investments. Our report compared responses to the same questions we asked in 2023 and in 2024, and we saw movement across a variety of areas. If I might just talk about a few of them, we'll get to why that number makes a lot of sense. The first thing we observed is that there is increased maturity and investment across the board. What we mean by that, especially for banks, is that in an environment where banks have been trying to optimize and reduce their cost of operations, to increase investment in something, it has to be associated with value: value coming from increased revenue, greater efficiencies, better speed to market, and so on.

(02:23):

And we've seen some of those things happen, and that drove the increased investment. Equally, we are seeing an increase in breadth. What I mean by that is, when we did the report last year, there was a lot of talk about the use of generative AI in IT and in sales and marketing, the key areas in the sound bites we heard across the board. Now when we talk to our clients, it is across all areas: it's risk management, it's operations, it's HR, it's the deployment of copilots across the board. And finally, we're seeing that there are recorded benefits. People have started to record the value they're getting from generative AI, and we've seen benefits in productivity in the 5% to 8% range in some of our financial services clients, an increase in operational efficiency, and a variety of other factors. So I think the evolution of what's happened with generative AI over the past year has led to banks being more engaged and putting more money into it, and it has become a consistent boardroom conversation.

Penny Crosman (03:44):

Well, and I don't know if you can answer this, but what do you think those conversations are like? How do they go? Are the board members saying, what are you doing with generative AI and why aren't you doing more? Or are they saying, wait a minute, what about the risks? Or are they saying, we don't want to spend so much money on this? What do you think those conversations involve?

Kartik Ramakrishnan (04:09):

I think the conversations involve a variety of areas. I'm not privy to sitting in any of those, but through conversations with the executives we interact with, we hear a lot about what's happening. One thing that is a common thread across all our clients is that there is a consistent board update. This is a topic important enough that every time the board meets, there is an update on what's happening in the world of generative AI. A lot of the initial conversation was around governance and controls, and that continues, because that is the most critical topic when it comes to deploying technology like this at scale. But a lot of the other conversations are around the areas where it can be implemented and the benefits that can be driven out of it. An indicator of the importance is some of the personnel announcements: for example, JPMC elevating an AI officer to the executive committee, so the person who drives AI sits on the executive committee of the firm. Wells Fargo announced a CIO responsible for generative AI across the enterprise, and we're seeing announcements like that across several of our largest clients. That elevation of status that we're seeing with our clients is an indicator that this is a hot topic of conversation, and a conversation that is allowing banks to come back and put some investment behind it.

Penny Crosman (05:57):

Do you think that that's an important kind of best practice, to put one person in charge of all AI deployments across the whole company? It seems like it can be quite different depending on what area you're using it in. I've seen it done both ways. Some companies do have that centralized role and then other companies are letting each department figure out what makes sense for them to adopt.

Kartik Ramakrishnan (06:27):

Absolutely. I think if you look at where we see our clients in the adoption cycle, it makes sense to have somebody putting in place the right governance, the guardrails and the principles that the firm would adopt, and helping prioritize use cases. I was recently in a conversation with the CEO of one of our major clients, and their view was that every division in the bank is coming back with requests for where they would like to implement generative AI and either get an efficiency benefit or see an increase in revenue. You need someone to help prioritize that so the investments can be directed to the right place, and having a central function helps drive that and helps drive the strategy. The other thing we saw, and it came out a little bit in our report, is that our clients are beginning to believe the adoption of generative AI is the beginning of an evolution of the operating model.

(07:39):

It is going to impact the way they operate into the future. And when you have such a significant activity for the bank, it makes sense to have a governance function elevated to the highest level to help do that. Now, if I fast-forward a few years, I think it's likely that a lot of these things will be absorbed into various parts of the organization, there will be a standard operating cadence, and the need for such a role may not exist. But right now, I think it helps put the right kind of focus and allows these banks to derive what they want to get out of the investments they're making.

Penny Crosman (08:27):

That makes sense. Well, when you talk about changing the operating model, do you mean having more work done by software and less work done by humans, that kind of thing?

Kartik Ramakrishnan (08:43):

In our view it's a combination of both: the process of operations and the technology. One of the things our report hits on is the use of agentic models, where the AI acts as an agent and tries to mimic what a person would do, rather than a chatbot, which is an interactive question-and-answer model. When you start to look at what those agentic models are beginning to do, taking decisions similar to the ones a human would take, we are beginning to see changes in the way banks need to think about their processes: how decisions are made, how they validate that those decisions can be explained, because they're being made by models as opposed to people, and then how they deploy them. Deploying something like that will require change in people, change in processes and change in the technology. So it has impacts across the board and will mean a change in the way we see banks using AI.

Penny Crosman (10:07):

I think in your survey, 60% of the financial services respondents agreed with the statement, "Our leadership is a strong advocate of generative AI," while 39% said that their leadership is taking a wait-and-see approach. Do you think one approach is better than the other? And do you think this maps to the size of the bank, with community banks being more wait and see and the largest banks more likely to be strong advocates?

Kartik Ramakrishnan (10:38):

Our survey went to firms that had over a billion dollars in revenue, so there is a size bias in what we did. These are the somewhat larger firms, but once you go above a billion, there is a huge range in what we are looking at. The wait-and-watch approach, in terms of what we are hearing, is one that is not going to last for the long term because of the speed at which adoption is happening. Another data point we saw in that survey was that 96% or 97% of clients were using AI or generative AI in some form, right? So to reconcile those two data points: generative AI is entering ways of working in every way possible. There are copilots, there are things that help people automate tasks like emails and calendaring, and that kind of adoption is quite pervasive.

(12:00):

And so that gives you the 96% or 97% of people who are adopting AI in some way, shape or form. I think the wait and watch reflects firms that are thinking about the impact of regulations and how they would respond to regulatory challenges to what they'd be doing. And it makes sense to the extent that the wait and watch is about making sure that whatever they put out in terms of generative AI can be explained and is consistent with regulations. That is what we think will emerge as the path forward. So the wait-and-watch approach is driven more by compliance needs, by making sure they're sound from a compliance perspective, than by the question of do we do this or do we not.

Penny Crosman (12:59):

Another thing the survey found was that 73% of financial services executives have increased their investment in generative AI from last year. That's pretty high. What do you chalk that up to? Is that all the buzz around models like ChatGPT, or do you see other factors contributing to that need we were talking about in the beginning to keep up and do something with generative AI?

Kartik Ramakrishnan (13:33):

It's interesting in that we are seeing the adoption of AI across, as I talked about a little bit before, the various areas of the enterprise, whether it be IT, sales, customer service, operations, risk management or back-office functions. As banks look at where they can get value from generative AI, there are real examples of benefits in each of those areas, and that is driving them to put money behind these investments. So for example, the ability to consistently look at a customer's behavior, say spend behavior, and to detect early signs of fraud and take some action. There was work being done in that space for years, but generative AI has accelerated what can be done. Similarly, generative AI allows, say, a wealth advisor to be so much more productive, because they can get summarized research from the various sources they look at rather than having to spend hours reading through research reports so they can decide on their investment strategies. Every area of a retail bank, a commercial bank or a capital markets firm is looking at where it can derive some benefit from the use of this new technology, and that is driving the investment. And as I said in the beginning, that investment is coming in a context where overall spend is either flat or marginally shrinking. So it is coming at the cost of taking cost out elsewhere, or is driven by the benefits that come from implementing such technology.

Penny Crosman (15:44):

I recently did a podcast with Seth Dobrin, who was the global chief AI officer at IBM for quite a while, and he has a point of view that large language models can't be responsible and will always be prone to hallucination and inaccuracy because of the way the models are constructed and because they were trained on all of the internet. Do you agree with that view?

Kartik Ramakrishnan (16:15):

What we have seen with large language models as they are applied, especially in the use of AI agents, is that there are ways and means through technology to identify and reduce or eliminate hallucinations. The way AI was initially getting adopted by our clients, there was always this concept of a human in the middle, who validates and checks whether hallucinations were impacting the outcome, so there isn't an exposure out to the world. But the increased adoption of agentic models is, to me, evidence that there is faith in the client ecosystem, in the enterprise ecosystem, that these models can be trained at some point to be hallucination-free. Now, I'm not a technologist who can refute that one way or the other, but looking at the data coming out of the surveys we're doing with clients who are actually implementing this, they are getting to a point where they feel comfortable letting an AI agent make certain decisions without a human in the middle.

Penny Crosman (17:41):

Now, I think along with that, about 51% of the financial services respondents in your survey said that they're currently developing guidelines for responsible generative AI use. What do you think is in some of those guidelines, or what should be included in them?

Kartik Ramakrishnan (18:02):

The guidelines are actually multitiered, from what we have seen with our clients. There are broad enterprisewide guidelines covering everything from what AI will be used for, down to guidelines that are specific to areas of these enterprises. What I mean by that is there are compliance guidelines that are broadly applicable across everything that is done in the firm. There are guidelines that are applicable to areas that involve the use of customer data. And there are guidelines that are applicable to AI that gets exposed in some form to the external world, whether to a customer, a partner or another actor in the ecosystem of that enterprise. So the guidelines are quite far-reaching in how they work, and they work at multiple tiers within the organization. It goes back to my initial point about having somebody who is the chief AI officer of the firm, with a degree of seniority and access, to be able to deploy something that governs the acceptance of AI across the enterprise.

Penny Crosman (19:21):

That makes sense. Well, Kartik, thank you so much for joining us today, and to all of you, thank you for listening to the American Banker Podcast. I produced this episode with audio production by Adnan Khan. Special thanks this week to Kartik Ramakrishnan at Capgemini. Rate us, review us and subscribe to our content at www.americanbanker.com/subscribe. For American Banker, I'm Penny Crosman, and thanks for listening.