What you'll discuss
- How is your bank using AI?
- Where do you see your bank using AI in the future?
- What are the most promising uses of AI to save bankers time and money, and to allow people to stop doing boring or unfulfilling jobs within banks?
- How can banks gain an advantage over competitors with AI?
Transcription:
Video Introduction (00:10):
The financial industry is ever evolving, with many moving pieces, complexity, volatility, and transformation. It takes strategic minds to stay at the forefront of progress. Protiviti proudly honors those who consistently play several moves ahead: the remarkable recipients of this year's Most Powerful Women in Banking awards. You don't just anticipate the future, though. You design it. You lead through uncharted territories. You set new standards for innovation. You turn challenges into blueprints for success. Here's to celebrating your influence, your contributions, and your dedication.
Chana Schoenberger (01:11):
Good afternoon. Hi. Hi everybody. I'm Chana Schoenberger. I'm the Editor in Chief of American Banker, and I want to welcome you to our conference today. We at American Banker look forward to this all year long. This is Most Powerful Women Week. It's a three-day series of events that celebrate the honorees of the Most Powerful Women in Banking, Most Powerful Women in Finance, Women to Watch and Top Teams lists. Basically, these are the women everybody wants to be when they grow up, and many of them are in this room. So, hi. So over the next two days, we're going to have the chance to learn from some of these women and some other fabulous speakers. We're going to connect. We're going to, I know women don't like to use this word, but network.
(01:56):
Remember, networking just means turning to the person next to you and shaking their hand, that's all. And taking their business card. Hope you brought a lot of business cards. And just getting to know each other, because the women in this room are just part of the support network that strong women have on Wall Street and in Main Street banks and in finance around the country. And we're really glad that you're here this week to celebrate with us and learn about how we can all ascend together. So I want to welcome up, for our first session this afternoon, three fabulous speakers. We have an honoree of the list, Teresa Heitsenrether of JPMorgan Chase. She is their AI czar. Christine Livingston of Protiviti, and American Banker's own Tech Editor, Penny Crosman.
Penny Crosman (02:59):
Welcome everybody. We are here to talk about a pretty hot topic on a lot of people's minds: generative AI, what to do, what not to do. I'm here with a great panel. I'm here with Teresa Heitsenrether, who, as Chana said, is Chief Data and Analytics Officer at JPMorgan Chase. She's their AI czar. She reports to Jamie Dimon, is on the operating committee, and she was formerly Head of Securities Services for the entire bank. So if we get a chance, we're going to ask about that transition from Wall Street to tech country, which is where I live. So welcome. And we have Christine Livingston, who is an AI expert from Protiviti. She has been following AI for more than a decade, since IBM launched Watson. She works with a lot of banks, healthcare companies, and people in other industries to help them do their best with AI. So let's start with you, Teresa. In August, I think, JPMorgan Chase announced, or didn't announce, but acknowledged, that you have rolled a generative AI portal out to about 65,000 employees. Can you tell us a little bit about that portal and how you launched it and what people are doing with it so far?
Teresa Heitsenrether (04:26):
Sure, absolutely. So hello everybody. It's great to be here. So when we think about AI at JPMorgan, we've been at this for a long time, and I think machine learning and traditional AI and analytics are a big part of what we've always done in fraud and risk management and marketing and various uses. But when generative AI came onto the scene with large language models, it kind of opened up a new aperture, because it democratizes in a lot of ways how you can actually give end users the tools themselves to work directly with. So that was kind of one of our theses going in: how can we safely make this available to people within the organization? So there was a lot of work that had to happen initially to make sure that we had the right infrastructure. We don't want our data training the models. But once we got that all set up, we set up this portal so that we could do a couple of things.
(05:21):
We wanted to first and foremost make sure we had the security around it. We wanted to make sure that we were not beholden to any single model provider. So this is kind of an abstraction layer of sorts, where you can swap different models in and out of the back. But we also saw a lot of the same use cases coming up. So we had tons of enthusiasm, hundreds of use cases, but when you boiled it down, it was: I want to work with data, I want to work with knowledge. It was various capabilities that we thought, if we could develop them once and make them available to the firm, it would just make it go faster. So we're at 195,000 people now with access to the tool. And basically, at the moment, think about it like ChatGPT out of the box, but with JPMorgan data; you can work with our data. So it just makes people more efficient. They're finding ways to do all kinds of things: looking at contracts, looking at rules, being able to write your first draft of an email. So just getting people's hands on the tools is sparking ideas and creating a lot of efficiency throughout the firm.
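To make the abstraction-layer idea concrete, here is a minimal sketch of a provider-agnostic LLM gateway of the kind Teresa describes, where different models can be swapped in and out behind one internal interface. The class names and the stand-in provider are hypothetical placeholders, not JPMorgan's actual implementation.

```python
# Minimal sketch of a provider-agnostic LLM gateway: callers hit one internal
# interface, and different model providers can be swapped in and out behind it.
# All names here are hypothetical; a real provider would wrap a vendor SDK.
from abc import ABC, abstractmethod


class ModelProvider(ABC):
    """One backing large language model (a vendor API or an internally hosted model)."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class EchoProvider(ModelProvider):
    """Stand-in provider so the sketch runs on its own."""

    def complete(self, prompt: str) -> str:
        return f"[model response to: {prompt[:40]}...]"


class LLMGateway:
    """Routes requests to whichever provider is configured, so internal users
    and applications code against one stable interface."""

    def __init__(self, providers: dict[str, ModelProvider], default: str):
        self.providers = providers
        self.default = default

    def complete(self, prompt: str, model: str | None = None) -> str:
        provider = self.providers[model or self.default]
        # In a real deployment, security controls, logging, and usage metering
        # would all hang off this single choke point.
        return provider.complete(prompt)


gateway = LLMGateway({"general": EchoProvider()}, default="general")
print(gateway.complete("Draft a first pass of an email summarizing this contract."))
```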
Penny Crosman (06:30):
Do you require people to use it or encourage it, or is it just an option?
Teresa Heitsenrether (06:34):
Well, it's funny, because initially when we started talking about it, there was this kind of reticence, like, well, we're only going to let a few people in the firm use it. And once we started to roll it out, it became almost this fight within the organization over who could get there first. Everybody wanted their teams on the tool. So we've rolled it out to 190,000 people. I would say we've got about 60,000 that are really active users. So we're not trying to force anybody to do anything. It's just, make it available. And it becomes one of those things where once you see what your colleague next to you is doing, it sparks that curiosity, and that's getting people to actually use the tool.
Penny Crosman (07:15):
And do you use it yourself?
Teresa Heitsenrether (07:17):
I do. I use it myself. I wasn't in the beginning, but I'm getting better at it now.
Penny Crosman (07:23):
So Christine, are you seeing this as a common practice? Are you seeing a lot of banks do this kind of rollout to everybody? Everybody can just try OpenAI's ChatGPT or Microsoft Copilot or another generative AI model for themselves and see what they can do?
Christine Livingston (07:40):
Yeah, I would say we're definitely seeing some level of experimentation with generative AI in particular. And I loved what Teresa said, and we chatted about this earlier. It's a great reminder that AI has been in banks for a very long time. You've probably all been using it in fraud detection and machine learning capabilities, but it certainly sparked a really renewed level of interest. I've actually talked to one bank that had 2,000 use cases identified for generative AI. Agreed, they're not actually all distinct when you boil down the patterns, but many, many organizations are looking at how do we create this capability in a safe, controlled, secure way and allow some of that experimentation to take place. And I've seen a variety of approaches to doing that from a technology perspective, but definitely, I would say, experimentation and some level of pilot or prototype in almost every bank today.
Penny Crosman (08:40):
Now there are some people who are saying, all these pilots are great, but show us the return, show investors the return. Where do you think we're going to see real benefits, bottom-line benefits, from this technology?
Teresa Heitsenrether (08:56):
Yeah, I mean, it's a great question. It's the question, right, of where is the commercial value? And there are people already who are a little bit disappointed, I think, in what they expected to be seeing at this point in time. We look at it almost as a three-horizon journey that's going to play out over the next several years. So the initial phase of just having this thing on your desk and making you an hour more effective a day, or five minutes more effective a day, it adds up, but it's very hard to quantify that because it's a small portion of a lot of people's time in a day. The next horizon that we're looking at, though, is when you actually start to really supplement the tool with JPMorgan knowledge bases. That's what we think is the differentiator. The models will ultimately, at least in my opinion, start to become somewhat similar in terms of capability, but it's the data that you can use with the model that actually is the distinguishing feature.
(09:56):
But in order to do that, there's work to do, right? I'll give an example. One of the places where we see a lot of opportunity is in our call centers. We are the bank for 80 million US households. And so we have a lot of people that take calls every day about credit cards or your car loan or a variety of things. If they can answer the question more quickly, if it's a better client experience, if they have access to the information across all of those products, that's a real savings for us. Every second on those calls is actual real bottom-line impact. But the way that you write policies or brochures or something for your products is not necessarily the way that humans converse. So it's a question of now adapting all of that knowledge in a way that it can be useful, and making sure that it's super accurate and it's up to date and all of that.
(10:44):
But we think that's the next level: when you really can use that data, you start to pick up a lot more productivity. And then the third horizon is really this idea of agentic workflows, or moving from the five-minute task to the five-hour task. The models are getting better at reasoning; they can do multiple steps. So if you can basically think of the tool as being this really capable analyst who can do a lot of work for you, as long as you explain to the analyst, these are the steps that you should take to get something done, that's where we think you're going to see a lot more productivity and really where the value starts to kick in. But that is a little ways off.
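As a rough illustration of the "capable analyst" framing, here is a toy sketch of an agentic workflow: a longer task is spelled out as explicit steps, and an orchestration loop feeds each step, plus the findings so far, to the model. The `call_model` function and the example steps are assumptions for illustration, not an actual JPMorgan workflow.

```python
# Toy sketch of an agentic workflow: a five-hour task is broken into explicit
# steps, and each step is sent to the model along with earlier findings.
# `call_model` is a placeholder for whatever model gateway you actually use.
def call_model(prompt: str) -> str:
    return f"[result of: {prompt[:60]}...]"


def run_workflow(task: str, steps: list[str]) -> list[str]:
    """Execute a multi-step task the way you'd brief an analyst:
    one instruction at a time, carrying earlier results forward."""
    findings: list[str] = []
    for step in steps:
        context = "\n".join(findings)
        prompt = f"Task: {task}\nFindings so far:\n{context}\nNext step: {step}"
        findings.append(call_model(prompt))
    return findings


results = run_workflow(
    task="Prepare a first-pass summary for a client review meeting",
    steps=[
        "Pull and summarize the last three quarterly statements.",
        "Flag anything that conflicts with current policy documents.",
        "Draft the summary for a human reviewer to edit and approve.",
    ],
)
for finding in results:
    print(finding)
```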
Penny Crosman (11:25):
How about you, Christine? Are you seeing anybody get returns on this technology already or do you have thoughts on where it will come from?
Christine Livingston (11:34):
Yeah, I think we're definitely seeing some early signs of an ROI. You have to pick the right use case. Everything is always about picking the right use case intelligently and understanding the value you expect to be delivered early on. And I agree, there's a couple of different ways we think about value. I would say one thing I've seen: I've been doing this for a very long time and talked with a lot of clients, a lot of banks, who experimented with this technology 10 years ago. I remember having a conversation with a very large bank probably 10 years ago, talking with them about using, in that scenario, Google's AI suite. And they were like, well, let's talk with Google about where their data center is. That was the level of maturity. It's like, well guys, Google's got data centers everywhere. But the experiments that they did a decade ago have helped to mature their thought process and their thinking and their capabilities, and they're now much better positioned to leverage this next advance.
(12:37):
So I see that there's that continuous learning element of value, and this is the next tool and the next capability that you need to understand as an organization, and need to understand how you're going to harness it. But I agree, there's a ton of opportunity on the process efficiency side of things. Many banks still have very manual workflows, a lot of paper, sometimes even checks. When you think about all of that manual processing, there are a lot of efficiency opportunities, which, similar to the call center, you can pretty easily quantify a return on investment when you look at what's the process that we're taking out and saving some time on and able to execute more efficiently.
Penny Crosman (13:18):
And Teresa, what about the cost of this? Because if you're rolling this out to 195,000 people, and a lot of these models have a subscription fee of around $30 per month per person, that can kind of add up. How do you address that?
Teresa Heitsenrether (13:34):
That is exactly one of the reasons why we built this portal. Because of the way that we've constructed this, what we're paying for is only the usage when you are hitting the model. And as we've seen, we had made a lot of projections about what we expected it to cost, and with every model that comes out, we're seeing almost a 50% reduction in the cost, because the models are becoming that much more efficient. So we are not going the subscription route, because I think what we're hearing, certainly from a lot of our competitors in the financial services space, is that if you do that at $30 per month per user times 190,000 people, with only a certain percentage of them using the tool, there's no way that you can convince business leaders: trust me, and you'll see it on the other side. You have to have a much lower rate. Our rate is nothing close to that, and it is totally driven by usage.
(14:33):
So if you're not using it, it's not costing anything. So that is how we've kind of won over hearts and minds, and why I think everybody in the bank is very welcoming of having it rolled out to everybody's desktops: because the businesses actually get to control the usage. And if somebody's running up a big bill, you can kind of look into that and make sure that there's real value being added to your business, or you can put some caps around that. But that's exactly why we took a little bit more time to build this application layer, so that we weren't in that kind of a situation, because I think, at least in the early days, that's going to be a very tough justification.
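A back-of-the-envelope illustration of why a usage-driven model with caps can be easier to justify than a flat per-seat subscription when only a fraction of users are active. The $30 per seat and the headcounts come from the conversation above; the per-call price and call volumes are assumed placeholders.

```python
# Rough cost comparison: flat per-seat subscription vs. usage-based pricing.
# Seat price and headcounts are from the discussion above; per-call cost and
# call volume are assumptions purely for illustration.
SEATS = 190_000
SUBSCRIPTION_PER_SEAT = 30          # dollars per user per month
ACTIVE_USERS = 60_000
CALLS_PER_ACTIVE_USER = 200         # assumed monthly calls per active user
COST_PER_CALL = 0.002               # assumed blended dollars per call

subscription_cost = SEATS * SUBSCRIPTION_PER_SEAT
usage_cost = ACTIVE_USERS * CALLS_PER_ACTIVE_USER * COST_PER_CALL
print(f"Flat subscription: ${subscription_cost:,.0f} per month")
print(f"Usage-based:       ${usage_cost:,.0f} per month")

# A per-business cap is then just a check against metered spend.
caps = {"consumer-ops": 25_000}     # hypothetical monthly budget per business
spend = {"consumer-ops": 26_400}    # hypothetical metered spend
for team, cap in caps.items():
    if spend[team] > cap:
        print(f"{team} is over its cap; review whether the usage is adding value.")
```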
Penny Crosman (15:15):
Christine, what could smaller banks do to keep their costs down as they proceed down this route?
Christine Livingston (15:23):
Those same principles apply to any organization. There's a traditional buy-versus-build decision to be made in technology. So I do see some organizations experimenting with some per-user license model scenarios, but the economics can often pay off when you look at it this way: you're not building the model, you're building the system around the model, and you're using the model for its intelligence. And just as Teresa said, then the value delivered is directly related to the cost spent, right? So if you're paying per API call, per use case, per user, your value delivered, again, is commensurate with your cost. So there are some interesting economic models out there; people are experimenting with all kinds of things. I think there's some wisdom in the philosophy of do what your core business is, do what you're good at doing, and outsource some of the other capabilities to those organizations that are poised to build AI and poised to build those tech capabilities.
Penny Crosman (16:27):
So I want to ask you both about the risks, because there are quite a few risks. There's a risk that a generative AI model will hallucinate and just basically make up stuff if there isn't the right data for it to draw on. There's the risk of it pulling up outdated information if you've got multiple policy documents that it's been trained on. There's a risk that it can simply make an error. And there's a risk of bias: if it ends up getting trained on, perhaps, mistakes that were made in the past, it could pick up bad habits. How do you both look at these risks when this technology is being used in such a heavily regulated industry? Teresa, how do you think about that?
Teresa Heitsenrether (17:12):
Yeah, no, that's an excellent point, and that's something that we spend a lot of time thinking about. So there are techniques that you can apply, exactly as you say: making sure that if you're asking a question, you limit the amount of data that it's looking at, or you give it an example of what a good answer looks like, and all of that. But I will say that at the moment, all of our use cases are internally facing and they all have a human in the loop at this stage of the game, for that reason, because there are these things that we have to be very, very careful about. You may have seen that there's a famous case now involving Air Canada, where they employed the model to actually deal with customers directly and it made a mistake, and they were held liable for that mistake, because it's just a form of technology that you're using to service your clients, which kind of stands to reason. So we're taking a very cautious approach in that respect and being slow in the way that we do it, but that's how it's been. And there's been a lot of training that we've done to make sure that the people that are using the platform understand what the limitations are and that they need to be able to validate what they're looking at.
Penny Crosman (18:29):
And what are you seeing among other banks?
Christine Livingston (18:31):
Yeah, I would agree. We see a lot of the human-in-the-loop element, and really, as you're providing answers and contextualizing information, making sure that you're surfacing the source content for people to look at: here's where it came from, does that conceptually make sense to me as well? And certainly the other thing I always like to encourage banks in particular to do: thinking about risk and AI and ML is not a new concept. You all have model risk management teams, for the most part. You understand how to look at model performance, and really this is the next evolution, the next iteration of thinking about that. And one tendency I've seen is to think all use cases are the same. That's not necessarily true: if we're going to use AI maybe internally to help our employees find benefits, that's a much different risk profile than underwriting a loan or credit risk decisions or AML. So it's really important, I think, to lean on the concepts and the frameworks you've already built. You already understand how to look at and manage the risk of models and algorithms and AI, so start to think about this next frontier and what these models uniquely bring to the table. How do you distinguish? How do you mitigate and manage that risk, but use what you've already built?
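As a small sketch of the pattern both panelists describe, here is a stubbed-out question-answering flow that returns the source documents alongside the draft answer and flags it for human review before anything goes external. The retrieval and generation steps are placeholders, and all names are illustrative.

```python
# Sketch of "surface the source" with a human in the loop: answers carry the
# documents they were drawn from and are flagged for review rather than sent
# straight to a customer. Retrieval and generation are stubbed out.
from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    sources: list[str]        # shown to the user so they can check the answer
    needs_human_review: bool  # human in the loop before external use


def retrieve(question: str, knowledge_base: dict[str, str]) -> list[str]:
    """Naive keyword overlap standing in for a real retrieval step."""
    words = set(question.lower().split())
    return [doc for doc, text in knowledge_base.items()
            if words & set(text.lower().split())]


def answer_question(question: str, knowledge_base: dict[str, str]) -> Answer:
    sources = retrieve(question, knowledge_base)
    draft = f"[model draft grounded in {len(sources)} document(s)]"
    return Answer(text=draft, sources=sources, needs_human_review=True)


kb = {"fee-policy-2024.pdf": "monthly maintenance fee waived with direct deposit"}
ans = answer_question("When is the maintenance fee waived?", kb)
print(ans.text, "| sources:", ans.sources, "| review needed:", ans.needs_human_review)
```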
Teresa Heitsenrether (19:53):
And I can certainly validate that. When we were first starting to work with large language models, or think about them, the natural reaction was, this is different. And we started thinking about, well, what kind of a risk framework do we need to put in place? And we came exactly back to what you said: we have model risk governance, we have technology and cyber controls, we have data controls. The thing that shifts a little bit here is that really the biggest control is how is it being used? And to the point, gen AI is not going to be the tool of choice for everything. If you're making an end-consumer credit decision, you're probably not going to use a tool where you can't explain how it got to the answer. You're going to use a more traditional method. It's one tool in the box. So I think you really do have to think about what's the use, what's the application, and then decide what is the best and most appropriate tool. But that's the way we think about it.
Penny Crosman (20:49):
Well, given that, what are the top three use cases in both of your minds?
Teresa Heitsenrether (20:55):
I mean, for us, technology is way high on the list. And I think if you look at JPMorgan, in round numbers we're something in the neighborhood of 315,000 people. About half of those people are in technology or some form of operations or client-facing roles. So right there, you see the opportunity. If you can make those people more efficient, there's real value there. So technology seems to be a use case that's pervasive across industries, because if you think about it, coding is a language, and it's a language that's actually quite a bit more structured than even spoken language. So there's opportunity there, and we're seeing that, and not just in the way that we do software engineering, but just looking at how you run the place every day, like scanning your whole run-the-bank network. It's helpful for that. It's helpful in cyber. So there's lots of applications. The operations space is another area. So whether it's call centers or documents, how can we look at documents and save time there? But then we also are seeing use cases in what I'll call the knowledge management spaces. So legal and credit, any place where you're constantly looking at data and trying to draw insights. To the degree that you can get people those insights in a more efficient way, it frees up time for them to actually do the high-value-added work as opposed to a lot of research.
Christine Livingston (22:27):
Yeah, I would agree on the process side of things. I think the specific nuance of what that means in every organization is a little bit different, because you're going to have specific processes that take you longer based on how you've set them up and the technology available to you. And so really think through where those opportunities are where you're making decisions over and over and over again, where you've got some scale to that. Typically, you're going to look at things that have some type of unstructured documentation, which we know is all over banking. A couple of other interesting patterns we've seen: there's a lot of interest in what I've called conversational analytics, really thinking about how do I use generative AI to make all of the data and analytics that are available to my organization more accessible and understandable. So rather than having to understand how to navigate a report or a dashboard, or which one to go look at, what if I create a conversational capability that allows people to ask questions of my data? Similar to the point discussed earlier, it needs to provide the source of that information and those responses and the visualization.
(23:33):
But that's been a really popular use case that we're starting to see emerge to help, again, provide that level of insight into what's happening within your organization.
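A rough sketch of the conversational analytics idea: a plain-language question is turned into a query over structured data, and the response carries the query and source table so users can see where the numbers came from. The question-to-SQL step is hard-coded here so the example runs on its own; in practice a model would generate it, and the table and figures are hypothetical.

```python
# Sketch of conversational analytics: answer a natural-language question from
# structured data and return the query and source table alongside the numbers.
# The table, figures, and the hard-coded SQL are placeholders for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE branch_deposits (branch TEXT, month TEXT, deposits REAL)")
conn.executemany(
    "INSERT INTO branch_deposits VALUES (?, ?, ?)",
    [("Downtown", "2024-09", 1.2e6), ("Uptown", "2024-09", 0.8e6)],
)


def ask(question: str) -> dict:
    # Placeholder for a model-generated query; hard-coded so the sketch runs.
    sql = "SELECT branch, SUM(deposits) FROM branch_deposits GROUP BY branch"
    rows = conn.execute(sql).fetchall()
    return {"question": question,
            "answer": rows,
            "source": {"table": "branch_deposits", "query": sql}}


print(ask("Which branches brought in the most deposits last month?"))
```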
Penny Crosman (23:43):
Alright, so final question. What do both of you think we'll see in the next year? We've been seeing kind of an AI arms race, I would call it, where banks have been really trying to ramp up their capability and spend a lot and experiment a lot this year. What do you think we'll see in 2025?
Teresa Heitsenrether (24:04):
I think that there is a little bit of a shift at this juncture from the experimentation phase into really now narrowing down the higher-value opportunities. So I think you'll see a bit more of that. I think one of the things that escapes people sometimes, because they get very caught up in the technology, is that I think of this as kind of a transformation. Oftentimes what we're seeing is, let's just look at the workflow and does it make sense? How do you reimagine it? So the technology is one aspect of it, but we're starting to see the evolution of teams that are multifaceted, with people that really understand the domain, people that understand how to drive transformation and use technology. So I think you'll see more of that, and I think that will drive a little bit more traction around some really substantive work: a little bit less experimentation, a little bit more of the rubber hitting the road.
Christine Livingston (25:01):
Yeah, totally agree. I think you'll see the use cases, the winners, so to speak, rise to the top, and you'll see those use cases operationalized this year. Again, I think a lot of organizations are learning; even in the ones that don't make it all the way through to production, there are lessons learned along the way that are very valuable, sometimes hard to replace. Everyone's talking about, just as Teresa said, agentic AI. I think that's the next hot topic in this space. We will see how far that comes to fruition in the next year or so. But again, learning how these models work, even starting to think about chain of thought and how the next evolution of models is starting to reason and think and break down their responses, is an important step I think you can take today to start to think about where we're heading with agentic AI. Understanding that breakdown of a thought process is what's starting to pave the way into that next set of use cases. And I think you'll see some really interesting commercial applications of, again, the combination of an organization's data and knowledge of how to use that data combined with this technology. I think you'll probably see some really interesting products come to market in the next year or so as well.
Penny Crosman (26:13):
Alright, sounds good. Well, Teresa, Christine, thank you so much for joining me up here.
Christine Livingston (26:17):
Thank you.