Bank Transformation in the AI Age: Implementation Best Practices

Sponsored by
Today, every bank continues to make technology investments in order to transform. But what should you do in an AI-first world, and which AI should you choose? Can you solve the paradox of building a technology foundation today that powers the business of the future, without creating even more digital silos? And how do you do that at the pace of change?

The discussion will delve into strategies for future-proofing technology investments, focusing on how to extend existing systems to harness the potential of AI while maintaining resilience.  Join us to discuss how to evaluate platforms that deliver tangible value in a matter of months, considering factors such as scalability, flexibility, and integration across various business areas.

Transcription:

Penny Crosman (00:09):
Welcome everybody. We have a panel on AI best practices, which I know is an important topic to a lot of people. Should I briefly introduce you? So we have Kristin Streett, sorry, I don't have my sheet in front of me. Kristin Streett is with ServiceNow. She works with a lot of financial services clients on both customer and employee use of AI, et cetera. Greg Kanevski is also with ServiceNow and he heads the financial group.

Greg Kanevski (00:47):
I work in the banking practice. Let's put it that way.

Penny Crosman (00:48):
Okay.

Corey LeBlanc (00:49):
I heard the term father banking. Yeah,

Kristin Streett (00:51):
I do call him father banking. He knows a lot.

Penny Crosman (00:54):
Okay. And Corey LeBlanc is with Locality Bank, which is in Fort Lauderdale. Before this he was at Origin Bank, and he was also in the Air Force for six years. So nice resume there. So I think our first question is around AI governance, and obviously that's something everyone's trying to figure out. From each of your perspectives, what should AI governance look like in a financial institution? Do you want to start?

Kristin Streett (01:29):
Yeah, so one of the things: in advance of coming here, and if any of you haven't already seen American Banker's research paper on AI and generative AI, they put out a great paper a while back that I use and reference a lot in my customer meetings. It reflects the sentiment of lots of other banks, your peers, on how they feel about AI and generative AI and the approaches that they're taking. Most, I think it was roughly 60%, are taking an incremental approach or are still gathering information around those best practices. 34% are coming to sessions and trade shows, conferences like this, and learning more to take back to their organizations. But of all the things that people are doing as an incremental first foray into this space, the foundational one is thinking about a governance council or a governance structure around their program.

(02:25):
So foundationally, if you haven't done that yet, many of the banks we meet with are already in process on a governance model. In my opinion, I have a change management background, and for me this is a change management exercise. Who are the executives that touch this? Who needs to be governing the decisions around this? How are we communicating with our employees around AI and gen AI? Those types of things are all filtered through a governance council and are filtered into your organization. So from my perspective, those are components of a governance structure that you should be thinking about as well. But I know, Corey, you are facing this live and in the moment, so you might have a better perspective on the ways that you're approaching it.

Corey LeBlanc (03:10):
So for us, that committee is the same as our executive committee. We have all of our executives over all these different areas, and we bring that in as kind of a side to that weekly executive session that we would normally have. And we're talking about AI and all kinds of other things, but AI being the topic of choice most days because of all the feedback we're getting. But it's been really interesting for me, because when we start to talk to peer banks and other people in institutions, come to conferences and have conversations like this, the one thing that we're finding is that it's not the same as it used to be. Right? A lot of times when you had any sort of evolution or modernization happening in banking, we could kind of lean on the regulators to dictate and provide some guardrails on how to establish that governance.

(03:59):
The reality is they're not doing that anymore. They're looking at us to figure out how we're going to do it, how we're going to set the policies and procedures, how we're going to control the risk, and then they're going to come back and grade us on it and I don't think it's going to change. So what we're doing is having to take that proactive approach and almost create not just a committee internally, but a committee of our peers to have the conversation of here's how we're doing it, here's how we're looking at keeping ourselves secure, how are you doing it and what can we learn from each other?

Penny Crosman (04:30):
And can you just share what are the top risks that are sort of top of mind for you?

Corey LeBlanc (04:36):
Number one for me is that, first, it's the wild, wild west. There's a lot of AI, and particularly generative AI, that's available to our employees for free. They can just go out there and utilize it. And I don't know if you've gotten the same requests that I have, but if you're in any sort of management role, your employees are asking you for assistance. They're asking you to hire someone to help them do the work they don't want to do. But then we also have this game where we've got to balance that we can't just spend all the money on FTEs. So what you're doing is presenting this opportunity for your employees to start to use AI in a way that may not be ready. Where I'm most concerned is in PII, or information that we don't want to go out into the public. So when you start to use public-facing AI tools, we've got to be very, very cautious there. And then it's also understanding things like bias, to make sure that it's not impacting the human decisions we're making today until we know exactly what that decision is supposed to look like.

Greg Kanevski (05:36):
Yeah. What I would add is that the regulators appear poised next year to do a combined horizontal exam focused in three areas, to answer your risk question: one being hallucination, two being bias, and three being capital expenditure, in other words, the governance model around the capitalization of those expenses. A lot of what I'm seeing in the large regionals now is that they're starting to pull back and have a subcommittee from their investment group to make sure that they're looking forward. On bias, it's modeling around the bias and how they're testing for it. On hallucination, what are your hallucination rates against what you expect, and what are you doing to mitigate them? And then third is how are you governing the spend? What's getting prioritized over others? Obviously it's not unlimited dollars, and how are those dollars going out? That's what we expect to hear this October for next year's focused exams.

Penny Crosman (06:35):
So as people are thinking about a model that they want to deploy, what are some of the things that they should be thinking about as they make these decisions about which model, which vendor, where do I use it? What should play into that thought process?

Corey LeBlanc (06:55):
Yeah, I think it's the same thing that I hope we've learned through the evolution of digital banking and data and all these other conversations we've been having as they've become significant in our industry. And it's that we're mapping it back to our business. What is it that is important to your organization? How are you moving forward? How are you serving your customers? How are you serving your employees who serve your customers? And making sure that we're actually connecting the dots here to something that is significant specifically to our unique organizations. The problem that we've seen in most of those other scenarios, with digital banking and everything else, is this replication-over-innovation concept: I go to a conference, I hear this person up on stage talking about all the things that they've done successfully, so let me go replicate that in my organization and this is going to be amazing. And it doesn't really work, because it's not actually authentic to what it is that's going to drive that company forward. And so that's really significant for me, because it's going to be different for each organization.

Kristin Streett (07:59):
Go ahead.

Greg Kanevski (07:59):
That's right. To what Corey's saying, we talked about this in the prep ahead of time, and one person said to us two weeks ago, well, we've really gotten started with RPA. And I was like, RPA is not AI; RPA is automation. Folks are so pressured to move forward that they haven't started with a corporate point of view, a corporate policy, a corporate infrastructure of how to govern this, a corporate infrastructure of how to deal with PII. And as a result, the modeling ends up being the cart before the horse. Instead, establish how you deal with the data, how you deal with those priorities, how you establish modeling at the very highest level, and then force a governance process on it that will adhere to what your policy is and what your practice is. Without those two things to complement one another, that's where they're getting into audit findings and assurance testing issues. The auditors are coming in, and although the regulators are being kinder here than in other areas, audit isn't, and neither are the assurance teams. I'm sure you folks face that every day more than we do.

Kristin Streett (09:05):
No, I think this comes back again to the governance model: really stating and articulating quite emphatically what it is that you're going to be doing with the data that you're using, what use cases are important to you, et cetera, and defining that outright. I know that ServiceNow, for example, has been very explicit with its employees and our internal teams that we will be focused on domain-specific LLMs. We have the ability to go out and leverage public LLMs to bring information in, but we aren't really doing that; we're focused on helping our customers accelerate the data and the opportunities within their own environments and optimizing that. It tends to make for an accelerated process of use case analysis, and of assessing the risk against that, if it's contained or containerized in a way. There's the suggestion that it's an easier way to get a pilot forward, which is the other thing that we're seeing: a lot of banks are experimenting with AI or generative AI in a piloted way, in a very controlled way, so they can understand and learn and be able to apply it.

(10:21):
Go ahead.

Corey LeBlanc (10:22):
I just have a question for you about that. So they're trying to pilot, but do they understand the data sets that are required to truly pilot that AI scenario in order to get the results that they're looking for?

Kristin Streett (10:34):
It's like learning while you're driving, right? Right. Yeah. I think,

Corey LeBlanc (10:39):
Which skews it a bit.

Kristin Streett (10:41):
We tend to see that the use cases that are helpful, or that are quickly consumed, are text-to-code in an IT environment, which helps accelerate efficiencies, but also use cases that are reflected in your research, which are customer-facing: questions being answered whereby information is gathered and summarized, like a summarization of data that exists within your organization. Those tend to be the use cases they're looking at, and ways to help employees with that efficiency. Greg, what else are you seeing?

Greg Kanevski (11:14):
Yeah, when they go to the next step is when they run into problems. They haven't figured out where all the data sits, how they're going to use it, and then what to do with the results from it. They get to the "so what" phase: what am I going to do now? One particular institution talked to us about, on the client side, the commercial side: I want to generate applications that are specific to the institutional client that we have, so we don't ask for the same data, we don't have that poor experience there. The abandonment rate in commercial banking is far too high for new services, new products. So they want to customize that. Okay, that's great. What are you going to do with it once you get it? What are you going to do with it at that point? How are you going to route that to underwriting?

(11:52):
How are you going to route that afterwards for a risk review and legal? They want to take an existing poor process with perverted data sets, put AI on top of it, and expect it to solve it. And that doesn't always work. It works for some of the most basic elements: I'm a phone rep, I want to know what's going on with Greg Kanevski's account, and I want it presented to me right away. That's great. There is no "so what" after; it allows me to provide a better customer experience. But if I want to take that data set and now do some sort of action with it afterwards, that's when folks are getting themselves into a little bit more of an issue.

Penny Crosman (12:30):
It seems like we're seeing some banks take that approach of let's pilot, let's experiment with a lot of stuff and see what happens. And then we have people like ANU at PNC, who said this morning he's not going to do that. He's going to figure out where they can derive the most value, where they can really get some kind of tangible result, before investing the resources. As he said, there aren't always unlimited resources. Is either way better, or do you guys have any opinions about these philosophies? And a related topic: getting senior executives to buy in to whichever approach you want to take.

Corey LeBlanc (13:11):
Well, the question I would ask him, though, is how are you going into this thing you've defined as having the most potential for value without doing a lot of pilots around it? Because we don't know what we don't know until we get into it. We believe we know a lot about our data and a lot about our customers, but most of the time when we get in there and see the actual aggregated amounts of data from a high level, we realize we were mostly wrong in a lot of instances. And so I find it interesting the way people look at it. You mentioned code development. I've had so many different debates with developers: some are saying, hey, this is actually maximizing my resources, and then I have some saying it's making my resources lazy and it takes me more time to clean up the code than if I would've just written it, there's so much junk in it. And so it's this misunderstanding of how we want to utilize it. And I think that's where it comes down to the committees, and that's why having that committee assigned, not just to the regulatory side and all that other stuff, but to exploring the opportunities in the business we want to really focus on, and then figuring out who all needs to be involved to have that conversation.

Kristin Streett (14:17):
I think one of the things that we've observed, and I've been to some other conferences where this has been brought up: as impact to a customer rises, or as you put it, as human consequences rise, trust in generative AI goes down, because there are implications for those customers, and potentially very serious consequences, from your organization's or governance structure's perspective. And so these pilots are also confidence building. It's exposure, it's learning in a safe environment, and it's really around your risk appetite. What use cases are you comfortable suggesting within your organization and trying to go live with? There are many partners with great technologies available to help you with those structures and outcomes. And I think ServiceNow would take the stance that what's most important here is to focus on what outcomes you are trying to achieve. Are you focused on efficiency? Are you looking for something that's going to help you burn through a bunch of backlog knowledge articles that are sitting, waiting to be developed by somebody? I mean, those are very simple use cases that could provide a lot of value immediately out of the gate. So I think it comes down to really thinking about what we think the outcome could be with AI or generative AI and then moving from there. So that's my perspective. I don't know.

Greg Kanevski (15:55):
Yeah, Penny, I just want to make one comment on your question about how to get the CEO to buy off on the investment or buy off on the priority. I look at it the other way around. In the best-case scenarios we've seen, where folks are really making a step forward, the executive committee is leading it with the CEO at the helm. It's being brought up to the board, whether it's the risk committee or the audit committee or the full board, every time they meet. It is led, and should be led, from the top, with the three underlying layers, whether it's operational risk or the risk committee, the investment committee, and then IT, being the three main governing factors bringing it up to the board. And if the CEO and C-suite are not behind it, you can't do it. We were talking about operations and IT.

(16:45):
You can't do that separately from HR, right? There's a human impact to this, and it needs to be communicated. The banks that at least I deal with, and I'm sure that Kristin deals with, that have made the best impact have a full executive committee behind it. It's a rolling agenda topic for the board. It's also, obviously, minuted and shared with the regulators so they understand it, but it's led by them as a committee. And then each of the groups prioritizes what they're doing under the veil of what's our governance structure, what are our priorities. And then they pilot it. Those pilots then facilitate which ones they actually invest in and go forward with.

Kristin Streett (17:23):
Corey, we were talking earlier, sorry, I don't mean to take over as moderator here, but when we were talking earlier, you said a lot of folks sort of get these ideas from conferences and then go back to their own environments, like, I'm going to do this, but it's really more about what you need internally. Do you want to share a little bit more of your perspective on that?

Corey LeBlanc (17:40):
Well, yeah. I mean, just look at anything that we've ever tried to do from the IT or technology side where we felt like we were connected to the business and we weren't, or maybe we were, but we were also duplicating efforts in an entirely different way. So if your CEO or your executive staff is talking about staffing or hiring or changes of process and procedures, but it's not connected to this AI initiative that you're trying to spearhead, you're going to be disconnected from the result, which is a big, big problem. And that's what I'm seeing a lot: the IT department or the operations teams or the people really playing with some of this stuff don't want to bring it to the executive team until it's ready to go. They're not willing to do those little test cases, throw a bunch of crap at the wall and see what actually sticks. So it has to start from that executive, because that CEO needs to understand that the plan for your SBA department needs to map back to this theory that AI may be able to create an efficiency that you wouldn't otherwise have. And so if you want to have the result, you've got to start with the foundation, and that starts very human. And that's going to be led by your executive leadership.

Penny Crosman (18:48):
I think Satish Krishnan said this morning that the risk of not moving forward with generative AI is greater than any of the risks of deploying it. What do you guys think? I mean, I'm sure you guys would agree with that, but Corey, what did you think of that?

Corey LeBlanc (19:06):
Well, first of all, how many, just show of hands, how many of you are with a financial services institution, a banker or credit union? Just real quick, raise your hands. Alright, now keep your hand up if you're not using AI at all. Are you sure?

Penny Crosman (19:20):
Yes,

Kristin Streett (19:21):
Sure.

Corey LeBlanc (19:22):
Right. So that's what they say, no.

Penny Crosman (19:23):
Yeah,

Audience Member (19:24):
A lot of things may be AI.

Penny Crosman (19:31):
AI is embedded in a lot of things,

Corey LeBlanc (19:34):
Right? My point is, I think the reality is that you're not going to see a lot of hands still raised saying we're not using it at all. And when you ask the secondary question, are you sure, a lot more hands are going to drop. And so yeah, the risk of not doing anything with AI is much more significant than the risk of actually doing something with AI, because either you're going to have control over it or you're not, one or the other. There are no two ways around it. So it's a reality. There's never been anything that has come up through the digital age that has not been dependent on data. And data continues to evolve, and it's got to be something we control. And so yeah, you could talk about risk all day long, but

Greg Kanevski (20:13):
Yeah. So did you want to go next?

Kristin Streett (20:15):
No you please.

Greg Kanevski (20:15):
So what I would say is we're on the precipice of probably the biggest advancement since the internet with AI. And I think by any unit of measure, any credible source out there would say the same. The amount of data collected in the history of humanity up through the year 2000 now gets replicated every 18 months.

(20:41):
We now have the computing power to utilize that data, and the intelligence within that computing power to actually start making decisions. That's why I think you saw some of the money center banks really get into this and build huge departments, to the point where they're starting to talk about when the singularity moment is. Is that a year away? Two years? Three years? And how do we control ourselves when we get to that singularity moment with the machines' processing? I'm not talking about Terminator; I'm talking about the actual singularity moment of the machine processing the decisions we will let it make for us versus the ones we don't want it to process for us, and how do we control that? So it's not an if. But banking as an industry is conservative; most folks are fine being fast followers. That's great. There's nothing wrong with that. The only issue is standing still. You don't want to be caught standing still, especially with something of this magnitude of investment.

Penny Crosman (21:40):
Great. And I think we talked a little bit about choosing AI models, but how do you look at an AI model and know that it's going to scale properly for what you want to do? I mean, a pilot is one thing, but being able to use it for all your employees or all your customers is another. Are there benchmarks that you use or anything of that nature?

Greg Kanevski (22:07):
I don't know. I don't know that it's so much about the model's ability to scale as it is about the investment-to-benefit scale long term, against what you're going to pay to get it. The top four use cases that get presented to me, and it's a little bit different, I think, from what we've charted through American Banker: number one is the commercial banking onboarding application. Number two is the RCSA process for risk; that risk and control self-assessment is a painful process for the first line of defense, a 35% tax on banks to do risk and compliance, so how do I find a better way to do it? Three is not so much if I get the code out, but when I get the code out, what's my number of incidents on the code, the faults in the code that I have to have remediated, against that person, against that application. And then fourth is incidents: of the incidents that occur, how do I get out there and figure out what happened? How do I generate enough information so I can keep that incident from recurring? Those are the top four, but getting those four to scale, to get the benefit back, takes a tremendous amount of investment.

Kristin Streett (23:26):
And even just asking questions, and Corey and I were talking about this: where are there processes or areas of the bank with a high cognitive load on certain employees, or resource constraints in an area that's significantly important to the goals or objectives you're looking to achieve? So for example, being able to use AI or generative AI in looking at policy changes, which is what we were talking about beforehand: staying up with and monitoring those, using a tool like this to look for changes, summarize them, and get them back in the hands of your employees quickly. For a small team that's trying to scale very, very fast, that is potentially a great use case, if you're comfortable with using AI in that way. So to me it's about where there are pools of employees that are resource-constrained in a really strategic and important area, that need some ability to take that cognitive load off, and where the governance committee is okay with that kind of flow. It sounds sort of all-encompassing and overwhelming, but that to me is where you would go. But you were mentioning, we were talking about this, I don't know what your thoughts are.

Corey LeBlanc (24:49):
No, it's spot on. But I would also say that we want to throw a lot of stuff at the wall. We want to be able to move fast and test things. But once we start to see some stuff stick, we really need to make sure that the things it's going to have to connect to will continue to talk to each other at scale. So one thing we have to look at, and in a lot of cases maybe V1 or V2 projects may not have this, but as we move forward, we're going to have AI models having to talk to other AI models. And so we need to understand that the decisions we make on which models we use can work with that. Because if you put that thing in a box, you're just going to be rebuilding this again later. Then we're back in the conversation we had with software and hardware way back in the day, where we're forklifting things every year or two, three years at best. So that's another thing to consider: look at it at scale, really try to make the best decisions. And that's where those committees come in really, really well, to make sure everyone's involved and understands it.

Penny Crosman (25:42):
Corey, can you just give an example or two of where your bank is either thinking about or starting to use generative AI?

Corey LeBlanc (25:49):
Yeah, so we're a de novo bank, which means we're still wearing a little bit of handcuffs right now from the regulators until we get outside of that process. And so we're taking a very conservative approach on what we're putting into production, but they know we're playing. So a couple of things that we do: our marketing team has two people. One of 'em is my daughter, who's 24, and the other is a guy I got from a college. He always wanted to be a marketing director, and he had never been in banking, but I thought he'd be great, and he has been. So we're using it to create content and images and things like that at speed. And then we're putting our voice and our feel and our touches on it to make sure that it represents us in a good manner. Other things we're doing today: we're using it for policy review.

(26:34):
So one of the things is, drop your GLBA policy into an AI tool and ask it to look at any modifications or changes to rules or laws over the past 365 days, note those for you, and then ask it to make recommendations to strengthen your policy. And it'll spit that out in about a minute, which would take me hours to do. And we have 37 employees, me included, at our organization today. If you think about how many policies we'd have to review, and how much time that would take, I wouldn't be doing anything productive for a good portion of my year. And so we're doing things like that. We're also looking at internal inquiries, so natural language query, where our employees can go find things faster, so they're not having to call Greg to find out where our ACH documentation is and things like that. Awesome. That's a good starting point for us. And then the lending side, onboarding and things like that, are to come.
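The policy-review workflow Corey describes, submitting a policy document to a model with a request to flag rule changes over the past 365 days and suggest strengthening edits, reduces to prompt construction plus one model call. A minimal sketch, where `send_to_llm` is a hypothetical placeholder for whatever model endpoint a bank's governance committee has approved, not any specific vendor's API:

```python
# Sketch of the policy-review workflow described above.
# The model call itself is abstracted behind a callable; nothing here
# names a real vendor API.

def build_policy_review_prompt(policy_text: str, lookback_days: int = 365) -> str:
    """Assemble the review request: flag rule/law changes over the
    lookback window and recommend edits to strengthen the policy."""
    return (
        f"Review the bank policy below. Identify any modifications or "
        f"changes to applicable rules or laws over the past {lookback_days} "
        f"days, note each one, and recommend changes to strengthen the "
        f"policy.\n\n--- POLICY ---\n{policy_text}"
    )

def review_policy(policy_text: str, send_to_llm) -> str:
    """`send_to_llm` is any callable taking a prompt string and
    returning the model's text response, e.g. a thin wrapper around
    an approved, non-public LLM endpoint."""
    return send_to_llm(build_policy_review_prompt(policy_text))
```

Per the PII caution raised earlier in the panel, the point of isolating the model behind a callable is that policy text should only ever flow to an endpoint the governance committee has cleared, never a public-facing tool.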

Penny Crosman (27:26):
Thank you. And for Kristin and Greg, what are some of the most popular use cases among your clients?

Greg Kanevski (27:34):
Well, I think Greg,

(27:35):
I think I've given my four.

Kristin Streett (27:37):
No, I mean I think I see the same use cases as Greg. I think we've been specializing, our capability has been around case summarization and recommendations for employees that are in customer facing environments where they have to quickly weed through a lot of data between multiple systems to be able to get to an answer quickly for a customer. That's where we've been investing our technology whereby we're helping those employees get to the answer quicker and get to the solution quicker for their customers. So that's the use case that I think is where we've been investing. The same applies for an employee. If an employee is looking for information about onboarding or is looking for some sort of article on how to manage expense reports, et cetera, whatever it is that the employee is needing help with and they're going to an internal agent to get that solved, we're investing in helping those internal facing agents help those employees get to the answer quicker.

(28:37):
And I would just say, even in my own personal use case, we cover a lot of ground for banking, and I had an internal teammate ask me for my opinion on Huda and where we as a company can help a specific bank with that. And admittedly, I just decided I was going to go to ChatGPT, right? I was just going to look at it quickly. I was exhausted, I'd had a long day, and I just quickly pulled it up, and I couldn't believe how helpful it was in quickly summarizing everything and reminding me of a few key points, and then I was ready to go with my answers. So I think that's an example where employees just have a lot to do in a day. We're doing more with less. And the tools that can help them should be available to them to help you advance the business. And so within your risk appetite, if you're okay with using ChatGPT, I think that's a great use case for getting comfortable with the technology.

Penny Crosman (29:39):
Yeah, that makes a lot of sense. And maybe it's becoming the new thing: instead of Googling something, you ChatGPT it. Absolutely. Alright, well, I think we're just about out of time, so thank you so much, Kristin, Greg and Corey. Thanks, Penny. Great group. Thank you for having us here. Thanks.