Bank Transformation in the AI Age: Implementation Best Practices

Today, every bank continues to make technology investments in order to transform. But what should you do in an AI-first world, and which AI should you choose? Can you solve the paradox of building a technology foundation today that powers the business of the future, without creating even more digital silos? And how do you do that at the pace of change?

The discussion will delve into strategies for future-proofing technology investments, focusing on how to extend existing systems to harness the potential of AI while maintaining resilience. Join us to discuss how to evaluate platforms that deliver tangible value in a matter of months, considering factors such as scalability, flexibility, and integration across various business areas.

Transcription:

Penny Crosman (00:11):
All right. Welcome, everyone, to this panel. We have an excellent set of experts here. We have Kristin Streett, who heads up the Americas banking practice for ServiceNow, so she works with all of the banking clients in the US, Canada, and Latin America. And then Greg Kanevski is Global Head of Banking for ServiceNow, if I got that right. And Grant Karsas is with Travis Credit Union, in Washington, D.C. So

Grant Karsas  (00:45):
I live in Washington, D.C. I do the commute to Travis in Northern California, though, where our credit union is. So if somebody can beat the cross-country commute, let me know.

Penny Crosman (00:54):
I'm sorry.

Grant Karsas  (00:55):
No, no, no, no. That's all good.

Penny Crosman (00:56):
That's impressive. Well, maybe we can start with you, Grant. When you think of generative AI and some of the things we've been talking about the last couple of days, where are you at, and what are you thinking about doing, or what are you doing today?

Grant Karsas  (01:12):
Sure, absolutely. Is anyone overwhelmed by AI yet? Because these sessions have been fantastic over the last day, and truly, from this conference, I can see and feel how overwhelming AI can be. And that's what we're experiencing at Travis right now. We're trying to dip our toe into it. So two areas that we really want to focus on from the get-go are: what are some of the problems we're trying to solve internally that can be solved with AI, and where is the knowledge gap that we need to bridge with our executive and senior leadership teams? We talk about words like hallucinations in AI. I guarantee you at least three-fourths of our leadership team has never heard that, and they don't know what it means. So we're really trying to be thoughtful and take the time to educate our leadership. I call these my road shows: what is artificial intelligence, from the baseline? What is it in the financial spectrum of how we could possibly leverage it? So we're trying to go about it on the knowledge side with our internal team members, and then dip our toe into a couple of areas, first internal-facing, around fraud, and some back-end work on the lending side.

Penny Crosman (02:31):
So just to elaborate a little bit, how are you using generative AI in fraud?

Grant Karsas  (02:36):
So this is one we're not live with yet, but what we've seen over the last few months has really accelerated something we started into last year: a lot of account takeover fraud. We launched a brand new online banking platform last fall, and it went over very well, a great rollout and implementation. But in Northern California there are some very, very tricky fraud rings going around right now, not just hitting us but our competitors as well, which we're all talking about. And so we need a better front end to help prevent the account takeover and some of the transfer issues that have been happening with fraud. So we found a partner that we started speaking with, and their solution's model will learn for a number of days, about 90, and we can set up different risk levels, so that when it detects irregularities within online banking, for you as an individual in your online banking platform, it'll start shutting aspects down. It might start by presenting you with an MFA question out of nowhere where you have to validate two things, all the way to where it completely locks you out of your online banking account or stops transfers. So it goes systematically through this to help us prevent that. It's something that's completely transparent to our members, but it is us protecting them in the long run.
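The escalating response described here, stepping up friction as the risk level climbs, can be sketched roughly as follows. The score thresholds and action names are invented for illustration; they are not the actual configuration of any vendor's product:

```python
# Illustrative sketch of a risk-tiered fraud response: the model scores
# each online-banking event, and higher scores trigger progressively
# stronger interventions. All thresholds here are hypothetical.

def fraud_response(risk_score: float) -> str:
    """Map a 0-1 anomaly score to an escalating intervention."""
    if risk_score < 0.3:
        return "allow"            # normal behavior, no added friction
    if risk_score < 0.6:
        return "mfa_challenge"    # present an out-of-band MFA question
    if risk_score < 0.85:
        return "block_transfers"  # disable money movement only
    return "lock_account"         # full lockout pending review

# Example: a suspicious mid-session change scores high
print(fraud_response(0.7))  # block_transfers
```

The point of the tiering is that low-confidence anomalies add friction without locking out legitimate members, while high-confidence ones stop money movement outright.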

Penny Crosman (03:56):
Yeah, that's really interesting. I think that fraud is a really hot topic for a lot of bankers right now. There's so much of it, and I had a Zelle customer reach out to me recently and tell me her saga trying to get her money back, and it's really a brewing issue because there's going to be another hearing in Congress, I think later this month or next.

Grant Karsas  (04:23):
I think that's right. Yeah.

Greg Kanevski (04:26):
What I would say to that is, post-pandemic consumer behavior change is real, and it's real because many people are more comfortable now buying online, the card-not-present type situations, than they've ever been before. Over 420 billion transactions have flooded from brick and mortar to digital just post-pandemic. So the CAGR on fraud is 20 to 25%, and it's expected to continue through the next three years. You've got a gentleman in the crowd here from Visa, and he and his business have seen just tremendous growth. And folks like yourselves and your teams are overwhelmed today. The proactive alerts being generated, let alone people coming into the branch or into any of your operation centers: they're struggling to keep up with those events, alerts, cases, whatever vernacular you want to use, and they're looking to tie those together. And it is one of the top four areas we're hearing about in general, not necessarily related to AI, also AI, but in general it's one of the biggest areas for us.

Kristin Streett (05:40):
But to that sense of being overwhelmed: one of the ways our company is working and thinking through generative AI is case summarization, as a potential use case for consideration, when there's so much data coming at someone. What data is coming in from your fraud systems? What's coming in from your agent team or your virtual bot? That's a lot of data to sift through, and an extremely sensitive moment that matters for that customer. And it's an opportunity to help those agents lessen that cognitive load and give them a boost of confidence that the data and information is available to them in a way that feels comprehensive, but simplified so they can digest it and quickly help a customer. So just tying that through for you, I think fraud is a use case that we've been building towards and thinking about as well.

Penny Crosman (06:43):
So it might produce a bullet point list of these are the most important things out of all of this

Kristin Streett (06:49):
Data, multiple things. I think being able to say, this customer has called in, by the way, six or seven times on a similar topic, or this is the first time they've ever called in. This is perceived fraud. And then even serving up to the agent, here's our policy on fraud, this is how we manage it, as a resource for that agent who might be early career or brand new, or is trying to do the right thing, legally and responsibly, in that moment. It's just additional support, that AI can go grab those relevant use cases or knowledge articles that help them step by step in that moment. Those are the ways that I find AI helpful. It's like, I pride myself on being able to handle these mature moments where I've got to show my expertise, but it's really nice to be able to lean on something that's doing a lot of work on my behalf,

Greg Kanevski (07:47):
Especially when every moment matters, right? Every moment you're not able to shut that card down, or find the other card that might be compromised in a batch, costs you money. So it's the experience, and it's the hard-dollar savings,

Grant Karsas  (08:01):
The summarization is very important. I think that's something we actually have not taken a look at yet for AI, and need to. I don't know about all of you that are in FIs, but our contact center alone has between five to seven different applications that they have to touch in order to service a member. They're already annoyed with how much follow-up they have to do after the call. And so we're actually looking at how much it saves us if we save that call rep 30 seconds at the end of the call. It's pretty substantial for a firm of our size. So if we could roll out a call summarization tool that auto-generates that for them, the payback would probably be great for us. It's a huge takeaway that I'm taking back with me.
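That back-of-the-envelope payback case is easy to reproduce. The call volume, wage, and working days below are hypothetical placeholders, not Travis Credit Union's figures; only the 30 seconds per call comes from the discussion:

```python
# Rough payback math for auto-generated call summaries.
# All inputs are assumed; substitute your own contact-center numbers.

calls_per_day = 1_000          # assumed daily call volume
seconds_saved_per_call = 30    # wrap-up time removed per call (from the panel)
loaded_hourly_cost = 35.0      # assumed fully loaded agent cost, $/hour
working_days_per_year = 250

hours_saved_per_year = (
    calls_per_day * seconds_saved_per_call / 3600 * working_days_per_year
)
annual_savings = hours_saved_per_year * loaded_hourly_cost

print(f"{hours_saved_per_year:,.0f} agent-hours/year, about ${annual_savings:,.0f}")
```

With these assumed inputs, 30 seconds per call comes to roughly 2,083 agent-hours, or about $73,000 a year, which is why even a modest per-call saving can justify the tool.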

Kristin Streett (08:45):
We'll talk to you about it after.

Grant Karsas  (08:47):
I know. Yeah, right.

Penny Crosman (08:49):
Yeah. I think a lot of banks here are starting to do that, or they're thinking about it. So a lot of people are thinking about AI governance. What are some things that should go into AI governance? Things to think about, guardrails to put in place?

Kristin Streett (09:06):
I can start, and then we can go around. We shared some thoughts on this earlier. This new frontier called AI and generative AI can be very overwhelming, but it helps to put things very simply down on paper. Who's governing this structure? Which executives are accountable? Is our CISO accountable? Is our product team accountable? Who's accountable for authorizing what can be prioritized or used internally? And then who needs to approve it? Who needs to touch it? Who needs to model it? Who needs to be exposed to it? Those are just lists of people that need to be involved. And that's a very simple matter of asking: who needs to govern this? Who is the governing body? Who is the executive sponsor? What use cases are appropriate? How is generative AI using our customer data, or not using our customer data? They're very simple change-management questions that you can ask and think through as you're formulating a governance committee. Most of the banks we speak to are already in process: either they've got a governance structure or they're putting one together. And second, they're experimenting with pilots, small use cases, Grant, to your point, where you feel comfortable in your environment that you can look at some different use cases and test them out. Greg, what else have you seen in governance?

Greg Kanevski (10:37):
What I was going to add is that we had a session like this yesterday, and afterwards a few folks walked up and chatted with us. This one person came up, and I felt badly for her. She's in the modeling department for risk. She said, the regulators are pushing me, the auditors are pushing me, I've got this governance council. And I kept saying to her, what does your policy state? What does your governance policy state? What is the framework that your company has established that says, here are the guardrails, here's how we're going to do this, and here's what your responsibility is within that governance? She said, well, we don't have that. I said, well, how do you know what success looks like if you don't have a policy that sets up what you're going to do?

(11:23):
How are you going to do it, and when are you going to do it? If you don't have that framework, how are you going to tell the auditors or your assurance team or the regulators, here's what we've done and here's why, and here's our risk tolerance within it, whether it's hallucinations, or here's where we're testing against bias? And she says, I guess we need to put a policy together. I said, absolutely, because otherwise you're hung out to dry. And I hear a lot of folks where they're getting pushed really hard: the board wants to know what we're going to do for AI, we're trying to generate some opportunities, we push it back up, we've got a couple of committees, but we don't know how to get started and we don't know what success looks like. I think that's really where we're seeing folks on the maturity curve. Obviously the big banks, the money centers that is, have been doing it for a couple of years now, but otherwise, if they don't put something down on paper, it's doomed to fail.

Grant Karsas  (12:15):
Yeah, I couldn't agree more. I think data governance in itself is hard. If you're a financial institution that's lucky enough to have a data analytics team, or even a data science team, what is that team defining as its data strategy and data governance? These are questions you should be asking them. And if you don't have a data team, who is in charge of setting that? We actually just started rolling out a more substantial and broader data governance team internally, and we're trying to weave AI into that so we can have a set process, because it takes time. We're also trying to look at, and I love the vendors we work with, what are the areas we can handle internally versus where we should lean on our vendors for advice. In the credit union space, we're known as mostly being behind the banks in technology. We're trying to change a lot of that at Travis and are accelerating quite a bit, trying to be ahead. But I can't tell you of a credit union right now that I've talked to that says, oh yeah, we're so far ahead in AI. So what are the simple steps we need to take, especially on the governance side, to put us in the right place so we can start on a better foot?

Penny Crosman (13:40):
And what should people be thinking about as they consider where to use generative AI versus where a more traditional form of AI, like machine learning, might work better? How do you look at that?

Grant Karsas  (13:56):
I still try not to get too deep into the AI everything, AI everything of it. And I love the two use cases that continue to come up, which are call summarization and strengthening internal policy. If there are areas internally where we have a use case, a problem that can possibly be solved by AI, that's where we need to start testing it. We are a ways away, and I don't believe I've heard a great example yet of how a company has broadly expanded AI to be member or customer facing in the financial space, because it's scary. If you get that wrong, you're going to make a lot of your customers or members unhappy. So internally, what are those areas? That's where we're going to start looking, from those two that I just mentioned. Obviously the fraud side, that is imperative for our members. They don't need to know that it's being managed by AI, but that's what we're trying to look at.

Kristin Streett (15:01):
Greg, I think you should share your story about RPA.

Greg Kanevski (15:06):
We were at another conference recently on the West Coast, and we had an AI session for them. They had three separate breakout sessions going at the same time, one of them being AI, which we were hosting, and 98% of the people were with us. That's how much people wanted to chat about it. And while we were talking, the question came from the crowd: well, we've been doing AI for a while through RPA, and we're going to get into generative AI now. And I said, well, RPA, that's not really AI, that's just automation. It's an unintelligent bot that's processing something. I said, you really need to sit back and think about what AI is, and make sure you have knowledgeable experts there, even if it's a small number of people, before you launch out. So that was: what is AI, first, to you as an individual institution?

(15:58):
What I will say is, I'm going to give the top four use cases where folks are coming to me and talking about this, anyway. One, especially on the commercial side, is applications, customized applications for the customer, so I don't have to ask for repetitive information that I already have. How do I do that? Two, our RCSA process, the risk and control self-assessment. That process is extremely cumbersome. Risk and compliance is about a 35% tax on banking. How do I find a way to have that data cumulatively generated, incidents, issues, access issues, all for that first-line-of-defense owner, so that the assurance teams can then go out and test it? Three, much like we talked about, is case summarization and call centers: how do I pull data together? And four is not so much text-to-code, in the example that I think Kristin will give, but when I'm coding and it generates an error, how do I test that error against other errors that have happened before, or that I have created before? And the second half of that is technology incidents that occur. I want generative AI to go out and pull other incidents and issues and put that together so that I can learn and update the playbooks, so the coding errors improve and the incident issues go down. That's really the top four that are coming to us at this point.

Kristin Streett (17:27):
The only reason I really wanted Greg to go through the RPA definition is that, in my opinion, with things changing so quickly, you should never assume that everyone knows and is on the same page. It's never a bad idea to ask really basic questions, and even just to define, this is what this means, especially with executives who are focused on what they're focused on and may not be tracking the nuances of definitions. But for AI, most of our customers have obviously been in AI for a very long time. They're looking at it, they use it in various ways. But a very, very simple use case where AI is being used quite expansively in our environment is inbox automation: servicing teams using email to serve a treasury customer or a mortgage customer or a branch customer, when the call center or the contact center is not able to serve that person.

(18:29):
So that email data, or the data that's coming into the call center, is literally emailed to a downstream team for them to work. But those teams are looking at AI to read those emails en masse, look for key terms and keywords to identify those customers or those particular requests, and auto-generate cases in a case management platform so they can be worked very quickly. So that's just AI intelligently reading email, helping a team discern specific types of problems and get them onto a workflow for resolution. And then applying generative AI to that: as the work is actually getting completed, being able to identify particular patterns and problems and auto-create knowledge articles that typically sit in a backlog, an inventory or a library, and have yet to be written. Nobody wants to be the one expert who writes the poorly written knowledge article. So those are two for me. I'll kick it back to you, Grant.
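A minimal sketch of that inbox-automation pattern: scan incoming mail for key terms and open a case on the right work queue. The keywords and queue names below are invented, and a production system would use a trained classifier and a real case-management API rather than substring matching:

```python
# Toy email triage: route a message to a work queue based on key terms,
# mimicking the "AI reads the inbox and auto-creates cases" pattern.
# Keywords and queue names are illustrative assumptions.

ROUTING_RULES = [
    (("fraud", "unauthorized", "dispute"), "fraud_ops"),
    (("wire", "treasury", "ach"), "treasury_service"),
    (("escrow", "payoff", "mortgage"), "mortgage_service"),
]

def triage(subject: str, body: str) -> dict:
    """Return a case record routed to the first matching queue."""
    text = f"{subject} {body}".lower()
    for keywords, queue in ROUTING_RULES:
        if any(k in text for k in keywords):
            return {"queue": queue, "summary": subject, "matched": True}
    return {"queue": "general_service", "summary": subject, "matched": False}

case = triage("Unauthorized charge on my card", "I did not make this purchase.")
print(case["queue"])  # fraud_ops
```

The value is less in the matching itself than in the handoff: every matched email becomes a structured case on a queue instead of an unread message in a shared inbox.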

Grant Karsas  (19:46):
No, I think that's super important, especially on the knowledge article side. When I'm thinking about readiness for us to deploy more AI in other areas at the credit union, it's data first. How are we in our data structure today? How organized are we? Everybody's data is a mess, it's siloed here and there, but do we have a strategy around it and know how we're going to approach it? Our second area is our content management. If something is going to inform internal data models based off of it, are we strong in that? Do we have enough? Is it updated? Are there articles that have been outdated for 10 years? We have to go through that kind of grind to set the stage first. And then the third is around process. Which part of the organization do we feel has a good change management process that can be that initial pilot for us to work with in these areas? We get those together, and we can actually set an example for the rest of the organization by really starting with one team.

Penny Crosman (20:54):
Kristin, what are some of the biggest mistakes that you see banks make when they try to do some of these things?

Kristin Streett (21:01):
Well, I don't know if this will be controversial. I was just having a conversation with our own industry CISO, who comes from banking and spent many, many years there. The mistake, in my opinion, might be trying to lock everything down and prevent people from trying to use the technology. And I was refreshed by his perspective, where he said, I would rather know what my employees are wanting to use and experiment with, so that I can test it, be aware of it, and monitor it, because I'm never going to be able to control the pace at which those technologies are evolving and coming into the environment. I'd rather my employees point me to where their access points are and what's helpful to them. Now, that may not work in a banking environment, I fully recognize, because you really need to be aware of exposing company data and employees to nefarious actors through something that might be downloaded into the environment from some app. So I understand that. But I found it refreshing that his perspective was, don't lock it down. I've been at sessions with him where he has shared that perspective. Greg, I don't know if you've heard him share that before, but

Greg Kanevski (22:23):
Oh yeah. The other thing I would add on top of it: of the institutions we deal with, if the C-suite is not aligned behind it, it doesn't work. IT is the enabler, of course, but the business has to be aligned to it, because this goes deeper into the process than ever before. So for it to work appropriately, the C-suite has to be aligned to it. HR has to be aligned to it, because there's a human impact. Compliance and risk have to be aligned to it. And if it doesn't establish itself from the C-suite, you can write all the policies you want, you can buy all the technology you want. It's not going to be implemented, not with the best effect to the business. It's not going to meet the risk and compliance standards, and then the employees are going to fight it. We've seen some deployments where the employees have fought it, because they see it as a threat to them, versus being communicated with.

(23:13):
Because if you don't communicate, you're still communicating. If you're not out there saying, this is an enabler to take that work away so that you can now do the stuff you've always wanted to do more of anyway, it's going to fail. If the business isn't driving to make sure that what's being generated, what's being rewritten and provided, whether it's playbooks, is available to the staff, it's not going to work. It's not going to have the intended benefit. So the C-suite alignment, that and a good governance framework with a policy, is to me pivotal beyond anything.

Kristin Streett (23:48):
What do you think of my controversial statement, Grant? Does that scare you as a banker?

Grant Karsas  (23:54):
No, no, no, not at all. I think you're hitting it right on the nose as far as getting C-suite alignment. Last year, when we did our online banking rollout, we established a brand new process to put this through, from research to evaluation to implementation. And then we put that into an entire enterprise model for how we evaluate vendors going forward. Has it caused some strain with new projects? Yes, but that's very deliberate, because people have to come to the table initially with their business case for wanting to do something. And as part of that process, when we're in our vendor evaluations, we bring a line-of-business lead to the meeting with the vendors we're evaluating so they can have their say. If we're doing a new online banking platform, I have the head of marketing there, I have our compliance, our legal, everybody, including the obvious ones like IT partners. They all get their say, and they're bought in to what we're doing from that point. Because, to your point, for the things we need from those teams for readiness and post-go-live, if they're not on board before launch, the chances they're going to be on board after launch are very slim.

Kristin Streett (25:12):
I would just responsibly go back to the governance structure, which is the reason we started with that in the beginning: who's governing this? Who needs to know what the use cases are? How is it getting to market inside the organization? What are the rules around it? How do employees use it? How do you make modifications to it? That question of how we run the business using generative AI and AI is paramount. I just really wanted to reiterate that, and that's what most folks are working on. They can't get their hands on enough information and ideas on how to build a good governance structure,

Greg Kanevski (25:43):
Especially when the regulators don't even have a policy to operate from. Next year, as most folks in here probably know, the regulators are going to do a horizontal exam on this area specifically, and they don't have legislation to guide them. They don't have a mandate. They're trying to figure it out in real time as well. So that's really important for the governance structure.

Penny Crosman (26:05):
Grant, do you have any sort of advice or best practices for communicating AI projects out to the rest of the organization?

Grant Karsas  (26:15):
So we have monthly town halls with our entire organization, and we have specific topics that we go through, some consistent. But I think what we're going to add to that topic list is AI, because it's becoming such a wide topic and everyone's asking questions. They're seeing us advance digitally as a credit union: we rolled out the online banking, we're about to roll out new digital account opening, and it's at the top of everyone's mind. So educate the organization as a whole, starting from the top. What's that road, I'm sorry, what is that deck, that knowledge deck, that you're going to use with senior leadership that they can take to their teams? And then what are you projecting out to the organization about AI, because it's on everyone's mind? How are you speaking to it? We're going to do that through our town halls right now as our initial next step.

Kristin Streett (27:12):
Can I also reference, we mentioned this in the other session we did, but American Banker put together a really nice paper on generative AI and AI in February. Did it publish in February? February or March? But it's very, very helpful, and it shares a lot of the banking sentiment across your peer groups. Depending on whether you're a credit union or a tier one or tier two bank, it segments the responses on how banks are looking at AI and generative AI: what's worrying them about those two topics, where they're okay with looking at use cases in various areas, and what they're looking to implement. I think it's really, really helpful. And one of the quotes in there that I have literally burned into my mind, and I use it quite a bit, is: trust in generative AI decreases as human consequences rise. And I think that most of the discomfort around these topics comes up immediately around, first of all, internal-facing, is my job safe? And then second, is my customer safe? And so

Grant Karsas  (28:26):
I couldn't agree more. I think it's a really, really exciting time right now between AI and the CFPB 1033 rule that's most likely going to get approved and go in October. Timing those two together could completely revolutionize banking for decades to come. So yeah, start thinking about those a lot.

Penny Crosman (28:51):
We'll have some more conferences on that. Someone actually just suggested to me today a conference on 1033. Would anybody go to that? All right, so we have one minute left. You guys have brought up some of the risks that happen: hallucinations, errors, irrelevant information, gen AI telling people to put glue in their pizza sauce. What are some of the ways of dealing with all of that risk and uncertainty? Because it is just predicting the next word, and the results can be uncertain.

Greg Kanevski (29:36):
There's risk in anything you do. There's going to be risk in this. There's going to be risk in flying home after the conference. There's risk in AI. What's important is: are you measuring it? What are your tolerances? How are you reporting against it? Hallucinations. Somebody said yesterday, well, our hallucination rate is 4%. I said, that's fantastic. She looked at me and she said, well, we want it to be zero. I said, well, how could you expect that? You don't have a zero error rate for humans today. So you have to measure it against what you're comparing to, and then decide what your tolerance is, and that's okay. What is harder, in my opinion, than hallucinations is bias. How do you measure it? What's the tolerance for it? It can be very subjective, especially in deep-rooted modeling issues, especially when it relates to customers and access to funding and money. That one, I think, is harder. That's the biggest risk issue here, in my opinion. Otherwise, help yourself by putting down what your tolerance is going to be.
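That "measure it against a stated tolerance, rather than demand zero" point reduces to a simple check once you have a labeled evaluation set. The labels and the 5% tolerance below are illustrative assumptions; a real program would document its tolerance in the governance policy discussed earlier:

```python
# Measure a hallucination rate over an evaluation set and compare it to a
# documented tolerance. The eval labels and tolerance value are hypothetical.

def hallucination_rate(judgments: list) -> float:
    """Fraction of evaluated responses judged to contain a hallucination."""
    return sum(judgments) / len(judgments)

TOLERANCE = 0.05  # e.g. a policy stating up to 5% is acceptable for this use case

# True = response hallucinated, False = grounded (labels from human review)
evals = [False] * 96 + [True] * 4   # 4% observed rate, as in the anecdote
rate = hallucination_rate(evals)

print(f"rate={rate:.1%}, within tolerance: {rate <= TOLERANCE}")
```

With these labels the observed rate is 4.0%, inside the assumed 5% tolerance, which is exactly the kind of statement you can report to auditors: a measured number against a documented threshold.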

Kristin Streett (30:43):
I would just say, my advice is: your employees are already using it. They know where they have challenges and where they need help. Let them harness their brainpower and help find solutions to solve your business problems, instead of trying to solve it in a tower. They're already using it to take the cognitive load off. I'm using it when I need it, every once in a while. And I think those are the use cases that get surfaced very easily, where you can discern whether there's a high or a low risk tolerance.

Grant Karsas  (31:10):
Yeah, I think the gentleman who did the keynote yesterday morning really said it best, where he was talking about his CEO saying, your job is to push innovation, my job is to evaluate the innovation you're telling me about and the risk. So from a technology lead's perspective, I say we want to continue to push the innovation, show the value we can get out of the various AI models we want to start with, and then work as a collective to establish the risk and where our risk tolerance is as an institution. But we've got to keep pushing the innovation side of it.

Penny Crosman (31:42):
All right, excellent. Well, we are out of time, but thank you, Kristin, Greg, Grant, and thank you all for joining us.

Kristin Streett (31:49):
Thank you.

Grant Karsas  (31:50):
Thank you.