AI in 2025: Lessons and Predictions for Bankers

Past event date: February 11, 2025, 1:00 p.m. ET / 10:00 a.m. PT. Available on-demand. 30 minutes.

Dr. Scott Zoldi, Chief Analytics Officer at FICO, sits down with American Banker's Daniel Wolfe to take a fresh look at the ever-changing AI landscape and what's most important to watch in the year ahead.

Transcription:

Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.

Daniel Wolfe (00:09):
Hello and welcome to Leaders. I'm Daniel Wolfe. I'm an editor with American Banker, and I'm here today with Scott Zoldi, Chief Analytics Officer at FICO. Scott, thank you for joining us today.

Scott Zoldi (00:19):
It's my great pleasure.

Daniel Wolfe (00:21):
So before we jump into our topic today, which is about AI and lessons and predictions for bankers, Scott, why don't you tell us a bit about who you are and what you do. You are also one of our innovators of the year for 2024, so congratulations again on that.

Scott Zoldi (00:34):
Thank you.

Daniel Wolfe (00:35):
But tell us a bit about your background.

Scott Zoldi (00:37):
Yeah, so I'm Chief Analytics Officer at FICO. I've been with the firm FICO for 25 years, and Chief Analytics Officer for almost 10. And my role at FICO is to look at AI and machine learning and how best to apply them to industries like financial services, responsibly, efficiently and in ways that can be operationalized. And prior to that I was a physicist at Los Alamos National Lab. I studied chaos theory, where I spent time trying to understand patterns in data, and that's what I still do today.

Daniel Wolfe (01:08):
It makes me feel very self-conscious about my meager English degree. But no, it's a pleasure to have you here.

Scott Zoldi (01:14):
Thank you.

Daniel Wolfe (01:14):
And so before we jump into our questions, which are all about your predictions, and just for anybody who hasn't been following Scott on social media, Scott kicked off the year with a series of predictions, and we're going to go through all of those. Since those came out, so did the elephant in the room, or the blue whale, as we should say: DeepSeek. And I'm wondering what your thoughts are on that. What should companies in finance be thinking about DeepSeek? What is it to begin with?

Scott Zoldi (01:43):
Yeah, so DeepSeek really hit a chord. And that chord was that it demonstrated, in a very, very public way, that organizations that are innovative and can leverage some of the new optimizations can, on relatively small budgets, put the power of developing their own language models in their own hands. And so I think for the industry, when DeepSeek came out with their results, it demonstrated very performant behavior, with similar sorts of performance from an evaluation perspective compared to other benchmarks that are out there. What I take away from it is that, for a much smaller budget and with a much more focused way of looking at how to build these language models, there's a lot of optionality where organizations can build these technologies themselves. And I think that's fundamentally really, really important, because that's how they have control of what goes into these models and how to responsibly develop them. And so I think it's actually a very positive thing for the industry. I don't see people running to go replace their models with DeepSeek, but I think it demonstrates that this concept of focusing down and using resources efficiently is getting us to a point where we can operationalize these technologies appropriately for the business problems that each industry wants to solve, and each bank, let's say, wants to address.

Daniel Wolfe (03:02):
Okay. And part of the concern, and actually I should mention that my colleague Penny Crosman, technology editor at American Banker, just this morning published an article on what bankers are thinking about DeepSeek. One of the concerns is of course about security. It comes out of China instead of the U.S., and there are concerns about what data it might be sending back. Where does that fit into your assessment of it?

Scott Zoldi (03:26):
So from my perspective, my assessment is more around the ability to build the core technology. What I would really love to see is most financial service organizations building their own small language models. And in that regard they would a hundred percent control what data flows in and out of their organizations, maybe none if they host it locally. And I think that's where we're getting to with these sorts of technologies: our ability to go down that path so they don't have to necessarily address security issues with data flowing in and out, because maybe they'll build it themselves and they'll host it themselves, and the data doesn't leave the premises, and they can go call these small language models for the problems that they want to solve. It might mean, Daniel, that they have five or 10 different focused models versus one great big large model, but it's going to remove a lot of those security concerns that banks have if they can manage to do that.

Daniel Wolfe (04:15):
So it is less of a product to consider and more of a proof of concept, a demonstration of what's possible.

Scott Zoldi (04:19):
Yeah, a very visible one. And I think one that everyone was pretty impressed with. In fact, many people called me up and said, is this even possible? And I said, yeah, it is possible. The same sorts of optimizations that they use, I use myself with these technologies within FICO.

Daniel Wolfe (04:33):
Cool. Alright, so let's jump into the predictions you have here. And your first prediction for the year ahead is that companies will figure out that all AI is not gen AI. So can you talk a bit about what that distinction is and how that distinction has blurred in recent years?

Scott Zoldi (04:52):
So we've gone through this time right now where many people view this as the golden age of AI. And it is in many ways; there are so many capabilities that we have to apply algorithms, bring data to these training processes and have the AI do really amazing things. And so our focus, many focuses in the industry, has been on generative AI. And if we're not using it, why are we not using it? How quickly can we use it? What is the right model? We're starting to pivot back to the fact that, yes, these are interesting technologies, but we're trying to run businesses and we're trying to solve problems. And so this kind of focuses us back: let's not just admire the technology, let's look at what is the right technology to solve the problem that we are looking at. The vast majority of AI problems are not generative AI problems at all.

(05:41):
And I think this is one of the things that we have to focus back on: getting back to where we were before generative AI kind of burst onto the scene and saying, well, wait a minute. We were using AI to better understand customers. We're using AI to fight fraud. We're using AI to make better credit risk decisions. We're using AI for a host of different things where it is more appropriate, where we don't need the power of generative AI to solve each and every problem. So I think this will be just us getting back to basics as an industry and saying, well, ultimately what we're trying to do is solve problems. And so what are those problems, and where are traditional AI and smaller models more appropriate than generative AI?

Daniel Wolfe (06:23):
Okay, so generative AI may be drawing a lot of attention, but it's really drawing attention to AI as a whole, and folks who are considering or refining their strategies should be looking beyond that.

Scott Zoldi (06:37):
Correct. I mean, we saw this when AI became very popular about a decade back. Analytics companies rebranded themselves as AI companies, and now AI companies rebrand themselves as generative AI companies. And these are all just capabilities. And so it's: pick the right tool for the problem we have to solve. And for many of the folks that are in the financial services area, it's a heavily regulated environment. And so there are other concerns where we say, well, it'd be nice to use this technology, but this one is more appropriate for how we want to build responsibly and make sure we can explain the technology.

Daniel Wolfe (07:09):
And before we move on, I just want to remind folks, you can submit questions while we're talking. I'll be looking at them as they come in, and you can interrupt the flow of the conversation. Totally okay to do that. But I'll move on to your next prediction right now, which is: operationalizing AI is a challenge, but it will get easier. So what do you mean by operational... excuse me, operationalization? In his blog post, he says it doesn't exactly roll off the tongue, and I think I just proved that. Operationalization. Explain what you meant there, please.

Scott Zoldi (07:44):
Yeah, so operationalization is about how you make sure that when we develop an AI or machine learning capability, you can seamlessly bring it into software, deploy it and start making business decisions on it. And this has been a struggle for many companies for a long time. They tend to have groups of scientists in one area, and they're building models without any cognizance of how the model will get deployed or what the constraints of that software environment are. And then it gets thrown over a big fence, and then the software engineering team has to go and figure out how we're going to deploy that. And usually what happens is the AI does not get deployed at all. And so there's this sort of mismatch where you have two groups of teams that don't really interact with each other. So operationalization is removing that fence-and-throwing-the-model-over-it approach and saying: actually, if our focus is on making a decision that impacts the customer and we have to do it within this software, well then what does that mean for us?

(08:43):
How do we change the way we work? And what does it mean? It means that you need to have a talented group of scientists who have knowledge of software engineering and ML ops. You need to choose the right algorithms, or invent the right algorithms, for the business case. So if you have to make a decision in 10 milliseconds, well, that's going to really restrict the sorts of ways you solve your problem. But it's allowing the data scientists to understand the deployment environment so they can make the right decisions from an algorithm perspective, and then from there have that really tight interaction with software. So they're an extension of the software engineering team, focused on things like latency and throughput, so that you can make the decisions at the right time and ensure that it's going to be reliable. And then finally, make sure that it's responsibly deployed, meaning that we're choosing algorithms that we can explain, that are interpretable, that are auditable, so that once you get all that software engineering done, it all meets the governance or regulatory environment that you're operating in. And that's a lot there. That's a whole ton of work, but it changes the way that we think about groups like analytic scientists or AI developers. It's all one team, and you can't develop a model without understanding exactly how it's going to be deployed.
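
To make the latency constraint concrete, here is a minimal sketch, an illustration of the general idea rather than FICO's actual stack, of how a modeling team might test a candidate model against a deployment latency budget before handing it to engineering. It assumes Python with scikit-learn; the features, model choice and 10 ms budget are all stand-ins.

```python
# Hypothetical sketch: checking a candidate model against a deployment
# latency budget before it ever gets "thrown over the fence".
import time
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

LATENCY_BUDGET_MS = 10.0  # e.g., a real-time transaction decision

# Toy training data standing in for transaction features.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = GradientBoostingClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)

# Measure single-record scoring latency the way the production
# service would call it: one transaction at a time.
sample = X[:1]
start = time.perf_counter()
for _ in range(1_000):
    model.predict_proba(sample)
elapsed_ms = (time.perf_counter() - start) / 1_000 * 1_000

print(f"mean latency: {elapsed_ms:.2f} ms per decision")
if elapsed_ms > LATENCY_BUDGET_MS:
    print("over budget: pick a smaller model or precompute features")
```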

Daniel Wolfe (09:53):
Okay. We actually did just get a ton of questions coming in. Anything else you wanted to say on that?

Scott Zoldi (09:58):
No, I'm good there.

Daniel Wolfe (10:00):
This one's kind of a zoom out question. What are the most popular use cases that you see for AI or generative AI in the banking industry?

Scott Zoldi (10:09):
So I think hands down the most popular is fraud detection. That's where we got our start at FICO. We're an analytics software company, as you know, and more than 30 years ago we deployed our first machine learning models in fraud. And so this concept of understanding customers and their transaction history, and then being able to make that decision of whether it's fraud or not fraud, is probably one of the major use cases. Interestingly enough, Daniel, I'm seeing that the same sort of transaction analytic understanding is moving into all kinds of other areas, like credit risk and customer journeys and what have you. And so one major use case is really this personalization. We use the term hyper-personalization very often, but understanding that customer, having a set of models or scores that would say these are the next best decisions, and knowing when are the right times to make those decisions, is really, really key. So I think customer management and fraud are really strong examples of where AI is used within financial services today.

Daniel Wolfe (11:13):
We had a follow-up question to that, which is about smaller community banks. And this is with regard also to the expense of implementing any new technology: how can they take advantage of artificial intelligence?

Scott Zoldi (11:25):
So one of the ways that we do this today is, for example, within our company we have something called a consortium model. And that consortium model is built once, and it's used across many banks. And so the best way to do that is to take advantage of what we call these standard AI models. These are models that generally perform out of the box, and you can license them according to the size of your business. And so you don't necessarily have to have your own team at that community bank. You can go and talk to a processor or partner in the ecosystem who will have that sort of capability that they could make available to you. Fortunately, it doesn't require that everyone build their own models. And in fact, there are things like what we have at FICO, an analytics platform where these models are available. And so you can use them right out of the box.

Daniel Wolfe (12:14):
Before we get to the next batch of questions, I wanted to get back to your predictions. One was that domain-specific gen AI use cases will flourish. And I was hoping you could explain what you mean by that and where you are seeing that happen.

Scott Zoldi (12:29):
Yeah, so we've gone through this phase where we talk about the biggest, largest models, and we've kind of exhausted all the data on the internet, and we have people creating synthetic data, and it's about bigger models and more and more data. That poses a real challenge, because with the size of the data sets, you don't really understand what the model's been trained on. You don't know if it's appropriate for a business case. And so these focused, domain-specific language models are simply saying: listen, I have a business case. I want to have one of these language models that focuses just on rule creation around fraud. Well, I don't need to train it on Mozart and Taylor Swift or whatever else. What I need to focus it on is just the corpus of data that's relevant for the problem that we're trying to solve.

(13:19):
And so it gets to this concept of responsible generative AI. And the first thing in responsible generative AI is to understand deeply the data you're going to train your model on. And the secret is, you don't need the entire universe of data. You need just that focus, because if you have that, then you have full control over what that model has learned and what it's been exposed to. The models are going to be smaller and cheaper, and then you can say with full confidence, this is exactly the data that I used to train this model. It's not a guessing game. It's important that if you're using that model, you can justify what data it was built on. If you don't have visibility into that data, then you're kind of guessing, or taking more risks than you necessarily need to.
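
As an illustration of that kind of data curation, here is a minimal sketch, with an invented keyword filter and invented file names, of trimming a raw corpus down to a domain-only training set so you can state exactly what a focused model was trained on. The relevance test is deliberately crude.

```python
# Hypothetical sketch: curating a domain-specific training corpus.
# Keep only documents relevant to the business problem (here, fraud
# rules) so you can state exactly what the model was trained on.
import json

DOMAIN_TERMS = {"chargeback", "card-not-present", "velocity", "rule",
                "transaction", "merchant", "dispute"}

def is_in_domain(doc: str, min_hits: int = 2) -> bool:
    """Crude relevance test: count domain vocabulary hits."""
    text = doc.lower()
    return sum(term in text for term in DOMAIN_TERMS) >= min_hits

with open("raw_corpus.jsonl") as src, open("fraud_corpus.jsonl", "w") as dst:
    kept = total = 0
    for line in src:
        record = json.loads(line)
        total += 1
        if is_in_domain(record["text"]):
            dst.write(line)
            kept += 1

# The kept/total ratio becomes part of the audit trail: you can say
# precisely which documents the focused model saw.
print(f"kept {kept} of {total} documents")
```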

Daniel Wolfe (13:59):
Okay. That does kind of echo my experience just working with the large language model products that are available to consumers. When I tried working with them and just gave them access to the entire internet to answer my question, I didn't like what I got. When I limited what they could see, I found it way more useful.

Scott Zoldi (14:21):
Absolutely right. And I think you brought up a great point. When you put the large language models in front of the consumer, they can ask a whole bunch of questions. But when we're in business together and we're solving a domain problem, we know exactly what it needs to do, right? There's no need for that ancillary information.

Daniel Wolfe (14:35):
Okay. We're getting a lot of questions, actually, but before I go back to those, I wanted to ask this, because it actually does help segue into a question about large language models versus small language models. You actually do have a prediction here: companies will roll their own small and focused language models. So where do you see the momentum happening there, and how big will that trend get?

Scott Zoldi (15:01):
Well, for example, we started off talking about DeepSeek; I think this is just a step in this direction. Most firms that I talk to today have development teams that are going in two different directions. One direction is to use the publicly available models and try to fine-tune and adjust, but many of the forward-looking ones are simply saying: no, we're going to build sets of these small language models that we can control in how they're developed, and use that for our responsible AI strategy. So they'll curtail the data down to what they need to focus on, and they'll develop specific models. And so they may have 12 models or 14 models that they'll maintain, but each of those models is specific to the tasks that they need to solve. And it's really getting down to focusing on what that business problem is.

(15:47):
So any organization that's out there today needs to understand that it's not hard to build these small language models; for example, we have interns that can do it in 30 days. And so these are things that are within your grasp. And the hardest part of it is to make the business decisions around what data you want to include. And that's why we call them focused language models: we're going to focus them just on that specific domain problem and then generate these models. And these models may be 1 billion parameters, 2 billion parameters, 8 billion parameters, but they don't have to be 370 billion parameters to solve a very specific set of tasks. And that will continue, I think, to these agentic capabilities. Same thing if you think about an agentic capability: it's a very, very focused agent that is going to be focused on just making a few decisions based on a certain amount of data. And so I think this is where the trend comes in, and it allows organizations to be in control of what data they use, so that they can justify how they did it, and then build those models so that they know exactly how they should respond based on the data they've been trained on.
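
For a sense of what "rolling your own" focused model can look like, here is a hedged sketch of fine-tuning a small open model on the curated domain corpus, assuming the Hugging Face transformers and datasets libraries. The base model name, data file and hyperparameters are illustrative, not a recommendation.

```python
# Hypothetical sketch: fine-tuning a small (~1B-parameter) open model on
# a focused domain corpus.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "EleutherAI/pythia-1b"  # stand-in for any small open model
tok = AutoTokenizer.from_pretrained(BASE)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

# The curated, domain-only corpus from the previous step.
data = load_dataset("json", data_files="fraud_corpus.jsonl")["train"]
data = data.map(lambda r: tok(r["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fraud-slm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```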

Daniel Wolfe (16:51):
You have interns that can build this in 30 days.

Scott Zoldi (16:53):
Yes,

Daniel Wolfe (16:55):
I need to know where you get your interns. I certainly couldn't; again, just an English degree here. Your final prediction, before we get back to the questions, is that AI trust scores will make it easier to trust generative AI. So can you elaborate on just what a trust score is? How do I see what the trust score is? Is it on the package, like the surgeon general's warning? Is it something different?

Scott Zoldi (17:18):
So the trust score is basically an acknowledgment of the fact that we're never going to be able to eliminate hallucination. And in some sense, for the longest time we've had these conversations about hallucinations, and there's been a lot of discussion about look how bad the hallucination is. If we simply accept that hallucinations are part of any AI, and they've been highlighted for generative AI, well then we have to look at the problem differently. And this goes back to operationalization. We take risks all the time, and no model is perfect. In fact, one of my favorite quotes is: all models are wrong, and some are useful. And so we need to figure out how to make the model useful. So what the trust score is basically doing is examining: what was the query that was made to this large language model or focused language model, and what was the response?

(18:13):
And then an independent model that was trained on the same data that language model was trained on will basically say: hey, the answer aligns with the vocabulary. It aligns with what we call knowledge anchors, which are essentially the sorts of facts that get learned in this very, very focused view of what it's supposed to answer and what it's not supposed to answer. What is the data coverage, meaning do I have enough statistics to support making that decision? And then it provides a score, and that score would go from 1 to 999. The higher the score, the more trust you have in it; the lower the score, the less trust. And so what that means is maybe we'd operate at a hundred to one, in terms of, say, a hundred true positives without hallucination for every one hallucination. So we can calibrate that score to take more risks or fewer risks.

(19:03):
And what does it mean? It means if that trust score is low, you don't use the output of the LLM; if it's high, you do. Because the problem is, when we use these technologies, most people that detect hallucinations already know the answers, right? And that's the problem. So we need this independent technology to say: you don't know the answer, but I'm telling you that this trust score doesn't have confidence in the answer either. And then we could either not present it to you, or you could use that in how you want to work with the answer, if you do at all.
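
Here is a minimal sketch of how such a trust-score gate might sit in an application. The scorer internals are stubbed placeholders for the vocabulary, knowledge-anchor and data-coverage checks Zoldi describes; the 1-to-999 range follows the transcript, and the threshold is an invented example.

```python
# Hypothetical sketch of a trust-score gate: an independent scorer,
# trained on the same domain corpus as the language model, rates each
# answer from 1 to 999 and the application only acts on high scores.
from dataclasses import dataclass

@dataclass
class TrustScorer:
    threshold: int = 700  # calibrated to the risk appetite, e.g. ~100:1

    def score(self, query: str, answer: str) -> int:
        """Return 1-999: agreement of the answer with domain vocabulary,
        knowledge anchors and data coverage. Stubbed for illustration."""
        vocab_fit = self._vocabulary_alignment(answer)       # 0.0-1.0
        anchor_fit = self._knowledge_anchor_match(query, answer)
        coverage = self._data_coverage(query)
        return max(1, min(999, int(998 * vocab_fit * anchor_fit * coverage) + 1))

    def _vocabulary_alignment(self, answer): return 0.9    # stub
    def _knowledge_anchor_match(self, q, a): return 0.85   # stub
    def _data_coverage(self, q): return 0.95               # stub

def answer_with_gate(llm_answer: str, query: str, scorer: TrustScorer):
    s = scorer.score(query, llm_answer)
    if s < scorer.threshold:
        return None, s   # low trust: suppress or route to a human
    return llm_answer, s  # high trust: use the answer

reply, s = answer_with_gate("Rule: flag >3 CNP txns/hr", "draft a fraud rule",
                            TrustScorer())
print(s, reply)
```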

Daniel Wolfe (19:31):
That's a fair point. I only know what I know. So we went through your predictions for the year ahead. One of our first audience questions is looking out five, 10 years: can you say how AI will evolve in that time span?

Scott Zoldi (19:48):
That's a great question. So in the five to 10 years, I think we're going to continue to see traditional AI the way we do today. And in some sense, I would hope that things like responsible AI will be well codified. I was thinking about this earlier today: even concepts like analytic scorecards took a couple of decades for the industry to accept. So things like interpretable machine learning models, which are really key to traditional AI, making it explainable and fair and ethical, I think within five to 10 years will be a standard. I think within the five to 10 years we're going to see the small language model trend continue down to agentic ones. And those agentic ones are going to be useful in that we will see them become much better built, in terms of the trust that we'll have in them. And then on top of that, will we see general intelligence? I still don't think we will, but I think what we will have is blockchain and other mechanisms that will help be the traffic cops for how AIs interact with each other. There has to be a framework that governs all this. And so I actually see AI and blockchain coming much closer together in terms of how we make sure that the AI stays on the rails, or within its permission domain.

Daniel Wolfe (21:14):
Can you just elaborate on that for anybody who's not familiar with the applicability of blockchain to AI?

Scott Zoldi (21:19):
So we have applications where we use blockchain to track how we develop these models to a standard. But beyond that, if you look at the agentic AI direction that's popular right now, and that will take several years to develop out properly, these agents have a certain permission to make decisions on your behalf. But there has to be an audit of how far they can go, what access they have to certain information, and what sorts of decision rights they have. And the blockchain will basically be this source of immutable truth that says: okay, you're allowed to make this decision up to $500 for Scott, and no more. And that would be on the blockchain. It wouldn't be something that another AI could overwrite, or that, let's say, a human could go and adjust. It would basically be the truth that sets the thresholds, it's immutable, and that's where the agents would have to go and understand what their rules are, or the span of control that each of them has. Otherwise, we could run into scenarios where these AI technologies might go a little bit haywire if they're informing each other or setting each other's thresholds.
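
As a rough illustration of that idea, here is a sketch of an append-only, hash-chained permission ledger that an agent must consult before acting. A production system would use an actual distributed ledger; the agent name and the $500 limit come from the example above, and everything else is invented.

```python
# Hypothetical sketch: an append-only (blockchain-like) permission
# ledger that an agent must consult before acting. Hash-chaining gives
# tamper evidence.
import hashlib, json, time

class PermissionLedger:
    def __init__(self):
        self.chain = []  # each entry is hash-linked to the previous

    def grant(self, agent: str, action: str, limit_usd: float):
        prev = self.chain[-1]["hash"] if self.chain else "genesis"
        body = {"agent": agent, "action": action, "limit_usd": limit_usd,
                "ts": time.time(), "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.chain.append(body)

    def allowed(self, agent: str, action: str, amount_usd: float) -> bool:
        # Latest matching grant wins; no grant means no permission.
        for entry in reversed(self.chain):
            if entry["agent"] == agent and entry["action"] == action:
                return amount_usd <= entry["limit_usd"]
        return False

ledger = PermissionLedger()
ledger.grant("scott-payments-agent", "purchase", 500.00)
print(ledger.allowed("scott-payments-agent", "purchase", 499.99))  # True
print(ledger.allowed("scott-payments-agent", "purchase", 800.00))  # False
```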

Daniel Wolfe (22:28):
And this is actually a good follow-up question to an earlier one we had on smaller banks. I just lost it, but I'll do it from memory here: what can you recommend in terms of AI that's not generative AI that smaller banks can use for fraud or other purposes?

Scott Zoldi (22:53):
So I would recommend that everyone who's interested in AI start to leverage neural network technology. Neural networks are one of the most fundamental technologies in use. I'm actually taking college students through an analytic challenge right now, teaching them how to build these models from scratch. And the concept of a neural network is essentially one where you have this machine learning model that resembles a very basic mind, and in that mind are these latent features that combine input features to come up with a better outcome than you would get if you just looked at those features independently. And I think getting experience with basic neural network technology and understanding how it can combine data and find relationships that you can't anticipate as a human, and then leveraging technologies like interpretable neural networks so you can see exactly what it's learned, is a great way to leverage the power of AI and machine learning.

(23:49):
And you don't need supercomputers. You can do it on your laptop or on a corporate machine very easily, but the insight you can get, and you can get 15 to 20% improvement over, let's say, a traditional model right out of the box, is really, really important. So I think my first step would be those neural network technologies; just get a little bit of exposure to what that looks like. And fortunately for us, in this golden age of AI, there are all kinds of tools available to everyone. And so it's not a matter of getting access to those tools, it's just a matter of getting a little bit of education on how to use those technologies.
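
Here is a minimal sketch of the kind of laptop-scale neural network experiment Zoldi suggests, assuming Python with scikit-learn. The transaction features and fraud labels are synthetic stand-ins; the point is that the hidden layer can learn the feature interactions (the latent features) that single inputs miss.

```python
# Hypothetical sketch: a small neural network a community bank analyst
# could train on a laptop, using synthetic stand-ins for transaction data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 20_000
amount = rng.lognormal(3, 1, n)            # transaction amount
velocity = rng.poisson(2, n)               # txns in the last hour
odd_hour = rng.integers(0, 2, n)           # 1 if 1 a.m.-5 a.m.
# Fraud depends on interactions of features -- the "latent feature"
# a hidden layer can learn but single features miss.
fraud = ((amount > 60) & (velocity > 3) | (odd_hour & (amount > 100))).astype(int)

X = np.column_stack([amount, velocity, odd_hour])
X_tr, X_te, y_tr, y_te = train_test_split(X, fraud, test_size=0.3, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, net.predict_proba(X_te)[:, 1]))
```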

Daniel Wolfe (24:23):
And what you just said there about these tools being available to everyone: this is now an age where everybody has it, whether we want it or not; it's built into our phones. So every employee at every organization now has some form of AI in their pocket. And there may be some power users who have downloaded specific apps for that; there may be people who are just using what's built in. How do financial companies address this? Is it a threat? Is it an opportunity? How do you see this new dynamic playing out in the banking industry?

Scott Zoldi (24:58):
So I think it's a risk, because each of these employees has access to AI. They could ask questions, and those questions could be potentially sensitive in terms of the business. And so what I see happening, and what happens at FICO, is we have a committee that basically says: okay, corporate-wide, what are the tools that we're enabling, how are we enabling access to those tools within the company, and what tools are not going to be allowed until we have better control over them? And so from a corporate perspective, it's one of those things where one needs to really understand what employees are using and how those tools are used. Here's an example. There are concerns around the use of generative AI for code generation. And why is that? Well, because very often when you develop software, you apply for a copyright, and the copyright has to relate to a human writing the code versus a machine.

(25:50):
And so you have to clearly delineate where you got help from a code-generation program versus what was human invention. And so companies need to grapple with that. And so I think having that committee to look at what our generative AI or AI policy is, is important. But it is true: things are showing up on phones and devices that maybe companies are not yet ready for. They haven't gone through and made decisions; for example, is that getting outside the corporate firewall or not? Questions like that have to be answered, and that's why CISOs are so important to this conversation.

Daniel Wolfe (26:26):
So there's a question here, and we're very low on time, so I'll try to get to one or two more. And I'm sorry, folks, if I haven't gotten to your question. This one says: the biggest underwriting problem is regurgitation of information instead of analysis. I don't see AI replacing a seasoned credit analyst or banker. What are your thoughts on that? And if you want to zoom out to just AI replacing expertise anywhere, what are your thoughts?

Scott Zoldi (26:50):
Yeah, well, I'd agree with the question. An AI is simply a different representation of the data, so a regurgitation. And in fact, one of my biggest concerns with AI is around memorization. So human-in-the-loop is critical to my business and the way that we approach the use of AI. I think those that think it's going to replace seasoned experts that understand the domain are kind of kidding themselves. I think AI and generative AI are great for automating tasks that are mundane and that we wouldn't want to do. It gets informed, and it gets taught by the experts, and it can be a more useful tool. But I see experts and AI partnering together to be more effective; I don't see it replacing them. And in fact, as things change in the business, we need that critical thought. I mean, just look at things like anti-money laundering and how we have to go look at those sorts of cases. There's a human that needs to sort through that, right? And there are very serious consequences to being right or wrong about how you judge, let's say, a suspicious activity report. And so AI won't replace those people, but hopefully those people and AI will work better together.

Daniel Wolfe (28:01):
And the final question that we have time for, and again I apologize that we haven't gotten to all of them, is: what are your recommendations for specific strategies for banks to implement or integrate generative AI into their existing systems?

Scott Zoldi (28:16):
Into the existing systems? So, number one, I think organizations need to take stock of what data they have available to develop, let's say, a focused language model. It's important that it's related to your business, and therefore you have to either collect or understand what data you have available in house, right? So that's one; it's always the typical issue with AI: do we have enough data to build an AI or machine learning model? Two would be: what are the parts of our business that would benefit from a generative AI helping make decisions or providing guidance? And again, not every problem is a generative AI problem. But if there were groups of people that wanted to use a more complicated tool... in the old days we used to have these books called numerical recipes.

(29:06):
And so if you wanted to look up an algorithm, you'd go to the book, you'd open it up, you'd see the code, and then you'd know how to use it. And so we have to go look at those sorts of business cases where somebody wants examples and best practices, and they know they exist, they'll be in your documentation, but you want to make that easier so they don't have to go find a document or talk to their colleague to get it. Those are great use cases, because it's going to make them much, much more efficient at getting the information they need. It's not going to create it from scratch, though; it's going to create it from that source data. And so those two things, I think, are the two pieces: go figure out how do I supplement how well people work with what I have today in terms of the existing systems, and what would make them more effective. And a lot of that comes back to that other question: learning more about what those experts already know and making it available to the novices that are maybe just starting, so that you can start from the best practice. And if the generative AI can help put that in front of someone that's new to the business and get them to level three when they might otherwise just be starting, that's a great use case.
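
As one hedged illustration of that "surface the existing documentation" use case, here is a sketch of simple retrieval over an internal document set, assuming scikit-learn; the documents and query are invented.

```python
# Hypothetical sketch: surfacing existing best-practice documentation
# instead of generating answers from scratch. TF-IDF retrieval over an
# internal doc set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Best practice: review velocity rules quarterly with the fraud team.",
    "Onboarding: how to request model governance sign-off.",
    "Numerical recipe: calibrating score cutoffs for dispute queues.",
]

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(docs)

def lookup(question: str, top_k: int = 1):
    scores = cosine_similarity(vec.transform([question]), doc_matrix)[0]
    best = scores.argsort()[::-1][:top_k]
    return [docs[i] for i in best]  # returns source material, not invented text

print(lookup("how do I calibrate score cutoffs?"))
```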

Daniel Wolfe (30:05):
Alright, well, we're over time. Thank you so much for coming in today. [Zoldi: My pleasure.] And it's always fascinating to talk to you about these topics. I appreciate your time. Thank you to the audience for participating, and a very active level of participation. If we didn't have time for your questions, I know you're active on social media. Can folks reach out to you there?

Scott Zoldi (30:26):
Absolutely, yeah. I love getting things through LinkedIn, and it inspires me. So yeah, please do.

Daniel Wolfe (30:32):
LinkedIn. Find him on LinkedIn. Alright, and thanks again.

Speakers
  • Daniel Wolfe
    Content Director, Payments and Credit Unions
    American Banker
    (Host)
  • Dr. Scott Zoldi
    Chief Analytics Officer
    FICO
    (Speaker)