Generative AI has revolutionized the banking industry, setting a new standard for innovation and customer engagement. This round table will explore how banks can leverage generative AI for a competitive edge in a rapidly evolving landscape. Central to this transformation are modern data architectures like data mesh and data lakehouse, which provide the robust foundation to support advanced AI capabilities.
Discover the critical role of hybrid cloud solutions, which allow seamless integration and operation across public and private cloud environments. As financial institutions navigate complex regulatory requirements and prioritize data security, adopting a hybrid approach ensures they can fully exploit AI technologies while safeguarding sensitive information. Join us to learn how to build an adaptive, intelligent banking ecosystem with generative AI at its heart, powered by state-of-the-art data architectures and hybrid cloud strategies.
Transcription:
David Dichmann (00:10):
Hello folks. Thanks for joining us today. Let me advance to the slide where we can introduce ourselves. So, hi, I'm David Dichmann. I work for Cloudera, where I manage our product marketing and go-to-market strategy. I'm joined here today by our panelists, and I'll have them each introduce themselves. Go ahead.
John Shelley Brown (00:31):
Hi everyone, I'm John Shelley Brown. I actually go by JSB. I'm a partner at McKinsey in our New Jersey office, and I spend the lion's share of my time in financial services. I started my career as a developer, really thinking about how you modernize tech and deal with risk, resilience, and cybersecurity issues. As a result, I also co-lead our technology risk and resiliency practice globally.
Jacob Bengston (01:00):
Great. And I'm Jake Bengston. I'm a Technical Evangelism Director, so I work in our product marketing organization, and I focus on attending conferences, meeting with customers, and talking about art-of-the-possible topics in AI, and especially generative AI right now, since that's what everyone is talking about.
David Dichmann (01:19):
Fantastic. So what we're going to do today is I'm going to throw out a couple of questions to these folks, they're going to help us answer them, and we're going to talk a little bit about what's going on in the marketplace today. And of course, whenever we talk about generative AI, the very first thing that usually comes up, especially in industries such as banking and financial services, is: what about security? How do we do this in a safe way? So the question I'll start with you, JSB, is: what are the primary security challenges you see organizations facing when they start to adopt generative AI, and what are they doing about them?
John Shelley Brown (01:55):
Yeah, so there are a couple of things to consider. When we look at gen AI applications, it helps to think about two buckets. You see a lot of organizations early in the journey focused on coding and software-type use cases, which are primarily third-party applications, and the question is how you deal with the risks associated with those, versus the use cases that may be more virtual assistants or copilots that you're building internally, and how you deal with those risks. Both are important, but they follow different pathways. On the first one, your third-party applications, I'm not going to pick on Microsoft, but there is a whole suite of solutions around GitHub or Microsoft 365. The more immature organizations put them through their usual third-party risk management processes and thought about the risk in that way, which is not sufficient, because what you really need to think about is that gen AI risk, particularly for security, exacerbates whatever risks or control deficiencies you may already have in your environment.
(03:04):
To give you a real-life example: if your risk taxonomy doesn't help you understand what user access control gaps you have in your environment around Teams channels or SharePoint sites, then when you deploy Microsoft 365, guess what's going to happen from a cybersecurity perspective: you have data issues to deal with. So how do you address it? You need to really think about the scope of risk: cyber risk, privacy, how you think about the biases that will be exposed, and make sure you have the right mitigations in place. That's on the third-party side. On the custom-built side it's similar, but it's not going to go through just the third-party reviews. You're looking at what data quality controls you're putting in place to ensure you're thinking defensively about the type of capabilities you're building into these LLMs. That's the start of the journey: making sure you have the right risk assessment and governance in place. It's not a net new solution per se; it's making sure that you address the nuanced risks associated with gen AI.
Jacob Bengston (04:21):
And on that second bucket you're talking about, where people are building their own applications that are a little more on the custom side, usually we see customers start with a third-party API, like the OpenAI endpoints, or they're working with AWS Bedrock; that's the easiest way to get started. And they very quickly start to assess the risk of doing that. If I'm exposing my data to an entity that naturally wants to get as much data as possible to further train its models, that makes you question yourself a little bit as you go through the process. And there have been documented instances of people leaking their data. Samsung had an incident early on: some of their employees were using ChatGPT and exposed some of their code to it while trying to optimize it, and that actually ended up being added to ChatGPT's training data. So there are all these instances of people having issues when working with these third-party services. Pretty quickly, we see a lot of our customers starting to look at hosting open source models and asking whether there are different ways to approach these problems in order to keep things more secure. I think that's increasingly becoming best practice as people approach this.
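To make the self-hosting approach Jake describes concrete, here is a minimal Python sketch of calling an internally hosted open source model over HTTP so that prompts and data never leave the organization's network. The endpoint URL, request fields, and response schema are hypothetical assumptions, not any specific vendor's API.

```python
import requests

# Hypothetical endpoint for a self-hosted open source model running inside
# the bank's own network; the URL and model serving layer are illustrative only.
INTERNAL_LLM_URL = "https://llm.internal.example.com/v1/completions"

def generate(prompt: str, max_tokens: int = 256) -> str:
    """Send a prompt to the internally hosted model so data never leaves
    the organization's environment."""
    response = requests.post(
        INTERNAL_LLM_URL,
        json={"prompt": prompt, "max_tokens": max_tokens},
        timeout=60,
    )
    response.raise_for_status()
    # Assumes the serving layer returns {"text": "..."}; adjust to the
    # response schema of whatever inference server is actually used.
    return response.json()["text"]

if __name__ == "__main__":
    print(generate("Summarize our internal loan policy document in two sentences."))
```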
David Dichmann (05:28):
That's very interesting, and it matches some of the things I've been hearing, especially as we look at keeping third-party services safe and keeping our applications and data safe. One of the other things we've been seeing is a trend. Just a show of hands: how many people here have started exploring generative AI in their businesses? I see a lot of hands going up. How many have systems in production already? How many are looking at employee productivity first? How many are looking at something customer-facing first? We have a few. What we're seeing is that, in a lot of cases, the value of generative AI today is in these safer applications, on safer data, on material that's already in the public domain: our websites, FAQs, documentation, that kind of thing. But as we move towards more of a concierge service, where we treat our customers individually, we're going to start putting our proprietary and sensitive data in there. And that leads me to the next question, which I'll start with you, Jake: when thinking about hybrid architectures and using on-premises data, how do you enhance both the security and the compliance of generative AI deployments, especially for financial institutions? How does hybrid help us with that?
Jacob Bengston (06:42):
Yeah, so maybe real quick, just to define hybrid: hybrid is the idea of being able to run your workloads either on-prem or in the public cloud. This is obviously highly valuable in a lot of security or governance situations, because you may have requirements to keep data in a particular place. I just got back from a trip across Europe; with the European AI Act and the other regulations they have, a lot of our customers over there cannot even put their data in the public cloud. They have to keep it on premises. That's pretty common. So having the ability to be hybrid really enables you to do things like test out new generative AI models in the cloud with anonymized data very quickly, and then move that on-prem with your real data and run it in a secure environment.
(07:29):
Because at the end of the day, however many controls you put in place in the public cloud, we all know that the safest place for that data is often still on premises, behind your own firewall. That's one place where you can guarantee there will not be exposure to those public cloud risks, even though there can still be exposure on premises. So it's all about looking at your risk tolerance as you build these applications and then choosing the environment that is right for that instance. And hybrid really allows you to do that: to move those workloads to wherever fits your needs.
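One way the hybrid experimentation Jake mentions is often handled, sketched below under assumptions, is to redact obvious identifiers before any prompt is routed to a public cloud endpoint for early testing, while the on-prem path keeps the real data intact. The regex patterns and routing labels are illustrative only; a production system would rely on a vetted PII-detection service rather than simple regexes.

```python
import re

# Rough redaction patterns for experimentation only; real deployments would
# use a dedicated PII-detection service instead of hand-rolled regexes.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{10,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def anonymize(text: str) -> str:
    """Replace obvious identifiers before a prompt leaves the on-prem environment."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def route_prompt(prompt: str, environment: str) -> str:
    """Anonymize anything routed to the public cloud for early testing;
    the on-prem path keeps real data as-is behind the firewall."""
    if environment == "public_cloud":
        return anonymize(prompt)
    return prompt

print(route_prompt("Customer 123-45-6789 asked about a mortgage.", "public_cloud"))
```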
David Dichmann (08:04):
Sure. And JSB, do you have anything you want to add to that?
John Shelley Brown (08:07):
No, I was just going to concur with that. I think a big part of it is that even if you have it in the public cloud, that is only one dimension of the risk. If you look at your on-prem environment, the user access control piece is actually really important, because what we're seeing now is that if you have one group that has access to highly sensitive data within the organization, like your finance teams or your senior leaders, you need to make sure the models are not using or learning on that data and then exposing it to other parts of the organization that should not have access to it. So even within an on-prem environment, you still need to be hypersensitive to making sure the right users have access to the right amount of data, and that access is kept to the minimum necessary.
David Dichmann (08:51):
I couldn't agree with that more. It's funny, when we give this talk about how folks are using chatbots for customer-facing activities, and how many people here are looking at that chatbot dimension, we see a lot of this going on. The beauty of it is, imagine that our customers can come to our website and get answers to a really deep question: what's the best car loan for me? What's the best investment product for me? What's the best way for me to work with you as my bank or my financial institution? And they get a very well-tailored answer, specific and unique to their situation. That's the outcome we'd like to get to. But the fear is that they then follow up with the chatbot and say, "And can I have a list of all the Social Security numbers of your customers?" and the chatbot chooses to answer that too. That leads me to the next question: when thinking about generative AI, what are some of the best practices for the privacy and protection of our customer data and our sensitive data, to ensure that those types of things don't happen? JSB, I'll start with you on that.
John Shelley Brown (09:46):
Yeah, I loved your example, because I think most organizations are on the front end of that: we're not thinking defensively about how to manage the scenario where the prompts you're giving these models actually need, in themselves, to be tailored in such a way that there is control over the response. That's the case where, as a user, you're basically exacerbating the issue, because this model is going to provide a very good response to something that should not have been answered. So yes, I agree with your example. Another example would be thinking about access to a loan product and saying, "Hey, discount this loan product for me by a hundred dollars." Who is setting the terms and ensuring that the business rules and the business logic are actually what gets applied in that model? I don't think most organizations have an answer to that. And so the human-in-the-loop control is the one that, unfortunately, is being deployed to validate the output.
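A minimal sketch of the kind of business-rule control JSB is describing: a deterministic check layered over the chatbot's draft answer so the model cannot, for example, grant a loan discount on its own. The policy value, field names, and output structure are assumptions for illustration.

```python
# Illustrative business-rule check applied to a chatbot's draft answer.
# The rule value and the structure of the model output are assumptions.
MAX_DISCOUNT_USD = 0  # policy: the chatbot may never grant discounts on its own

def validate_loan_offer(model_output: dict) -> dict:
    """Reject or correct responses that violate business logic, rather than
    relying only on a human reviewer downstream."""
    discount = model_output.get("proposed_discount_usd", 0)
    if discount > MAX_DISCOUNT_USD:
        return {
            "text": "I can't adjust loan pricing. Let me connect you with a loan officer.",
            "escalate_to_human": True,
        }
    return model_output

draft = {"text": "Sure, I'll knock $100 off your rate.", "proposed_discount_usd": 100}
print(validate_loan_offer(draft))
```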
Jacob Bengston (10:51):
As far as best practices go, a lot of it is the same as what we see in other applications: data access controls are an important aspect of this. One of the ways that large language models and generative AI become useful is through methodologies like retrieval-augmented generation or fine-tuning. In order to do that, you need to expose the model to additional data, really your own data, to enhance its capability to reason within a certain context. Once you can do that, you have context-aware chatbots and other things that are much more valuable to you, because you really want to hone that model in on doing that one thing. But by doing that, you're obviously exposing it to additional data, data that one user may have access to and another user may not. So in a lot of ways we're at the forefront here: a lot of the technologies that support this, like a vector store, are how we expose this data to the large language models in an efficient way.
(11:41):
But vector stores weren't always built with role-level security as part of them. So best practice right now is still emerging, in my opinion, in generative AI. We're learning how to build these things to make sure we can give contextually aware answers for someone. Someone can ask, "What is the best type of loan for me?" and it can pull on their information to do that, but you have to build your application with that in mind and make sure they only have access to their own data. So it's a lot of work, a lot of thinking about that up front when you're starting to build these things. Beyond that, I would say that because there's so much information available about users, it's really about trying to minimize the data that you're capturing. You don't want to capture any additional data; whether we look at the regulation that's in place or not, we want to make sure we're not capturing anything unnecessary, because that obviously exposes additional risk for those users. So don't capture things that needlessly put your applications at risk.
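A minimal, self-contained sketch of the per-user retrieval filtering Jake describes for retrieval-augmented generation: documents are filtered by ownership before similarity ranking, so a user's question can only ever be answered from data they are entitled to see. The in-memory index, the embed() stub, and the sample records are stand-ins for a real embedding model and a vector store with document-level access controls.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model: a deterministic pseudo-random vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=16)

# Illustrative documents, each tagged with the user allowed to see it.
DOCUMENTS = [
    {"owner": "user_123", "text": "user_123 has a 30-year mortgage at 6.1%."},
    {"owner": "user_456", "text": "user_456 holds a small-business credit line."},
]
INDEX = [(doc, embed(doc["text"])) for doc in DOCUMENTS]

def retrieve(query: str, requesting_user: str, top_k: int = 3):
    """Only consider documents the requesting user may see, then rank the
    remainder by cosine similarity before handing them to the LLM."""
    q = embed(query)
    allowed = [(doc, vec) for doc, vec in INDEX if doc["owner"] == requesting_user]
    scored = sorted(
        allowed,
        key=lambda pair: float(
            np.dot(q, pair[1]) / (np.linalg.norm(q) * np.linalg.norm(pair[1]))
        ),
        reverse=True,
    )
    return [doc["text"] for doc, _ in scored[:top_k]]

print(retrieve("What loan do I have?", requesting_user="user_123"))
```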
David Dichmann (12:38):
I think you said the magic word there: regulation. So, show of hands, how many people are in a regulated industry? I should see every hand go up; this is the easiest question of the day. Regulations are important. And how many people here saw yesterday's keynote with the fellow from Ally, who talked about sitting with his CEO, who was asking him to innovate, and his apprehensions about how to innovate in a controlled and comfortable way? And that CEO, I think very rightly, said: well, it's my job to manage the risks, and the relative risks associated with innovation; it's your job to go find the innovation. I think that's a very healthy approach, but there's always going to be that balance between innovation and risk. So the question I have is: how do you balance innovation and regulatory compliance when developing for generative AI? Jacob, I'll go to you first on that one.
Jacob Bengston (13:30):
Yeah, I think a couple of things. First of all, obviously, be hyper-aware of what those regulations are; be proactive in finding that out and building for it from the start. I like the term "compliant by design": thinking about that from the very beginning when you're designing these applications. Obviously, to be innovative you want to let free thinkers think freely, you want to let them go out and build things, but if you build an application without any thought about compliance at the beginning, it's very hard to retrofit that into your applications. So putting that effort in at the beginning of your projects is highly valuable. Go out and look at, for example, the AI Act in Europe right now; even if your own region hasn't acted yet, there is something out there in another region that tries to limit some of these things or at least put some guardrails around them. Looking at other areas and what they're doing, and then applying that to yourself, helps you be proactive in thinking about how these things could be regulated in the future. And then maybe the last thing is always thinking, with AI, about trying to build in explainability, trying to be able to explain why it gave the prediction it did. If it's a complete black box to your organization and you can't explain it, that's probably a risk and something that's going to come back to bite you in the long run.
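One practical step toward the traceability Jacob mentions, sketched here under assumptions, is to log what went into and came out of every generative AI call so that a given answer can later be reconstructed for an auditor. The field names and file destination are illustrative, and this kind of audit trail is a complement to, not a substitute for, model explainability techniques.

```python
import json
import time

def log_interaction(user_id: str, prompt: str, retrieved_context: list,
                    model_name: str, response: str, path: str = "genai_audit.log"):
    """Append one audit record per generative AI call so any answer can be
    traced back to its inputs; the schema here is illustrative only."""
    record = {
        "timestamp": time.time(),
        "user_id": user_id,
        "model": model_name,
        "prompt": prompt,
        "retrieved_context": retrieved_context,
        "response": response,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_interaction("user_123", "What loan do I have?",
                ["user_123 has a 30-year mortgage at 6.1%."],
                "internal-llm-v1", "You have a 30-year mortgage at 6.1%.")
```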
John Shelley Brown (14:41):
So, a couple of things. On the innovation front, I think it's actually really important to start the experimentation now, or else the organization is literally going to miss the boat. We know this is a massive opportunity, and I think you need to think about the set of lower-risk use cases that will allow the organizational muscle to be tuned on how to manage that risk. So things that are on internal data, productivity tools, all of that is part of the lower-risk aperture to start with. The second thing is that if you think about the people-and-process frame, there has to be a comprehensive way to measure and assess that risk. The first thing most organizations jump to is security risk. But when I think about banks, and you think about a loan product, there's a huge bias risk.
(15:32):
If your consumer loan product is offering one thing to one group of people and another thing to another group of people, that's a huge reputational and bias risk as well. So when you think about the risk assessment, it needs to be comprehensive as you think about the opportunities these capabilities can unlock, and you need to be able to measure, assess, and then essentially weigh the pros and cons of benefit versus impact. That requires you to have not just an understanding of what the taxonomy is, but the right SMEs at the table. What we're seeing most organizations do well is to have a mix of levels of folks who are assessing this: your deep data engineers who understand how these LLMs work, your senior leaders who can assess the value, and collectively you're forming a COE, a group that's actually approving these use cases, measuring the risks, measuring the impact, and putting the structures in place that can derive this value.
David Dichmann (16:34):
And you said taxonomy; I think you're now my new best friend. That's one of my favorite words, because those things are important: the taxonomy, ontology, and semantics of everything that we're doing. That's very important in making sure we get the right data at the right time. I'm a big metadata fan, if you can't tell. Jake, anything you want to add to that?
Jacob Bengston (16:49):
No, just adding to that: as far as choosing use cases that are less risky, I think that's really why most people in here raised their hands for use cases that are internal and employee-driven as opposed to external ones. It makes a lot of sense to tackle the ones that are likely to have less impact if something goes wrong. The thing about generative AI models is that they're a little unpredictable, which is what makes them amazing and valuable, but it also makes them a little concerning. So if you expose something like that to a customer, there's a lot more risk than if you expose it to an internal employee. A lot of the work being done right now is trying to figure out how to put guardrails around these models to make sure they act in a way we can actually trust. So doing that with employees first is very good. Get your learning from that, get feedback, make sure you're collecting that employee feedback as you go through those processes. Build up the muscle, like what you're talking about, so that eventually you can apply it to your customers and roll out something that works in a way you can predict.
John Shelley Brown (17:50):
Yeah. One thing I'd add to this, which is an interesting nugget: at McKinsey we have our own generative AI productivity tool, called Lilli, that essentially helps us do our work. What was interesting, and what we're seeing around this organizational muscle piece, is the relative investment required: for every proverbial $1 spent building the tech, there's $3 spent on the organizational structures and training required to actually support it, making sure your compliance training and security trainings are being pushed to the workforce on how to use this technology. So the lift is quite huge, which is why you do need to start that experimentation quite early. But that's a frame of reference.
David Dichmann (18:38):
Fantastic. I think it was Facebook that first coined the phrase "move fast and break things," but that doesn't work in industries where what you break is your customers' trust in you. So we always have to do this with the right caution. And I'm just thinking ahead to where some of our customers have already been successful. I actually want to pull on something we spoke about yesterday. We were talking about code generators and where the human in the middle matters. I was giving a talk showing off some of our code generators, and one of the questions from the audience was, well, what if that code is designed to just erase all my data? And the answer is: well, then don't run it. These are tools to be used by professionals, who should still evaluate the outcome before applying it. So the human in the middle is still going to be an important part of some of the equations we run through here. So my last question, I'll start with you, JSB.
John Shelley Brown (19:26):
Can I provoke that?
David Dichmann (19:27):
Oh, please, please go ahead.
John Shelley Brown (19:28):
Just a little bit. Human in the loop is a good control, but by itself it's an ineffective control, because what we see happen is that over time people see the output and start trusting the output, and then it becomes a weaker control. So while we're all relying on it now, we are all learned creatures of habit, and over time it will lose its effectiveness.
David Dichmann (19:52):
So it's a stopgap, not a best practice. Very interesting. So the last question, starting with you, JSB: can you share examples of where we've seen generative AI being successful, especially in highly regulated industries, and how it meets those regulations?
John Shelley Brown (20:08):
Yeah, there are two buckets I'll call out. One is on the productivity dimension. We see a lot of FIs that have successfully deployed, for example, GitHub Copilot, it's my favorite one, and are leveraging it as part of their IDEs for their developers. They're using it for code use cases around code generation, documentation, and testing, some of the lower-risk ones that add tremendous value. We're seeing a lot of that, and I would say it is increasingly picking up pace in banking, where developer productivity and innovation in the tech space is make or break. The other one we're starting to see a lot of, and there actually is a public example with ING in the Netherlands, is a customer chatbot that they have deployed. It's done as a bit of an assistant, so there is a human in the loop, but they're on the further end of the spectrum on that one.
(21:03):
But I do think the majority of folks are a little bit in the middle, trying to figure out how to do it for things like RFP generation or some of your operational contexts, where you can drive productivity benefits that require crunching massive amounts of data but can keep a human in the loop. So there are meaningful examples being deployed at the FIs, and it's the ones who can articulate, and have put in place the muscle around, how they are managing the risks and what the associated benefits are. And they're quickly learning as they go, right?
Jacob Bengston (21:39):
Yeah. At Cloudera, we have a publicly referenceable use case as well. OCBC is a bank in Singapore, a financial institution, and from what we saw they were one of the first organizations in production with generative AI. Over a year ago they rolled out some generative AI use cases in production. They started out, as we've already talked about, with an internal use case for the first one, and then they moved pretty quickly to some that involve their customers as well. Their first one was like a GitHub Copilot, shocker there. They were actually using GitHub Copilot at first, and then they had the thought: can we do this cheaper and create something that's a little more specific to OCBC? So using Cloudera, they tested out hosting StarCoder as an open source model, and they wanted to see if they could roll that out to an initial 200 developers.
(22:27):
And what they found is that in three days they went from idea to application to rolling out this use case, which was pretty exciting for us to see. They were able to distribute that workload across multiple GPUs and support those 200 users pretty quickly, and they got a lot of value out of it. They started in the public cloud and then moved it onto private cloud for cost reasons, but also, again, for security, to make sure that none of their code was exposed in any way. So now they have it rolled out to all 2,000 of their developers. If you think about the impact that had: they told us they estimate about a 20% developer efficiency gain, so really it's like they have 2,400 developers now working for them, which is pretty exciting for us to see. They have it integrated directly into their IDEs; they can right-click, they can optimize code, they can have it suggest new things.
(23:13):
So it's really exciting to see that use case up and in action. And some of their use cases since then have been cool to see as well. They have a customer use case for calls that come into their call centers. This is not so much a chatbot application; they're using large language models to try to identify silent complaints. As they have customer conversations, they do a language translation to capture that Singapore English, and then they do a sentiment analysis of the overall conversation to see whether the customer's concern was actually solved. If it wasn't solved, they will then follow up. So they use that to retroactively look at the overall conversation and the performance of those support techs, and then they follow up and can get to the bottom of it and actually solve it.
(23:56):
So it's really cool to see LLMs used in ways that go beyond just "Hey, I have a chatbot, I'm going to use it," using them in interesting and new ways. And, like I said, it was neat to see that in production. For the most part, as we've all talked about, a lot of people are still in that pilot stage: they've had these use cases, they see some of the value, they've also seen the cost, and they've seen that sometimes it works well and sometimes it works badly. So a lot of people are still at that stage, but there are some that are pushing through, are in production, and it's neat to see.
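As a rough illustration of the silent-complaint screening described above, here is a hedged sketch that asks a hosted LLM whether a translated call transcript shows the customer's concern being resolved and flags unresolved calls for follow-up. The generate() function is a placeholder stub, and the prompt wording and labels are assumptions, not OCBC's actual implementation.

```python
# Illustrative sketch of screening translated call transcripts for "silent
# complaints": ask an LLM whether the concern was resolved, then flag follow-ups.
def generate(prompt: str) -> str:
    """Placeholder stub; replace with a call to whatever hosted LLM is used."""
    return "UNRESOLVED"

def flag_silent_complaint(transcript: str) -> bool:
    prompt = (
        "Below is a customer support call transcript (translated to English).\n"
        "Answer RESOLVED if the customer's concern was fully addressed, "
        "otherwise answer UNRESOLVED.\n\n"
        f"Transcript:\n{transcript}"
    )
    verdict = generate(prompt).strip().upper()
    return verdict == "UNRESOLVED"

transcript = "Customer: My card was charged twice. Agent: I'll look into it sometime."
if flag_silent_complaint(transcript):
    print("Flag for follow-up by the support team.")
```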
David Dichmann (24:28):
Well, thank you both. I want to thank the folks who have been sitting on my panel. JSB, thank you so much; Jake, thanks for being here. So JSB, if you'd like to tell folks where they can follow up with you: we're going to be here after the lights go out, and we'll be more than happy to answer any questions you may have in this room. But if folks want to come find you, where can they find you today?
John Shelley Brown (24:48):
So there's a McKinsey room, not in this hall but in the hallway just before it. You can stop by and we'll be there.
David Dichmann (24:55):
Fantastic. And Jake and I will be over at the Cloudera booth, which is just right outside this door. So thanks again for joining us. I think we have some great opportunities in generative AI, and some great ways to manage the risk, privacy, and security we need in order to do that. I hope you enjoyed our session, and have a great show. Thank you.
Building Secure and Compliant Generative AI Solutions in Banking: Hybrid Architecture Approaches
July 17, 2024 3:17 PM
25:24