Improving Business Outcomes with Data and Generative AI

AI enables seamless data integration across the financial ecosystem, improving innovation and business outcomes. Oracle and NVIDIA delve into the successes of an AI-centric strategy and how it helps overcome challenges within banking.

What you'll learn:

  1. How financial services firms are succeeding with Generative AI
  2. Where GenAI has proven to improve efficiency and revenue generation
  3. How other Financial Services firms are leveraging AI governance models
  4. Partnerships that lead to successful generative AI implementations

Transcription:

Chris Harrison (00:10):
All right. Good morning, everyone. Pretty intimate crowd, so this is good. We have a few things that we want to discuss. My name's Chris Harrison. I'm an Industry Executive Director for Financial Services at Oracle. I've been at Oracle five years. Prior to that, I spent 30 years in the industry. I was a banker; I was the Consumer Banking President of First Financial Bank in Cincinnati, Ohio, and a named executive officer. So I had the pleasure of working with the board, with the OCC, the FDIC, FINRA, the SEC, you name it, right? So working on strategy and such. My role is to work with our partners and to work with you, our customers, on anything around industry trends and the solutions that Oracle is providing. We have one of my peers, Richard Sachs, on my far right, and then Prabhu is with NVIDIA, which is one of our key partners. And so we're just going to talk a little bit about some of the things that we're seeing in the AI space. You talk about the future of banking, and obviously in our opinion, the digital bank, the future of banking, cannot exist without AI, right? So without further ado, let me welcome you with a true digital AI.

Digital Twin 1 (01:33):
I am the AI digital twin of Chris Harrison, who may be standing at the front of the room or sitting right next to you. This technology is powered by Oracle Cloud Infrastructure and built by one of our partner organizations, Gemelo.ai. That said, welcome to the Oracle and NVIDIA sponsored breakfast, where we will discuss generative AI, how the banking industry is embracing use cases, and the evolution over the past year. When you think about digital banking and the future of banking, you must embrace artificial intelligence. Oracle and NVIDIA are leaders in data, computing, and generative AI use cases for financial services. Today, we will be discussing industry trends, use cases, governance models, and the strength of our organizations in partnering with banking institutions like yours and consulting integrators to enhance operational efficiency, customer and employee experience, and most importantly, revenue generation and brand differentiation.

Digital Twin 2 (02:29):
Oh, and one other thing: my AI twin can be modified to meet your customer where they are, with different dialects and languages. We see many use cases to improve operational efficiency, but we are also seeing organizations leverage tools like these for internal training and knowledge-base tools, along with customer interaction with chatbots and client interaction through self-service on products and solutions, in addition to book-of-business managers such as investment advisors, mortgage originators, private bankers, and commercial bankers. Imagine being able to discuss the impacts of Federal Reserve interest rate decisions immediately through a tool like this with your employees and relevant customers. We look forward to meeting with you during the remainder of the event. Please visit our booth and attend Prabhu's keynote later this morning to learn more. Make no mistake, the future of banking must include generative AI. Thank you for your attendance and please let us know what questions you may have. Now I turn it over to my donor, the real Chris Harrison.

Chris Harrison (03:28):
Alright, so there is a little fun AI opportunity that is powered again by OCI, Oracle Cloud Infrastructure, and then Gemelo, one of our partner organizations; they're the ones that built this digital twin. And we are starting to see this ramp up in financial services use cases, particularly on the internal side, if you think about training and knowledge-base dissemination. But we're also starting to see it emerge with chatbots. And then, like I said, if you think about book-of-business management with investment advisors or private bankers, what have you, being able to address an entire customer base kind of face-to-face via their digital twin. And NVIDIA provides the GPU power, so this is all built between the companies of the people that you see sitting on the stage. One of the other things that we just wanted to touch on: over the last nine months, Richard, myself, Prabhu, and others have been traveling across the country.

(04:28):
We've hosted various round tables to discuss data and artificial intelligence. It started last year at CloudWorld, which is Oracle's main customer event, held in Vegas. It's going to be in September; it was in September last year. So as you kind of think about what you might be doing, there's a lot that happens at those events. That round table in September of last year was really a discussion around trends, and we got onto the topic of AI and generative AI. At that point in time, everybody was kind of like, yeah, we're thinking about it. We had Chase, we had Citibank, we had Bank of America, we had Wells, Global Payments, and other large companies in there that were starting to think about it, but they really hadn't done much yet. They were still apprehensive around the compliance and governance models.

(05:20):
They were apprehensive around use cases. And so everybody was saying, yeah, we understand the future is there, we've got to figure it out, but we're not sure where to go. Fast forward to April: we did a round table in Atlanta, and we were already starting to see people using AI in everyday business, mostly on efficiency plays, not yet on the revenue side. We hosted one in Charlotte shortly after that and then ended up in Toronto. In that eight or nine months, the conversation had accelerated from "yes, we know AI is out there, yes, we understand we've got to get involved, no, we're not doing anything" to people with full-blown enterprise-wide programs around AI use cases, or specific functional-area efficiency plays. So we're going to talk a little bit about the evolution of it. It really all started out around efficiency.

(06:17):
How do we take AI and put it into our processes to eliminate redundant activities, eliminate the potential compliance risk of manual data entry and manual data aggregation, and create that process efficiency? From there we're seeing it migrate into user experience, whether that's the employee or the customer, and then we're starting to see it on the revenue generation side. The other thing that we'll talk about is the governance model. That's top of mind, and we have a little bit of information we'll share with you around that: how are organizations cross-functionally managing artificial intelligence? What is that governance model, with compliance, with line-of-business owners, with your IT folks who are securing the technology? And then we'll talk a little bit about partnerships, because we've got NVIDIA sitting up here. If you guys are following the industry, NVIDIA's stock is the most valuable in the world today, and that's all because of Prabhu. But I mean, there are a lot of really interesting things that they are doing in this space. So we're fortunate. And also, your keynote is at 10:15?

Prabhu Ramamoorthy (07:27):
Yes, 10:15.

Chris Harrison (07:29):
So Prabhu is going to be doing a much deeper dive into generative AI at 10:15, so I'd encourage you to attend that. It is a small room. We're going to kind of move forward with some discussion, but I'll just tell you, if you have a question, throw it out there. It's a little bit better for us sometimes if this is more interactive. We've got some other Oracle folks sitting in the room that I'm sure can talk to you at your table if you'd like. But I think I've already kind of talked about this, the explosion: ChatGPT was announced in late 2022, there was a lot of experimentation, and now we're seeing organizations move this into full-scale production at this point. So I want to talk a little bit about where it's impacting the organization at this time. I'm going to turn it over to Prabhu and Richard and let them speak to it. Go ahead.

Prabhu Ramamoorthy (08:18):
Thank you, Chris. I mean, as everybody knows, with generative AI you have to find the applications for it. As to how NVIDIA sees it, we have been working on this along with Oracle, and we see three areas. The first is financial revenue, and I work with many hedge fund customers, buy-side customers, as well as global banks and asset management firms. So a lot of what it has to offer, you look at it from those buckets. On financial revenue, we see the most application of generative AI, as well as GPU parallelization technologies, in quant finance, where obviously it makes a lot of money; you can do backtesting and all that. Next is underwriting analytics; we call it underwriting with alternative data. And the biggest area, if all of you are familiar, is something called alternative data. You can only tap 10% of information with tabular data, and all of that is with all the banks, so there is no real information asymmetry. Organizations are working for that asymmetry; they're trying to mine their own models. An example of that would be FinGPT or BloombergGPT. So we have seen progress with major players who are looking at alternative data as a source. And Richard, please add on.

Richard Sachs (09:43):
So we were recently in New York at a meeting that was hosted by a team from Bloomberg, and they were presenting an analyst dashboard with augmented data. The analyst typically would be researching 10-K reports and financial reports, and they're trying to combine all this data into an aggregated view of a company so they could better provide analytical responses. So augmenting data with news release data, geospatial data around companies moving offices, data around executive movements, and so on was really relevant. The other thing I would mention is on the fraud side. We recently conducted a panel with a business school, and they brought in a number of executives from financial services firms to talk about where they were seeing real financial benefits driven by AI. And the KYC stuff was really interesting. One bank referenced a 20x improvement in fraud detection by applying gen AI to the KYC portion of the evaluation of onboarding new clients, loan applications, commercial lending, and so on. So many of the companies are really focused on the connection between the results of the gen AI solution and the balance sheet. Those are a couple of examples, the KYC one for sure, and the augmented data as well. Yes.

Prabhu Ramamoorthy (11:03):
And the KYC process, we have figured out, is a very manual process. As you see, it's been in the news with our North American colleagues, the big bank. So we are looking to change that process, and gen AI really helps there. Every bank is sitting on thousands of documents, and these are rules-based alerts generating 10,000 false positives, and your agents are searching through all of that. So with Know Your Customer, anti-money laundering, and those solutions, we are able to narrow down the number of alerts; they're really able to identify the anomalies and the fraud cases. And those are the ops ones, right? Equally as important as the revenue ones, except here you're saving on the fines, and you're also making sure that the banks can grow. Everybody knows the cases where banks were prohibited from growing. You don't want to be in a place where Know Your Customer, anti-money laundering, and fraud issues set your strategy; you cannot set the strategy, as in the case of TD. The other areas that we see are margining and settlement.
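To make that false-positive point concrete, here is a minimal sketch of the idea: score rules-based alerts with an unsupervised anomaly detector so investigators work the most unusual cases first. The feature names, the data, and the contamination setting are illustrative assumptions, not any bank's production pipeline.

```python
# Minimal sketch: rank rules-based AML alerts by anomaly score so analysts
# review the most unusual ones first. Feature names are illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
alerts = pd.DataFrame({
    "amount": rng.lognormal(mean=8, sigma=1.5, size=1000),   # transaction size
    "txn_per_day": rng.poisson(lam=3, size=1000),            # account velocity
    "countries_30d": rng.integers(1, 6, size=1000),          # geographic spread
})

# Unsupervised anomaly detector; contamination is the assumed share of
# genuinely suspicious alerts, a tuning choice rather than a known value.
model = IsolationForest(contamination=0.02, random_state=0)
model.fit(alerts)

# Lower score_samples => more anomalous; keep analysts on the top slice.
alerts["anomaly_score"] = -model.score_samples(alerts)
review_queue = alerts.sort_values("anomaly_score", ascending=False).head(50)
print(review_queue.head())
```

In practice a ranking like this would sit behind, not replace, the existing rules and case-management workflow.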

(12:13):
NVIDIA is actually at 300-plus banks right now. We actually work with a company called Murex; I don't know if you know them. The other company, Calypso, was acquired by Nasdaq's Adenza. But we are there at many firms for margining and settlement, CVA, counterparty and derivative margin calculations, and all of those. The last is intelligent automation. Going back to another area, you have the Oracle ERP platform; everything has to be digitized and stored in that. And that's where the contracts come in. We have seen customers tell us that a lot of their good information is all in these contracts, which they have to solve via intelligent automation: you get it out and put it in the system. So we are working with various insurance players. People are doing a lot of applications like the RAG generative AI that you've heard about. The one thing that's been a stumbling block, and I'm a developer by background:

(13:11):
You have to get the accuracy. Accuracy is the key. And that's what we have shown, because NVIDIA brings that accuracy on top of a partner like Oracle, and you have to make sure that the use case is fine-tuned toward that. And last but not least, conversational AI. It's a segment that comes last in the bucket, but it covers both customer and employee experience. For those of you who don't want to deploy it externally, it's being deployed internally for your own employees. People want to go through financial reports, SEC reports, capital planning reports. So you could say it's either customer experience or, if you do not want to deploy it externally, employee experience: you can deploy it for your own employees' productivity. And that's a use case where we have seen banks go ahead relatively quickly. We see that banks are caught up with model risk and validation; hedge funds and asset managers are less constrained, so they're heavier users of generative AI. But we'll give it back to you for feedback.

Richard Sachs (14:17):
I would just add the one on the right side, on document extraction, or intelligent document processing. We're seeing it in call centers, where reps have to go in and research a policy or a rule because clients are saying, oh, we're redeeming this money, what's the rule around that? Or, we're going through a divorce, what are the rules around that? All of these kinds of challenges that a call center rep has to face, right? We're using document extraction to pull that out, so they can basically type in the question and it can go across your thousands of rules and reports and extract that information. We've seen significant productivity improvements and time-of-service improvements to the end customer. They can go in and basically use a retrieval augmented generation model, which selects the information from your data; you're applying an LLM to your specific data to extract that information so the rep can answer the question accurately. Think of the customer service improvements, but also in terms of accuracy and speed of response: it eliminates all that flipping through hundreds of pages of information and going to a more senior person, which typically they do, to ask what the rule is around this thing. So it just eliminates that. So these are samples of the kinds of use cases, Chris, that we've been seeing.
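As a rough illustration of the retrieval step Richard describes, the sketch below ranks policy passages against a rep's question and assembles them into a prompt. TF-IDF stands in for a production embedding model and vector store, and call_llm() is a hypothetical placeholder for whatever model endpoint an organization actually uses.

```python
# Minimal RAG sketch: retrieve the most relevant policy passages for a rep's
# question, then hand them to an LLM as context. TF-IDF stands in for a
# production vector store; call_llm() is a hypothetical placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policies = [
    "Early redemption of a retirement account before age 59.5 may incur a 10% penalty.",
    "In a divorce, division of a qualified account may require a domestic relations order.",
    "Cash transactions over $10,000 trigger a currency transaction report.",
]

question = "What are the rules for splitting an account during a divorce?"

vectorizer = TfidfVectorizer().fit(policies)
doc_vectors = vectorizer.transform(policies)
query_vector = vectorizer.transform([question])

# Rank passages by similarity and keep the top 2 as grounding context.
scores = cosine_similarity(query_vector, doc_vectors)[0]
top_passages = [policies[i] for i in scores.argsort()[::-1][:2]]

prompt = (
    "Answer the rep's question using only the passages below.\n\n"
    + "\n".join(f"- {p}" for p in top_passages)
    + f"\n\nQuestion: {question}"
)
# answer = call_llm(prompt)  # hypothetical LLM call; provider-specific in practice
print(prompt)
```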

Chris Harrison (15:36):
Yeah, no, and I appreciate it. So let me say one other thing too. Richard and I are with our tech team, which is kind of the licensed database and generative AI cloud infrastructure piece. We have folks in the room here that are with our SaaS side, so when you think about the ERP, EPM, HCM, and CX pieces of the business, Oracle has done a nice job of building AI and machine learning into the apps space, and that's continuing to evolve, where it will do some of that automation automatically around report generation and articulating reports for financial statements. But we're also seeing it play adjacent to that, with data that sits outside of an app space, where you're trying to pull in other data. That's where OCI and NVIDIA's power come into place. And the thing I would point out: Tom Donahues is here from our apps team, and they're going to be hosting a lunch later today as well, where they're going to be talking more about analytics and how some of these things tie together. So I don't know if you want to add anything, Tom, or are you good? No. Yeah,

Tom Donahues (16:39):
We'll be having a lunch and we'll be sharing the space outside. So when you think about it, right, we've got a great partner in NVIDIA who provides the processing power, we've got the technology platform that we built for AI, and now we're enabling both of those foundational aspects to build AI into all of the applications that Oracle is providing to our customers today.

Richard Sachs (17:06):
Cool.

Chris Harrison (17:06):
Okay. Yeah. So Richard, do you want to touch on this AI factory?

Richard Sachs (17:09):
Yeah. So kind of like, okay, we are where we are today. The question is where do we go, or where are some of the players going? They're going toward this direction of AI factories. At Oracle, we kind of view it as industrializing AI. And the idea is, we spoke about organizations becoming digital first some years ago; now we're seeing a transformation into kind of an AI-first strategy. And this really touches on two or three things. One is enabling the entire organization with computational power, because gen AI models have extreme power requirements; there are huge requirements in building and delivering these models. So you have to have that. But the foundational starting part is the data, and typically, again, we're seeing siloed data because of the way organizations have grown over the years. You have to have data that's accessible through things like vector search, and capabilities to extract the information to build these models.
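For readers unfamiliar with the "vector search" Richard mentions, here is the idea in miniature: documents and queries become vectors, and search is nearest-neighbor lookup by similarity. The embed() function below is a toy stand-in so the example runs; a real system would call an embedding model and a vector database.

```python
# Vector search in miniature: documents and queries become vectors, and
# "search" is nearest-neighbor lookup by cosine similarity. embed() is a
# toy stand-in for whatever embedding model the organization actually uses.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy embedding (character histogram) purely so the example runs.
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    return vec / (np.linalg.norm(vec) + 1e-9)

documents = [
    "Commercial lending exposure report, Q2",
    "Retail deposit growth by branch",
    "Wealth management client onboarding checklist",
]
index = np.vstack([embed(d) for d in documents])

query = embed("onboarding a new wealth client")
scores = index @ query                      # cosine similarity (unit-norm vectors)
best = documents[int(np.argmax(scores))]
print(best)
```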

(18:04):
So this concept of an AI factory is really kind of the future state. It's building an organization that's AI first, that's building the end-to-end process, ensuring that every piece of information is ultimately driving the right outcome, which is intelligent, improved decision making. That's what the right-hand side is; you're driving toward that. How do you do that? Well, it starts from the data and the computational power, and then you basically extract the information so that every business user in your organization can get what they need to make the decisions they need to make. Because what we're doing here, and recently Aidan from Cohere was mentioning this, we're trying to augment the capabilities of human beings. We're not really replacing people; we're trying to improve the way they can work with better information. Because at this stage of the game, AI is really an enabler, not a replacer. So what we're trying to do is present a future state that allows an organization to think about where we are ultimately trying to go: we're trying to make better decisions as organizations. I know Prabhu

Prabhu Ramamoorthy (19:11):
Yes, I completely agree. Even in the research analyst use case, they could be focusing on more companies. And going back, why do you need a factory? When we went to banks, we got a list of 200 use cases, right? But then we initially picked the top three, because AI is so expensive. But you want to recreate that same process; you cannot do it only for the top three ROI use cases, you want to use it for all the 200 use cases that people have come back with. I mean, how many people here have only one or two use cases for AI? Raise your hands. Yeah, hundreds, hundreds. So you need a factory, and that's what we have really been extending. And beyond the GPUs, you need the networking and other areas, because it's a very scarce resource; it can sit just 20% utilized, which is not great. So you want to have it on the greatest network so that you can get the maximum out of it for your accurate use cases.

Chris Harrison (20:15):
So who remembers those fancy terms, share of wallet and cross-sell? Think about the old MCIF systems back in the day. I'm only making the point that it's about the data. Back when we were trying to cross-sell and grow share of wallet, it was, how do we pull in the data from our wealth management solution? How do we pull in the commercial banking data with the personal banking data to know that super household, right? Well, with AI, it's a step beyond that, but it really does truly start with the data. With the GPUs and the infrastructure that our organizations have, you can really harness that data across the entire enterprise and then leverage this manufacturing tool called AI to help augment your decision making. Yes, I love it. Question? Go ahead.

Audience Member 1 (21:03):
I'm in the front and I can't tell who's looking at me or not, so I'm good to have a conversation.

Chris Harrison (21:06):
You want that mic so everybody can hear you?

Audience Member 1 (21:08):
The point I would add to this: I agree with the vision, awesome, it makes perfect sense to me. What I would say from a human standpoint is that the data and AI factory component, to turn it into intelligence, still requires a lot of human ingenuity, creative thinking, critical thinking, because it's not just throwing a bunch of raw ingredients into this magical sausage grinder and it sort of churns out intelligence. You have to think: what is the end state? What is my goal? How am I going to use that intelligence? What's the intelligence for? If you have that, I think the vision is great. That's my point.

Chris Harrison (21:42):
A hundred percent. So you hear the term human in the loop, right? The human is still in the loop. We see that, and we feel that's where it's going to remain, because you can't just solely rely on it. You hear the term hallucinations, where AI occasionally creates some answer to a question like, who was the first president of the United States? It was Donald Trump, right? No, that's not right. So there's still some of that, but there are accuracy improvements and all that going on,

Prabhu Ramamoorthy (22:11):
Yes. What made banks key? Why Wall Street? Because of the data, right? It's data that you have; it sits exclusively with you, not with other firms. And going back, I think you are very right. At the end of it, RAG, all these things you hear about, you still have to do the engineering. Just like how you had to fight for a tabular data model, where you had to use a lot of PhD statistics, it still needs the hard work for your use case. You're just substituting a different class of model, and you have to do that hard work to convert that data. But finally, we feel that one day the banks, you yourself as a company, can monetize this data, not others who have the data, not ChatGPT, not OpenAI. So we believe in creating your own model where you tap into that and it becomes your own intellectual property. That's what NVIDIA believes in, that you can create your own GPTs so that you can make money out of it. And it's data that sits only with you guys.

Richard Sachs (23:15):
Yeah, I would just add to your point: we were recently in New York with one of the banks, and we worked with them on the scoping document of their gen AI project. It required humans to sit down and actually define the problem. It took humans to sit down and define the outputs, and, I'm going to call it, the metrics that would satisfy that this model was in fact meeting the test. And the reality is, before you even start to implement the technology, you really have to set it out and agree as a team on what's going to be the outcome that you're satisfied with delivering to the organization. All of this technology is awesome, but it still requires a huge amount of human intelligence and capacity to say, this is not acceptable at this level. For example, the models were delivering 50 or 60% accuracy, and they said that unless they had 90% accuracy or better, they were not going to apply the models. But it took a lot of iterations; it took several months to get to that level of output. So that's ultimately, as Chris said, the human-in-the-loop piece.
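The acceptance test Richard describes can be as simple as the sketch below: score the model against a human-labeled evaluation set and only promote it past an agreed bar. The 90% threshold mirrors the example above; the questions, labels, and model_answer() stub are illustrative.

```python
# Minimal acceptance gate: compare model answers against a human-labeled
# evaluation set and only promote the model if it clears the agreed bar.
ACCURACY_THRESHOLD = 0.90

eval_set = [
    {"question": "Is a W-9 required for this account type?", "expected": "yes"},
    {"question": "Does this product carry FDIC insurance?",  "expected": "no"},
    # ... in practice, hundreds of reviewed question/answer pairs
]

def model_answer(question: str) -> str:
    # Placeholder for the deployed model; assumed to return a short answer.
    return "yes"

correct = sum(model_answer(item["question"]) == item["expected"] for item in eval_set)
accuracy = correct / len(eval_set)

if accuracy >= ACCURACY_THRESHOLD:
    print(f"Promote: accuracy {accuracy:.0%} meets the bar")
else:
    print(f"Hold back: accuracy {accuracy:.0%} below {ACCURACY_THRESHOLD:.0%}")
```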

Chris Harrison (24:19):
No doubt. So, governance models. I put this slide up; we stole it from our partners at Deloitte because we think it tells the story fairly well. And to be clear, both Oracle and NVIDIA work hand in glove with Deloitte from a consultant, integrator, and implementer standpoint on our SaaS side, as well as helping us, and they've been at many of our events. The other thing I will say, at the risk of you having to listen to me talk any more: it wasn't that long ago we did an American Banker webinar with Global Payments. It was with NVIDIA, myself, and Global Payments, with Kevin Levitt, one of Prabhu's peers, where we talked through Global Payments' evolution, how they got started, and their whole governance model. And to the point I think Prabhu was making, when they started out, their first thought was that they didn't think anybody was using AI in their organization.

(25:11):
So they asked the employees, are you using AI? And guess what they found out: oh yeah, we're using it here. And they're like, well, holy crap, we've got to harness this, because there may be things going on that we're not aware of. So then they surveyed their 27,000 employees and got ideas of where to start. But this pictorial really does capture what they did. Look, it is a cross-functional team. It does take business minds, because you guys know it: the business strategy is what's key. You've got the overarching CEO/CFO strategy, the five-year plan of your organization, but when that gets rolled down into your individual lines of business, they've got their own strategies, and they're going to come to their platform administrators or their CIO or their CTO and say, we need upgrades in the technology area, or I need generative AI, and you can't have all these people doing all these different things. So they took those functional people from the lines of business as well as the technical side and created this committee. And obviously it starts with the executive team having a clear vision and understanding it. But the only reason I bring up that webinar we did is that I think it's still out there; it's on our social media.

(26:22):
Russell Moore from Global Payments really goes through and talks through Global Payments' experience of how they went from infancy, to starting out on the efficiency play with what he called low-splash challenges, putting it someplace where, if it screws up, it's not going to be a costly mistake, to the very quick evolution of: we've got to make money with this. Not just cut costs, save, and create better processes; we've got to find ways to monetize, which is where they're heading with it. And Richard mentioned another point: KPIs being clear, having a clear understanding of what the problem is, what your current metrics are around it, and what your desired future state is. You kind of touched on it. We have a customer we're working with that was working with, I think, OpenAI, a different provider, and they were only getting in that 60 to 65% accuracy range; they were frustrated and thought they were going to be done. So we met with them along with partners like NVIDIA and Cohere, which is a partner of both of us, sat down with them, understood the problem, went in with programmers from our side and from the NVIDIA and Cohere side, and created a new model in a matter of three or four weeks that hit the mark. Where you can have those business-level consultative conversations around use cases, which we're going to talk more about, that's where you can really have an impact.

Richard Sachs (27:54):
Yeah. Okay. Here again is an example from one of our partners, McKinsey, on how to decompose a use case and apply different risk strategies to that particular use case. So you take something like a customer journey, where the highest probabilities of risk are around fairness and, effectively, the accuracy of the model in defining the output. And as you see here, they've identified this as kind of the primary risk. Not every use case has the same level of risk associated with it. So this is again an exercise humans would conduct: they'd effectively categorize the use case, identify the primary and secondary risks, and then really zero in on the most important risks. Different use cases will have different ones. I don't know if you have any experience around this. Yeah,
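The decomposition exercise Richard describes is essentially a ranked risk register per use case. A minimal sketch, with illustrative categories and severities rather than any standard taxonomy:

```python
# Sketch of the decomposition exercise: list each use case's risks, mark the
# primary one, and rank them so review effort goes where exposure is highest.
# Risk categories and scores are illustrative, not a standard taxonomy.
from dataclasses import dataclass

@dataclass
class Risk:
    category: str
    severity: int      # 1 (low) to 5 (high), agreed by the review team
    primary: bool = False

use_cases = {
    "customer journey chatbot": [
        Risk("fairness", 5, primary=True),
        Risk("hallucination / accuracy", 4),
        Risk("data privacy", 3),
    ],
    "internal policy search": [
        Risk("hallucination / accuracy", 4, primary=True),
        Risk("access control", 3),
    ],
}

for name, risks in use_cases.items():
    ranked = sorted(risks, key=lambda r: (r.primary, r.severity), reverse=True)
    top = ranked[0]
    print(f"{name}: focus first on {top.category} (severity {top.severity})")
```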

Prabhu Ramamoorthy (28:41):
Richard, I agree. Finally, you have to map the common sense back to the technology, right? Generative AI, if you really look at it, you apply it to your domain, great. We are able to ask ChatGPT questions about Game of Thrones and dragon's eggs, but I'm a financial analyst; in my own life as a CFA, I like to analyze interest rate risk and all that. So you have to take that use case, and your leadership is very key; you just map it. And you would even be able to look at those areas: you can ask a simple question to see if the model is giving the output that you want, and you map those simple areas. Not everything has its own mapping, right? Yeah.

Richard Sachs (29:23):
The legal team will say, well, everything's risky, right? Of course. But then the thing is you've got to kind of build a pyramid and say, okay, here's the highest and here's the lowest. I'm sorry, there was a question.

Chris Harrison (29:31):
Yeah, no, I was going to say, look, we've got about 15 minutes left and I'd rather hear from some of you. So perfect.

Audience Member 2 (29:37):
Sorry to interrupt that discussion. Just considering where you're going in this journey, and you're giving us the big picture of everything, I think the biggest stumbling block, and the most stubborn, is the data, particularly when you're talking about gen AI data, or even when you are using open source as your modeling tool. How do you get into that data to make sure it's correct? This is no good if we've got bad data, and seeding good data is incredibly hard. I don't even think the MITs of the world have answered that one. So, I dunno.

Richard Sachs (30:10):
Yeah,

Prabhu Ramamoorthy (30:11):
I can take that. I mean, a lot of it, like you said, is the data strategy, right? Yes, that's there. But companies have been rushing out; you have a lot of PDFs, and you have to get them organized, and I know banks have embarked on this data strategy. NVIDIA recently also released a tool to generate synthetic data for large language models, so we are even supplementing that so models can be trained. Think of NVIDIA, right; we are also helping on the software part to generate synthetic data. There are different ways to do that, but ultimately, yes, I think you pinpoint it right: you have to figure out that data piece first before you embark on the agents. Yeah,
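As a generic illustration of what synthetic training data can look like (this is not NVIDIA's tooling, whose interfaces are not shown here), the sketch below generates template-based question/answer pairs to supplement scarce labeled examples before fine-tuning. Products and wording are invented.

```python
# Generic illustration of synthetic training data (not NVIDIA's tooling):
# template-generated question/answer pairs to supplement scarce labeled
# examples before fine-tuning. Products and wording are invented.
import json
import random

random.seed(0)
products = ["12-month CD", "money market account", "HELOC", "small business line of credit"]
templates = [
    ("What is the early withdrawal penalty on a {p}?",
     "The early withdrawal penalty on a {p} is described in the product disclosure."),
    ("Can a customer open a {p} online?",
     "Yes, a {p} can be opened through the digital banking channel."),
]

synthetic = []
for _ in range(100):
    p = random.choice(products)
    q_tmpl, a_tmpl = random.choice(templates)
    synthetic.append({"question": q_tmpl.format(p=p), "answer": a_tmpl.format(p=p)})

with open("synthetic_qa.jsonl", "w") as f:
    for row in synthetic:
        f.write(json.dumps(row) + "\n")
```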

Richard Sachs (30:53):
I would just add that a lot of our conversations with customers are starting with: what is your architecture? What is your data strategy? The chief data officers of companies are extremely important people in this story. They are the ones, presumably, that have the keys to the kingdom and are challenged with the idea of structured, unstructured, and semi-structured new types of data: all this video data, text data, documents that are basically PDFs and other forms. They sit in silos all over the darn place. Part of the hardest part is sitting down with a client and getting them to share that architecture so we can collaborate with them on what the sources of data are. And then again, assessing the accuracy of the data in each of those silos, and who's responsible for managing and updating that information, so that ultimately you can feed it into a data lake, a data repository, where you can then effectively create a single source of truth, which is really where you're trying to get to.

(32:04):
But the challenge, of course, is that our organizations are effectively decentralized organizations. At the end of the day, there's commercial lending and retail lending and wealth management and so on. So even though the customer data may appear unified on the surface, behind the scenes a lot of the information that supports it is not just unstructured but basically siloed. So I think that is the most significant challenge for us in modernizing our businesses, because, and this is kind of the metaphor that's used, data is really the oil of the future. At the end of the day, your organizations are entirely dependent on how effective your data sources are and how accurate they are. So the number one challenge, I think, for organizations is how do you modernize the data sources and the structure, create a single source of truth, and then apply that to these modern technologies.
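A minimal sketch of that consolidation step, building one customer view (the "super household" Chris mentioned earlier) from line-of-business silos. Column names and the matching key are assumptions; real reconciliation usually needs fuzzy matching, survivorship rules, and a data steward in the loop.

```python
# Sketch of consolidating one customer view from line-of-business silos.
# Column names and the matching key are assumptions, not a real schema.
import pandas as pd

retail = pd.DataFrame({
    "customer_id": ["C001", "C002"],
    "email": ["a@example.com", "b@example.com"],
    "checking_balance": [5200.10, 310.45],
})
wealth = pd.DataFrame({
    "customer_id": ["C001"],
    "email": ["a@example.com"],
    "aum": [250000.00],
})
commercial = pd.DataFrame({
    "customer_id": ["C002"],
    "email": ["b@example.com"],
    "credit_line": [150000.00],
})

# Outer joins on a shared key approximate the unified view; conflicting
# attributes would still need survivorship rules decided by data owners.
unified = (
    retail.merge(wealth, on=["customer_id", "email"], how="outer")
          .merge(commercial, on=["customer_id", "email"], how="outer")
)
print(unified)
```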

Chris Harrison (33:03):
Questions in the back?

Audience Member 3 (33:05):
So when we speak to managing and governing the risk of the data, are we to the point that we're beginning to see a bit of standardization across banking and financial services as far as what is an acceptable level of confidence, slash risk? And if we're not there yet, who drives that? Is it regulators, the competitive marketplace? What do you guys see?

Chris Harrison (33:33):
Yeah, I think, go ahead.

Richard Sachs (33:34):
Well, I would just add, I was recently at a regulatory conference, and the number one question I heard at our booth was about traceability of data. Regulators were concerned, particularly now with AI, with whether they can effectively understand the sources of that information. So I would say one of the number one challenges is ensuring that the data architecture is really clearly displayed, and you really need experts to come in to work with you to ensure that all those data sources are clearly mapped out. And for a lot of organizations, that requires cleanup; there's a lot of redundancy in that information too, right? You've got the same client information in multiple forms. So the data cleanup is really critical. Ultimately, the regulators that I spoke to were concerned, in their audit process, with understanding what the sources are. And particularly now, as we augment our services, we're building networks, we're building connections to other services, we're building loyalty programs to different sources of data, we're trying to basically engage with that customer in more forms, which again adds more complexity and, frankly, exposes us to more risk, because it exposes that kind of digital platform to more of what they would call points of penetration, places where a bad actor can get in. So it is a complex process.
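One way to picture the traceability regulators are asking about: carry source metadata with every passage fed to the model, so an answer can be audited back to its system of record. The field names and systems below are illustrative, not a regulatory standard.

```python
# Sketch of source traceability: every passage fed to the model carries
# metadata identifying its system of record, so an answer can be audited
# back to its sources. Field names and systems are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class SourcedPassage:
    text: str
    source_system: str
    document_id: str
    as_of: date

retrieved = [
    SourcedPassage("KYC refresh is required every 24 months for high-risk clients.",
                   "policy_repository", "POL-2231", date(2024, 3, 1)),
    SourcedPassage("Client risk rating methodology, section 4.",
                   "compliance_sharepoint", "DOC-8812", date(2023, 11, 15)),
]

answer = "High-risk clients require a KYC refresh every 24 months."
audit_record = {
    "answer": answer,
    "sources": [
        {"system": p.source_system, "document": p.document_id, "as_of": p.as_of.isoformat()}
        for p in retrieved
    ],
}
print(audit_record)
```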

Chris Harrison (34:55):
Sorry, but I would also agree. I think back to Global Payments and the story they told, and other customers that we talk to, who are all dealing with safety and soundness or compliance examiners. They're still examining the data and the processes, how you store the data and how you aggregate it. Now you're trying to do something a little different with it in terms of leveraging this tool to help create outcomes. To Richard's point, certainly if you were using it for lending right now, you're going to have a fair lending concern: where is the traceability of the data, and how is it creating the answers that it's giving you? So their approach was: we've got our internal structure in place around how we manage the data, how we are managing our safety and soundness and the compliance and the cybersecurity, all that stuff. It's no different when you're adding AI on top of it; it's then, how are you leveraging the AI results and the use cases, and how does that impact the customers? What we're hearing more about is really around the traceability side. Does that help?

Prabhu Ramamoorthy (35:58):
And I don't think we have to leave it to the regulators; with the first line of defense and second line of defense, we already have projects going on risk data aggregation and all that. I think the challenge is, we set the platform, and the regulators always want what they want. And if you've heard of something called CAT, the Consolidated Audit Trail, nobody uses it. I think you have to lead it from the strategy, and NVIDIA actually works even on the tabular data side, where people have been struggling. But if you add unstructured data to that, to what Richard meant about different sources of data, you need a different kind of compute, and you have to do the same things with unstructured data. The problems you had with tabular data, you will have with unstructured data as well. It's not an easy question.

Richard Sachs (36:49):
It's ultimately the organization's assets. It's really your business. So it's really up to you to define that strategy. Obviously you can get collaboration with other professionals, but that has to be kind of the mindset of the organization. If you want to build that intelligent organization, you have to make that investment around the data upfront, and ultimately then the outcomes are going to kind of flow from it. Once you build that foundation, you're really on your next journey. You're really driving the transformation of your organization. Yeah, perfect.

Chris Harrison (37:23):
Not much to really say here. Oracle and NVIDIA are both partnering across the universe. There are some niche players, like Gemelo with the digital twin that I opened the event with; that's specifically what they do. So we're partnering with a lot of different organizations to expand use cases, and we just want to make sure you understand that in our opinion, and I think in the opinion of external reviewers, Oracle and NVIDIA are at the top of the game in terms of generative AI and use cases in the financial space. So I don't know if you want to add any comments; we've talked through most of this stuff.

Richard Sachs (38:00):
No, I think we've really talked about it. These are kind of the differentiators. You'll have access to this deck; I believe the recording will be posted and so on. So these are the things that we really zero in on. If a customer says, well, what differentiates you from AWS or from Microsoft or from those organizations, these are the areas that we would typically zero in on. So again, I won't cover these now in detail, but yeah.

Chris Harrison (38:22):
And one last

Prabhu Ramamoorthy (38:22):
I can add on to that. Go ahead. Fine-tuning, that's really very key. I mean, finally, AI is sitting at around 55% accuracy with these horizontal models, so you have to get it to your use case. And a lot of people are focused on cost: the cost of generating tokens, the cost of feeding in the context and of what the model outputs. But it doesn't matter; it doesn't matter until you've got the use case. The accuracy is the key, and then you worry about the cost. I think that's why you find that Oracle's platform is offering it, right? The industry is going through a phase where they're starting with some cloud experimentation, but ultimately they will have to do all those steps, including the fine-tuning support, to get to the end use case.
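A hedged sketch of the fine-tuning step Prabhu emphasizes, using the Hugging Face Trainer on a small public causal language model. The model choice, data, and hyperparameters are purely illustrative; production setups typically add parameter-efficient methods such as LoRA, a proper evaluation set, and masking of padded labels.

```python
# Hedged sketch of supervised fine-tuning on in-house text with Hugging Face
# Transformers. Model, data, and hyperparameters are illustrative; production
# setups typically add LoRA/PEFT, an eval set, and masking of padded labels.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # small public model used purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

texts = [
    "Q: What triggers an enhanced due diligence review? A: ...",
    "Q: How is counterparty exposure aggregated? A: ...",
]  # in practice: thousands of curated, policy-approved examples

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True, padding="max_length",
                    max_length=128)
    enc["labels"] = [ids.copy() for ids in enc["input_ids"]]  # causal LM targets
    return enc

dataset = Dataset.from_dict({"text": texts}).map(tokenize, batched=True)

args = TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                         per_device_train_batch_size=2, logging_steps=1)
Trainer(model=model, args=args, train_dataset=dataset).train()
```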

Richard Sachs (39:10):
Yeah, and predictable price performance is a really big aspect. When you start to get into these monstrous models and you're going through thousands of transactions a second, or millions of transactions, the scale of the pricing becomes a real CFO challenge. They start to say, just a moment, this is $10 million a month, are you kidding me? So all of a sudden these things become very important. So really, the engineering teams in your organizations have to do a very good job on the diligence side and the business case side, understanding who you're partnering with, because the numbers get crazy. The numbers get crazy very fast on the LLM-building side.
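Richard's scale warning is easy to reproduce with back-of-envelope arithmetic: requests per month times tokens per request times price per token. The prices and volumes below are placeholders, not any vendor's actual rates.

```python
# Back-of-envelope inference cost: requests/month x tokens/request x price.
# Prices and volumes below are placeholders, not any vendor's actual rates.
requests_per_second = 1_000
seconds_per_month = 60 * 60 * 24 * 30

prompt_tokens_per_request = 1_500      # retrieved context plus the question
completion_tokens_per_request = 300

price_per_1k_prompt_tokens = 0.0005    # assumed, in dollars
price_per_1k_completion_tokens = 0.0015

monthly_requests = requests_per_second * seconds_per_month
monthly_cost = monthly_requests * (
    prompt_tokens_per_request / 1000 * price_per_1k_prompt_tokens
    + completion_tokens_per_request / 1000 * price_per_1k_completion_tokens
)
print(f"~${monthly_cost:,.0f} per month at {monthly_requests:,} requests")
```

At these assumed numbers the sketch lands at roughly $3 million a month, which is exactly the kind of figure that triggers the CFO conversation.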

Audience Member 4 (39:58):
Does that make a big difference when you're accessing the data? It also improves things: you can then get to more data in a cost-effective manner. So it's a huge difference for your analysis.

Richard Sachs (40:07):
Yeah, I think this is, we're seeing, again, not every financial institution's going to have obviously the same scale challenges, but when you're starting to build these models and so on, and you're using them on a regular basis, and of course you're doing testing, so having to be able to turn on and off the spigot of access to computational power is really important. And of course, the volumes of data, when you really start to aggregate all of that information, it can become very,

Prabhu Ramamoorthy (40:34):
How far have you gone ahead in the journey? Have you started using large language models? Maybe I'm also learning from you how far. Maybe raise your hands and we can

Richard Sachs (40:44):
Ask. Let's say you have under 10, let's call it, projects and LLMs today, or 10 to 20. Who would be under 10? Anyone? Anyone

Audience Member 4 (40:54):
Under 10?

Richard Sachs (40:56):
Anyone with 10 to 25 projects, let's say, that are building gen AI models? Just roughly. A few. Okay.

Audience Member 4 (41:06):
Gen AI

Richard Sachs (41:06):
Yeah.

Audience Member 4 (41:08):
So what actually further gen AI?

Richard Sachs (41:12):
Yeah, I'm more gen AI. I'm more interested in the gen AI stuff.

Prabhu Ramamoorthy (41:15):
I think, yeah, there are two different things: quant finance workloads, and machine learning and predictive analytics. I know predictive analytics, that's the credit card models, underwriting specifically. We wanted to understand whether banks, which we know have been big fans of predictive analytics, have started using generative AI as well.

Chris Harrison (41:36):
So we just have a couple minutes. Again, Prabhu has a keynote discussion here at 10:15; I would encourage you to go hear more, where he'll get into more fascinating technology detail. It's really helpful. And then Richard and I will be down at our booth for interaction; we have some demos, and we can show you some of the AI use cases that Oracle has. Tom Donahues, over there again on the side of the room, will be hosting a lunch later today with the Oracle apps team, right here in this same room, so there'll be more discussion around analytics and AI within the Oracle app space. Anybody have any questions? We've only got about a minute left. Oh yeah, and you can scan here to get connected, to get additional information, and to get access to the slides and our homepage. Correct. Any final questions?

Prabhu Ramamoorthy (42:30):
Yeah, feel free to ask any questions. Even I am, like you, right? This entire thing we had to relearn.

Chris Harrison (42:37):
I don't know if I can take a question from a guy in a tie today. That'll be better.

Audience Member 5 (42:47):
So maybe this isn't a fair end question, but you talked a lot about LLMs and the projects that you're working on there. Have you started to work on, or seen any trends toward, small language models that banks are using for very specific internal projects? Yeah.

Prabhu Ramamoorthy (43:00):
Yes, definitely. Yes. So it's not like one class; there's small, medium, large,

Richard Sachs (43:05):
No. Yep.

(43:07):
The short answer is yes, absolutely. One, for scale; two, for, I'm going to call it, building scope around something that's more manageable, frankly. And really, I mean, there's no real definition of large, small, or medium, as Prabhu said. But the reality is you're working with a smaller data set, you've got a more specific use case that you're trying to deliver on, and there's also your team's capability. We ran a workshop with five or six, I'm going to call them, gen AI leaders in a virtual roundtable, and they were saying that is in fact one of their project focuses: what they call small gen AI models, typically associated with something very specific, and particularly, I was going to say, on the external side. Internally, if you're doing something for HR or something around gen AI, that's different, right? If your model is somewhat inaccurate, it's not as problematic as doing something with customers. So the ones I heard about were building small language models for external projects: customer support, customer access, something like that.

Prabhu Ramamoorthy (44:12):
I mean, big tech thinks of everything as Google search: it's one model. But as you know, in a bank everything is a set of models working with each other. Fraud is a set of models, and it really comes down to addressing each part of that with the specific need and the specific LLM tool: summarization, entity recognition, all that. Yeah.
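Illustrating the small, task-specific models both panelists describe, the snippet below runs a compact named-entity-recognition model rather than a general-purpose LLM. The model name is an example of a small public checkpoint; a bank would typically fine-tune on its own entity types.

```python
# Small, task-specific model: a compact NER checkpoint for entity extraction
# rather than a general-purpose LLM. Model choice is illustrative; banks would
# typically fine-tune on their own entity types.
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

text = "Wire of $2.4M from Acme Holdings to a counterparty in Singapore on June 3."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 2))
```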

Chris Harrison (44:35):
All right. So we're actually a little bit over, but any other questions, we're happy to take 'em. If not, thank you for your attendance and we'll be around, you can ask us questions.