Scaling GenAI Trends and Implications Panel

A dynamic and insightful panel discussion on how to effectively scale GenAI at your organization. Panelists Vinod Devan, Global Head of Partnerships at Cohere, and Yaron Haviv, Technology Leader at Iguazio, share lessons learned and best practices for scaling GenAI across the enterprise, focusing on LLMs, platform architecture, MLOps, and culture and change management.

Transcription:

Antonio Castro (00:10):
Welcome everybody. We have a very exciting discussion today. We are talking about Scaling Generative AI. I think a topic many of you have probably been exposed to different frames. I'm Antonio Castro. I'm a Partner with McKinsey. I spent the last 25 years working primarily with financial institutions on scaling data and AI capabilities.

Yaron Haviv (00:34):
I'm Yaron Haviv. I'm the technology lead for a company acquired by McKinsey that handles all the MLOps and GenAI technologies. I've been a VP at Mellanox, which is now part of Nvidia, where I led the data center practice, including financial services, and I'm the author of a book on MLOps for enterprises.

Vinod Devan (00:55):
Great. I'm Vinod Devan. I'm responsible for building the AI ecosystem for Cohere. If you're not familiar with Cohere, we make foundation models, including the generative models that we've all been talking about for a couple of days. A fun fact about Cohere for those who are not familiar: our co-founder and CEO, Aidan Gomez, was one of the eight authors credited with writing the seminal paper on Transformers, the T of GPT. He was literally one of the creators of the technology. Prior to Cohere, I was with a different startup and helped take it public; it's currently listed on the Nasdaq. And before that I was a consulting partner at a large consulting firm, where we helped companies integrate emerging technologies into strategy, products, operations, et cetera. So I can tell you that, from my career standpoint at least, I've never seen a more exciting time to be in this space than with AI now. So excited to be here.

Adam Fish (01:58):
Okay. My name is Adam Fish. I'm an Account Executive at Glean. If you don't know about Glean, you can think about us as an internal Google or an internal ChatGPT. We started off as an enterprise search platform and then kind of lucked our way into the generative AI revolution. It just so happens that having a great search foundation helps with internal chat, internal AI. So we're in our rapid growth phase as a company. I lead our business here in Florida, and prior to Glean I was at Google.

Antonio Castro (02:31):
Thank you. All three of you are at the epicenter of this topic, which is very exciting, so thank you for being here with us. The first question I'd like to start with, just to get the conversation going: more than half of the clients we speak to in the industry, and in the surveys that we've done, have GenAI somewhere at the top of their agenda as a leadership team, right? It's something they see from the board level down: we've got to figure this out, it's important to us. How do you see this changing the industry or shaping what the next few years will look like for us? Vinod, maybe we start with you.

Vinod Devan (03:13):
Yeah, look,

(03:17):
I'll talk about the finance sector in particular, but in general I have the vantage point of seeing GenAI being adopted across multiple industries, and I can say unequivocally that the highest degree of adoption, ramp, and scaling is happening in the financial sector, specifically I'd say banking and insurance, but more broadly as well, and there's a reason for that. The thing GenAI is able to do really well today is take large quantities of data, process them, analyze them, and make sense of them. That's what these models do really well. And there's no industry out there that has more data, more fast-moving data, changing data, volumes of data, different types of data, than the finance industry does. So there's a lot of connection, a lot of correlation; it makes sense. Now, in terms of how AI might be starting to play a role already, there are three major dimensions.

(04:09):
First, on the strategy side, you have to make the smartest decisions you can because it's a very competitive space. With AI, you can get much greater insight into market trends and market dynamics, customer sentiment, regulation, policy, et cetera. And if you're able to combine all that information in a meaningful way to make the right decisions for your strategy, it's incredibly powerful: you can do it faster and in a more reliable way. Secondly, and this has been discussed over the last couple of days, there are the repetitive tasks done by analysts, by folks who take documents, process them, make sense of them, summarize them, who are analyzing financial reports and creating financial reports. All those activities are already being done at a lot of companies by LLMs and GenAI models. And then finally, from a customer standpoint, again, having attended a few sessions here, there's a lot of focus on personalization.

(05:08):
And when you're talking about personalization, there's no greater power than having the data the models are trained on, having access to all the external and public data available across domains, and being able to combine it with your own information through what we call RAG, retrieval-augmented generation: combining it with the information you have about the customer, about your own offerings and services in your databases, to give a really insightful response and a very customized experience to your end customer. That is personalization at its best. So these are the areas we're seeing evolve significantly today. I'm sure there'll be a lot more coming, but that's where I'd say the role in finance is really coming to the front.
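
(Editor's note: a minimal sketch of the RAG pattern Vinod describes: retrieve the most relevant internal documents for a query, then hand them to a model as context. The keyword scoring, the document set, and the `call_llm` placeholder are illustrative assumptions, not Cohere's API; real systems typically retrieve with vector embeddings.)

```python
def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Rank documents by naive term overlap with the query (illustrative only)."""
    terms = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    """Combine retrieved internal context with the user's question."""
    context = "\n---\n".join(context_docs)
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

# Usage: docs could be parsed product sheets, CRM notes, or policy documents.
docs = ["Premier checking waives fees for balances over $5,000.",
        "Customer prefers email contact and holds a premier checking account."]
prompt = build_prompt("What fees apply to this customer?",
                      retrieve("checking account fees", docs))
# answer = call_llm(prompt)  # hypothetical call to any model provider
```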

Antonio Castro (05:55):
Adam, anything you want to add?

Adam Fish (05:56):
Yeah, I would add to that. I think generative AI is decision intelligence technology: it helps people make better decisions faster. And if you look at all of the knowledge inside an organization and weigh it by its mass, unstructured information, your messages, your emails, your documents, your diagrams, your playbooks, is about 90% of the information mass inside a company. The other 10% or so is structured data. In the last 10 years, organizations have made great strides in understanding their databases and their structured data, putting BI tools around that, building dashboards. Now, with generative AI in chat, we can get that same level of decision intelligence out of unstructured data. So rather than having to synthesize across five or ten different pieces of information to form a perspective on an investment or prepare for a board meeting, you can now go to GenAI and do that. And specifically, if you have a great retrieval architecture, if you've been able to build a bridge between your company's private data and these powerful AI models, that's what gives you that capability. So we see it as decision intelligence, accelerating workflows, accelerating productivity, saving time, and then, taking that a step forward, moving into agentic workflows as well.

Yaron Haviv (07:19):
So I may add a couple of things. I think you should divide it into front end and backend. On the consumption side, the front end, we have a younger generation that wants to consume things in natural language, quickly, without waiting, and as personalized as possible. So this is where GenAI can go into audio and text, et cetera, which is what the younger generation wants: not going through forms and definitely not going to the bank. On the backend side of things, again, AI introduces great efficiency for all those things we discussed, processing documents and so on. But I wouldn't underestimate structured data. Most of my clients, or the clients I engage with through McKinsey, are financial services as well, and we always find the augmentation between structured and unstructured. You always have context about a client or a user that sits in a database, and you have some other documents. And some of the challenge is really how to mix those: how to mix the traditional ML, the models that classify your preferences, with the models that know how to generate text, and get all the data. I just heard some examples here of people saying, you know what, I'll take the 10-Ks and search them for last quarter's revenue figure. No, you just put it in a database; it will hallucinate less. So I think we need to think about that as well.
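
(Editor's note: a toy illustration of Yaron's point that deterministic facts, like last quarter's revenue, are better served by a database lookup than by a generated answer. The table and column names are assumptions for illustration.)

```python
import sqlite3

# A structured store for facts that must be exact and auditable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE financials (quarter TEXT, revenue REAL)")
conn.execute("INSERT INTO financials VALUES ('2023-Q4', 1250000.0)")

def answer_revenue_question(quarter: str) -> str:
    """Exact lookup: no hallucination risk, unlike asking an LLM to recall it."""
    row = conn.execute(
        "SELECT revenue FROM financials WHERE quarter = ?", (quarter,)
    ).fetchone()
    if row is None:
        return f"No revenue on record for {quarter}."
    return f"Revenue for {quarter}: ${row[0]:,.0f}"

print(answer_revenue_question("2023-Q4"))  # Revenue for 2023-Q4: $1,250,000
```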

Antonio Castro (08:43):
I love that last example. I think the image of every role having a copilot that is also an apprentice, helping the front office, back office, or whatever, has been a fantasy for the industry for a long time, so it's great to see some real progress there. On that note, and maybe Adam, we'll start with you: we've also been surveying who's made real progress, and I would say, from our research, maybe one out of ten have really committed to long-term bottom-line impact. So I'd love to hear from each of you a snippet of an example of an organization, a financial organization, being pretty bold and scaling something, right? I think that'll help everyone ground what this could look like.

Adam Fish (09:39):
Yeah, of course. Some of the early use cases for generative AI are things like software engineering productivity and customer support operations, and we see financial services companies having those functions as well. We've worked with one that found they're saving three to five hours per week on some of the core R&D they need to do to enable their company and build technology. But specifically for financial services, I could highlight Sequoia Capital, a storied investment firm. They're an investor in Glean, and one of their partners spoke to us following a board meeting: just to prepare for that board meeting, they're going into a generative AI solution like Glean and asking questions. It's an interactive dialogue; it helps them get to the information and the answers they care most deeply about. And when folks are new and onboarding into a firm: a lot of financial firms have deep tribal knowledge, process, and understanding around how they make decisions or do certain things.

(10:40):
And so rather than having to search across five or six different places to get to truth, you can now have a system that connects horizontally across all of that, and chat with it like it's a colleague or a person. So we see great use cases for onboarding, getting people to productivity much faster, retaining tribal knowledge within the firm, and building specific agentic workflows, for example around how you take a board deck and break it down into something that's easily summarized and digestible. So it's really a productivity driver. But going back to what I said before, I think it's also a decision intelligence tool, because now you can actually make sense of all the unstructured data, in some cases convert it into structured data, and make better decisions faster.

Vinod Devan (11:32):
I'll build on that. Everything that Adam said... it's funny, we didn't compare notes beforehand, but we have a lot of commonality, because clearly there are some common themes playing out in the financial industry. In addition to what Adam said, there's the part about generating reports. A lot of reports today are manually generated. The summarization of existing reports is already happening, but there's also the generation of new reports, with a human in the loop, always a human in the loop, at least for now, which is getting a lot of traction. In addition, there are more complex use cases. For example, a large global bank we've been working with had some significant fraudulent activity in their online banking. Once they deployed Cohere, instead of analyzing just the transaction in and of itself, they started analyzing transactions in the context of what's happening in the macroeconomic environment, including information they have access to, for example device ID, IP address, even the user's social media activity and behavior, to determine whether or not a certain transaction is legitimate.

(12:40):
And then they took it to the next level, where they were finding even more complex, orchestrated attacks and were able to identify those. For example, there was one instance where multiple parties were coordinating and working together to exploit a vulnerability in their online payment system. Individually, each of them looked legitimate. But when the AI was applied to look at all those transactions together, in parallel, it was able to identify trends, correlations, and disparities, and that's how they found these schemes playing out. They were able to stop further damage, and their fraud detection team started folding that into their pattern recognition as well. So there's a lot of lower-order, productivity- and efficiency-related activity already being handled by GenAI, but more importantly, the higher-order, larger loss-prevention strategies are also deploying AI now.
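
(Editor's note: a simplified sketch of the coordinated-fraud idea: transactions that look fine individually can be flagged by grouping on shared signals such as device ID and IP address. The fields, values, and threshold are illustrative assumptions, not the bank's actual detection logic.)

```python
from collections import defaultdict

transactions = [
    {"account": "A", "amount": 900, "device_id": "dev-42", "ip": "10.0.0.7"},
    {"account": "B", "amount": 850, "device_id": "dev-42", "ip": "10.0.0.7"},
    {"account": "C", "amount": 920, "device_id": "dev-42", "ip": "10.0.0.7"},
    {"account": "D", "amount": 60,  "device_id": "dev-99", "ip": "10.0.0.9"},
]

def flag_coordinated(txns: list[dict], min_accounts: int = 3) -> dict:
    """Flag groups of distinct accounts sharing the same device fingerprint."""
    by_fingerprint = defaultdict(set)
    for t in txns:
        by_fingerprint[(t["device_id"], t["ip"])].add(t["account"])
    return {fp: accounts for fp, accounts in by_fingerprint.items()
            if len(accounts) >= min_accounts}

print(flag_coordinated(transactions))
# {('dev-42', '10.0.0.7'): {'A', 'B', 'C'}}  -> a likely coordinated ring
```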

Yaron Haviv (13:38):
Yeah, maybe I can add. Again, we work with a lot of clients in finance, and we see two main patterns, because people are really afraid of the potential risks: hallucination, guardrails, et cetera. People don't really trust the technology enough to put it in front of customers. Some airlines and others tried it, and it wasn't always too successful. So you try not to put the agent in front of a client these days, especially in banking. Instead, we find other applications that are safer, like, for example, a RAG application for internal documentation: worst case it's not as accurate, but it's not facing the client. Another area where we've been very active is call centers. We have several cases we've deployed, for example analyzing calls from audio and generating a lot of insights that help either improve the efficiency of agents or understand things about the product, what works and what doesn't. And if the analysis isn't that great, nothing bad really happens.

(14:33):
But in those cases, we managed to reduce the cost per call analysis by 60x moving from the first version to the last. Talking about operational challenges: most people just go implement something, they don't think about cost, risk, scale, and then we have to essentially refactor everything. Another use case, which is very dominant, is what we do now in real-time copilots. Essentially, we can listen in on the conversation and pop up suggestions to the agent for exactly what to say. So again, you have a human in the middle who can review the result, rather than a bot answering directly. These are the things we've been focused on. We work with clients on setting up the foundations, how to build the data strategy, how to build automation so they can scale to multiple use cases next. And the first use cases we go for are the safer ones, with fewer concerns about guardrails and regulations and so on.
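
(Editor's note: a minimal sketch of the call-center analysis pattern Yaron describes: once a call is transcribed, a model pulls structured insights from each transcript, which can then be aggregated across thousands of calls. The prompt, the fields, and the `call_llm` stub are illustrative assumptions; routing batches through a smaller, cheaper model is one lever behind the kind of per-call cost reduction mentioned above.)

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a model endpoint; returns canned JSON here."""
    return ('{"issue": "card declined", "resolution": "card reissued", '
            '"sentiment": "neutral"}')

INSIGHT_PROMPT = (
    "From this support-call transcript, return JSON with keys "
    "'issue', 'resolution', and 'sentiment':\n\n{transcript}"
)

def analyze_call(transcript: str) -> dict:
    """One model call per transcript yields a structured, aggregatable record."""
    raw = call_llm(INSIGHT_PROMPT.format(transcript=transcript))
    return json.loads(raw)

calls = ["Customer: my card was declined at checkout. Agent: let me take a look."]
print([analyze_call(t) for t in calls])
```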

Antonio Castro (15:30):
That's fantastic. I think one of the themes I heard through all of those stories was this ability to further embrace what is already bespoke about this industry, whether it's a bespoke analysis an investor wants to make, being curious about anything and everything on a specific topic, or tackling bespoke fraud or interactions. For several of the clients I work with, that's their differentiator: being able to say, hey, we're going to generate a new credit product or a new transaction based on bespoke things. So what's exciting about what you all said is that these are real examples of embracing some of the bespokeness that has been painful in the past but that clients are demanding more and more. I don't know if that resonates with what you're seeing. With that: we hear a lot about the constraints, challenges, and roadblocks that keep folks from actually getting to the full value. I'd love to hear, maybe Yaron, since this is where you've probably spent a ton of your time: what are some of those roadblocks? What is preventing firms from going from "hey, this is an interesting pilot within a small group of people" to something that has true enterprise-grade outcomes?

Yaron Haviv (16:57):
What we usually see is that most people go read some Medium article, they download LangChain, they build something nice, and more than 90% of those never get anywhere. They built some nice thing, and sometimes it gives the correct answers, many times not. When they go to the executive, they show just the correct answers, everyone is happy: get funded, let's go. But then they start to think: you know what, the versions of the documents may change, so how do we handle that? How do we handle scale? How do we move from 40% accuracy to 95? We don't really want to ship something that's only 40% accurate. And then they understand they need to go back to the basics. We have this model of four paradigms. First, the data engineering part, data management: you can't just throw in a PDF, you have to analyze it.

(17:45):
You need to extract data from the tables, you need to look at the headers. You need to build a pipeline, like an ETL pipeline for structured data, and version it, with metadata and labels and all that. Second, you have the practice of build and test, CI/CD, all those things. Just as you test an app before it goes into production, you can do the same with AI: you do accuracy analysis and all that. So you need to test it with labeled data and compare the results. We know stories where even OpenAI changed a version and suddenly the prompts broke, and people found out only in production. So you have to go back to how you really develop software, with automation and all that. Then you have the application pipelines, all those things with LangChain, et cetera. You need to build them for scale.
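
(Editor's note: a minimal sketch of the "test with labeled data" practice: score the pipeline against labeled question/answer pairs on every change, so a silent model-version or prompt change surfaces as an accuracy drop before production. The questions, the matching rule, and the `answer` stub are illustrative assumptions.)

```python
labeled_set = [
    {"question": "What is the wire transfer cutoff time?", "expected": "5 pm ET"},
    {"question": "What is the overdraft fee?", "expected": "$35"},
]

def answer(question: str) -> str:
    """Hypothetical stand-in for the full retrieval + LLM pipeline under test."""
    return "The cutoff is 5 pm ET." if "cutoff" in question else "I'm not sure."

def accuracy(dataset: list[dict]) -> float:
    """Fraction of labeled questions whose expected answer appears in the output."""
    hits = sum(ex["expected"].lower() in answer(ex["question"]).lower()
               for ex in dataset)
    return hits / len(dataset)

print(f"accuracy: {accuracy(labeled_set):.0%}")  # 50% with this stub
# In CI, gate releases on a threshold so regressions fail fast, e.g.:
# assert accuracy(labeled_set) >= 0.95
```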

(18:33):
You need to think about GPUs, if you're using them, and their utilization. You need to think about observability and telemetry: how do you meter all of those, how do you budget them, et cetera. And the third one is what we call LiveOps internally, all the command and control: how do I really control and see what's going on in those things? Do they charge me too much? Do I have hallucination? Do I have bias? Do I have risks? And can I even get a feedback loop? I actually did a webinar today showing how we can continuously improve model performance and lower risk through continuous tuning. So we look at all those things. These are the technology challenges, but in addition there are also political and structural challenges. I went with a demo application of a copilot we built to the CTO of a bank, and he said: oh, that's amazing, but you know how we work. The agent in the call center has four different screens and four different logins. And for what you just showed me, where you augment data from four different real-time databases to give the client proper information about their balance, a recommended answer, et cetera, I'd need to actually type four different things into four different screens, and I don't have a way to solve that. So there are also those kinds of challenges.
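
(Editor's note: a sketch of the metering side of what Yaron calls LiveOps: wrap every model call so tokens, latency, and estimated cost are recorded per use case and can feed budgets, dashboards, and alerts. The price, the token estimate, and the `call_llm` stub are placeholder assumptions.)

```python
import time

COST_PER_1K_TOKENS = 0.002          # assumed blended price, illustrative only
usage_log: list[dict] = []

def call_llm(prompt: str) -> str:
    """Stub so the sketch runs; replace with a real provider client."""
    return "stubbed response"

def metered_llm_call(prompt: str, use_case: str) -> str:
    """Wrap every model call so usage can be metered and budgeted per use case."""
    start = time.perf_counter()
    response = call_llm(prompt)
    tokens = (len(prompt) + len(response)) // 4   # rough token estimate
    usage_log.append({
        "use_case": use_case,
        "tokens": tokens,
        "latency_s": round(time.perf_counter() - start, 3),
        "est_cost_usd": round(tokens / 1000 * COST_PER_1K_TOKENS, 6),
    })
    return response

metered_llm_call("Summarize this filing section...", "report-summarization")
print(usage_log)  # feed into dashboards, budget checks, and anomaly alerts
```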

Vinod Devan (19:58):
I think it's a problem of plenty, on multiple dimensions. Everybody wants to do AI. It's become a buzzword, it's become hype, call it whatever you want; it is what it is. In your company, in your organization, anywhere you ask, people are eager to adopt it. And if you're not going to start a program for them, the developers will, and rightfully should, start their own projects, and that proliferates on its own. So the first challenge is deciding where to start. I think that is a very important attribute of a good strategy: somebody has to decide where to start and what you want to do, and then grow from there, because trying to launch a whole bunch of projects is not helpful. Even if half of them are successful, you don't know why, or what made them successful.

(20:50):
The second, which I'll put under the same umbrella of the problem of plenty, is that it's a very noisy space. Everybody claims they have the best models. Different cloud players say they're the best. LLM players, ourselves being part of it, say we provide the best. And it will be death by a thousand cuts if you start analyzing every single permutation and combination of potential solutions you could bring to bear. The reality is, for your industry, you already know the things that matter. You know that security is important for you. Data privacy is important for you. Some of your data is on-prem, some is in the cloud, some might be in a VPC. So you want a way for the AI to come to your data, as opposed to having to take your data to the AI. Make sure you are always aware of what your restrictions are.

(21:37):
Most of you are global companies, or many of you are, so you care about multilingual support; you care about your data, or your customers' data, not being used to train models. These are considerations that exist in the financial sector today. So if you are going to pick your spots on which parties to invest with, work with the ones that meet the criteria that are important to you, because otherwise it's a massive challenge. There's quite a bit of analysis paralysis happening, and the best way to avoid it is to start by knowing what your parameters are and selecting players that work within those parameters.

Adam Fish (22:13):
We must have shared notes, because I would say having clear success criteria before you start an AI project is the most important thing we've seen. And that comes with use cases. If you're clear on the use cases and you try to be quantifiable about understanding them, then you can drive success against that. But if you're just getting excited about the technology and generally trying to apply it, it's easy to get lost. Some of the best case studies we have are large engineering departments that have a developer productivity team. That team is exclusively focused on developer productivity: helping people save time and work smarter and more efficiently. They'll run pulse surveys every quarter and ask people: where's the pain? How are you struggling? How much time are you wasting looking for the right information? Where can AI help? And then, when you bring in and implement an AI solution, you actually see those metrics move, and it's a very clear value story. So before we do deployments with customers, we actually sign off on the success criteria and make sure there's understanding top to bottom in the organization. That helps us, it helps our customers, it helps leadership. So use cases and success criteria are the two areas I would focus on.

Antonio Castro (23:32):
I heard at least five things there. I heard: it's a very different animal in your environment, so you've got to be ready to manage that. I heard: focus matters. It's a fast-moving space, so it's complicated, but know your own constraints and your own guardrails. I'd say for my clients, often their own view of which risks to even think about is evolving, so there's this whole question of how to go about that. And I love where you ended it: knowing what success means, knowing how you're going to measure it, being confident in what it is you're trying to achieve.

(24:10):
I know we're running low on time, so I want to maybe skip the last question to get us to the end here. Pivoting off of this: I was going to ask what are some things folks can do, what are people doing that's great, but I think you've already answered that. Since we only have a few minutes left, and given everybody in this room is probably somewhere on this journey, is there some piece of advice you would like to leave them with, something they can do when they leave here today and go back into their offices and their environments? Maybe Adam, I'll start with you.

Adam Fish (24:53):
Yeah, sure, I can kick that one off. I think what's exciting about the enterprise right now is that folks know the general foundation models, the ChatGPTs, are just trained on information from the open internet, so they don't know anything that's going on inside your company. The real value unlock is bridging that gap and having the connective tissue, so that these very powerful reasoning engines, which is really what they are, great at solving problems, understanding data, thinking through things, can run on your own internal data and help your company run faster. I think understanding what that looks like in implementation, and how hard it is to get the data layer right for generative AI, is a big challenge. When we serve up an answer to a user on Glean, on average that answer is a synthesis across three different data sources, and maybe looks at 20 or 30 different documents: meeting transcripts, emails, PDFs, presentations.

(25:56):
So if you're building an AI application that has a very narrow focus, maybe it's just looking at your ITSM or just at your email, it's missing a lot of the broad context an AI needs in order to get to truth, make the right decisions, and understand what's actually happening in the business. I would encourage customers and folks in the audience to think about how to connect all of your systems horizontally for AI. That gives the models more breadth, more context, more depth, and I think you'd be surprised at the quality of the answers you'll get from them, and, moving forward, at the agentic workflows and automations you can build with AI systems that have true broad context inside your company.

Vinod Devan (26:44):
I will add to that: avoid vendor lock-in in any part of this business. The GenAI space is moving way too fast to get locked into anybody. Evaluate by use cases. As Adam was saying, and I think Yaron as well, every use case is unique, and the set of solutions that works for one use case may or may not work for the next one. In fact, if you have a use case that you evaluated different solutions for, and you picked the right ones, and then you take the same use case to other business units, it may or may not work. If you expand within the same business unit but across multiple use cases, that particular set of solutions may not work either. Eventually some degree of scalability will start playing out, but initially, vendor lock-in is dangerous. Whether it's your cloud partner, your LLM partner, whoever it is: avoid locking into one.

(27:37):
As you scale, you'll start figuring out the common parameters that matter to you and which players can deliver the cost-versus-performance curves you're trying to hit. Because what works in your POC or your pilot may not work at scale, where the cost becomes prohibitive or the latency becomes too high. You're working through all these learnings right now, so it's very dangerous to lock in to anyone. Start with different kinds of players. We focus only on enterprise, so we may be good for some use cases; there are others, consumer applications, that may be useful for you too. So depending on which partners are working with you, you can cover the different use cases. And if you take the number of use cases times the number of business units, you've got to fill that whole space. As Adam said, at some point you're going to have to fill the whole space, but it won't be one or two players; I think it'll be a collection of players, each working in their areas of specialization. So that's my suggestion.
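
(Editor's note: one common way to act on this lock-in advice is a thin interface between use cases and any specific vendor, so providers can be swapped per use case on cost, latency, or quality. This is a generic sketch; the provider classes are stubs, not real client libraries.)

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Thin seam between your use cases and any one vendor's client library."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorA(LLMProvider):
    def complete(self, prompt: str) -> str:
        return "response from vendor A"   # stub; a real API call goes here

class VendorB(LLMProvider):
    def complete(self, prompt: str) -> str:
        return "response from vendor B"   # stub

# Route each use case to whichever provider meets its cost/performance curve;
# swapping a vendor becomes a config change, not a rewrite.
routing: dict[str, LLMProvider] = {
    "fraud-analysis": VendorA(),
    "report-summarization": VendorB(),
}

def run(use_case: str, prompt: str) -> str:
    return routing[use_case].complete(prompt)

print(run("fraud-analysis", "Assess these transactions for anomalies..."))
```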

Yaron Haviv (28:31):
From my perspective: first, focus on production. There's no question that GenAI will be very dominant in the future. There are still a few gating factors, but the models will improve, the risks will be addressed, and so on. So in everything that you do, don't think about building a quick prototype; it's going to get stuck in the lab. Think about the end goal of productizing, which means you actually need to invest in the underlying layers. Again, the data layer is very fundamental, and the nice thing about it is that it's not fast-moving like the libraries and LLMs, et cetera. So start investing in automation and in data engineering for GenAI; start building the use cases, but think about production, telemetry, monitoring, everything. And that's my two cents.

Antonio Castro (29:17):
Makes sense. I realize we only have a couple minutes left. I'd like to maybe turn it over to our esteemed audience. Any questions for these panelists who've been at the center of this dialogue?

Audience Member 1 (29:42):
I'm not sure what size financial institutions you're working with. What advice do you give to someone at a small, mid-size, or community bank? How do they get started with this? What should they be looking at? Who should they be partnering with? I've been hearing a lot of really good stories about AI, but they're coming from large FIs with a lot of funding, deep pockets, and innovation already baked in. So what do smaller banks do?

Vinod Devan (30:12):
I'll start, and I'm going to shamelessly borrow from a session that I attended, I think it was on Sunday. There was a round table, and I believe it was someone from Live Oak speaking; sorry, I'm co-opting their lines here completely. Somebody in that audience asked: does this create an even playing field now? Does the playing field get leveled as a result of AI? And I thought he gave a really good response: not so much that it gets evened, but there will be haves and have-nots. There will be some that survive and others that don't as a function of this. And I think that lends itself to your question. From my perspective, the ability to leverage this technology gives you, as a smaller bank, certain capabilities that only a large financial institution would have had: degrees of personalization

(31:00):
that otherwise would have cost you a ton of money and would not have allowed you to compete; the ability to react faster than others, or at least alongside others, which you would not have had because you don't have an army of analysts building this for you and you don't have a lot of data available to you. Now that, to some degree, is being evened out. I think the important thing will be where you deploy AI. If you deploy it for efficiency and productivity, fantastic. If you deploy it in areas where the incremental gains are very small relative to the larger companies, that's where the challenge comes in. Again, it's a use case by use case basis, but from our perspective, to be candid, the early adopters happened to be the big ones, the large companies. Lately, though, we've been seeing an influx of smaller companies that have said: we see what you've done for those; we had to wait on the sidelines until we saw that play out; now we're ready to go as well. So now it's coming, based on those learnings, and they won't have the hiccups that the others had.

Antonio Castro (32:07):
And maybe building on what you're saying, Vinod: we used to use this term, we don't use it anymore, that there are makers, shapers, and takers of this technology. What's also interesting to see, as you were just describing, is that some of these tools are evolving quickly enough that you can actually be very much a taker. I have several small institutions who are basically asking: which use cases are mature enough that I can just focus on how to adopt them in my enterprise, and not necessarily have to go build an engineering capability? They still have the data problem to deal with, but they have that problem anyway, right? It's not a new challenge necessarily. But I do think the answer is a little bit different by,

Yaron Haviv (32:50):
Yeah, maybe I'll chime in first. There are some SaaS solutions; you don't honestly have to build everything, especially in the RAG space. But also, what I've seen is that sometimes the smaller players get to the application much faster than the bigger ones, which have a lot of meetings and consensus-building; in a big organization, with lots of global units already, things move slowly, whereas you're probably more agile and can build certain things internally. By the way, we're developing a lot of open-source tools, which is not so common for McKinsey, but we're now part of that, and we're essentially giving people templates: you can just tune them and ship things to production. A large organization will need a lot more consulting, but a smaller organization can take them and build off of them.

Antonio Castro (33:33):
I realize we're out of time, but we have time for one more. Sure, somebody in the back.

Audience Member 2 (33:42):
Hi. How do you see the role of the data scientist changing now that we're really focusing on LLMs, not building the actual models but utilizing the ones out there?

Yaron Haviv (33:54):
I think I can take it; I wrote some articles on that. Building the actual application pipeline, or the model, is no longer a data scientist's problem: working with LangChain is a software programming problem. Even writing a prompt, you don't need to be a data scientist for that. But on the other end, take things that are typically data engineering roles. Data engineering for machine learning used to be joins, group-bys, et cetera. Today, when you take documents and analyze them, you need to do things like entity extraction, NLP analysis, et cetera, and that's a data science problem. So more and more, with unstructured data, the data engineering is a data scientist's problem, not necessarily the traditional data engineer's. The other area you'll see data scientists steering toward is how to analyze risk, how to analyze hallucination, et cetera, again NLP problems, like how to identify toxic language. So data scientists will need to steer more toward NLP and text and voice analysis than traditional pandas work, but it's not necessarily a data engineering role.
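
(Editor's note: a toy illustration of the entity extraction Yaron means: pulling structured fields out of unstructured document text. Real pipelines would use an NER model or an LLM; the regexes and the sample document here just make the idea concrete and runnable.)

```python
import re

doc = "On 2024-03-15, Acme Bank reported revenue of $1.2B for Q4 2023."

# Extract dates, money amounts, and fiscal quarters from free text.
entities = {
    "dates":    re.findall(r"\d{4}-\d{2}-\d{2}", doc),
    "amounts":  re.findall(r"\$\d+(?:\.\d+)?[MB]?", doc),
    "quarters": re.findall(r"Q[1-4] \d{4}", doc),
}
print(entities)
# {'dates': ['2024-03-15'], 'amounts': ['$1.2B'], 'quarters': ['Q4 2023']}
```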

Antonio Castro (35:03):
Thank you all very much, from the center of this, from the front lines. We really appreciate you making time for us today. It's our pleasure. Thank you. Pleasure. Thank you.