How can Generative AI Help or Hurt your Career?

Transcription:

Daniel Wolfe (00:10):

All right, it's me again. Hi, I'm Daniel Wolfe, one of the editors at American Banker, and our topic is one that I think is very interesting and very relatable, just like the one you just heard. What's interesting is that these two topics have a lot in common. The hybrid workplace, regardless of who you are in an organization, regardless of where you work or what type of company you work for, everyone went through the lockdown. Everyone is in a position where they are experiencing what this new type of workplace is, and everybody has a story to share. And I think AI has become the next big topic like that, because regardless of who you are, regardless of what your organization does, whether you're leading a major AI project or you're just somebody who uses ChatGPT to help you draft emails, AI is in some way, shape or form changing the way you work. So we're going to get into the details of that and how this is transforming the payments industry. Would you like to introduce yourselves? Katie?

Katie Whalen (01:12):

Great. Thank you, Daniel. Thank you all for being here today. My name is Katie Whalen. I lead the North America issuing business for Fiserv. I've been in the industry for quite some time, previously at American Express, and I've been with Fiserv for six years.

Daniel Wolfe (01:30):

I think we're having mic issues again. Did we keep the handhelds or not?

Carolyn Homberger (01:36):

We've got them. Okay. Alright. Good morning, everyone. My name is Carolyn Homberger. I lead the Americas for Featurespace. I've been at Featurespace for two years. Prior to that I was 15 years at a payment software company. And thank you for having us, Daniel and the American Banker team. This is a really great program.

Cathy Beardsley (01:59):

Good morning everyone. My name is Cathy Beardsley. I'm the president of Segpay. We're a payment service provider or specifically a payment facilitator, and we focus on e-commerce merchants.

Daniel Wolfe (02:10):

All right, so before we get into everyone's personal experiences: AI is a term and a concept and a technology that's been around for many years, but it is really in the spotlight right now with ChatGPT and all of the related things, Gemini, Copilot and so forth. And I wanted to get a perspective: when you talk about AI in your organization, what exactly does that mean? What is front and center? Katie, do you want to start?

Katie Whalen (02:37):

We've had AI within our business for quite some time, baked into the data products that we support our issuers with. We've used IBM Watson in our IVR system for quite some time. We worked with IBM for quite an extensive period of time, developing a new fraud detection tool that deploys AI as a mechanism to reduce false positives. But now, as AI has become more and more prominent in terms of the everyday use cases for customer servicing and how we think about our employees using it, we've started to look at various pilots for how we deploy AI in our environment from a productivity and efficiency standpoint. And how we measure those gains is really where we're starting, in terms of how we can embed it into our everyday use cases for how we service our clients, and how we think about applications where we can get efficiency gains and reduce a lot of manual processes. So we're really on the journey with AI within our business, in how we think about deployment as a mechanism for economic impact, but also for improvement in client satisfaction.

Carolyn Homberger (04:02):

Thanks, Daniel. So we have, I guess, two applications at Featurespace. We're a fraud and financial crime prevention platform for payments. What we sell is machine learning and AI enabled, so we spend a lot of time in the market educating our clients on the differences between machine learning versus AI versus generative AI, which has really emerged in the last couple of years. So that's the product we sell. We also apply it to ourselves: we're a smaller company, and we use AI within the company to enable our team and to scale and grow the company as we sell our product into the market.

Cathy Beardsley (04:53):

So we're seeing AI in a couple of ways, one through our merchants. We're starting to see an AI site a week, where e-commerce merchants are using it to create text or content. That's very scary; our banks don't like it, and we're a little cautious about it. And then as a company, I would say we're AI novices. We're using some machine learning in our risk tools and in our reconciliation and accounting tools. Every year we have a goal: we're going to be more efficient. How do we scale? How do we avoid being a people economy and instead be an economy of servicing our merchants? Each department is looking at how they can fit AI into their tool set to help make them more efficient, and to date, some departments are embracing it and some are a little nervous about it.

Daniel Wolfe (05:43):

I've seen the same thing. So just to make sure we're all on the same page, Katie, would you mind explaining what we mean when we say generative AI or machine learning? What are the differences in the various terms we're throwing around?

Katie Whalen (05:56):

Yeah, it's a good question. We think about machine learning as kind of an automated process that happens on the backend for process efficiency gains. As an example of what we referenced earlier: in our business, we work with some of the largest banks on their issuing, or card, portfolios, and we deploy, as I mentioned before, fraud detection tools on transactions that run through our system for processing. As part of that fraud detection tool, we have automated machine learning in the background that will evaluate various inputs, changes in the data sets, and changes in fraud trends on an ongoing basis, and then automate adjustments to the strategies for detecting fraud, with the goal of either catching more fraud or reducing the false positives associated with it. The application of machine learning happens in the backend.

(06:57):

It's something that has data inputs associated with it to drive a desired outcome. As part of that, we've seen a 60% reduction in false positives for the clients we work with that use that tool, but that happens in the backend and is baked into the actual product set. When we think about generative AI, we really think about it in the context of a consumer using the tool itself, to drive a more consumer-oriented behavioral change. So we've deployed ChatGPT and a couple of different models, using IBM's watsonx and ChatGPT with OpenAI through Microsoft, as well as working with a couple of other providers in the market, like Scale AI, to pilot different models that can learn from ongoing engagement and data inputs as the tool gets used by our teams for a specific purpose.
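The backend loop Katie describes, where the system watches recent outcomes and adjusts its detection strategy to cut false positives while still catching fraud, can be sketched as a periodic threshold recalibration. This is a toy illustration, not Fiserv's actual tool; every name and number here is hypothetical:

```python
# Toy recalibration sketch: given recent scored transactions and their
# confirmed outcomes, choose the fraud-score threshold that minimizes
# false positives while still catching a target share of the fraud.

def recalibrate_threshold(scored_outcomes, min_fraud_recall=0.9):
    """scored_outcomes: list of (fraud_score, is_fraud) pairs from recent traffic.

    Returns (threshold, false_positives) for the best admissible threshold,
    where alerts fire on transactions with score >= threshold.
    """
    fraud_scores = [s for s, is_fraud in scored_outcomes if is_fraud]
    legit_scores = [s for s, is_fraud in scored_outcomes if not is_fraud]
    total_fraud = len(fraud_scores)

    best = (None, None)
    # Sweep every observed score as a candidate threshold, low to high.
    for threshold in sorted({s for s, _ in scored_outcomes}):
        caught = sum(1 for s in fraud_scores if s >= threshold)
        recall = caught / total_fraud
        false_positives = sum(1 for s in legit_scores if s >= threshold)
        # Keep only thresholds that still catch enough fraud, and among
        # those, the one producing the fewest false alerts.
        if recall >= min_fraud_recall:
            if best[0] is None or false_positives < best[1]:
                best = (threshold, false_positives)
    return best
```

Rerunning this on a rolling window of outcomes is one simple way the "automated adjustments" Katie mentions could work: as fraud trends shift, the threshold moves with them.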

(07:55):

So as an example, we've deployed those three different models to pilot against one another to see how they perform. We get tickets from our clients every single day asking about documentation or how the system works. So we've loaded about 10,000 pages of documentation into a database that is then mined as part of our generative AI usage, so that our servicing teams can look at that, type in a question, and get a response back. And then we're measuring the performance and accuracy of that response, and the model is getting better and better as we train it against that documentation set. So we differentiate between the consumer, or consumption, use cases versus a backend model that's actually driving automated processes toward an end goal.
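The documentation pilot Katie describes, loading pages into a database that gets mined so a servicing rep can type a question and get an answer back, follows the familiar retrieve-then-answer pattern. Production systems typically use embeddings and a language model; this stdlib-only sketch scores passages by simple word overlap instead, and all of the documents and questions are hypothetical:

```python
# Minimal retrieval sketch: index documentation passages, then return the
# passage that best matches a rep's question. A real pilot would pass the
# retrieved passage to an LLM to draft the answer; here we stop at retrieval.
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def best_passage(question, passages):
    """Return the passage sharing the most question words, counting repeats."""
    q_tokens = set(tokenize(question))

    def score(passage):
        counts = Counter(tokenize(passage))
        return sum(counts[t] for t in q_tokens)

    return max(passages, key=score)

docs = [
    "To reset an agent password, open the admin console and choose Reset.",
    "Chargeback disputes must be filed within 60 days of the transaction.",
    "System maintenance windows occur on the first Sunday of each month.",
]
print(best_passage("How do I reset a password?", docs))
```

Measuring accuracy, as Katie's team does, then reduces to checking how often the retrieved passage (and the drafted answer built on it) actually resolves the ticket.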

Daniel Wolfe (08:48):

Anything either of you would add on the distinctions between the types of AI? I think you covered that; I didn't think we each needed to weigh in. Okay. So a common theme in any AI discussion is: oh my God, is it going to take my job? I can say, as a journalist seeing all these tools come out that produce written language, hey, that's what I do for a living. Should I be concerned? It's a fair concern. And the answer is no, I shouldn't be concerned. I'm better than the AIs. I know that, I'm confident in myself. You're supposed to laugh a little at that. All right, thank you. So how do you address this within your organization? Because now we're getting to the topic of how this impacts your career. People see these tools come up, they see media coverage saying this company said they don't need to hire for this customer service role because people can just talk to a generative AI bot. What are people saying, and how do you address that?

Carolyn Homberger (09:51):

Yeah, maybe I'll pick this one up. We see this quite a bit in the fraud world, and the fact is fraud is changing. Scams are becoming more and more of an issue, in particular in payments and with real-time payments. So the way we talk about that change: it obviously changes how banks and processors, those providing fraud services, approach their risk strategy. You have to think of AI as another tool in your toolkit in the fight against fraud and financial crime. For the customer-service-facing folks, it's very much a change to their role, because as you're scoring transactions and you see a riskier transaction, especially with first-party fraud, it's helping that person understand: no, you're being scammed. And it changes the dynamic of the customer service role and the relationship that you have with your end client. So in many cases it doesn't necessarily disintermediate, but it certainly changes what you're doing and what your teams are doing.

Cathy Beardsley (11:12):

So as we've looked at it within our organization, it's been embraced, and I think it's been most exciting for the customer service teams, the tech support team, and our QA group; those are all the groups that are overwhelmed. So if there's a tool that can help them take those really easy consumer support questions or client support questions and get those answered so they can focus on bigger problems, bigger challenges, and provide a better level of support, they're all for it. So it's not really taking away from their job; it's allowing them to grow and step up and take on harder challenges. Same thing with the QA team. They're always overwhelmed, so anything we can do to make their job easier allows us to make sure our code gets out quicker, and we're rolling out better code because they're able to focus more on the problem areas.

Carolyn Homberger (12:03):

Yeah, I think the efficiency point really can't be overstated. Nobody likes to do the repetitive tasks over and over again. So giving your team the tools that automate that, so they can spend their time doing what they add the most value to, is, I think, really motivating.

Katie Whalen (12:24):

And similar to what Cathy mentioned, we've really tried to use a pilot model within our organization. We have a servicing function of probably about 60 people that service our largest clients, and we pick a group of about four to five individuals that can be part of a pilot and almost serve as evangelists within the broader organization: really learn it, understand the behavioral changes, and then teach other people how to make the behavioral change. Because I really think about AI as something where it's more about behavioral change and adoption within the organization, changing the way we work and adopting it into our everyday usage so that we can use it for the efficiency gain. And getting those evangelists, those early adopters, to teach other people is really important because of the narrative around AI: maybe this could automate my job, I might lose my job, et cetera.

(13:27):

But the way we think about it is: how do we measure the hours gained from that efficiency, and make sure we have the appropriate KPIs in place to measure it? And then how do we make sure that those hours are applied in a more productive manner? Not just applying AI as an experiment, or something we're throwing at the wall to see what happens, but actually making sure we can measure that productivity gain and then make sure it's applied more productively. That's been really important for us in making sure we're having economic impact from a P&L standpoint. So that's the way we've thought about it, and we've seen really great gains from having those evangelists be in our town halls and our all-hands, showcasing the work that they're doing, and showing that it's not just something the management team is pushing top-down, but rather something that can be adopted by the people that are actually doing the work.

Daniel Wolfe (14:22):

Okay. Can you talk a bit more about these evangelists? How are they chosen? Do they nominate themselves? You said it's not management; it might be the people who might otherwise be most threatened by some new AI tool.

Katie Whalen (14:34):

It's been a little bit of both, actually, Daniel. It's identifying individuals that we know are going to be seen as leaders within the team, or that are excited, or have the right kind of approach or attitude, or really can be change agents, if you will, and have that as part of their overall ethos. So we've identified individuals that we know embrace it, are open to building this into their every day, and, again, are good teachers, individuals that can do that kind of teaching for others.

Daniel Wolfe (15:10):

Okay. So when we talk about the idea of automating tasks and AI doing certain roles in the company, it's usually those tasks that might be done by somebody who's more entry-level, or lower on that career ladder. And I was hoping to talk about how you make sure that the people who need to have that experience don't just find it automated away, and that they're still able to go through the experiences that you may have had on your way to the higher rungs of the ladder. Cathy, do you want to start?

Cathy Beardsley (15:45):

Sure. So I don't see those roles going away in our organization. I see AI helping us keep our costs down and be more efficient. So you're still going to be bringing in entry-level junior people to learn your system, your culture, what your business is about, but AI is going to be augmenting those solutions to help you be a little more efficient, and hopefully the junior people start to learn and can move up to the next step, into a new role.

Carolyn Homberger (16:17):

Yeah, I think generally, Daniel, with our team members, especially those that use AI in their role every day: I just think back to very early in my career, when I was an accountant and then went into corporate finance. You still have to learn T accounts and be able to explain what it is you want the technology to do, what outcome you're looking for. So I look for them to articulate that, and then how they're applying it with the tool.

Katie Whalen (16:51):

I think, similar to what both Carolyn and Cathy mentioned, there's still a baseline learning and a baseline expectation for understanding of the content, but the tools we've been deploying really help with the everyday tasks and the time they take, and help us make sure there's a standard of outcomes and accuracy. It's almost like we're using it as a bit more of a QA, especially in the servicing space. As an example, within our partner management function, we now have a trained model for standardization in how we want emails to be written and the type of tone we want deployed, so that there's consistency in that language. And it's actually saving a significant amount of time for those individuals. In the pilot we've run, we estimate that just by using this to input data and then have the emails written almost on behalf of the individuals, we're saving six to 10 hours of time a week that they would otherwise spend writing emails and thinking about that.

(18:04):

And that time can then be redeployed into other areas that elevate the client engagement and client consulting; we can put those hours towards something different. So I see it not really as automating individuals away, but rather as enhancing our organization, where we can be more thoughtful about how we've deployed that time. Okay.

Daniel Wolfe (18:25):

So before I move on to my next question, I wanted to open that same question up to the audience. I don't know if anybody here has experiences they want to share about the careers they see, either their own or their colleagues', and how AI is transforming the way that people are performing their jobs or able to advance in their jobs. There are some incredibly bright lights on me, so I don't know if there are any hands or not. I think I see a hand.

Audience Member 1 (18:57):

Hi. I've seen a lot of discussion about people bringing their own AI to work. They may have a ChatGPT membership or something like that, and various organizations come down differently on allowing that to occur, because of fear of data leakage. I was just wondering if the panel could comment on how you feel about that. Do you feel like it needs to be controlled within the enterprise, or do you feel like there's a lot of innovation and experimentation going on with these tools that could be useful?

Daniel Wolfe (19:33):

That's a good question. So just to elaborate on that: I pay for ChatGPT Plus, so I can use it however I want, but whatever I put into it is something that OpenAI now has access to for training. And if you share it with other folks, they have access to whatever you have trained it on. So there are certainly concerns about what is safe to work with in that sort of system and what needs to be more constrained. Would any of you have any thoughts on that?

Cathy Beardsley (20:12):

So if I heard the question right: I know when we talked about AI with our CTO, he said, hey, there's a big concern about leveraging code through AI. Who owns that? Is it proprietary? He saw AI maybe helping optimize code, but not much beyond that. So that was one where it didn't seem like it would be a good place for us to venture into, but I'm a novice at AI; someone might be able to correct me on that one.

Carolyn Homberger (20:45):

Yeah, I think in the fraud space I see two behaviors right now. One is we have a large payment company that we're working with, and I think we got, literally this week, our fifth AI survey, to the point that I went back to the person I was working with and said, are you guys just generating these questions through AI to punk me here? But genuinely, they're trying to understand the technology, because the regulation is so open-ended right now, and they're ensuring they're covering their bases from a risk management standpoint and that, when future regulation gets put in place, they're compliant. So I see a lot of activity in that space. Equally, I see a lot of companies saying: okay, we know there's a clear use case for AI, with meaningfully different outcomes in a good way, in fraud and financial crime prevention. What adjacencies to that could we safely apply the same technology to and get equally productive outcomes? So it's a real dichotomy right now, within payments, banks, you name it, across the ecosystem.

Katie Whalen (22:25):

I think it is a really good question, and not a lot of companies have really dealt with it yet in terms of the policies associated with AI. At Fiserv, because we deal with a lot of client data and proprietary information, we have a policy that no employee should be using an external AI like ChatGPT in the work that we do, especially from a client perspective. But that also means you need to be sure you're giving your employee base an alternative, if there's that level of curiosity. And the alternative really needs to be grounded in specific use cases. So we've been very prescriptive about how we've rolled out AI at an enterprise level, and we have a council that rolls up to our president and chief technology officer to make sure we have those policies in place and are thinking about that as more innovation and more tools become available in the market.

(23:27):

So we have a ChatGPT version that we've enabled internally for our employee base, but it sits on our servers, so anything that runs on it runs against internal Fiserv servers, and we can control that data and where it actually goes; it's not going out onto some open cloud servers. The other thing that's super interesting: I do anticipate, especially since we're a Microsoft shop, that Copilots will become more and more baked into the tools we're using every day, especially over the course of the next 6, 12, 18 months. It'll be interesting to see how companies gate the use cases and test those before they roll them out, and what impact that has. So I still think there are a lot of unknowns, and there's not a lot of standardization yet in terms of how the everyday use cases are being baked in, or even the policies that companies are adopting. So I think it's a great question. It's still evolving as we think about the deployment of these tools in the space.

Daniel Wolfe (24:32):

Okay. Any other audience questions? Hello?

Audience Member 2 (24:51):

Good morning. So my question is for Cathy. When you're talking in reference to using or having AI assist with coding, generally the process is to have it go through some type of UAT, user acceptance testing, for those who might not know what UAT is. Do you feel like it would hurt the careers of the people who do user acceptance testing, because you're using AI to solidify your code for programming? Or do you feel like it would not get rid of that particular sector?

Cathy Beardsley (25:31):

So my hope, and probably anyone in an organization knows that development resources are scarce and we can never keep up with the demand. So my dream was: hey, could we use AI to help expand our development resources? And the answer I got back was no, at least not where we would be generating code using ChatGPT, because that code really has to stay proprietary. But it could help us optimize the code, so it wouldn't be taking away from that job, that role. Same with QA: AI could help us with the testing process and finding bugs, which would help make their job more efficient. So it's not really taking away from them, just augmenting what they're doing to help them work a little faster.

Daniel Wolfe (26:26):

So I think we need to start our networking break soon. So let's do this last question as a lightning round. What do you think will be different about this conversation a year from now or five years from now?

Cathy Beardsley (26:39):

Oh, regulation. When I got this topic, I thought, oh my God, this is like crypto six years ago: everyone was in a craze, saying it was going to take over the financial markets. Then regulation came into place, made it more mainstream, and weeded out bad players. So I think it's the same thing for AI.

Carolyn Homberger (26:58):

Yeah, I think a year from now, and we're starting to see it already if you look over to the UK, we're really going to start to see some momentum at the network level. And with the power of leveraging network-level data safely, while being compliant with privacy standards, I think there are going to be some really interesting outcomes out of that.

Katie Whalen (27:33):

I think, Cathy, you're just spot on with regards to regulation and policy. But the thing I think about, in the business I'm in, is that we're a processor for some of the largest issuers in the United States, and there are still so many manual processes in this industry, and so much fragmentation in terms of handoff points between different entities, that I do think there's going to be a lot of opportunity for us as an industry to apply AI and models to further automate a lot of legacy technology and things along those lines. As those models become more evolved, there's opportunity for us to automate a lot of those fragmented processes and handoffs that still exist in the industry, which I think will further elevate us as an industry, but also enhance the actual cardholder and consumer experience downstream; taking care of a lot of those backend processes will just enhance the industry as a whole. So I think it's a really good thing. I don't think all the use cases have been identified yet, and there's not a lot of clarity yet, but I think a lot of that will start to come into scope as more and more models get applied to different areas of the value chain. Alright,

Daniel Wolfe (28:52):

So I know we didn't have too much time for questions, but good news: Katie and Cathy will be back tomorrow as part of an AI workshop we're doing that's meant to be a lot more interactive. We want you to bring your ideas and thoughts and examples of how AI is changing the way you work. So look for that tomorrow morning around 10:15. Meanwhile, I'd like everybody to thank all of the honorees here who have been a part of this conversation.