Insurtech 2.0 and beyond: Identifying the risks and uses for AI

Past event date: March 14, 2024 2:00 p.m. ET / 11:00 a.m. PT Available on-demand 30 Minutes

Artificial intelligence, the Internet of Things, large language models and other technologies are changing every facet of the insurance space. This month we look at the new risks to watch as their use increases, discuss consumer adoption and preferences, and examine how AI affects different lines of insurance. Steve Jarrett, a former member of the Tampa Police Department and the National Director - Special Investigations for Westfield Insurance, provides an insider's look at what to watch for and how insurers can use this technology to identify and mitigate insurance fraud.

Transcription:
Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.

Patti Harman (00:09):
I am Patti Harman, editor-in-chief of Digital Insurance. Today we're discussing something everyone is talking about: artificial intelligence, or AI, and how it is transforming every area of business. As the use of IoT devices and AI grows in popularity, their adoption is creating real risks for end users and insurers alike. As bad actors use the technology to perpetrate new kinds of fraud, carriers can utilize it to identify these risks and fraudulent claims more quickly. They can also use it for a host of other opportunities, like improving workflows. Joining me today to share some insights from the carrier side is Steve Jarrett, National Director of Special Investigations for Westfield Insurance. Steve is a 22-year veteran of the Tampa Police Department and holds the FCLA, the Fraud Claims Law Associate designation from the American Educational Institute, and the CIFI, the Certified Insurance Fraud Investigator designation from the International Association of Special Investigation Units. Thank you so much for joining us today, Steve.

Steve Jarrett (01:24):
Thanks for having me, Patti. It's a pleasure.

Patti Harman (01:28):
So generative AI has made it into our everyday lexicon pretty quickly. And before we go too far with our discussion, would you mind explaining what we mean when we use the phrase generative AI, or gen AI, and maybe even give us a couple of examples of how or where it can be used?

Steve Jarrett (01:50):
Sure, not a problem. Yeah, there's a lot of information out there. AI has been around for quite some time, but to differentiate, generative AI basically refers to a type of artificial intelligence that's designed to generate new content, such as images, text or music, based on patterns in the data it's been trained with. So generative AI is often used in creative applications, content generation and data augmentation. A couple of examples: one would be being able to generate a photograph just by giving a description. Say I want to show damage to my vehicle, and I want it to look like I crashed into a telephone pole. It'll create that image just from what you're telling it to do, even though it's totally fraudulent. Another example of generative AI would be utilizing it to evaluate a document that comes in, in the insurance world, that may be hundreds of pages long. The program could review that document and give you a summation of it, as opposed to an individual human being having to take the time to look through all of that and come up with a conclusion based on their review.

(03:25):

So the actual machine would be able to do that for you.
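The long-document review Jarrett describes is typically built by first splitting the document into pieces small enough for a model to handle, then summarizing each piece. A minimal sketch of that chunking step (the chunk size and the paragraph-splitting heuristic are illustrative assumptions; the model call itself is omitted):

```python
def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    """Split a long document into chunks of at most max_chars,
    breaking on paragraph boundaries so each chunk stays coherent.
    A paragraph longer than max_chars is kept whole rather than cut."""
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        # Start a new chunk if adding this paragraph would overflow.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

In a real pipeline each chunk would be summarized separately and the partial summaries combined into the final summation; the split-then-summarize design is what lets a program handle a demand package hundreds of pages long.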

Patti Harman (03:31):
It can really save us a lot of time.

Steve Jarrett (03:34):
It could, yes. Yep.

Patti Harman (03:36):
So not to confuse people further, but could you also explain how this differs from predictive AI or predictive analytics and maybe explain where insurers might be using this type of AI then?

Steve Jarrett (03:50):
Sure. So AI is more of a broad term that encompasses the development of machines and algorithms that can perform tasks that typically require human intelligence, such as learning, reasoning, problem solving, perception and decision-making. AI can also be categorized into different types, including machine learning, deep learning, natural language processing, computer vision and more, and those are designed for various applications: automation, optimization, personalization and decision support. Predictive analytics, on the other hand, is a more specific application of AI that focuses on using historical data, statistical algorithms and machine learning techniques to identify patterns and predict future outcomes or trends. Predictive analytics models analyze past data to make predictions about future events, behaviors or trends, and those predictions are used to optimize decision-making, forecast outcomes, mitigate risk and improve business performance in various industries.

(05:26):

A prime example: say you were in the insurance world and you had a predictive analytics system that looked at every claim that came in. It would take the historical data of past claims that an individual had presented and line that information up with the new claim, so that past claims behavior would be indicative of what that future behavior, in other words, the risk of that claim, could be. And it would score that for you and give it back to you as a score. That's not to say the claim is necessarily fraudulent, but to let you know, hey, you may want to take a closer look at this. So that would be one example of utilizing predictive analytics in that particular case.
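The score-and-refer approach described here can be sketched as a toy rule-based model. To be clear, this is not any carrier's actual system; the features, weights and thresholds below are invented purely for illustration, where a real model would be trained on a carrier's own historical claims data:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claimant_id: str
    amount: float
    prior_claims_24mo: int       # historical signal: past claim frequency
    days_since_policy_start: int

def risk_score(claim: Claim) -> float:
    """Combine a few historical signals into a 0-100 referral score.
    All weights are invented for this example."""
    score = 0.0
    # Frequent past claims are the strongest signal in this toy model.
    score += min(claim.prior_claims_24mo * 15, 45)
    # Claims filed very soon after policy inception score higher.
    if claim.days_since_policy_start < 30:
        score += 30
    # Unusually large claims add moderate weight.
    if claim.amount > 25_000:
        score += 25
    return min(score, 100.0)
```

As Jarrett notes, a high score does not mean the claim is fraudulent; it only flags the file for a closer human look, while low-scoring claims can move through faster.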

Patti Harman (06:18):
It would also be helpful, I would think, in the underwriting process as well because you're able to take a lot of historical data and use it and say, okay, these are some of the patterns that we're seeing. So these might be risks that we would want to underwrite or at least pay a little bit more attention to, I think.

Steve Jarrett (06:35):
Yeah, absolutely. So when you're looking at it from an underwriting perspective, you're talking about where is this person garaging a particular car? Are they trying to commit premium fraud? They may actually be living in New York but putting down an address in Pennsylvania to get a lower premium.

Patti Harman (06:59):
True. It's been really interesting for me to watch the adoption of gen AI over the last 18 months or so, and it seems at least from where I'm sitting that there are very few industries that aren't utilizing it in some way. Have you been surprised by how quickly it seems to have become accepted in so many different areas? And I know you said it's been around for a while, but it seems like the attention to it and the adoption has just really accelerated in the last 18 to 24 months.

Steve Jarrett (07:29):
Yes, it's evolved. I mean, AI has been around for probably 30 years or more, but the evolution has really escalated here, like you said, in the last 18 months to two years. I think a lot of that rapid adoption of AI, whether it be generative AI, predictive analytics or just AI itself, is partially a result of customer demands, and of competition as well as cost. When I talk about customer demands, most customers are wanting faster service. In the insurance world, they want their claims adjusted and paid faster, and they want that instantaneous turnaround with even less dealing with a human. They'd rather just send a text and be done with it. So that demand has created an appetite for AI, as has competition. Everybody's looking for the business that's out there, so they're trying to be competitive.

(08:40):

So to remain competitive in what the market's bearing, different industries are looking at ways to provide the services, involving AI or generative AI, that other companies may be providing. It's also about reducing costs, because if I can have a machine review a demand package that comes in that may be several hundred pages long, as opposed to an individual, that could be reducing our costs. But I think one thing to be aware of is that we can't just say, okay, we've got AI, so we don't need people anymore. You still need that human element to make sure that you're hitting on all cylinders. It's very important.

Patti Harman (09:23):
That's so true. I had a conversation with someone the other day and he was saying, AI can do a lot of things, but there is knowledge that I have gained over the last several decades; I can look at someone and I can know if they're telling me the truth, if they're lying about their insurance claim, whatever it is. He said, I have learned that over the years, and AI is not at that level just yet. So yes, you're really correct on that. What are some of the concerns, though, associated with its use? We hear terms like hallucinations or implicit bias, that sort of thing.

Steve Jarrett (10:05):
Yeah, so hallucinations are an issue, and when we talk about removing the human element, that's one of the reasons you have to be so cautious, because hallucinations are really inaccuracies or misleading data that gets generated. It's usually due to ambiguous input, so the old saying, junk in, junk out, still applies today. It could be due to ambiguous input, incorrect training data, or the model itself, the way it was set up, could be inadequate. So when something is generated from AI, it is imperative that a human review it, making sure that the information coming out of it makes sense, that it aligns with what the inputted data was, and that it matches up to what those results should be. So it's very important, like we were talking about earlier: having that human element is just so critical, and that's with the hallucinations.

(11:12):

Now, when it comes to the biases, obviously the model's only as good as the data, but we have to be very cautious with the data and ensure that it represents the entire population, that we're not focusing in on one particular group or another. In other words, are we focusing on something that's gender related or race related, or a particular area of town that may be of a lower socioeconomic class? So you have to be very cautious. And not only that, but the programmers themselves have to be aware of their own implicit biases. Do they have biases? I think it's very important that companies understand that you need to continuously evaluate your systems to make sure that you're staying in line and that implicit biases, or biases of any kind, are not entering into your models. And it comes down to an ethical issue as well.

Patti Harman (12:13):
And that's what I was going to ask: are there ethical considerations that come into play when insurers are using any type of AI, then?

Steve Jarrett (12:22):
There are ethical issues that come into play. And quite honestly, I think it's like everything else with regulation: just like with social media, the regulators were kind of slow to respond, and I think we're seeing the same thing with AI, but the regulators are trying to get ahead of everything. So the National Association of Insurance Commissioners adopted a model AI bulletin that provides guidance on the ethical use of AI and how it should be used. And that model bulletin on AI governance has actually been adopted by New Hampshire, California and Connecticut so far. It'll probably be adopted by every state across the country eventually. But I think that even before the regulators tell us we need to be ethical, we need to look at ourselves and say, as a company, as an industry, as a person, you should always try to be as ethical as possible, ensuring that you are using the tool the way it's intended to be used, and not just as a cost-saving measure.

(13:39):

We need to be understanding of the possibilities of hallucinations and biases and the things that can go wrong with the system. This is a tool to be combined with the human element, not to be set off on its own.

Patti Harman (13:54):
That's really encouraging to hear you say that. I know a lot of people were like, oh, wow, we're going to lose our jobs because of the use of AI. And it really is kind of a balancing act between one and the other, and it's a great way to double-check decisions that are being made and that sort of thing. What other types of risks should insurers and agents and brokers keep in mind when they're using AI, whether it's the possibility of fraudulent claims or deepfakes and those sorts of things?

Steve Jarrett (14:30):
Yeah, Patti, that's a very important point, because I think as far as the education goes, there's a lot of misconceptions out there. Folks feel like, hey, we're pretty safe, we don't have to worry about ourselves or our companies being victimized, we've got all these different products that protect us. But in all reality, the vulnerability is there. When you're talking about generative AI, there's the ability to generate a deepfake photograph, or a video for that matter, synthetic media, those types of things, and to recreate documents. In other words, when taking out a policy, I could create something to go along with my application that's totally false and AI-generated, or photographs of damage to a vehicle when presenting a claim. That's of great concern.

(15:31):

Or presenting a photograph or a document to show that you purchased an item that you're scheduling for your insurance coverage. I mean, it's very easy to fabricate something with generative AI. So if there's no follow-up on that, if you're not calling the particular place where they said they bought it, or you make a phone call and that place doesn't exist, then you know that you've got some issues. There are tools to combat this, and I think there'll be more of those to come. There are validation tools that will actually review the photograph that was submitted and can validate whether it's been AI-generated, because there are plenty of platforms out there that don't cost anything, where you can go online and fabricate a photograph. And it's not like the old Photoshop days, where it was kind of obvious that the thing was manufactured. They're really good. If you look at one, you probably would not be able to tell the difference with just the naked eye. So it's imperative that we're aware of that. I think education is very important, because there's always somebody out there trying to use these wonderful tools for things that are nefarious, as opposed to doing the right thing.
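The commercial validation platforms Jarrett mentions are proprietary, but one cheap first-pass check an intake pipeline can run is whether a submitted JPEG even carries camera metadata: images exported straight from generative tools often lack an EXIF segment. This is only a weak heuristic (metadata can be stripped or forged, and real tools combine many stronger signals); the pure-Python marker scan below is a sketch of that one check:

```python
def has_exif_segment(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF APP1 segment.
    Walks the JPEG marker segments: each starts with 0xFF <marker> and a
    two-byte big-endian length that includes the length bytes themselves."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):       # JPEG SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        # APP1 (0xE1) segments holding EXIF begin with the "Exif\0\0" id.
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        if marker == 0xDA:                            # start of scan: stop
            break
        i += 2 + length
    return False
```

A photo that fails this check isn't fraudulent, and one that passes isn't genuine; it's just one more reason to do the follow-up Jarrett describes.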

Patti Harman (16:50):
Yeah, I was having a conversation with someone last night and she was saying, well, I know what I see with my eyes because I can see it. And I was trying to explain, no, you don't understand how technology now allows people to manipulate what you see or what you hear, and it's so realistic. Remember when we used to get the spam emails from the Nigerian prince or whoever it was, and you always knew that it was a spam email? Generative AI has really upped the ability of bad actors to create some very believable and fraudulent information. And as we talk about all of this, despite the risks, AI can be used to help solve a wide variety of problems and really change the way we work. So from an insurance perspective, what are some of the benefits to using AI, and how are carriers utilizing it to improve their products and services?

Steve Jarrett (17:56):
Sure. I think a lot of it is from a customer service standpoint: utilizing AI to move those claims through faster, utilizing texts, letting folks generate the different things they're going to be sending in, and moving the claim quicker is one of the best aspects of it. The other is being able to review that claim faster, to review documents, whether within claims or maybe something in litigation, a demand package that came in and needs to be reviewed, or subrogation. If you're looking at potential subrogation and there's a lot of information that needs to be reviewed, some of that review can be generated, and there are also models that can be developed from a subrogation standpoint to look for subrogation opportunities that may arise. And obviously fraud reduction, I think, is a big one.

(18:53):

Obviously that's close to my heart since I head up the fraud department, but we have seen a tremendous impact from utilizing AI in the form of predictive analytics to identify fraudulent claims, or potentially fraudulent claims. And in some cases it may not be outright fraud, but there may be other things involved that just aren't quite right and that we need to know about. It may be from a renewal standpoint, or a process where we see we're not collecting the correct premium on a particular type of policy. But by the same token, it also allows us, if something alerts, to take a quick look at it and say, oh no, this is good, let's move it through. Or if it doesn't alert, we can move it through faster, which helps from a consumer standpoint too.

Patti Harman (19:51):
Are you finding that there's certain areas where it's maybe particularly helpful or maybe a little bit more adaptable than other areas within the insurance space?

Steve Jarrett (20:02):
Well, I think it's very adaptable in areas where you have good data. I think that's very important. And when I say good data, you have to have good, solid internal historical data for these programs to work. And every organization is different; every organization may write different products, or maybe they write more of one particular product than others. Maybe they're more involved in workers' comp, maybe they're more involved in personal lines auto. So I would say each company has to look at its own internal data, and its own IT department as well, and see what they're capturing, what they have as far as historical data that would help train the various models to provide something of value for making good, sound decisions.

Patti Harman (20:55):
As carriers are developing these new types of AI, they often have to partner with vendors to create some of the products; they just don't have the expertise. What are some of the factors that they should consider when selecting an AI or technology company to partner with?

Steve Jarrett (21:13):
Yeah, so the very first thing I would say is to be aware of the fact that if you're hiring a vendor, you're still responsible for what they're doing with that model, because they're still representing your organization; your brand is out there. So you need to be very aware of how they're gathering data, what they're doing, and what kind of algorithms they're using. You need to make sure that they're as ethically bound and ethically invested as you are as a carrier, and that there are no implicit biases entering into the model. You also need to be aware of what your IT resource needs are and what IT resources that vendor has, as well as how much money is going to be budgeted for this. Are there going to be any hidden surprises? How much time is a member of my team going to have to devote? Because when you start to build these, they're all different. It has to be customized for your particular organization and for your particular business. So it's not one size fits all, and I think that's one of the most important things to remember, along with the fact that you're still responsible for that AI governance, even though it's an extension of your organization.

Patti Harman (22:31):
No, it's interesting. It's not like you just get to develop it and then you're done; there's a lot more care and feeding involved. And because any type of AI is constantly changing, what are some of the areas that carriers need to constantly monitor within their programs, then?

Steve Jarrett (22:52):
I think, number one, as we talked about, you have to go back and continually evaluate to make sure that no implicit biases are creeping in, that your data is fresh, that your data is good data, and that you're not putting things in there that are going to impact the model in a negative manner, which could lead to hallucinations or something like that. As well as, obviously, having that human element involved, and bringing people together to have those after-action discussions, sitting down and saying, what went right? What did not go right? We saw a problem in this particular area, how do we correct it? Getting involved early on in making changes and corrections when you see a problem is really imperative, because if you let something go, the next thing you know you've got a huge problem, and you may have caused some issues that are going to negatively affect your brand.

(24:05):

So it's very important to be flexible and fluid as you're moving through that. And a lot of times what you're doing, too, is looking at different scenarios: does this data really drive the model, or do we need something else? Is there third-party data out there, some other product that we use that gathers third-party data? Would that be better served being in our model as opposed to this other information? Or is it good to combine both of them, or maybe various sources? So that's an important point.

Patti Harman (24:42):
It's really kind of amazing, because at this point, what we don't know about AI is probably greater than what we do know. But I want to ask, what excites you the most about the possibilities or the opportunities going forward?

Steve Jarrett (24:58):
I think what excites me the most about the possibilities is that we are in an age where things are changing rapidly, and there's so much around the corner. I see great opportunities for companies to excel and provide better service to their customers. It's an exciting time because we will be able to take what we've learned in the past and apply it to some of these tools, which will provide a better product for our customers. And I think it'll make things a little more exciting for the workers, once they get over the idea you mentioned earlier about being replaced by AI. Once we can get past that thinking and recognize that I don't see that happening in the near future, I think people will be a little more relaxed and understand that, hey, this may actually make things a little bit easier for me and let me provide good service to our customers.

Patti Harman (26:03):
It's true. It's sort of like the creation of Google and the internet, and the ability that gave us to research a ton of information. As a journalist, when I realize what I used to have to do manually, versus now I can type in a few queries and all of a sudden I have sources and ideas and I can verify information. So from that perspective, it's very exciting to think about what the possibilities are. Do you have any concerns, maybe, about its use or how it could evolve?

Steve Jarrett (26:35):
Some of the concerns I have are always on the opposite end: how it will continue to evolve to the point where people can use it to take advantage of consumers or take advantage of companies. That's a concern of mine. I think we're getting to the point of establishing some laws that may help in our regulation of this, but it's all going to have to keep up, and the changes are happening very, very quickly. So that's probably the biggest concern I have: as fast as things change, we have to be able to evolve and pivot to make sure that we're keeping up. So from that aspect, I am always concerned about folks misusing these tools. But getting back to your comment about Google, I remember the days before we had the internet, when to do a courthouse search you had to drive to the courthouse, pull up the records, and make sure you were there at a certain time; they only let you check at a certain time. Today it's just so much easier. In our investigative world, there are so many tools at our disposal, but we've got to remember that the bad guys have a lot of these tools too.

Patti Harman (27:53):
Yes, one of the areas that I have loved covering for probably the last 10 to 15 years has been insurance fraud. And I think part of it is that that's not how my mind works, and when I see what people are able to come up with, it's like, oh my goodness, I never would've thought of that. So you're right, some of these tools are making that a little bit easier for them going forward, for sure. We've covered an awful lot in the last couple of minutes. Is there anything that I haven't asked you that you think our audience should know?

Steve Jarrett (28:28):
Well, one point I'd like to make is that, unfortunately, there have been some studies done recently showing that a large segment of the population feels that committing insurance fraud is not a big deal; they don't even see it as a crime. That's an unfortunate thing, and something to keep in the back of your mind for anybody out there investigating fraud. The other thing is that in the 30 minutes we've spent together, everything we've talked about has probably already changed. That's how rapidly the change is occurring. So if you're not involved and engaged, and if this is not on your radar, you need to get it on your radar and be aware of it. Educate yourself as much as you can; you don't have to be an expert, but there's a lot of good information out there. And identify ways that you can protect your organization and utilize these tools to help make it a better workplace and provide better service to your customers.

Patti Harman (29:32):
Those are really wise words. Thank you so much for joining us today, Steve, to discuss how insurers are using artificial intelligence and other technology to identify fraud and for sharing the risks that the rapid adoption creates as well.

Speakers
  • Patti Harman
    Patti Harman
    Editor-in-Chief
    Digital Insurance
    (Moderator)
  • Steve Jarrett
    National Director - Special Investigations
    Westfield Insurance