AI use in customer service faces legal challenges that could hit banks

Air Canada has been penalized for misinformation generated by its chatbot; Patagonia has been accused of letting a vendor's AI model listen to and analyze customer service conversations without consent. These cases have parallels in banking, where many institutions use AI-based chatbots and contact center software.

Two recent lawsuits challenge companies that use artificial intelligence in their chatbots and call centers, as many U.S. banks do.

In one case, a customer whose grandmother had just died booked a flight on Air Canada after the airline's generative AI-based chatbot assured him he had 90 days to apply for a bereavement discount. That turned out not to be the airline's policy, and Air Canada refused to grant the discount, saying the policy was stated correctly on its website. A civil resolution tribunal ordered Air Canada to pay the customer damages equivalent to the discount, plus interest and tribunal fees.

In the other case, several customers sued Patagonia after learning that Talkdesk, a contact center technology vendor Patagonia uses, was recording and analyzing customer-support calls and using them to train its AI model. The customers say they would not have made the calls had they known Talkdesk was listening in, and that the practice violates California privacy law. The complaint was filed on July 11 and has not yet gone to trial.

The first case calls into question any use of a customer-facing generative AI model by a consumer-facing company like a bank. The second casts doubt on the use of AI in contact centers, something many U.S. banks do, mainly to analyze customer sentiment and call center rep performance and to summarize calls for their records.

Hallucinating chatbot

When Jake Moffatt's grandmother passed away, he visited Air Canada's website to book a flight from Vancouver to Toronto using the airline's bereavement rates. While researching flights, he consulted a chatbot on the airline's website, which told him he could apply for bereavement fares retroactively.

But when Moffatt later applied for the discount, the airline told him the chatbot had been wrong: the request needed to be submitted before the flight, so it would not provide the discount. Air Canada argued it could not be held liable for information provided by its agents, servants or representatives, including a chatbot, and that the chatbot was a "separate legal entity that is responsible for its own actions." Moffatt, it said, should have clicked on a link provided by the chatbot, where he would have seen the correct policy. Air Canada did not respond to a request for an interview.

"This is a remarkable submission," noted the adjudicator Christopher C. Rivers. "While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot." 

Rivers also found that Air Canada did not take reasonable care to ensure its chatbot was accurate. "While Air Canada argues Mr. Moffatt could find the correct information on another part of its website, it does not explain why the webpage titled 'Bereavement travel' was inherently more trustworthy than its chatbot," Rivers wrote in his decision. "It also does not explain why customers should have to double-check information found in one part of its website on another part of its website."

The tribunal awarded Moffatt $650.88 in damages, $36.14 in interest and $125 in tribunal fees. 

This case is likely to make Air Canada think twice about the chatbots it "hires" and drive AI chatbot providers to improve their feature sets, said Greg Ewing, a member of law firm Dickinson Wright in Washington, D.C. 

"For example, you can start to put exclusions on what a chatbot can talk about," he said. "So I think it's going to both drive innovation and it's going to be a motivator for companies like Air Canada to be careful about which chatbots they choose." 

Humans make these kinds of mistakes, too, Ewing said. 

"This is not a unique circumstance," he said. "It's only unique because of who actually wrote the words."

Many banks offer AI-based chatbots on their websites, though most do not use generative AI today (many have said they would like to in the future). Bankers interviewed for this article say they are being cautious about pushing generative AI out to customers.

"At Citizens, we've focused our initial use cases internally with human oversight as we actively and safely pursue the adoption of gen AI," said Krish Swamy, chief data and analytics officer at Citizens. "We see gen AI's potential to help us serve our customers while also helping our colleagues innovate. Smart financial institutions should integrate appropriate guardrails, including human safeguards, protection of customer data and upholding privacy commitments, to best support scale deployment of gen AI."

AI models can be validated and tested with other AI models, said Sri Ambati, CEO of H2O.ai. 

"Curation becomes important, and the best way to curate it is with another AI," he said. His company offers a framework called Eval Studio that lets companies build evaluations that test for weaknesses.

AI eavesdropping at Patagonia

In the class-action lawsuit, filed in California Superior Court in Ventura County, customers accuse Patagonia of violating California privacy law by letting an AI model from Talkdesk analyze customer-support conversations in real time. 

"When callers call one of Patagonia's support lines, they are told that the call 'may be recorded for quality training purposes,'" the complaint stated. "This tells reasonable consumers that Patagonia itself may use the recording to train its customer-service agents or improve its products. It does not tell reasonable consumers that a third-party (Talkdesk) will intercept, listen to, record, and use the call for its own purposes."

Patagonia and Talkdesk did not respond to requests for interviews.

Ewing said California has a wiretapping law that makes it illegal to record conversations without consent. 

"I think they've got a pretty strong case in that regard, because Talkdesk is, at Patagonia's request, recording these conversations, and at least according to the complaint and what I've seen, there was no real consent to that recording," he said. "We've all heard those, 'this call may be recorded for training purposes' preambles. That to me doesn't sound like 'we're sending it out to a third party.'" 

The complaint asserts Talkdesk is using the data to train its AI model. This raises the issue of potential bias.

"I would be concerned, if I were Talkdesk, that the allegation is essentially going to be, look, you have Patagonia users who call into Patagonia and are angry and have a Southern accent that the AI picks up on," Ewing said. "The next time that person calls, what is the AI going to use to make its recommendations to the customer-service agent?" 

This lawsuit will force Talkdesk and its customers to think through what they disclose, the consent they ask for and how they use AI models in contact centers, Ewing said.

Many U.S. banks use AI to analyze customer support calls, customer sentiment and service rep performance and even to rethink products that customers may be complaining about. Some use generative AI to help call-center agents provide informed answers. 

It's possible that more disclosure would stave off lawsuits like this one. In addition to the standard message about calls being monitored, companies could add a line saying that software is analyzing conversations "to assist our agents in providing the highest quality customer service," Ewing said. 

Eventually, customers will own their own data, H2O.ai's Ambati said. 

"If you own the data, you can then rent it," he said. "You get all the property rights of lending it for a large language model to be fine-tuned on. You could let it be used to fight Alzheimer's, for example, but not for political purposes."
