One might suspect that most bankers — especially community bankers — fear that generative artificial intelligence could generate misinformation or that the biggest banks will leave smaller rivals in the dust as they invest hundreds of millions of dollars in the advanced technology.
Yet, bankers have said in recent interviews they're more worried that if they deploy advanced AI, such as the enterprise version of OpenAI's ChatGPT, the technology might work too well and come across as unsettlingly human.
This concern has a name — the uncanny valley — and it's been around since 1970, when Masahiro Mori, then a professor of engineering at Tokyo Institute of Technology, coined the term in an essay on robotics.
"I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley, which I call the uncanny valley," Mori wrote.
Industrial robots are fine, in his thinking, because there's almost nothing humanlike about them. But if a robotic arm is covered with material that looks and feels like human skin, with "a bit of fleshy plumpness" added, he said, it can appear more like a person.
"When we realize the hand, which at first sight looked real, is in fact artificial, we experience an eerie sensation," Mori wrote. "For example, we could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny."
Gene Fichtenholz, vice president of digital strategy and engagement at Meriwest Credit Union in Mountain View, California, agrees with this concept and believes virtual assistants that mimic human voices can enter uncanny valley territory.
"There are a bunch of companies that are working on incredibly powerful resemblance of human speech," he said in an interview. "They design voices to go up and down, to be curious and to transmit the empathy of conversation."
His team deliberately designed Meriwest's virtual assistant, Scout, to be obviously nonhuman.
"It's a gender-neutral creature of some sort," Fichtenholz said. "It's super cute, but it's not human. If it's completely not human, but cute, people can relate — this is fun, I know I'm talking to a machine, but it's a cute machine. As soon as it becomes too realistic, the creepiness is there."
Meriwest, which has $2.2 billion of assets and originally served only IBM employees, launched Scout in February.
But even a cartoonish virtual assistant could still put off customers who have an aversion to change and new technology, Fichtenholz noted.
"We, on the banking side, create something new and we think it's going to be really exciting, but for some people it's not exciting, because you're changing things," he said. "That translates to the digital assistant as well — why is this thing talking to me?"
A considerable share of customers don't want to play with new digital banking technology; they just want to click three buttons and be done, he said. Another group, mostly young people in the credit union's Silicon Valley market, has embraced the new tools.
'Uncanny valley' meets virtual assistants
Every now and then, a new AI capability is launched only to be met with mistrust.
At an event five years ago, one tech giant demonstrated an AI assistant that could place phone calls in a strikingly human voice.
But reviewers criticized the demo as creepy and deceptive.
"It was very weird," Theo Lau, founder of Unconventional Ventures, recalled in a recent interview. "I did not like it at all. It was creepy to me."
It can be hard for any company to build trust using AI-based virtual tools, Lau said.
"What we're struggling with now is you have deepfake videos, so you don't know whether or not it's true," she said. "You have deepfake pictures that you don't know whether or not it's true. And now you have deepfake voice. When you are one layer removed from an actual human being, you don't know."
This was once something bankers thought about as they experimented with avatars, according to Dan Miller, lead analyst and founder of Opus Research.
"The evergreen issue surrounding the uncanny valley used to be solely about the creepiness of interacting with an avatar that seemed a little off," he said.
As a result, "the financial service providers we've worked with have shied away from animated avatars, preferring to have invisible assistants that manifest as chatbots to answer questions and guide clients through the processes it takes to complete a task or get to the right information," Miller said.
Automated resources powered by large language models and generative AI, such as ChatGPT, Bard and Claude, are largely disembodied, text-based search assistants or voicebots, Miller pointed out.
"We're observing that, thanks to the utility of today's conversational AIs, customers are more comfortable than ever before when they interact with a humanlike voicebot or chatbot," Miller said.
It's true that most banks' virtual assistants are far from crossing the line into being too humanlike. Still, as banks experiment with ChatGPT-like generative AI, the worry about confusing or upsetting customers is growing.
One concern about the uncanny valley effect is that fraudsters can take advantage of any "is it human, or is it a bot?" uncertainty to deceive customers.
"Fraudsters are apparently getting better at creating synthetic humans as part of their efforts to defeat security systems," Miller said. "That's a valid concern."
Full disclosure of AI use
Even with disembodied, text-based bots, customers may not know whether they are talking to a person or a machine.
One way to bring clarity and avoid the uncanny valley is to disclose upfront that answers are coming from technology, not a human.
Being honest helps avoid creepiness and suspicion, Fichtenholz said. Also, people tend to be forgiving of dumb answers if they know they are talking to technology.
"When companies build AI, we need to make it clear to people when they go from AI to humans," said Zor Gorelov, CEO of Kasisto. "It's ethically the right thing to do."
This kind of disclosure is "the first step of establishing trust," Lau said. "If I don't even know whether I'm talking to a human or a bot, and somehow I find out it's the opposite of my preference, I will think, well, I can't trust you."