AI for banking faces trust hurdles, FINRA Foundation study says

The promise of AI-powered money advice has long been a focus of bank and fintech marketing. Even before the rollout of generative AI chatbots, virtual assistants were positioned to transform the way consumers interact with financial institutions, with tailored responses delivered through chatbot interfaces that cut down on the need to speak with humans. It was positioned as a win for both banks and consumers.

The problem is that consumers are still wary about using AI to inform financial decisions, a new study from the Financial Industry Regulatory Authority Investor Education Foundation suggests. Based on a survey of 1,033 consumers conducted in February, the FINRA Foundation reported that just 5% of respondents said they used AI when making financial decisions. By contrast, 63% of respondents said they consulted financial professionals. A quarter of respondents said they turned to financial management apps, though it's unclear if these tools were AI-supported.

"AI usage appears to be very low when it comes to informing everyday financial decisions," said Gerri Walsh, president of the FINRA Foundation, who noted that consumers may not be aware that AI might be powering some of the financial apps they use. "If there's a lesson here, it's that one should not make assumptions about consumer responses to AI and that we should strive to make financial information and its sources work well for everybody."

AI versus human

Researchers presented four different pieces of hypothetical financial information to a nationally representative sample of respondents. For each question, half of study participants were told the information was provided by AI, while the other half were told it was delivered by a human financial professional. Respondents were then asked whether they trusted, distrusted or felt neutral about the information presented.

Results showed that the trust gap between AI and humans varied depending on the type of information presented. Some statements drew lower trust when attributed to AI than when attributed to a human. For example, when offered a hypothetical statement about home ownership, 65% of respondents said they trusted it when it came from a financial professional, versus 57% when it came from AI. Distrust ran slightly higher for the AI-delivered version of the advice: 20% of study participants said they distrusted the information when it was provided through AI, versus 15% when it was offered by a human.

A hypothetical statement on stock and bond performance — "Stocks will outperform bonds by 18% to 23% during the second half of 2024" — yielded a nearly identical level of trust, regardless of whether it was offered through AI or a human. Among respondents, 34% said they trusted the statement when delivered through AI versus 33% when delivered through a financial professional.

The higher level of distrust in AI answers might stem from the limited capabilities of most financial services AI tools currently on the market.

"If you think about financial institutions, almost no one has put generative AI tools in customers' hands," said Dhruv Goswami, a managing director and partner at Boston Consulting Group. "The customers are not getting it directly so therefore, they're not exposed to it. They don't know the power and the benefit it can provide."

Simple versus open-ended questions

Levels of trust in AI-powered information offered through financial platforms may depend on the complexity of the question, said Max Ball, principal analyst at Forrester Research.

"If I want to check my balance, I want to make a payment, then automation is great," he said. "If it is fraught, if it is something really important to me like my financial future … I need something that has the human element and the human empathy."

Trust in AI versus humans, said Ball, can be compared to how likely consumers are to prefer self-service over human interaction in financial services. A Forrester survey of 508 respondents conducted in April 2024 found that 71% preferred self-service when making a payment, compared to 29% who preferred a human for the same task. By contrast, in the same survey, 91% of respondents preferred to interact with a human to "discuss complex financial information."

For broader questions — including those pertaining to financial well-being — human agents are still the preferred channel, according to research firm Corporate Insight. In its May survey of 607 consumers asking whether they trusted AI or a live customer service representative more to perform certain tasks with their financial accounts, 57% said they trusted a human agent more for recommendations on financial well-being, while 28% trusted AI and humans equally, said Andrew Way, senior director of research at Corporate Insight.

The FINRA Foundation study pointed to some differences in the way demographic segments of survey respondents expressed trust or distrust in AI-provided information. For example, on the stock and bond performance statement, a greater proportion of men trusted the investing information when told it was AI-delivered (37%) than when told it came from a human advisor (27%). In addition, in response to the home ownership information statement, 71% of Black/African American respondents who were told it came from AI said they trusted it.

Takeaways

Analysts say building consumer trust in AI systems, including generative AI, will be a critical part of the technology's rollout across financial services firms. Part of this involves being clear about what AI assistants can do, said Way.

Financial firms should "be transparent about what an AI assistant or chatbot can do and what it cannot do," he said. "There's still a very low understanding across the board of exactly what AI is and how it's currently being leveraged."

Banks also ought to determine the areas where consumers are more likely to trust generative AI. For example, in an Emarketer survey of more than 1,200 U.S. checking account customers last month, fraud detection topped the list of desired bank use cases for generative AI, said Tiffani Montez, principal analyst at Emarketer.

Banks should look at "where do consumers trust AI or generative AI and look into that in more detail, and … where are the areas where you may potentially erode customer trust," steering away from areas where trust is lowest and focusing on use cases least likely to erode it, Montez said.

Banks should also consider opt-in and opt-out mechanisms for generative AI tools to ensure customers have control over how their data is being used, she noted.

"With financial institutions, one of their greatest competitive advantages is the trust that consumers have in their brands, and so the most important thing in any type of capability that you roll out is that you preserve that trust," said Montez.
