American banks are increasingly trusting AI to handle sensitive data to improve efficiency — but thought leaders warn the industry not to get too complacent. Ensuring sensitive data remains private is going to be a moving target, and a ton of work.
Financial institutions need to be proactive about combating cyber theft by forming task groups assigned to keep up with the latest AI regulations and technology, according to a recent whitepaper from consulting firm Baringa.
"Banks have to be very, very careful about the recommendations they make, or the decisions they make, being seen to be wrong," Brad O'Brien, partner at Baringa's U.S. Financial Services practice, said. "I think that would create a huge reputational problem."
Last year, cyber crimes cost Americans more than $12.5 billion, according to an FBI report cited in the whitepaper. Those losses could grow as financial institutions expand their use of AI. Currently, 36% of U.S. banks are using generative AI and another 38% are gathering information about it, according to a recent American Banker survey. The risk that their AI models might be compromised is one of their gravest concerns: 31% said they worry about exposing personally identifiable or proprietary information outside the company, and 26% said they fear their use of AI leaves them more vulnerable to cyberattacks.
"Banks are now typically in what I call 'sandbox mode,' where they're starting to play with the technology, starting to try to figure out where within their organization's operation they can apply it usefully," O'Brien said.
Generative AI uses the data it was trained on to create new content based on patterns found in that dataset. The model is continually fed new data so its outputs stay fine-tuned, adapting to changing patterns. By training a generative AI model on financial wellness concepts and coupling it with a person's finance data, for example, banks can offer customers financial advice instantly. While programs like this let banks serve more customers efficiently, the challenge lies in controlling how all that customer data is accessed and used.
"What we're proposing is that before anything goes into production, that banks think about putting a really strong governance and risk management framework around these technologies," O'Brien said.
Specifically, banks need to create a task force dedicated to ensuring the data used by AI remains uncompromised, according to O'Brien. This body would be responsible for enforcing protection standards, green-lighting AI programs that meet those standards and staying current on the latest third-party security technology.
"The bar is going to be raised in terms of the banks' defenses against bad actors," O'Brien said.
Concerns about AI and data security create opportunities for fintechs to find solutions. One company tackling the problem is ACI Worldwide, which built technology that takes a novel approach: deleting data once an AI model has learned from it.
Cleber Martins, global head of payments intelligence and risk solutions at ACI Worldwide, said the technology is inspired by the way nature uses DNA as a code for building biological structures. It converts the data banks feed into an AI system into a computerized code that "no human can read," which the system uses to learn how to create outputs, Martins said. Under this approach, collaborating financial institutions share not the underlying data but only the code the system creates.
"What we did from a technology perspective, was getting that learning separated from the data," Martins said. "I really don't need the data. What I need is to make sure that I learned from the data. So, the technology that we developed is a new generation of artificial intelligence that doesn't need the data to leave where it's located."
ACI has been using AI to develop fraud detection tools for more than 30 years, Martins said. He added that the company, and other fintechs, will continue to create security solutions as AI's use and capabilities expand and evolve. Other companies that offer AI-based fraud detection include FICO, ComplyAdvantage, Quantexa, Feedzai and ThreatMetrix. Staying on top of the latest developments is important for financial institutions because bad actors are quick to adopt new technology.
"There's so much new technology available that criminals usually have faster access to than institutions," Martins said. "They don't have intellectual property constraints, they can just start playing with this type of technology. And that can be really harmful if you're not well prepared."
Ben Shorten, Accenture's finance, risk and compliance lead for banking and capital markets in North America, agrees that "timely disposal of data" is one of the most important tools for securing data used by AI. But banks also need to vet how third parties handle their information, rather than implicitly trusting anyone without reviewing their credentials.
"One of the areas that can be overlooked isn't just data within the institution, it's taking a complete holistic oversight approach and managing those third parties effectively in terms of their own usage of data, where it's relevant for the institution," Shorten said.
Unfortunately, regulations for protecting data used by AI are still in their infancy. Even in the European Union, which moved quickly on legislation to be at the forefront of open banking, the rules are still taking shape: the EU's AI Act establishes a risk-based standard against which AI programs will be judged, but most of its provisions have yet to take effect. The U.S. is further behind: States are starting to form their own task forces, and federal agencies began drafting guidelines after President Joe Biden signed an executive order in 2023 directing them to do so. California is leading the states on AI regulation: In September, Governor Gavin Newsom signed AB 2013 into law, requiring developers to post information on their websites about the data used to train generative AI systems.
Analysts point out that banking has a global footprint, and industry leaders need to familiarize themselves with the current state of AI regulation because it's only going to keep changing.
"The legislation (in Europe) isn't finalized, but it strikes me that things are going to get much more complex, quite quickly," Gareth Lodge, London-based principal analyst of payments at international research firm Celent, said.