The promise and perils of agentic AI


A technology called agentic AI is being heralded by consultants and vendors as the next big thing. It has the potential to help banks and other companies reap efficiency and cost savings from their investments in large language models. 

It also comes with many risks. 

Agentic AI takes a large language model and lets it act, with minimal human intervention. "Agentic" refers to acting as an autonomous agent, capable of performing tasks on its own.

That's a new step in artificial intelligence. To date, banks using generative AI have said they keep a human in the loop — an employee is always there to review the model's work and catch any hallucinations, errors or bias in its output. Developers review and edit AI-generated code. Call center representatives and financial advisors read generative AI suggestions and reject those that seem off or outdated.

Data from Capgemini clients that have implemented agentic AI shows "we are getting to a point where they feel comfortable enough to let an AI agent make certain decisions without a human in the middle," said Kartik Ramakrishnan, deputy CEO, financial services at Capgemini, in an American Banker podcast that will go live on Oct. 8.

No U.S. bank has publicly said it's using agentic AI in production. 

"Some banks are going down this path now," Matt Kropp, managing partner at Boston Consulting Group, said in an interview. "There are planned and funded projects for moving in this direction."

What agentic AI can do

Agentic AI systems are sometimes called foundation model operating systems, large action models or AI agents.

They rely on large language models that understand a user's prompt and break it down into tasks that can be executed. One example might be an application programming interface call to gather data, perhaps to check an account balance. Another might be an action, such as a transfer of money from one account to another. A third might present a form to a user to get their input or approval on an action.
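The three kinds of steps described above — an API call, an action and a user-facing form — can be sketched in code. This is a minimal illustration of the pattern, not any vendor's actual API; all names, step kinds and the hard-coded plan are hypothetical assumptions (in a real system, the large language model would generate the plan from the prompt).

```python
# Hypothetical sketch of agentic task decomposition: a prompt becomes a
# list of executable steps of three kinds. Names are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    kind: str                      # "api_call", "action", or "user_form"
    description: str
    run: Callable[[], object]      # the executable behind the step

def plan(prompt: str) -> list[Step]:
    # A real system would have an LLM produce this plan; it is hard-coded
    # here for the balance-check example in the text.
    if "balance" in prompt.lower():
        return [
            Step("api_call", "fetch checking balance",
                 lambda: {"checking": 120.0}),
            Step("user_form", "confirm before any transfer",
                 lambda: True),
        ]
    return []

steps = plan("What's my account balance?")
results = [step.run() for step in steps]
```

The point of the structure is that each step is typed: the orchestrating system knows which steps call out for data, which change state and which must pause for human input.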

So if a customer needs $850 to meet an emergency but he doesn't have enough money in his checking account, an agentic AI chatbot would go to his savings account to see if there is money there. If not, it would check if the customer has a credit line that could be drawn upon. It would explore the different options available without a human telling it to do so.
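The fallback logic in the $850 example can be sketched as a simple waterfall over funding sources. This is a hedged illustration under assumed account names and amounts, not a real bank's data model or decision engine.

```python
# Illustrative sketch of the fallback sourcing described above: cover a
# need from checking first, then savings, then a credit line.
def source_funds(need, checking, savings, credit_available):
    """Return a list of (source, amount) draws covering `need`."""
    plan, remaining = [], need
    for name, available in [("checking", checking),
                            ("savings", savings),
                            ("credit_line", credit_available)]:
        if remaining <= 0:
            break
        draw = min(available, remaining)
        if draw > 0:
            plan.append((name, draw))
            remaining -= draw
    if remaining > 0:
        raise ValueError("insufficient funds across all sources")
    return plan

# e.g. $300 in checking, $400 in savings, a $500 credit line
print(source_funds(850, 300, 400, 500))
# → [('checking', 300), ('savings', 400), ('credit_line', 150)]
```

An agentic system would wrap this kind of logic in the exploration the article describes, surfacing the proposed draws to the customer for approval rather than executing them silently.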

A key difference between agentic AI and older technologies that banks already use to automate tasks, like business-process automation and robotic-process automation, is that agentic AI is able to understand language, due to the use of large language models, said Rajesh Iyer, global head of AI and machine learning at Capgemini. 

Although agentic AI is designed to run autonomously, if human input is required, such as to authorize a payment, it will ask for it, Iyer said. 

The technology is still in development.

"I think it's a next big thing," Iyer said.

Use cases

As the examples above show, agentic AI could be used to handle complex customer requests automatically. 

NetXD has an agentic AI system called Edge AI. A user could ask Edge AI, "How much interest have I earned?" The system, which would already be connected to the customer's bank accounts through APIs, would tell the customer how much interest is being earned across all accounts. It would also share information about money market accounts that could provide a higher yield. 

With the customer's permission, the system could prefill an application, and with biometric authentication and approval, it would open the new account within about 15 seconds, according to Suresh Ramamurthi, chairman of NetXD and chairman of CBW Bank in Weir, Kansas.

It could then move money from other accounts to the new money market account, again with the customer's permission and biometric authentication.

The system could also handle prompts like, "Remind me to send $50 to mom for my phone bill before the end of the month and make this monthly" or "reminder, split groceries with my boyfriend, but make sure to exclude my multiple trips to Sephora." 

NetXD will make its system generally available to consumers in a few weeks. It's also offering the technology to banks.

This ability to use agentic AI to do things for customers will be disruptive for banks, Ramamurthi said. 

"In the next two, three years, everybody will be scrambling to have this technology," he said. 

Bud Financial has a similar agentic AI model it's been testing with a bank client, according to Edward Maslaveckas, co-founder and CEO.

Its model can analyze a customer's transactions and figure out how to move the customer's money between accounts to get the highest possible interest on cash that is sitting around, with the customer's approval. The model could be used more broadly to help people improve their finances and determine their next best action, Maslaveckas said. 
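The cash-optimization idea described above amounts to sweeping idle cash into the highest-yielding account, subject to customer approval. The sketch below illustrates that under assumed balances, rates and a checking buffer; none of these names or figures come from Bud Financial's actual product.

```python
# Hypothetical sketch of an interest-maximizing sweep: move cash above a
# checking buffer into the highest-rate account, pending approval.
def propose_sweep(balances, rates, keep_in_checking=500.0):
    """Propose (but do not execute) a transfer of idle checking cash."""
    best = max(rates, key=rates.get)   # highest-rate destination
    idle = max(0.0, balances.get("checking", 0.0) - keep_in_checking)
    return {"move": idle, "to": best, "requires_approval": True}

proposal = propose_sweep(
    {"checking": 2500.0, "savings": 1000.0},
    {"savings": 0.005, "money_market": 0.045},
)
# → {'move': 2000.0, 'to': 'money_market', 'requires_approval': True}
```

The `requires_approval` flag reflects the opt-in model the article describes: the agent proposes the move, and the customer signs off before any money is touched.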

"Our biggest question when we set out to build this around a year ago was, can these things actually work?" he said. "We've been marching on with the technology to get it to a point where it works most of the time rather than some of the time. The progress in a year has been a lot." 

Though most banks want to minimize the interest consumers earn on their money and maximize the fees they pay, there are some, mostly community banks and challenger banks, that seek to offer consumers a better deal, Maslaveckas said. 

Banks that are testing the technology are not ready to talk about it because of regulatory questions, he said. Maximizing interest, however, is fairly low-risk, he said, and the customer has to opt in. 

Kropp said agentic AI could also be used in investment banks' equity research departments. It could do some of the research used to create deal pitch books by combining 10-Ks, analyst calls and other documents. It could help create research reports and slide decks. It could give users a ChatGPT-like tool to query and summarize the research.

An analyst might come up with a thesis or a unique angle on a company and ask an agentic AI model to produce evidence supporting it. 

"That's the power of that kind of a setup," Kropp said. If the model does the compiling and synthesizing of research, the human analyst has more time for critical thinking.

Agentic AI could be used in cybersecurity, "where a lot of things have to happen and you have to stitch together a lot of things in a security operation center or someone that's doing testing," Iyer said. 

What could go wrong?

Risks abound. Agentic AI can make the same kinds of mistakes human agents do, said Todd Phillips, a policy advocate and fellow at the Roosevelt Institute, in an interview. 

"It can happen much more quickly than people can respond, and you don't have a person second-guessing and asking, 'Wait, really, should I do this thing?' AI may just do it," he said. 

If used in algorithmic trading and high-frequency trading firms, agentic AI could lead to flash crashes and unintentional market manipulation, he said. 

"Research has shown that if you let AI agents have free rein and you just tell them to find the most profit, they will eventually figure out how to do market manipulation," Phillips said. "Without proper guardrails, I think we could see a lot of that happening unintentionally — trading firms won't tell their AI agents to go and engage in market manipulation, they'll just do it on their own." Malicious actors could intentionally tell their AI agents to engage in market manipulation. 

If small and midsize businesses use AI agents for treasury management, that could lead to bank runs, Phillips said.

"You could end up in something like a Silicon Valley Bank situation where all at once firms' AI agents think, 'This bank is going to fail — I need to move the company's money out to another bank,'" he said. "And they all basically do an AI agent run on a bank." Two factors make this scenario more likely: developing AI is expensive, and so far most companies are gravitating toward a handful of large language models.

"If a whole bunch of firms have AI manage their account balances, and they're all using the same LLM, they're going to get the same information all at the same time, and they're going to withdraw money all at the same time," he said. 

Even seemingly innocuous uses of agentic AI could lead to trouble, Phillips warned.

"I'm worried that if you can tell your phone, 'Hey Siri, pay my water bill from my bank account,' it may do that, but it may do that in ways that end up harming people," he said. "It may do the payment but have the client go into an overdraft and have to pay more fees that way."

Even if a customer or employee is asked to review each step the agentic AI takes, "I still have some fear because we can get so used to just clicking 'OK, OK, OK' to whatever a computer puts in front of you," Phillips said. 

Another risk is that criminals could use agentic AI to speed up their work. 

"Malicious actors could tell AI agents to hack into a bank and steal information, and the AI agents could just go and do it," Phillips said. "They can run many bots at the same time, trying to figure out different methods."

The law is unclear about how decisions of AI agents are going to be interpreted legally, Phillips said. 

Overall, Phillips said, banks should proceed with caution with agentic AI. "And I think you need to have levels of review, just like you would with people," he said.

Getting closer to prime time

Agentic AI is not yet practical for banks, Kropp acknowledged. The BCG X group he leads has built working agentic AI systems used in other industries, such as health care, and is working on a generative AI transformation program at a global investment bank.

"In heavily regulated sectors like banking and insurance, regulators have to understand how decisions are made. If you have a model that is making decisions, it can't be a black box. You have to be able to inspect why a model makes a particular decision. All of that creates a lot of friction" for agentic AI, Kropp said.

But such barriers don't make agentic AI impossible, he said. "It just means things are going to move slower because financial institutions are going to have to get their regulators comfortable with a model that is making decisions before they will be able to deploy it." 

The technology will quickly get a lot better, Kropp said. "Within five years, I think the quality will be very, very good and we'll be able to start relying on it." But it may be a long time before anyone is comfortable relying 100% on agentic AI, he said.

There are certain necessary conditions that need to be met for banks to implement agentic AI, according to Ramamurthi. 

"They need to have zero-trust security and digital signature capability, they need to have process flows well defined and secured by zero trust as a precondition to achieving AI-driven zero ops, where entire operations are done with zero human intervention," he said. 

"They also need to have real-time reconciliation capability in place before they embark on anything high-speed or automated. Just as you need brakes and steering to drive a car on a winding downhill, banks need to invest in intelligent kill switches and lane controls," Ramamurthi said.

In the same vein, Kropp compared the technology to self-driving cars, which have different levels of autonomy, but most people are not ready to take their hands off the wheel. For some time, agentic AI models may need to keep humans in the loop at multiple points, to do quality reviews or to sign off on decisions or actions, he said.
