Community banks grapple with policies to cover hidden risks of AI

Left: Merchants & Marine Bank branch. Top right: Ryan Hildebrand, chief innovation officer of Bankwell Bank. Bottom right: Kim Kirk, chief operations officer of Queensborough National Bank & Trust.
“The big question is, how do we guard that public trust that our information is confidential?” said Jeff Trammell, chief operations officer of Merchants & Marine Park. Bankwell Bank chief innovation officer Ryan Hildebrand, top right, and Queensborough National Bank & Trust chief operations officer Kim Kirk, bottom right, are also considering the rules they want to set about AI usage at their banks.

Community banks as a group are in the early stages of integrating artificial intelligence into their operations. Written formal policies about its usage are even further away from materializing.

But both actions can, and should, happen at the same time.

"It's a little shocking to see the degree to which a lot of community-based institutions are just barely dipping their toe into the water," said Jim Perry, senior strategist at Market Insights, a consulting firm for community banks and credit unions. "The idea of establishing the kind of governance and oversight that will help them navigate AI in the future is just now getting to their radar screen."

While safe and compliant practices for artificial intelligence and generative AI are a concern for financial institutions of all sizes, they are particularly dicey for community banks. These institutions tend to partner with third parties rather than build capabilities in-house, which means less visibility into other parties' practices and an open door to fourth-party risk.

"The concerns aren't new or unique to AI tools," said Jasper Sneff Nanni, a principal at financial services consulting firm FS Vector. "They are generally within the scope of existing privacy, information security, third-party risk management, and model risk management programs. However, the recent popularity of these tools can create situations where employees do not realize that these risks are present."

The Federal Reserve, Federal Deposit Insurance Corp. and Office of the Comptroller of the Currency issued a 30-page guidebook on managing affiliate risks. The report builds on formal guidance issued last year.

May 3
Michael Barr

For instance, bank employees may use a third-party AI product to transcribe audio from a conference or summarize client documents without realizing that in certain cases, confidential company data or customer personally identifiable information could be leaked back to the provider of the tool, said Sneff Nanni. FS Vector has written policies for both banks and fintechs in recent months, and Sneff Nanni has found the primary concerns about AI tools are privacy and information security risks and exposing the company to model risk.

He advises banks to turn to enterprise versions of tools from companies such as OpenAI, Google and Anthropic, with more transparent data usage policies than their free, off-the-shelf versions.

The question of how far to take a consumer-facing generative AI product arose at Merchants & Marine Bank in Pascagoula, Mississippi.

The $687 million-asset institution is "always trying to leverage technology to out-punch our weight class," said its chief operations officer, Jeff Trammell. "AI can be a great leveler." He and other employees started experimenting with ChatGPT for fun, such as by asking the bot to write a song in the tone of the Smashing Pumpkins about a very angry chief risk officer. ("It was a great song," said Trammell. "It sounded just like what [lead singer] Billy Corgan would write.") Once they saw ChatGPT's potential, they realized a practical usage: to get the ball rolling on a policy governing their two-year-old cannabis banking program.

"You can buy policies and procedures off the shelf for SBA [Small Business Administration] lending and traditional mortgage lending but for cannabis banking, there is nothing," said Trammell. "You can ask around at different banks, but a lot of times these programs are confidential. AI can help you figure out angles of high risk activity like this."

The team was careful to keep sensitive information out of ChatGPT. Instead, they posed queries to kick-start their own discussions, such as, "What are the risk factors for a small community bank to consider in developing a cannabis banking program?"

"We developed the entire program in less than 60 days," said Trammell. "ChatGPT played a part in getting us through the basics quickly."

At this point, Merchants & Marine has gone as far as Trammell feels comfortable with ChatGPT. Before he and his colleagues explore other generative AI tools, Trammell says they will devise rules about data governance and control, such as who in the bank is allowed to use these products and what queries they can pose without crossing a line.  

"The big question is, how do we guard that public trust that our information is confidential?" said Trammell.

Another challenge is how to get started.

Julieann Thurlow, president and CEO of the $900 million-asset Reading Cooperative Bank in Reading, Massachusetts, is approaching AI cautiously and prefers to wait for more guidance from regulators. For now, Reading's use of AI is limited to fraud detection and transaction blocking in the peer-to-peer payment system it uses, Chuck.

"You have to learn about this space before you run with it," she said.

Bankwell Bank in New Canaan, Connecticut, is examining AI under the lens of its third-party risk management framework and considers anything related to AI to be high risk. At this point in the $3.2 billion-asset bank's journey into AI — which includes a small-business lending pilot using generative AI  and experiments with AI-driven sales and marketing, prequalification, underwriting and more in its small-business banking unit — "it's hard to put together a solid, well-rounded policy outside of it being a high-risk partnership," said chief innovation officer Ryan Hildebrand. "We are at the starting line regarding AI usage procedures and policies, but that's similar to where we are at with the use cases of the products themselves."

Kim Kirk, the chief operations officer at Queensborough National Bank & Trust in Louisville, Georgia, purchased an AI policy template from BankPolicies.com that she plans to customize. The time was ripe because the $2 billion-asset bank is buying fraud solutions from its core provider that use machine learning, has hired a machine learning engineer for its business intelligence unit, and wants to address the way cyber threat actors are using AI.

She has also grappled with case-by-case situations. Earlier this year, Kirk considered allowing her project managers to use Otter.ai for transcription during the bank's core conversion. Because they may contain customer or strategic information, she investigated the transcription functionality and the security of Otter.ai's archives, and ultimately was not comfortable moving forward.

As at Merchants & Marine, prudent usage of ChatGPT is on her radar.

"We need to govern what our employees do with ChatGPT and ensure that they understand it isn't appropriate to put any kind of bank information in ChatGPT or other public generative AI models," said Kirk.

For the topic of fourth-party risk, that posed by a bank's vendors' vendors, Sneff Nanni recommends that banks include a provision in their policies that anything being used by a fintech partner is subject to the same approval procedures the bank would apply to the tools it was using directly.

Looking further ahead, banks should be cautious about adapting AI and model risk management policies to generative AI, experts said.

"Banks tend to be more conservative than a lot of other institutions we work with and have the most rigorous validation standards and transparency requirements around models," said Jey Kumarasamy, senior associate at Luminos.Law, a law firm founded by both data scientists and lawyers that focuses on AI risk. "Some of those requirements don't work well when dealing with generative AI models."

For example, a standard model risk management framework will contain three lines of defense: robust model development, model validation (ensuring the model is performing as expected) and internal audits. With generative AI, it's difficult to pressure-test a model for model validation by replicating it, because of the time and expense it takes to build a generative AI model. Instead, the bank might want to consider other methods, such as red-teaming and evaluating the model against benchmark datasets.

Generative AI models also carry unique risks, such as hallucinations and toxicity, or disrespectful language.

With all kinds of AI, Perry says the space is moving so rapidly that community banks should establish a systems framework rather than relying on the policies of their core providers or third-party vendors.

"It's important for this issue to rise up to a level of priority so community banks don't fall behind," he said.

For reprint and licensing requests for this article, click here.
Artificial intelligence Community banking Technology
MORE FROM AMERICAN BANKER