BankThink

Compliance is banking's superpower; that goes double in an era of AI

Banks' compliance teams have often been the source of innovative technologies with applications far beyond financial services. That is likely to hold true with artificial intelligence-related compliance solutions, writes Brandon Carl, of Smarsh.

In 1907, America was on the precipice of financial ruin. It began when Otto Heinze hatched a scheme to corner the market in the stock of the United Copper Company. The scheme failed spectacularly, causing his brokerage house to collapse.

The ripple effect was devastating: Over the next seven days, bank runs swept across the country, and New York City itself came close to insolvency. J. Pierpont Morgan quickly realized what was happening.

He responded by locking bank presidents in his office library until they came up with a solution. His actions narrowly averted a more widespread crisis.

The episode — which eventually became known as the Panic of 1907 — highlighted the need for regulation and ultimately led to the creation of the Federal Reserve. Then, as now, many chafed at the introduction of additional red tape, believing it caused undue friction.

But regulations are a good thing. Somewhat counterintuitively, well-designed regulations can increase innovation, productivity and growth. While regulation has often been seen as a drag on business efficiency, compliance may in fact be the driving force behind a quiet revolution in technology and innovation — particularly in artificial intelligence.

Consider bank compliance teams. They have long been at the forefront of innovation. Well before ChatGPT became a household name, banks were already deploying large-scale natural language processing to tackle insider trading, money laundering and other financial crimes.

Banks did this in part because of regulations, not despite them. Every financial institution is required by regulations to preserve its business communications, whether from one employee to another or an employee to a client. These regulations haven't just made markets safer; they've also laid the groundwork for some of the most advanced technology solutions in use today, including enterprise-grade AI.

What can we learn from this journey? That AI, deployed carefully, works at scale. Modern compliance teams are responsible for sifting through a haystack of firm communications to find the needle of risk that may indicate a financial crime.

Without AI, this is nearly impossible. With AI, firms can accurately assess risk and business intelligence from their data. These tools are applicable far beyond financial services and can drive new sources of revenue as well as risk reduction.
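To make the needle-in-a-haystack task concrete, here is a deliberately simplified sketch — the phrase list, messages and function names are hypothetical, and production systems rely on large-scale NLP models rather than keyword matching — of how a surveillance pass might flag risk-indicative language in firm communications:

```python
# Toy illustration of communications surveillance: flag messages that
# contain phrases associated with misconduct risk. Real systems use
# trained language models; this lexicon pass only shows the workflow.
RISK_PHRASES = [
    "delete this message",
    "keep this off the record",
    "guaranteed returns",
    "before the announcement",
]

def flag_messages(messages):
    """Return (index, message, matched_phrases) for each message
    containing at least one risk-indicative phrase."""
    flagged = []
    for i, msg in enumerate(messages):
        lower = msg.lower()
        hits = [p for p in RISK_PHRASES if p in lower]
        if hits:
            flagged.append((i, msg, hits))
    return flagged

# Hypothetical inbox: one benign note, one risky note, one routine note.
inbox = [
    "Lunch at noon?",
    "Buy before the announcement and delete this message.",
    "Quarterly report attached.",
]

for idx, msg, hits in flag_messages(inbox):
    print(f"Message {idx} flagged for: {hits}")
```

Even this crude version conveys the economics: the reviewer reads one flagged message instead of the whole inbox, which is why firms invest in models that do the same triage with far better precision.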

Through trial, error and innovation, financial institutions have learned how to operate artificial intelligence reliably and responsibly. This is not a flashy demo or parlor trick — these are production-grade systems on a massive scale.


Other industries should take note: Outside of financial services, most firms have little idea of what's actually happening within their companies. The ability to separate signal from noise can provide real-time insights into the risks they confront every day. Beyond that, it can also uncover opportunities, including shedding light on ways to improve customer experiences and create better and more productive work environments.

As with any source of power, there is also peril. AI can be biased, poorly trained and used for purposes that aren't in the best interest of the collective good. The combination of power and peril is not new: We test and regulate airplanes, medicines and power plants. From that, we've learned three pillars for responsible innovation.

First, the technology must be resilient and reliable. These systems are critical infrastructure and deserve to be treated as such.

Second, AI must be subject to careful risk management. Errors in judgment propagate quickly; with appropriate processes and testing in place, we get the joint benefits of scale and reliability. Be wary of the hype: These systems are hard to build, but worth using well.

Finally, firms should choose responsible use cases. As with any new technology, there will be trade-offs among stakeholders. AI can be more prone to bias and poor training than other technologies. We do not need to be scared, but we do need to be smart.

Few technologies elicit a broader mixture of fear and excitement than AI. People appropriately worry about their jobs being eliminated or getting caught in the machine without the ability to reach a human.

As such, regulation runs the risk of being politicized. The White House recently laid out its vision for AI oversight, echoing the European Union's efforts with its AI Act.

Notably, it's safe to assume that this is only the beginning. More regulations are coming, and financial services should embrace that. With the infrastructure, the talent, the opportunity and the scrutiny, financial institutions are poised to lead in this area.

For the banking industry, AI is fast becoming an indispensable tool not just for compliance, but for driving innovation and market leadership. We have no choice but to get it right — and regulation and innovation are not mutually exclusive. If 1907 taught us anything, it's that chaos makes for lousy progress. It is better to lay a strong foundation now than to end up locked in a library later, scrambling for a last-minute fix.
