BankThink

Banks must get serious about measuring and mitigating AI-related risk

Artificial intelligence will be integrated into the financial system. Banks need to be clear-eyed about AI's potential dangers and proactive about avoiding them, writes Gene Ludwig.

In this third installment of my series on banking technology and regulation, we delve into the critical issue of fraud, cyber and other security risks in the age of artificial intelligence and modern technologies.

About a decade ago, the CEO of a small bank told me they protected their institution from a cyber wiper virus attack by printing out all records nightly. While this might have been a viable strategy in the halcyon past, today's world is a different beast. The technology horse has long since left the barn, and our businesses and personal lives are inextricably intertwined with the digital realm. AI, with its immense potential and inherent risks, is the newest and most powerful force shaping this landscape.

Building upon the foundation of existing technologies and connectivity, AI has the potential to significantly enhance the capabilities of the banking industry. However, this same technology also magnifies the risks of security breaches and fraudulent activities. We're already witnessing the early stages of AI-driven threats, and these risks are likely to intensify in the future.

The most damaging cyberattack threats originate from both government adversaries and private criminal organizations. Nation-state adversaries with significant cyber capabilities continue to refine their tactics, maintaining large, well-funded "cyber armies" that employ advanced persistent threats, or APTs, in attacks targeting critical infrastructure, including financial institutions.

Motivations for these attacks have evolved. While financial gain was once a primary motive, particularly for some nations, others now increasingly focus on "destructive" attacks aimed at disrupting critical services and causing economic harm. This shift reflects the broader geopolitical landscape, where cyber operations have become a significant component of national strategies. Major players like the United States, China and Russia utilize cyber tactics to achieve their foreign policy objectives. For instance, allegations of cyber espionage and intellectual property theft have been prominent in the ongoing rivalry between the U.S. and China.

In the criminal space, the threat of cyberattacks has become more sophisticated and persistent. AI-enabled risks, particularly in the retail and credit card sectors, are significant. By facilitating data breaches through more sophisticated phishing techniques and creating synthetic identities, AI can amass and manipulate enough genuine customer information to deceive banks' identity verification tools. Even today, criminals are exploiting these vulnerabilities to open fraudulent accounts, tap into credit and then close the accounts, causing banks massive losses.

Banks, governments and tech companies should prioritize the development or acquisition of AI-driven tools to combat the growing threat of cybercrime and fraud. While many in the industry continue to rely on multilayer authentication as a first line of defense, it is becoming increasingly clear that strong AI-powered defensive tools will be essential to counter the sophisticated attacks enabled by AI.

There is a growing concern that government regulations could hinder the development and adoption of AI products in the banking industry. While it is essential to address AI's potential dangers, such as discriminatory and operational risks, it is equally important to avoid throwing the baby out with the bathwater. By focusing solely on the risks, regulators could inadvertently stifle innovation and limit the benefits that AI can bring to the banking sector.

The Treasury Department recently solicited input on the uses, opportunities and risks of AI in the financial sector, offering a critical opportunity for the industry to educate regulators and policymakers. By highlighting the benefits of AI-powered tools in defending against cyber threats and improving operational efficiency, banks can help ensure that regulations strike a balance between mitigating risks and promoting innovation.

The widespread adoption of AI is inevitable, regardless of regulatory efforts. Both legitimate businesses and criminal actors will utilize AI to gain advantages in various fields, including banking. Attempting to halt the development and use of AI is akin to King Canute's futile attempt to stop the tide.

If regulators are overly restrictive in their approach to banks' AI adoption, it could create a dangerous imbalance. Criminal organizations and state-sponsored cyber attackers would be free to exploit AI technologies, gaining a significant edge in the fight over fraud and cybercrime. To give banks a fighting chance, governments must allow them to robustly utilize and experiment with AI tools that can effectively counter sophisticated attacks.

Furthermore, most AI innovations, like other technological advancements, will not be developed by banks themselves. They will be built by small and large companies specializing in AI and other related technologies. The key for banks is to seamlessly integrate these new vendor-driven technologies into their systems through APIs and other interfaces, carefully monitoring these technologies to identify and mitigate any potential cyber, fraud or operational risks they may introduce.

Key steps banks must take to protect themselves against these new, more vigorous cyber threats include implementing robust third-party risk management, identity and access management, threat and vulnerability management and data-protection programs; conducting communications and awareness campaigns; and ensuring technology is kept up to date with the latest releases.

Regardless of whether technology is developed internally or by a third party, the company bears ultimate responsibility for ensuring its security. It is crucial to hire strong cyber and technology leaders and staff to oversee these efforts and maintain a high level of security. In this regard, it is worth noting that more traditional threats have not gone away — e.g., a rogue employee. They remain part of an ever more complex set of cyber challenges.

In the area of cyber threats, the financial services industry has collaborated effectively with government agencies to enhance understanding of the latest risks and develop solutions. Bodies such as the Financial Services Information Sharing and Analysis Center, or FS-ISAC, and the Financial Services Sector Coordinating Council, or FSSCC, play crucial roles in facilitating information sharing and collaboration, including the exchange of threat intelligence and mitigation strategies.

For the government, the Treasury Department takes the lead, coordinating with other agencies like the National Security Agency, the FBI, the Cybersecurity and Infrastructure Security Agency and others to share real-time threat intelligence. The FSSCC, which brings together financial services companies and their regulators (through the Financial and Banking Information Infrastructure Committee), holds annual meetings to discuss emerging threats and mitigation strategies. AI threats are increasingly a focus of these discussions. By working together, the industry and government can better understand and address the evolving cyber landscape.

Despite the significant progress made in addressing cyber threats, it is surprising that technology risk is not yet treated as a major risk vector in all financial institutions, on par with credit risk, compliance risk, fraud and cyber risk. This is not solely a matter of government regulation but a fundamental issue of good banking practice.

It is essential to consider both everyday operational risks and tail risks associated with technology. For too many banks, governance in this area remains weak, and CEOs may not have a full understanding of the risks being faced or the measures being taken to mitigate them.
