'Tip of the iceberg': How the EU AI Act will affect banks in the U.S.


The European Union Artificial Intelligence Act could affect U.S. financial institutions much as the EU's General Data Protection Regulation was felt around the world.

The act, which is meant "to protect against the harmful effects of AI systems in the Union" and build trust in the technology by setting guardrails around usage of AI, is the first legal framework of its kind in the world. It will prohibit especially risky AI practices and set requirements for others, with penalties reaching up to 35 million euros or 7% of global revenue for noncompliance.

American Banker contacted more than a dozen large U.S.-based banks and credit unions as well as European banks with presences in the U.S. to inquire how the act will affect them and what they are doing to prepare. All declined to provide comment or did not respond to a request for comment.

But providers, developers or deployers of AI systems — which the act defines as a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that generates outputs that can influence physical or virtual environments — will need to take inventory of their AI systems and determine where their obligations lie.

"U.S. companies, including financial institutions, will have to comply with the EU AI Act as soon as they launch or use AI technology in the EU market, regardless of whether they have any presence in the region," said Tanguy van Overstraeten, a Brussels, Belgium-based technology, media and telecommunications partner at law firm Linklaters and global head of data protection at the firm.

The act does not treat all AI systems equally. Systems deemed to carry "unacceptable" risk, such as those that manipulate people into making harmful decisions, are banned outright. High-risk applications of AI, which could include software for job recruitment or credit scoring, must comply with certain restrictions before they are deployed. There are lighter rules for "limited risk" applications and no restrictions on minimal or no-risk applications.

For example, if a U.S. bank developed an AI-based credit scoring tool for the U.S. market but planned to roll it out in the EU or use it to evaluate the creditworthiness of people in the EU, the technology would fall under the scope of the EU AI Act.

In this scenario, the bank would need strong risk and quality management frameworks and ensure it is relying only on high-quality data. It would need to thoroughly document the system and how it functions, and demonstrate there is human intervention where needed, said Joshua Ashley Klayman, senior counsel at Linklaters in New York.

"It is something U.S. banks might not necessarily be thinking about," she said.

Using generative AI in the context of human resources — such as for recruitment and selection, to determine promotions, to calculate bonuses, or to evaluate employees — could also snag banks as it would fall under the "high risk" category.

"The use of AI in most workplace decision-making is perceived as posing a significant risk of harm," said Jennifer Granado Aranzana, employment managing associate at Linklaters in Brussels.

Those with a minimal retail or business banking footprint in Europe face a smaller risk, but banks of all sizes should pay attention.

"The EU is first to the finish line, but many of the core concepts around governing AI are probably going to evolve into best practices in other markets," said John Carey, managing director at Aarete, a global management and technology consulting firm.

The EU AI Act comes at the same time that other blueprints or bills are surfacing in the U.S. government or at the state level.

"The U.S. has always been striving for innovation, but there are pros and cons to that," said van Overstraeten. "The EU is always fostering the trust element. If you want to deploy new technology, the first thing you need is more boundaries."

He refers to the EU AI Act as "the tip of the iceberg." 

The legislation entered into force on Aug. 1 and is in the early phases of implementation. Rules around general-purpose AI models take effect on Aug. 2, 2025, and rules for certain high-risk AI systems will be enforced starting Aug. 2, 2026; these are the two elements most likely to be relevant to banks.

Banks can start preparing by mapping the AI systems they use and the risk categories into which they fall, and by training employees, including HR staff, to spot uses of AI that could run afoul of the act.

"Regardless of whether or not a bank falls into the EU AI Act, we recommend they build an AI governance framework and AI inventory, and report to the board and senior management on AI risks within the bank," said Rani Bhuva, a principal at EY.
