The European Union has proposed a rule that would impose data-quality, testing and oversight requirements on artificial intelligence systems deemed to be high-risk. Companies could be fined as much as 6% of their annual worldwide revenue for violations.
The plan is of interest to the many banks developing AI technology. It would apply directly to banks that do business in the EU, and similar rules may be adopted in the United States — perhaps in the same way that Europe’s data-privacy regulation has been mimicked by several state governments.
"Because they are the first hard-law regulations, the European Commission proposal is likely to become a model for the rest of the world," said Will Uppington, co-founder and CEO of the AI software company TruEra.
“The EU’s announcement that it will regulate AI use across sectors reflects just how important it is that we understand AI’s potential risks and benefits,” said Melissa Koide, CEO of FinRegLab, a nonprofit research group in Washington that tests new technologies and data to inform public policy. “Research suggests artificial intelligence and machine learning can create measurable benefits including financial inclusion, yet we also know it can bake in and even exacerbate historical bias and exclusion.”
The proposed EU rules are based on four levels of AI risk. The highest level, “unacceptable risk,” applies to a limited set of particularly harmful uses of AI that violate fundamental human rights, such as social scoring by governments, exploitation of children, and biometric-identification systems in public spaces used for law enforcement purposes. Those would all be banned.
The next level, high-risk, covers uses of AI that could affect people's safety or their fundamental rights. For such systems, the proposal would regulate the quality of data sets used; technical documentation and recordkeeping; transparency and the provision of information to users; human oversight; and robustness, accuracy and cybersecurity. In the event of a breach, national authorities would have access to the information needed to investigate whether the use of the AI system complied with the law.
The EU provided a long list of such high-risk AI applications, two of which seem relevant to banks.
One is “systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score.”
"It seems that AI used in bank credit-decisioning systems would be impacted," Uppington said. "Banks will not be starting from scratch in complying with these new regulations; however, given that they already have model risk management and compliance procedures in place for credit-decisioning."
The other high-risk form of AI banks use is biometric authentication.
“Under the new rules, all AI systems intended to be used for remote biometric identification of persons will be considered high-risk and subject to an ex ante, third-party conformity assessment including documentation and human oversight requirements by design,” the EU wrote.
These requirements would apply to banks' use of biometric recognition for authentication and know-your-customer applications, Uppington said.
"Banks have some capabilities around quality management already through their model risk management and compliance frameworks, but they will need to apply these to their biometric systems and understand what incremental capabilities they will need to put in place to address these new regulations," he said.
The EU considers some types of AI software to be of limited risk. One example is chatbots. These would be subject to transparency rules “intended to allow those interacting with the content to make informed decisions. The user could then decide to continue or step back from using the application.” The rules suggest that if a company interacts with customers via an AI-powered virtual assistant, it needs to make clear to people that they are interacting with software rather than a human.
The fourth category, minimal risk, would call for no restrictions beyond existing legislation. The vast majority of AI systems currently used in the EU fall into this category.
"I think the new rules are sensible," Uppington said. Though he acknowledged that the proposal could temporarily slow the adoption of AI that has been designated as high-risk, he also said the ethical and legal framework the European Commission is putting in place "should help society develop greater trust in AI, which should over time speed up and deepen the adoption of AI while minimizing potential friction around its adoption."
Some people might question why certain uses of AI have not been designated as high-risk, such as self-driving cars and insurance, Uppington said.
"The EC has thought of this, as the legislation puts in place a mechanism by which new use cases of AI could be added to the high-risk designation over time," he said.