Financial services companies, particularly banks, are racing to harness the potential of artificial intelligence. Lenders are deploying algorithms that analyze consumer data to score credit and set appropriate loan amounts, while AI tools that examine customers' transaction habits are sharpening fraud alerts and reducing money-laundering risk.
But how might regulators react?
Given AI's potential, the industry can expect regulators to ask a growing number of questions about the technology: how institutions use it, how well it works, how stable and secure it is, the quality of the data it consumes and generates, and how it is governed.
The industry should anticipate these concerns so it can help shape regulations and reviews that add value, minimize burden and best satisfy regulatory needs. Getting ahead of the rules also lets institutions avoid criticism later, when adjusting is harder.
So far, government rules and regulations do not specifically address the use of AI, since the technology is new and still evolving. But regulators have latitude to determine whether certain practices are unsafe or unsound, a standard that could reach misuses of AI.
For now, banks can expect regulators to ask a number of questions about AI's application in banking: whether its collection and use of data comply with specific rules, including privacy standards, for instance. Regulators may also want to know whether certain AI applications produce discriminatory outcomes and what banks are doing to prevent them. And they will be interested in how heavily an institution relies on AI, including how well management and the board understand how the technology is being used.
Over time, regulators will want to understand banks' AI applications and be able to examine them for themselves. Eventually, government interest in how quantitative models are designed, tested and validated will extend to AI, and banks will need to demonstrate that the technology isn't harming consumers or creating undue risk to the financial system.
Bank executives and boards can do a number of things now, starting with ensuring strong governance around AI's use.
Management should ensure the bank's policies, procedures and practices don't create compliance or risk problems. There must be an audit trail around AI's use and its decisions, one that can be thoroughly explained to regulators and monitored to confirm the technology is producing understandable outcomes and isn't being relied on where there is insufficient reason or experience to do so.
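To make the idea concrete, here is a minimal sketch in Python of what one entry in such an audit trail might capture. The record fields and the log_decision helper are illustrative assumptions, not any particular vendor's or regulator's format:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    """One auditable entry for a single automated decision (hypothetical schema)."""
    model_name: str      # which model produced the decision
    model_version: str   # exact version, so the result can be reproduced later
    inputs: dict         # the data the model actually saw
    output: str          # the decision, e.g. "approve" or "decline"
    rationale: str       # human-readable explanation of the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: AIDecisionRecord, path: str = "ai_audit.log") -> None:
    """Append the record as one JSON line; an examiner can review the file entry by entry."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a credit-scoring decision logged with its inputs and explanation.
log_decision(AIDecisionRecord(
    model_name="credit_score",
    model_version="2.3.1",
    inputs={"income": 85000, "debt_to_income": 0.28},
    output="approve",
    rationale="Score 712 exceeds approval threshold of 680.",
))
```

The point of a record like this is that every automated outcome remains explainable after the fact: who can say which model version ran, on what data, and why it decided as it did.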
Management should also carefully scrutinize AI vendors. Fly-by-night AI companies may not have given government expectations as much attention as more established technology providers. It is most cost-effective to select technology and other innovative approaches that take regulatory concerns into account on the front end. Patching and fixing after a system is partially or fully installed, even where possible, can be highly disruptive and costly.
AI is likely to become an increasingly important tool in finance. But just like other powerful technologies, it has the potential to create regulatory problems if it isn’t used carefully.