BankThink

Banks have an important opportunity to lead on AI safety

By repealing the Biden administration's artificial intelligence safety guidelines, President Trump has created a chance for the banking industry to demonstrate that it can be a leader in protecting consumers, writes Deon Crasto.
Fotolia

The Trump administration's repeal of the Biden administration's AI safety order might sound like a green light for unregulated innovation. But having fewer government rules doesn't mean banks are free from responsibility. In fact, it may be a rare moment for the industry to set its own high standards for using artificial intelligence — especially now that big decisions about fairness and transparency are banks' to make.

Financial services already rely on AI for crucial tasks, from approving loans to detecting fraud. A well-tuned algorithm can slash operating costs and boost customer satisfaction. Yet a risky model, left unchecked, can discriminate against certain groups or make baffling decisions that erode trust. Biden's order, while not perfect, offered a basic framework around model transparency and safety checks. Removing that framework places the job of self-regulation back in the hands of bankers and compliance teams.

Why might this be a good thing? First, banks get to shape guidelines that move at the same pace as their evolving AI projects. Federal rules often lag behind tech changes, forcing bankers to navigate outdated mandates or wait for rule updates. If a bank runs AI for fraud detection, for example, being stuck with slow-moving, one-size-fits-all guidance can hold it back. Now, the bank can tailor more flexible policies that evolve quickly with each new data source or model upgrade.

Second, the industry setting its own bar for AI safety could boost public trust. The public might assume banks would jump at a chance to cut corners, but modern banking thrives on reputation. A single AI fiasco — one in which a large group of customers is wrongly denied accounts or slapped with unfair fees — could spark a national uproar. By publicizing that it tests AI models for fairness, bias and accuracy, a bank can position itself as a leader in consumer protection. This matters more than ever, since customers increasingly expect personalized services (like dynamic loan offers) without surrendering their data privacy or facing arbitrary decisions.


Third, self-regulation doesn't have to be lonely or chaotic. Banks can band together — perhaps under an industry association — to create shared best practices for AI in financial services. Think about how payment networks or anti-money-laundering initiatives often involve cooperation across the sector. If multiple banks adopt similar principles around interpretability and data governance, it reassures both regulators and the public that everyone is taking AI risk seriously. In turn, that unity might reduce the chance of future heavy-handed mandates, because lawmakers could see a functioning, transparent ecosystem of bank-led AI oversight.

Even without a federal safety order, caution still rules the day. Risk committees should ask tough questions: Are we sure our AI models aren't penalizing certain neighborhoods or age groups? Do we have backup plans if a model starts drifting and generating strange results? Who owns the ultimate sign-off for putting a new AI product into production? Basic as these questions sound, many institutions run AI pilots without a formal checklist. That's dangerous — at some point, a glitch or bias may slip through and harm real people. (In a past piece for American Banker, I argued that training data itself can sow biases in otherwise well-built AI models.)

None of this suggests banks should celebrate the repeal because "now we can do anything we want." On the contrary, responsible institutions might hold themselves to stricter standards than the old rules required. Stepping up now with strong self-regulation can show that the financial services industry leads, rather than waits for government directives. That sort of leadership becomes a competitive advantage.

Yes, a bank setting its own rules demands more upfront work. It will require cross-functional teams — compliance folks, data scientists, legal counsel — drafting internal policies and auditing models. But done well, it prevents future headaches and helps deliver AI-driven services that customers genuinely trust. Perhaps a bank will roll out user-friendly model explanations or easy dispute processes when the system makes a questionable call. These steps go beyond compliance; they're part of building lasting relationships in a world where AI is growing by the day.

So, while the withdrawal of Biden's AI safety guidelines may make some bankers feel like the floor just vanished under their feet, it can be an invitation. Banks can prove they can responsibly push AI forward, sustain consumer confidence and maintain fair processes. Rather than letting "deregulation" define this era, banks can choose to define it themselves by adopting robust AI ethics, championing industrywide best practices and reminding everyone why trust, not short-term gain, remains the bedrock of banking.
