![artificial intelligence 13.jpg](https://arizent.brightspotcdn.com/dims4/default/13cdacb/2147483647/strip/true/crop/600x400+0+0/resize/740x493!/quality/90/?url=https%3A%2F%2Fsource-media-brightspot.s3.us-east-1.amazonaws.com%2Fc7%2F9d%2F066dc8904291a0178c03ebbef606%2Fartificial-intelligence-13.jpg)
The Trump administration's withdrawal of Biden's AI safety order leaves banks without a federal framework for governing their models — and, perhaps, with an opportunity.
Financial services already rely on AI for crucial tasks, from approving loans to detecting fraud. A well-tuned algorithm can slash operating costs and boost customer satisfaction. Yet a risky model, left unchecked, can discriminate against certain groups or make baffling decisions that erode trust. Biden's order, while not perfect, offered a basic framework around model transparency and safety checks. Removing that framework places the job of self-regulation back in the hands of bankers and compliance teams.
Why might this be a good thing? First, banks get to shape guidelines that move at the same pace as the technology itself. Federal rulemaking can take years; internal standards can be revised as quickly as the models they govern change.
Second, the industry setting its own bar for AI safety could boost public trust. The public might think banks would jump at a chance to cut corners, but modern banking thrives on reputation. A single AI fiasco — where a large group of customers is wrongly denied accounts or slapped with unfair fees — could spark a national uproar. By publicizing that it tests AI models for fairness, bias and accuracy, a bank can position itself as a leader in consumer protection. This matters more than ever, since customers increasingly expect personalized services (like dynamic loan offers) without surrendering their data privacy or dealing with arbitrary decisions.
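Publicizing fairness testing presumes the tests exist. One common screen is the "four-fifths rule": flag a model if any group's approval rate falls below 80% of the highest group's rate. The sketch below is purely illustrative — the function names, the sample data and the use of the 80% threshold as a hard gate are assumptions for demonstration, not any bank's actual audit procedure:

```python
# Illustrative four-fifths (80%) rule screen for disparate impact.
# All names and data are hypothetical, not from a real pipeline.

def approval_rates(decisions):
    """decisions: list of (group, approved: bool) tuples."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        if ok:
            approved[group] = approved.get(group, 0) + 1
    return {g: approved.get(g, 0) / n for g, n in totals.items()}

def passes_four_fifths(decisions):
    """Flag the model if any group's approval rate falls below
    80% of the highest group's approval rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return all(r >= 0.8 * best for r in rates.values())

# Hypothetical audit sample: group A approved 80%, group B 55%
sample = ([("A", True)] * 80 + [("A", False)] * 20 +
          [("B", True)] * 55 + [("B", False)] * 45)
print(passes_four_fifths(sample))  # 0.55 < 0.8 * 0.80, so the screen fails
```

A real review would go far beyond a single ratio — intersectional groups, confidence intervals, business-necessity analysis — but even a check this simple forces the question onto the record before launch.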
Third, self-regulation doesn't have to be lonely or chaotic. Banks can band together — perhaps under an industry association — to create shared best practices for AI in financial services. Think about how payment networks or anti-money-laundering initiatives often involve cooperation across the sector. If multiple banks adopt similar principles around interpretability and data governance, it reassures both regulators and the public that everyone is taking AI risk seriously. In turn, that unity might reduce the chance of future heavy-handed mandates, because lawmakers could see a functioning, transparent ecosystem of bank-led AI oversight.
Even without a federal safety order, caution still rules the day. Risk committees should ask tough questions: Are we sure our AI models aren't penalizing certain neighborhoods or age groups? Do we have backup plans if a model starts drifting and generating strange results? Who owns the ultimate sign-off for putting a new AI product into production? Basic as these questions sound, many institutions run AI pilots without a formal checklist. That's dangerous: at some point, a glitch or bias may slip through and harm real people.
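The drift question, at least, lends itself to a concrete check. A minimal sketch, assuming a scored lending model: the population stability index (PSI) compares the score distribution a model was validated on against what it produces in production. The bin edges, the sample scores and the 0.2 alert level below are illustrative conventions, not a regulatory standard:

```python
# Hypothetical drift monitor using the population stability index (PSI).
# Bin edges, sample data and the 0.2 threshold are illustrative only.

import math

def psi(expected, actual, edges):
    """PSI between two score samples over fixed score bins."""
    def shares(scores):
        counts = [0] * (len(edges) + 1)
        for s in scores:
            i = sum(s > e for e in edges)  # index of the bin s falls into
            counts[i] += 1
        n = len(scores)
        # small floor avoids log(0) when a bin empties out
        return [max(c / n, 1e-6) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # validation-time scores
live     = [0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 0.8, 0.9]  # production scores
drift = psi(baseline, live, edges=[0.25, 0.5, 0.75])
print(f"PSI = {drift:.2f}")  # above 0.2 is a conventional "investigate" trigger
```

Wiring a check like this into a scheduled job, with a named owner for the alert, is exactly the kind of formal checklist item many pilots skip.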
None of this suggests banks should celebrate the repeal because "now we can do anything we want." On the contrary, responsible institutions might hold themselves to stricter standards than the old rules required. Stepping up now with strong self-regulation can show that the financial services industry leads, rather than waits for government directives. That sort of leadership becomes a competitive advantage.
Yes, a bank setting its own rules demands more upfront work. It will require cross-functional teams — compliance folks, data scientists, legal counsel — drafting internal policies and auditing models. But done well, it prevents future headaches and helps deliver AI-driven services that customers genuinely trust. Perhaps a bank will roll out user-friendly model explanations or easy dispute processes when the system makes a questionable call. These steps go beyond compliance; they're part of building lasting relationships in a world where AI's role in banking grows by the day.
So, while the withdrawal of Biden's AI safety guidelines may make some bankers feel like the floor just vanished under their feet, it can be an invitation. Banks can prove they can responsibly push AI forward, sustain consumer confidence and maintain fair processes. Rather than letting "deregulation" define this era, banks can choose to define it themselves by adopting robust AI ethics, championing industrywide best practices and reminding everyone why trust, not short-term gain, remains the bedrock of banking.