Satish Lalchand, United States
Val Srinivas, United States
Brendan Maggiore, United States
Joshua Henderson, United States
In January 2024, an employee at a Hong Kong-based firm sent US$25 million to fraudsters after being instructed to do so by her chief financial officer on a video call that also included other colleagues. It turned out, however, that she wasn't on a call with any of these people: Fraudsters created a deepfake that replicated their likenesses to trick her into sending the money.
Incidents like this will likely proliferate in the years ahead as bad actors find and deploy increasingly sophisticated, yet affordable, generative AI to defraud banks and their customers.
How generative AI is making fraud a lot easier—and cheaper—to pull off
Generative AI offers seemingly endless potential to magnify both the nature and the scope of fraud against financial institutions and their customers; it's limited only by a criminal's imagination.
The astounding pace of innovation will challenge banks' efforts to stay ahead of fraudsters. Generative AI-enabled deepfakes, for instance, can incorporate "self-learning" mechanisms that continually test and refine their ability to fool computer-based detection systems.
Specifically, the ready availability of new generative AI tools puts deepfake videos, synthetic voices, and forged documents within cheap and easy reach of bad actors. An entire cottage industry already exists on the dark web, selling scamming software for as little as US$20 and up to thousands of dollars.
It is no wonder, then, that financial services firms are particularly concerned about generative AI fraud used to access client accounts. One report found that deepfake incidents in the fintech sector increased 700% in 2023.
Some fraud types may be more susceptible to generative AI amplification than others. Business email compromise, for example, is one of the most common types of fraud and can cause substantial monetary loss, according to data from the FBI's Internet Crime Complaint Center (IC3), which tracks 26 categories of fraud.
Banks have been at the forefront of using innovative technologies to fight fraud for decades. However, a US Treasury report found "existing risk management frameworks may not be adequate to cover emerging AI technologies."
How banks can prepare for a new era of fraud prevention
Banks should focus their efforts on fighting generative AI-enabled fraud to maintain a competitive edge. They should consider coupling modern technology with human intuition to determine how new tools can preempt attacks by fraudsters. There won't be one silver-bullet solution, so anti-fraud teams should continually accelerate their own learning to keep pace with fraudsters. Future-proofing banks against fraud will also require them to redesign their strategies, governance, and resources.
Given the pace of technological advancement, banks won't fight fraud alone; they will increasingly work with third parties that are developing anti-fraud tools. Since a threat to one company is a potential threat to all, bank leaders should develop strategies to collaborate both within and outside the banking industry to stay ahead of generative AI fraud. Banks should also work with knowledgeable and trustworthy third-party technology providers on strategy, establishing areas of responsibility that address each party's liability for fraud.
Customers, too, can serve as partners in helping prevent fraud losses. But customer relationships may be tested when determining whether a fraud loss is to be borne by customers or their financial institutions. Customers expect efficiency and security when using their money, and generative AI's deepfake technology could disrupt these two goals. Banks have an opportunity to educate consumers and build awareness about potential risks and how the bank is managing them. Building this level of awareness will likely require frequent communication touchpoints, such as push notifications on banking apps that warn customers of possible threats.
Regulators, alongside the banking industry, are focused on both the promise and the threats of generative AI. Banks should actively participate in the development of new industry standards. By involving compliance teams early in technology development, they can maintain a documented record of their processes and systems should regulators request it.
And finally, banks should invest in hiring new talent and training current employees to spot, stop, and report AI-assisted fraud. For many banks, these investments will be expensive and difficult, coming at a time when some bank leaders are prioritizing cost management. But to stay ahead of fraudsters, extensive training should be a priority. Banks can also develop new fraud detection software using internal engineering teams, third-party vendors, and contract employees, which can help foster a culture of continuous learning and adaptation.
Generative AI is expected to significantly raise the threat of fraud, which could cost banks and their customers as much as US$40 billion by 2027. Banks should step up their investments to create more agile fraud teams to help stop this growing threat.
About this prediction
Our prediction for generative AI adoption in fraud is based on historical trends and input from Deloitte professionals specializing in fraud and risk. We assigned a "generative AI fraud risk" score to each of the 26 types of fraud tracked by the FBI's IC3 report. We assigned expected growth rates for different fraud types until 2027 under different scenarios of generative AI adoption: conservative, base, and aggressive. Assumptions made in the forecast are informed by our understanding of the differences in various fraud types.
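The forecast mechanics described above amount to compounding each fraud type's losses at a scenario-specific growth rate through 2027. A minimal sketch of that calculation follows; note that all base-year figures and growth rates here are hypothetical placeholders for illustration, not the actual model inputs or risk scores used in the prediction.

```python
# Illustrative sketch of a scenario-based fraud-loss projection.
# The loss figures and growth rates are hypothetical placeholders.

BASE_YEAR_LOSSES = {  # assumed 2023 losses by fraud type, US$ billions
    "business email compromise": 2.9,
    "investment fraud": 4.6,
}

# Assumed annual growth rates under each generative AI adoption scenario
SCENARIO_GROWTH = {
    "conservative": 0.10,
    "base": 0.20,
    "aggressive": 0.32,
}

def project_losses(base_losses, growth_rate, start_year=2023, end_year=2027):
    """Compound each fraud type's base-year losses annually through end_year."""
    years = end_year - start_year
    return {
        fraud: round(loss * (1 + growth_rate) ** years, 2)
        for fraud, loss in base_losses.items()
    }

for scenario, rate in SCENARIO_GROWTH.items():
    print(scenario, project_losses(BASE_YEAR_LOSSES, rate))
```

Summing the projected values across fraud types under a given scenario would yield an aggregate loss estimate for 2027, which is the form the headline figure in the prediction takes.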
Endnotes
- Heather Chen and Kathleen Magramo, "Finance worker pays out $25 million after video call with deepfake 'chief financial officer'," CNN, February 4, 2024.
- Jon Bateman, Deepfakes and synthetic media in the financial system: Assessing threat scenarios, Carnegie Endowment for International Peace, July 8, 2020.
- Alakananda Mitra, Saraju P. Mohanty, and Elias Kougianos, "The world of generative AI: Deepfakes and large language models," Arxiv.org, February 8, 2024.
- Nabila Ahmed et al., "Deepfake imposter scams are driving a new wave of fraud," Bloomberg, August 21, 2023.
- Hannah Murphy, "Deepfakes make banks keep it real," Financial Times, September 20, 2023.
- Isabelle Bousquette, "Deepfakes are coming for the financial sector," Wall Street Journal, April 3, 2024.
- Huo Jingnan, "Using AI to detect AI-generated deepfakes can work for audio — but not always," NPR, April 5, 2024.
- Federal Bureau of Investigation, Internet crime report 2023, April 4, 2024.
- US Department of the Treasury, Managing artificial intelligence-specific cybersecurity risks in the financial services sector, March 2024.
- Edmund Lawler, "Banks face the twin-edged sword of generative AI," BAI, March 4, 2024.
- Penny Crosman, "JPMorgan Chase using advance AI to detect fraud," American Banker, July 3, 2023.
- Mastercard, "Mastercard supercharges consumer protection with gen AI," press release, February 1, 2024.
Acknowledgments
The authors would like to thank Andrew Myers, advisory manager at Deloitte & Touche LLP, for his contributions to this article.
Cover art by: Natalie Pfaff