Treasury warns banks deepfake fraud is on the rise


A bureau of the Department of Treasury is warning that deepfakes are playing a larger role in fraud that targets banks and credit unions.

The Financial Crimes Enforcement Network issued an alert designed to help financial institutions identify fraud schemes associated with deepfake media, joining a chorus of government agencies warning against the threats presented by images and audio that have been skillfully altered to appear legitimate.

Fincen's alert, issued Wednesday, defined deepfakes to include AI-generated text as well as manipulated media, focusing on how fraudsters might mislead a bank about their identity. The techniques it described include manipulated photos of identity documents and AI-generated text in customer profiles or in responses to prompts.

Alerts such as the one Fincen released are typically a prelude to reports documenting the extent of the impact the subject (in this case, deepfakes) has on financial institutions, helping to quantify various risks. In September, Fincen released such an analysis, following an alert it issued last year, detailing exactly how criminals steal money from banks and customers using check fraud.

While no data exists to quantify the financial impact of deepfakes on U.S. banks and credit unions, anecdotal evidence and warnings from law enforcement suggest they pose a major threat. Last year, the FBI, the National Security Agency and the Cybersecurity and Infrastructure Security Agency released a joint report documenting the impacts deepfakes can have on various organizations.

Bad actors are actively exploiting deepfake technology to defraud U.S. businesses and consumers, according to Fincen Director Andrea Gacki.

"Vigilance by financial institutions to the use of deepfakes, and reporting of related suspicious activity, will help safeguard the U.S. financial system and protect innocent Americans from the abuse of these tools," Gacki said in the alert.

While deepfakes have existed for years, they have become more notable recently thanks to advances in AI technology that make them more convincing and products that make the technology more widely available, according to Rijul Gupta, CEO and co-founder of AI communications company DeepMedia.

"Deepfakes have gotten more sophisticated — not to mention easier to create — over the years," Gupta said. "Today, a hacker can manipulate a person's voice using just seconds of audio."

Indeed, deepfake audio has recently become a special concern for banks, especially those that use voiceprinting technology to authenticate customers by their voices. Even at banks that do not authenticate customers with voiceprints, companies that specialize in deepfake detection have observed AI-generated audio being used against banks' call centers to try to trick employees.

In its alert, Fincen warned about audio deepfakes but also highlighted that fraudsters can manipulate and synthesize images and even live video of a person's face or identity documents. Banks sometimes use these live verification checks to authenticate the user.

While the methods for generating these deepfakes are often advanced, they can leave artifacts banks and credit unions can use to detect the use of generative AI. For example, a customer's photo might have internal inconsistencies (visual tells that the image is altered), be inconsistent with other identifying information (such as the customer's date of birth), or be inconsistent with other identity documents belonging to the customer.
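The cross-checks Fincen describes can be sketched as simple consistency rules. The following is a minimal, hypothetical illustration (the field names and dictionary schema are assumptions, not part of any Fincen guidance or real banking system) of comparing an identity document against other information an institution already holds:

```python
def check_id_consistency(profile: dict, id_doc: dict) -> list[str]:
    """Return red flags where a submitted identity document disagrees
    with other identifying information on file (hypothetical fields)."""
    flags = []
    # The date of birth on the document should match the customer profile.
    if profile.get("date_of_birth") != id_doc.get("date_of_birth"):
        flags.append("DOB mismatch between profile and ID document")
    # The name on the document should match the profile.
    if profile.get("name") != id_doc.get("name"):
        flags.append("Name mismatch between profile and ID document")
    return flags

# Example: a profile whose date of birth disagrees with the document.
profile = {"name": "A. Example", "date_of_birth": "1990-01-01"}
id_doc = {"name": "A. Example", "date_of_birth": "1992-05-07"}
print(check_id_consistency(profile, id_doc))
```

Real verification systems layer image forensics and document authentication on top of such field comparisons; the sketch only shows the "inconsistent with other identifying information" case from the alert.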

Fincen highlighted other red flags that a fraudster is using deepfake technology. For example, the "customer" might use a third-party webcam plug-in during a live verification check (indicating they may be using software to create the live images rather than an actual video feed), or the user attempts to change communication methods during a live verification due to supposed glitches. Reverse-image lookup might match an online gallery of generative AI-produced faces.

Red flags that generally apply to fraud schemes also apply to deepfake schemes. For example, if the customer's geographic or device data is inconsistent with their identity documents, or a newly opened account or an account with little prior transaction history suddenly sees high payment volumes to potentially risky payees, such as gambling websites or digital asset exchanges.

Whenever a financial institution files a SAR involving deepfakes, Fincen requests that it include the key term "FIN-2024-DEEPFAKEFRAUD" in SAR field 2 ("Filing Institution Note to Fincen") to ensure the report is included in the data analysis the bureau is expected to release.
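The key term "FIN-2024-DEEPFAKEFRAUD" comes directly from the alert; how a filing system writes it into field 2 will vary. A minimal hypothetical helper (the function and its behavior are an illustration, not part of any real SAR filing software) might look like:

```python
# Key term requested in the Fincen alert for deepfake-related SARs.
DEEPFAKE_KEY_TERM = "FIN-2024-DEEPFAKEFRAUD"

def build_sar_field2_note(existing_note: str) -> str:
    """Prepend the Fincen key term to a SAR field 2 note if it is not
    already present (hypothetical helper; filing systems vary)."""
    if DEEPFAKE_KEY_TERM in existing_note:
        return existing_note
    return f"{DEEPFAKE_KEY_TERM} {existing_note}".strip()

print(build_sar_field2_note("Suspected deepfake during account opening"))
```

Including the exact key term lets Fincen's analysts query the SAR database for deepfake-related filings when compiling the follow-up analysis.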
