Fraudsters are using generative AI to accelerate certain brute-force card-testing attacks, and Visa hopes to turn the tables on them with generative AI of its own.
After more than a year of experimenting with gen AI, the San Francisco-based card network has rolled out a new product trained on more than 15 billion annual VisaNet transactions. The technology will warn issuers in real time when a payment account number appears to have been compromised by an enumeration attack.
"Until now, stopping an enumeration attack has been like finding a needle in a haystack, but with gen AI we're able to spot it by rapidly combing through trillions of rows of data," said Paul Fabara, Visa's chief risk and client services officer.
Fraudsters perpetrate enumeration attacks by using scripts to run trial-and-error authorization attempts on millions of potential payment account numbers at scale until they randomly hit one with the right combination of authorization credentials. Increasingly, they're using gen AI to do this even faster, according to Fabara.
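One reason these attacks require such volume, and so become detectable, is that valid card numbers must pass the public Luhn checksum, and even a Luhn-valid number still needs a matching expiry and CVV. The sketch below illustrates only that public checksum; it is not Visa's detection method or any fraudster's actual tooling.

```python
def luhn_valid(pan: str) -> bool:
    """Return True if a digit string passes the public Luhn checksum."""
    total = 0
    # Double every second digit from the right; subtract 9 when doubling
    # overflows a single digit, then sum everything modulo 10.
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# In any run of consecutive 16-digit numbers, exactly 1 in 10 passes
# the checksum -- and passing says nothing about expiry or CVV, hence
# the millions of scripted authorization attempts.
sample = [f"{n:016d}" for n in range(4000000000000000, 4000000000001000)]
hits = sum(luhn_valid(p) for p in sample)
print(hits)  # 100
```

The 1-in-10 hit rate is why enumeration shows up as a burst of declined authorizations long before a fraudster lands a usable account.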
In recent years, fraudsters have intensified enumeration attacks by using AI to crack a targeted bank's sequential card-issuing logic and guess at customers' card numbers and credentials, sometimes finding combinations merchants will validate online, he said.
Enumeration attacks account for about $1.1 billion in losses globally each year and represent about 10% of global fraud, according to Visa data.
To reduce issuers' exposure to enumeration attacks, Visa used gen AI to update Visa Account Attack Intelligence (VAAI), a machine learning-powered tool introduced in 2019. The new product evaluates up to 182 risk attributes in a millisecond, generating a two-digit score in real time that predicts the likelihood of an enumeration attack.
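In outline, real-time scoring of this kind combines per-transaction risk attributes into a single bounded score. The toy below is purely illustrative: the feature names, weights, and thresholds are invented for the example, and Visa's actual 182 attributes and model are not public.

```python
from dataclasses import dataclass

@dataclass
class AuthAttempt:
    """A few hypothetical risk attributes for one authorization attempt."""
    declines_last_minute: int   # velocity of failed auths from this source
    sequential_pan_gap: int     # distance from previously attempted card numbers
    merchant_first_seen: bool   # card never used at this merchant before

def enumeration_score(a: AuthAttempt) -> int:
    """Fold illustrative attributes into a two-digit (0-99) risk score."""
    score = 0
    score += min(a.declines_last_minute * 4, 60)      # burst of declines
    score += 25 if a.sequential_pan_gap <= 2 else 0   # near-sequential numbers
    score += 10 if a.merchant_first_seen else 0
    return min(score, 99)

# A burst of declines against near-sequential card numbers scores high:
risky = AuthAttempt(declines_last_minute=20, sequential_pan_gap=1,
                    merchant_first_seen=True)
print(enumeration_score(risky))  # 95
```

Because each attribute lookup and the weighted sum are constant-time, a scorer of this shape can run inside the millisecond budget the article describes; the hard part, which the toy omits, is learning the weights from billions of transactions.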
"Speed is critical, because the minute a fraudster hits on a valid account number in an enumeration attack, the clock is ticking as fraudsters immediately start trying to monetize the account data," Fabara said.
The new product, which has six times the fraud-detection features of previous VAAI models, also helps reduce the rate of false positives by 85%, so issuers are less likely to block legitimate transactions that may have looked like they were subject to enumeration attacks, he said.
Visa used "noisy data" to train the model, but Fabara declined to describe the precise large language models Visa used to develop the new VAAI Score, which is rolling out to U.S. issuers this year.
"It's a Visa-built tool, a generative model that uses deep learning techniques to generate new data that resembles data we know about enumerative attack patterns, allowing the tool to score transactions in real time for the likelihood of enumeration with improved performance," he said.
Companies have used various forms of AI to detect fraud for almost 30 years, Fabara noted, and the industry is in the early stages of adapting gen AI to attack new fraud vectors.
"AI has been an integral part of our fraud detection and prevention strategies for the past three decades, but generative AI unlocks the next era in payments security," Fabara said.
Sift, a San Francisco-based fraud-detection firm, uses advanced machine learning to spot and block suspicious transactions.
According to FIBR, Sift's