The number of bots attempting to take over consumer financial accounts increased 202% from Jan. 1 to June 30, while bots' attempts to establish fake bank accounts also rose during the same period, according to fraud-prevention firm Arkose Labs.
Arkose didn't disclose exact figures for proprietary reasons, but it says it analyzed "tens of billions" of sessions of traffic on its network during the first six months of the year.
So-called intelligent bots employ machine learning and AI to mimic human behavior, making mass purchases of items like tickets or gaming discount offers, and wreaking havoc on genuine accounts. Credential stuffing is a common tactic in which bad actors use stolen usernames and passwords to access accounts, exploiting the fact that many people reuse credentials across multiple platforms.
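To make the mechanics concrete, here is a minimal sketch of one signal defenders look for: a stuffing bot cycles through many distinct usernames from a single source in a short window, while a human typically retries one account. The detector class, threshold and window below are illustrative assumptions, not Arkose Labs' actual logic.

    # Illustrative sketch: flag possible credential stuffing by counting how
    # many distinct usernames a single source IP tries in a short window.
    # The threshold and window are invented for this example.
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60           # look-back window per source IP
    DISTINCT_USER_THRESHOLD = 20  # many different usernames from one IP is suspicious

    class StuffingDetector:
        def __init__(self):
            self.attempts = defaultdict(deque)  # ip -> deque of (timestamp, username)

        def observe(self, ip, username, ts):
            """Record a login attempt; return True if the IP looks like a stuffing bot."""
            q = self.attempts[ip]
            q.append((ts, username))
            while q and ts - q[0][0] > WINDOW_SECONDS:
                q.popleft()  # drop attempts that fell out of the window
            distinct_users = {u for _, u in q}
            return len(distinct_users) >= DISTINCT_USER_THRESHOLD

    detector = StuffingDetector()
    # A bot working through a stolen credential list hits the threshold quickly.
    for i in range(25):
        flagged = detector.observe("203.0.113.9", f"user{i}@example.com", float(i))
    print("flagged:", flagged)  # True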
With card testing, fraudsters methodically try stolen credit card details on thousands of different merchant websites, a process greatly accelerated by machine learning and AI.
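A rough illustration of the telltale pattern: card-testing bots generate bursts of small authorization attempts with an unusually high decline rate from one source, since most numbers on a stolen list are dead. The thresholds and the looks_like_card_testing helper below are hypothetical, not any vendor's actual rule.

    # Illustrative sketch: flag card testing by watching for bursts of small
    # authorization attempts with a high decline rate from a single source.
    # All thresholds are invented for this example.
    SMALL_AMOUNT = 5.00      # testers probe with tiny charges
    MIN_ATTEMPTS = 10        # need enough attempts to judge the pattern
    MAX_DECLINE_RATE = 0.5   # above this, the source looks like a tester

    def looks_like_card_testing(attempts):
        """attempts: list of (amount, was_declined) tuples from one source."""
        small = [(amt, declined) for amt, declined in attempts if amt <= SMALL_AMOUNT]
        if len(small) < MIN_ATTEMPTS:
            return False
        decline_rate = sum(declined for _, declined in small) / len(small)
        return decline_rate > MAX_DECLINE_RATE

    # A bot cycling through stolen card numbers is declined far more often
    # than a legitimate shopper would be.
    probe_traffic = [(1.00, True)] * 9 + [(1.00, False)]
    print(looks_like_card_testing(probe_traffic))  # True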
To make these scams more efficient, fraudsters are using generative AI models to fine-tune attacks based on data scraped from websites, and Arkose Labs' research shows that criminals are increasingly accessing these tools via application programming interfaces on publicly accessible websites.
"What we're seeing now is cybercrime-as-a-service, where criminals can go online and shop for all the tools they need to bypass anti-bot security walls, including getting past the CAPTCHA programs companies put in place to thwart bots, and it all looks very sleek and professional," Gosschalk said, noting that these sites enable individuals with little expertise to cause widespread damage.
"It's becoming very difficult to spot some of these AI-enhanced phishing scams because they look exactly like legitimate emails," Gosschalk said, noting that Arkose Labs is also seeing an increase in romance scams that use data scraped at scale from websites to more effectively target victims.
Another wave of threats introduced this year uses phishing scams to trick consumers into confirming multifactor authentication prompts that attackers have triggered, he said.
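One widely deployed countermeasure to this kind of prompt abuse is "number matching": the login screen shows a short code that the user must type into the authenticator app, so blindly approving a push no longer completes a sign-in the user didn't start. The sketch below is a generic, hypothetical illustration of that flow, not any specific vendor's implementation.

    # Illustrative sketch of number-matching MFA, a common defense against
    # tricking users into blindly approving push prompts. The function names
    # and flow are hypothetical.
    import secrets

    def start_login_challenge():
        """Server generates a short code and displays it on the login screen."""
        return f"{secrets.randbelow(100):02d}"  # e.g. "47"

    def approve_push(code_shown_at_login, code_typed_in_app):
        """The authenticator app asks the user to type the code shown at login.
        An attacker's session displays a code the victim never sees, so a
        blind 'approve' no longer completes the attacker's sign-in."""
        return secrets.compare_digest(code_shown_at_login, code_typed_in_app)

    challenge = start_login_challenge()
    print(approve_push(challenge, challenge))  # legitimate user: True
    print(approve_push(challenge, "xx"))       # blind approval of a phished prompt: False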
Arkose Labs, founded in 2016, analyzes the specific methods criminals use to infiltrate consumer accounts and works to blunt the fraud-perpetration tools they buy on the dark web and elsewhere, he said.
"We research the entry points where criminals are attacking banks and other firms and increase the obstacles so it's not worth it for them to continue," Gosschalk said.
Arkose Labs' work is aided by the research it gathers from some of its core customers, including OpenAI. The firm got its start by working with Roblox, Dropbox and GitHub to protect those sites from criminals seeking broad tranches of data they could use elsewhere to commit crimes.
"OpenAI uses Arkose Labs to make sure hackers aren't using ChatGPT APIs," Gosschalk said, noting that organizations with massive databases and large banks are more vulnerable to the newest attacks than smaller institutions.
"Cybercriminals are all about operating at scale, so they go after bigger banks with huge customer lists, to increase the odds of one of their phishing campaigns reaching someone who thinks a bogus message they've received from the bank is real," Gosschalk said.
Three-quarters of all web traffic on travel and hospitality sites is now coming from bad bots, while bots are driving 65% of retail web activity and 45% of all financial services website traffic, the firm said.
Web-scraping attacks showed the highest growth rate of all bot attacks this year, rising 432% between the beginning of 2023 and the end of the second quarter, followed by a 135% increase in fake accounts, a 114% increase in account takeover and a 30% increase in payment attacks and card-testing scams, according to Arkose Labs.
Arkose Labs Global Intelligence Network gleaned this data by analyzing cybercrime trends among some of the largest global corporations and U.S. banks during the first three quarters of the year.
Arkose analyzed billions of sessions on its network from January through September.
Providers of financial services security technology are scrambling to find ways to detect and block fraud that's been accelerated and enhanced by generative AI, said Jim Mortensen, a strategic advisor for fraud and anti-money-laundering activities at Datos Insights.
"What's different with generative AI is the availability of new capabilities and efficacy, along with the potential for greater scale, as some scams and account-takeover fraud can only be automated to a point with dated technology, and gen AI changes that," he said.
FraudGPT and WormGPT are relatively new fraud tools built on generative AI, and generative AI is proving to be very effective in helping fraudsters defeat CAPTCHA puzzles to perpetrate credential stuffing and other automated attacks, Mortensen said.
"Any time a new technology is introduced, industry tries to anticipate how it will be used by fraudsters and how we can
Arkose Labs said that the rapid evolution of generative AI-enhanced fraud makes it difficult to predict how effective financial institutions' existing security defenses will be.
"The sophistication of attacks we've seen so far this year is orders of magnitude greater than what we saw even a couple of years ago — it's creating new problems and making old problems like basic card-testing new again," Gosschalk said.