As banks look for ways to save time and effort by deploying artificial intelligence software, one logical place to start is cybercrime and fraud investigations.
One reason is that the volume of cybersecurity threat information keeps growing while the supply of cybersecurity skills in the U.S. shrinks. According to the Ponemon Institute, organizations receive on average nearly 17,000 malware alerts a week, and the time spent responding to these alerts, many of them inaccurate or erroneous, costs $1.27 million annually. The group's research has also found that only 19% of security alerts are considered reliable, and just 4% are ever investigated.
According to IBM, 10,000 security research papers are published every year and more than 60,000 security blog posts appear each month, making it hard for small security teams to keep up.
Meanwhile, Cisco estimates there are more than 1 million unfilled security jobs worldwide.
Another motivator for using AI to assist human cybersecurity analysts is that cybercrime evolves constantly, and it can take humans time to recognize new strains of cyberattack. Software can spot new patterns in seconds.
"Using AI may help maintain the rapid response required to detect and react to the landscape of ever evolving cyber threats," the White House stated in a
IBM announced in May that it had started a yearlong program to train Watson, its "Jeopardy!"-winning AI software, on the language of cybersecurity, with plans to begin beta deployments later in the year. A spokesperson confirmed the company remains on schedule for the beta programs.
Big Blue says Watson for Cyber Security will find connections between data, emerging threats and remediation strategies. Watson will be fed the contents of IBM's X-Force research library, which includes details on 8 million spam and phishing attacks and 100,000 documented vulnerabilities.

Students at eight universities will provide additional information, such as academic papers on security, for Watson to ingest. Blogs, articles, videos, reports and alerts will also be included. Watson will use natural-language processing to understand the vague and imprecise language in these files. Then it will generate information about emerging threats and recommendations on how to stop them.

(Separately, IBM is buying Promontory Financial Group, with plans to have the consulting firm's regulatory experts teach Watson all about compliance.)
MIT's Computer Science and Artificial Intelligence Laboratory and the machine-learning startup PatternEx have built an artificial intelligence platform called AI2 that they say predicts cyberattacks better than existing systems by continuously incorporating input from human experts.
The software detects suspicious activity by clustering data into patterns using unsupervised machine learning. It then presents this activity to human analysts, who confirm which events are actual attacks, and incorporates that feedback into its models for the next set of data.
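To make that loop concrete, here is a minimal sketch of the detect-confirm-retrain cycle in Python. It is not MIT's or PatternEx's actual code: scikit-learn's IsolationForest stands in for AI2's unsupervised detector, and the data and "analyst" labels are invented for illustration.

```python
# A minimal sketch of an AI2-style feedback loop (illustrative, not AI2 itself).
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
events = rng.normal(size=(1000, 5))          # stand-in for log-derived features
events[:20] += 4                             # a small cluster of "attacks"

# Unsupervised pass: score every event for how anomalous it looks.
detector = IsolationForest(random_state=0).fit(events)
scores = -detector.score_samples(events)     # higher = more anomalous

# Present the top-scored events to a human analyst for confirmation.
top_k = np.argsort(scores)[-20:]
labels = np.zeros(len(events), dtype=int)
labels[top_k] = 1                            # pretend the analyst confirms them

# Supervised pass: fold the analyst's feedback into a model for the next batch.
clf = RandomForestClassifier(random_state=0).fit(events, labels)
next_batch = rng.normal(size=(100, 5))
print(clf.predict_proba(next_batch)[:5, 1])  # attack probability for new events
```

The key design point is that each round of analyst confirmations becomes training data, so the supervised model sharpens with every batch it reviews.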
Finding Crime on the Blockchain
The most difficult place to catch cybercrime might be blockchains, the public ledgers used for cryptocurrency payments. Bitcoin and the systems that compete with it are designed for anonymity, making them ideal for cybercriminals carrying out attacks like ransomware threats. Attribution is extremely difficult, investigators say, and few perpetrators have been caught.
"It's very easy to set up a cryptocurrency transaction, and it's easy for people to send hundreds or thousands of transactions back and forth to try to obfuscate where the funds are coming from or where they're going," said Fabio Federici, CEO and founder of a startup called Skry. "That requires new types of tools to allow you to investigate that and try to identify the flow of funds."
His firm has built artificial intelligence software for detecting suspicious behavior in blockchains. Because it relies on machine learning, the software doesn't have to be told what to look for: it can watch transactions on a blockchain and begin to identify odd patterns by itself.
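As a hypothetical illustration of that kind of unsupervised pattern-finding, the sketch below clusters addresses by a few behavioral features and flags whatever fits no cluster. The features and thresholds are assumptions for the example, not Skry's actual method.

```python
# Hypothetical sketch: cluster addresses by transaction behavior and treat
# points that fit no cluster as "odd patterns" worth a closer look.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Invented columns: transactions per day, mean amount, distinct counterparties.
normal = rng.normal(loc=[5, 0.2, 3], scale=[2, 0.1, 1], size=(500, 3))
mixer_like = rng.normal(loc=[300, 0.01, 250], scale=[30, 0.005, 20], size=(5, 3))
features = np.vstack([normal, mixer_like])

X = StandardScaler().fit_transform(features)
clustering = DBSCAN(eps=0.7, min_samples=10).fit(X)

# DBSCAN labels noise points -1; nothing told the model what "suspicious" means.
flagged = np.where(clustering.labels_ == -1)[0]
print(f"{len(flagged)} addresses flagged for review:", flagged[:10])
```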
Law enforcement agencies use the software to spot patterns of bad behavior, then to connect the digital footprints of anonymous cryptocurrency users with real-life people or entities. Bitcoin exchanges use it to flag anomalous behavior they may need to keep an eye on. Banks could use it for the same purpose, especially institutions whose customers include cryptocurrency exchanges or bitcoin wallet providers.
One thing the software can do is follow the flow of money to its source.
"Startups that accept bitcoin, like exchanges or wallet providers, or financial institutions that have those types of companies as clients, might want to do due diligence on the source of the funds," Federici said.
The software can also monitor an entity whose suspicious activity might have to be reported to regulators or law enforcement agencies. For clients like bitcoin exchanges, Skry doesn't try to identify a crime, per se. It finds behavior patterns that are not normal and might be worth a closer look.
To help law enforcement, the software goes a little further. It looks first for unusual patterns in blockchain transactions. When needed, it pulls in data from a partner like Terbium Labs, which specializes in crawling the dark web.
This type of snooping inevitably raises privacy concerns.
"It's a double-edged sword," Federici said. "Our goal is not to uncover the average Joe buying his coffee at the corner store. For us, it's about making cryptocurrencies like bitcoin safer and more legitimate and making companies feel comfortable interacting with this type of technology without having to be afraid of getting involved in illicit activity that could cost them their banking license or lead to some type of lawsuits, which is still a real fear." And Skry doesn't try to identify people or companies until after criminal behavior has been spotted.
Catching Card Fraud
They may not realize it, but most banks are already using artificial intelligence to help detect credit card fraud, according to Scott Zoldi, chief analytics officer at FICO. The company's Falcon software uses neural networks to score transactions for fraud risk.
"It's become one of the key technologies banks use to make decisions around fraud," Zoldi claimed. "When you swipe your credit card, that authorization message that says, 'Can I buy this coffee at Starbucks?' comes from the acquirer, goes to the issuing bank, the issuing bank runs it through a model like Falcon, which calls up a card profile that shows the history of transactions on this card."
The neural network produces a score between 1 and 999, with 999 being the riskiest. The bank then uses that score in its decision to authorize or decline the transaction. Banks usually have a rules-based strategy for this, Zoldi said. They might automatically decline all transactions with scores between 970 and 999 because they're so high-risk. They might allow small transactions scored between 900 and 969, but watch them closely.
"They don't want to interrupt your transactions, but then they'll call you after the fact," Zoldi said. "They just use it as a very strong predictive indicator of what transactions are likely fraudulent for consumers, based on the technology that deeply understands your transaction behavior, tuned to you as an individual." At lower score ranges, an analyst at the bank might call the customer or send a text notification to her phone, asking her to confirm the transaction.
Some large banks, like Citi and JPMorgan Chase, have stables of scientists working on their own artificial intelligence software that examines the highest-scoring cases in Falcon and looks more closely at those customers to fine-tune the rules strategy for fraud decisions, Zoldi said.
These are just a few examples. The potential uses for AI in fighting cybercrime are almost endless, and the volume of suspicious behavior worth analyzing is unlikely to ever let up.
Editor at Large Penny Crosman welcomes feedback.