Traditional signature-based detection, i.e., antivirus, has proven wholly ineffective in today’s threat landscape where thousands of new variants are created daily, each with a new signature.
To combat this, cybersecurity vendors, financial institutions and payment companies are adopting artificial intelligence to improve the detection rate of malware and attacks. AI-enhanced detection products scan a file to determine whether it contains suspicious patterns of CPU instructions or API imports.
They can also check at runtime whether a process is performing malicious operations by analyzing its memory and I/O activity. This delivers far better malware and attack detection than signature-based antivirus, but is it enough?
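As a rough illustration of the static-scanning idea (not any vendor's actual engine), a scanner can sweep a file's raw bytes for markers such as suspicious API names or shellcode-like instruction patterns and tally a score. The marker lists below are hypothetical examples chosen for the demo:

```python
# Toy static scan: count suspicious markers in a file's raw bytes.
# The marker lists are illustrative, not a real detection ruleset.
SUSPICIOUS_IMPORTS = [b"VirtualAlloc", b"WriteProcessMemory", b"CreateRemoteThread"]
SUSPICIOUS_OPCODES = [b"\x90" * 16]  # a long NOP run, a classic shellcode marker

def suspicion_score(data: bytes) -> int:
    """Return how many suspicious markers appear in the raw bytes."""
    score = 0
    for marker in SUSPICIOUS_IMPORTS + SUSPICIOUS_OPCODES:
        if marker in data:
            score += 1
    return score

benign = b"MZ\x90\x00 hello world"
shellcode_like = b"\x90" * 32 + b"VirtualAlloc" + b"WriteProcessMemory"
print(suspicion_score(benign))          # 0
print(suspicion_score(shellcode_like))  # 3
```

A real product would feed richer features (disassembled instructions, import tables, entropy) into a trained model rather than a hand-written marker list, but the principle is the same: score the file, then decide.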
Before malware is delivered to the computer, there is a phase of exploitation. This phase may involve, for example, executing malicious shellcode in the browser or exploiting Adobe Reader or Microsoft Office vulnerabilities.
This initial exploitation phase is where the attacker gains a foothold in your system, and hence the most dangerous one. Generally, this stage involves minimal memory, disk or I/O operations; simply not enough activity for AI techniques to tag it as malicious. AI relies on catching the malware itself at a later stage, once it begins to operate in the system.
Different vendors use different AI methods. But whether simple machine learning, genetic algorithms, deep learning or neural networks, all approaches have one thing in common: they are based on experience from the past. Put simply, AI learns from past malware what malware looks like and how it behaves. Zero-days and advanced threats are, by definition, something new. While some APT behaviors are similar enough to past events that AI can recognize them, completely new techniques have no "similar past event."
AI detection is based upon analyzing a wide range of events in the system, such as monitoring disk and file operations, API hooks, resource access tracing, process creation and termination, and so on. Such heavy monitoring of the OS and its processes consumes runtime resources and degrades system performance.
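To make the monitoring concrete, here is a minimal sketch of how a runtime agent might aggregate per-process events into a feature vector for a classifier. The event names and fields are assumptions for the demo, not any product's telemetry schema:

```python
# Toy runtime-event aggregation: fold a process's event trace into counts
# (file writes, process spawns, network sends) that a model could consume.
from collections import Counter

def build_feature_vector(events):
    """Count event types in a trace; event names are illustrative."""
    counts = Counter(e["type"] for e in events)
    return {
        "file_writes": counts["file_write"],
        "proc_spawns": counts["proc_create"],
        "net_sends": counts["net_send"],
    }

trace = [
    {"type": "file_write"}, {"type": "file_write"},
    {"type": "proc_create"}, {"type": "net_send"},
]
print(build_feature_vector(trace))
# {'file_writes': 2, 'proc_spawns': 1, 'net_sends': 1}
```

Even this simplified version hints at the cost: every monitored operation must be intercepted, recorded and counted before the model ever sees it, which is where the runtime overhead comes from.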
Deep AI involves massive data-analytics engines that usually sit at the server or cloud level, so data is sent from the endpoint to the cloud, and the cloud engine decides whether "it's malware or not." Permanent connectivity may not always be possible, and exposing event logs may raise privacy, regulatory or legal concerns.
AI and machine learning are not deterministic: they estimate how closely something looks like an attack or behaves like malware. This naturally produces "false positives."
Enterprises need security technologies that know with certainty when a threat is a threat, even fileless attacks that leave no traces to detect.
Moving Target Defense (MTD), for example, breaks completely away from the malware detection model and the reliance on previous knowledge. It preempts attacks by morphing the system runtime environment, so an attack cannot find and exploit the memory resource it is targeting. Under the MTD model, there is no monitoring that takes a CPU toll, no detection rules or signatures that can be evaded and no determinations to be made about whether or not a file is suspicious.
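The morphing idea can be sketched in miniature. This is purely an illustration of the concept (real MTD products operate on actual process memory layouts, not Python lists): the loader places a sensitive resource at a randomized location on each run, so an exploit that hard-codes the default location misses, while legitimate code, which is told the new location, still finds it. In this demo the random slot deliberately excludes the default slot so the miss is guaranteed:

```python
# Toy moving-target sketch: randomize where a resource lives at load time.
import random

DEFAULT_SLOT = 0  # the location an attacker would expect on a static system

def load_system(rng: random.Random, slots: int = 4096):
    """Place the resource in a random non-default slot; return (memory, slot)."""
    memory = [None] * slots
    slot = rng.randrange(1, slots)  # never slot 0, so the demo is deterministic
    memory[slot] = "resource"
    return memory, slot

rng = random.Random()
memory, slot = load_system(rng)
exploit_hit = memory[DEFAULT_SLOT] == "resource"  # attacker's hard-coded address
legit_hit = memory[slot] == "resource"            # loader knows the new address
print(exploit_hit, legit_hit)  # False True
```

The point of the model is that no detection happens at all: the exploit fails not because it was recognized, but because the target it was written against is no longer where it expected.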