How to Sidestep Generative AI Cybersecurity Risks

Worried about generative AI? CISA Director Jennifer Easterly is. Back in April, she called cybersecurity risk from generative AI “… the biggest issue that we’re going to deal with this century.” The National Institute of Standards and Technology (NIST) feels similarly. On top of its AI risk management framework released in January, the agency is about to launch a new public working group on AI to “tackle risks of rapidly advancing generative AI.”

Does LLM Tech Pose a Threat?

The rise of ChatGPT got regulators and thought leaders thinking that the malicious use of large language model (LLM) technology poses a real threat to society. They’re partly right. Listen to expert voices in this field for long enough and a clear pattern emerges: security leaders are worried about generative AI breaching their detection and prevention capabilities.

We know that current-generation generative AI tools are already helping criminals develop malware strains, scan for vulnerabilities, breach passwords and phish network insiders, including many criminals who would not otherwise be able to do so.

What’s particularly concerning is the potential for AI to create dynamic strains of malware that could evade security controls on the fly. Imagine LLM-powered malware rewriting its own code repeatedly until it defeats heuristic-based controls. Or adversarial machine learning, where cybercriminals use their AI to beat your increasingly AI-powered security controls.

An AI Cybersecurity Arms Race

Follow some of the conversations around this topic for long enough, and you end up asking whether detection and response could become impossible if generative AI continues evolving at its current pace. It’s also rational to imagine that the answer is better, smarter generative AI used for defense, i.e., AI trying to prevent AI from hacking—an AI arms race.


These fears are somewhat wide of the mark. Yes, AI will lower the bar for cybercrime. But using generative AI in the cybersecurity field, whether to develop malware or to prevent it, carries a common weakness: generative AI draws from sets of static training data to predict real-time conditions and is “value-locked” into a particular understanding of how systems and technology work.

LLMs are predictive models trained to pick up on statistical patterns. If a word or code snippet has come before another word or code snippet nine times out of 10 in the past, it probably will the next time it pops up, too. Generative AI LLMs are not writing anything new; they emulate code or copy based on patterns in their training data.
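
To see the limitation in miniature, consider the toy frequency-based predictor below, a deliberately simplified sketch rather than a description of how any production LLM actually works. It can only emit continuations it has already observed; ask it about a token it has never seen and it has nothing to offer.

```python
# Toy sketch of frequency-based next-token prediction (illustrative only;
# real LLMs are vastly larger and use learned representations, not raw counts).
from collections import Counter, defaultdict

training_tokens = "open the file read the file close the file".split()

# Count how often each token follows a given token in the "training data."
follows = defaultdict(Counter)
for prev, nxt in zip(training_tokens, training_tokens[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation seen in training, or None."""
    candidates = follows.get(token)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))     # "file" -- followed "the" three times in training
print(predict_next("socket"))  # None -- never seen, so there is no basis to predict
```

The same dependency on past frequencies holds whether the model is completing a sentence or a snippet of exploit code.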

In a cyberthreat use case, the reliance on patterns persists. For example, imagine a threat actor using an LLM application to develop malware that attacks passwords stored in RAM. Their LLM will have to rely on training parameters drawn from data showing where user passwords are commonly located, such as particular memory addresses or API calls.

Any malware created with their LLM will work on the basis that future passwords are likely to be stored in similar locations and accessed via similar APIs. When the reality is different and the LLM-designed malware encounters an application where passwords are randomly stored and accessed via randomly generated APIs, it will fail by falling into a “smart trap” without knowing why.
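A minimal sketch of that failure mode is below. It is a hypothetical scenario where memory is reduced to a dictionary and the attacker's "model" to a frequency count, but it illustrates why the mismatch is fatal: the attack targets where passwords used to live, not where they are now.

```python
# Hypothetical, heavily simplified illustration: a pattern-trained "attack"
# targets the password location most common in its training data and misses
# when the defended application randomizes where the secret actually lives.
import secrets
from collections import Counter

# Locations where passwords appeared in the attacker's (hypothetical) training data.
observed_locations = ["0x7ffe1000", "0x7ffe1000", "0x7ffe2000", "0x7ffe1000"]
most_likely_location = Counter(observed_locations).most_common(1)[0][0]

# The defended application stores the secret at a freshly randomized address instead.
actual_location = hex(0x7ffe0000 + secrets.randbelow(0x10000))
memory = {actual_location: "s3cr3t-password"}

# The attack looks where history says passwords live -- and almost certainly misses.
stolen = memory.get(most_likely_location)
print(f"targeted {most_likely_location}, retrieved: {stolen}")
```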

LLMs use statistics to make sense of the world. Present them with data outside their experience, and you get “hallucinations.” This operational reality of generative AI presents defenders with a theoretical advantage. Make your attack surface unpredictable, and you will remove the fundamental advantage generative AI could give threat actors.

Technology like Automated Moving Target Defense (AMTD) from Morphisec makes this theoretical advantage real. It does this by periodically changing a system’s or service’s APIs (or functions) based on a random number and then changing only trusted code in memory at runtime. AMTD randomizes asset locations in a way that cannot be trained for or incorporated into a model and, as a result, forces AI to hallucinate and miss its mark. In fact, leading security analysts have recognized these challenges and described AMTD as “The Future of Cyber.”
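
This article doesn’t detail Morphisec’s implementation, so the sketch below is only a hypothetical illustration of the general moving-target idea: rebind an internal operation to a randomly generated name on each remap, so a caller relying on the historically observed name no longer resolves anything. The class and route names are invented for the example.

```python
# Hypothetical moving-target sketch (not Morphisec's actual implementation):
# the credential-check handler is rebound to a random route name on each remap,
# so callers relying on a fixed, historically learned name fail to find it.
import secrets

class MorphingService:
    def __init__(self):
        self._routes = {}
        self.remap()

    def remap(self):
        """Rebind the handler to a fresh random name; only trusted callers learn it."""
        self._routes.clear()
        self.current_name = f"op_{secrets.token_hex(8)}"
        self._routes[self.current_name] = self._check_credentials

    def call(self, route, *args):
        handler = self._routes.get(route)
        if handler is None:
            raise LookupError(f"no such route: {route}")  # the "smart trap"
        return handler(*args)

    def _check_credentials(self, password):
        return password == "correct-horse-battery-staple"

service = MorphingService()
# A trusted caller uses the current randomized name and succeeds.
print(service.call(service.current_name, "correct-horse-battery-staple"))
# An attack trained on the old, well-known name misses entirely.
try:
    service.call("check_credentials", "guess")
except LookupError as exc:
    print(exc)
```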

From the AI-powered attacker’s perspective, moving assets based on randomly selected (morphing) system resources means an adversary cannot train an AI system on a known system layout. In an AMTD-protected system, attacks will try to target application and system assets but inevitably miss when reality contradicts their training data.

Step away from the idea of an AI arms race, and generative AI looks a lot less scary.


Nir Givol

Nir Givol, CISSP, is a Director of Product Management at Morphisec, overseeing the company's Linux security solutions, which provide advanced security for Linux servers, cloud workloads, OT assets and other devices. Before joining Morphisec, Nir was an early employee at SentinelOne (NYSE: S), managing their macOS and Linux solutions. Previously, Nir held senior product management roles at Forescout, Cisco and other tech companies.
