AI vs AI, Part II: Code Is a Battlefield

Security teams today find themselves fighting on two fronts in the AI war.
As we discussed in our previous blog, developers frequently write vulnerable code without meaning to, and so, in turn, do LLMs and AI-powered coding tools. This problem won’t go away any time soon, because the models themselves are trained on huge volumes of messy, real-life code with real-life flaws.
When developers use AI to write code, the output might solve the problem they asked the model to solve, but it can also introduce security flaws. As AI augmentation accelerates code velocity, those security issues start piling up. What’s more, it’s entirely possible that no one alive has really thought about how that code works, or how to fix it.
And that’s when everyone is acting in good faith and trying to commit good code! These developers aren’t looking for a pitched battle with security; the problem arises from the inherent limitations of both the models and the people using them.
So here’s an even thornier question: what happens when threat actors start using AI to create deliberately malicious code? That’s the second front in the AI vs. AI code war—security vs. threat actors.
Code Red: When Threat Actors Use AI
Attackers saw AI as a massive boon from the beginning, and security researchers have started to understand the full extent of the threat.
Here are a few of the ways threat actors (and the security researchers who emulate them) can use AI to launch attacks:
- Malicious code obfuscation: If a rogue developer in your organization wants to commit code that deliberately exfiltrates data or creates a backdoor, GenAI can generate complex but irrelevant code that hides its true purpose.
- Hiding secrets in plain sight: An insider threat could deliberately plant hard-coded credentials and other secrets in the code base, enabling authentication bypasses and evasion of existing security controls (a minimal sketch of what this looks like follows this list).
- LLM hijacking: Researchers recently showed that attackers can poison the training data sets of LLMs in ways that lead the models to generate malicious code, which developers could then unknowingly adopt into your code base.
- Deliberately insecure defaults: We all know that many users never change configurations from their defaults, so if a developer uses GenAI to ship a deliberately insecure default configuration, the resulting security hole can have an outsized impact (see the configuration sketch below).
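
To make the hidden-secrets item concrete, here is a minimal Python sketch. The endpoint, token value, function names, and environment variable are all hypothetical; the point is the contrast between a credential baked into source code and one pulled from the environment at runtime.

```python
import os

import requests  # assumes the widely used 'requests' HTTP library

# The anti-pattern: the secret ships with the source, so anyone with repo
# access can reuse it, and it works silently in every environment.
API_TOKEN = "sk_live_51Hxxxxxxxxxxxxxxxxxxxxxx"  # hard-coded credential

def fetch_report_insecure(report_id: str) -> dict:
    resp = requests.get(
        f"https://api.example.com/reports/{report_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def fetch_report(report_id: str) -> dict:
    # Safer pattern: read the secret from the environment (or a vault) so
    # it never lives in the code base at all.
    token = os.environ["REPORTS_API_TOKEN"]
    resp = requests.get(
        f"https://api.example.com/reports/{report_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

A secret scanner will usually flag the first version when the token matches a known format; the concern here is an insider who deliberately disguises the string so that pattern matching no longer catches it.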
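
And to illustrate the insecure-defaults item, here is a hypothetical service configuration, again only a sketch with invented field names, where each default mirrors a weakness that commonly ships unchanged.

```python
from dataclasses import dataclass

@dataclass
class ServerConfig:
    # Each default below "works" out of the box, which is exactly why most
    # users never change it, and why a deliberately weak default is so damaging.
    bind_address: str = "0.0.0.0"   # listens on every interface, not just localhost
    debug: bool = True              # verbose errors can leak internals in production
    verify_tls: bool = False        # skips certificate checks on upstream calls
    allowed_origins: str = "*"      # wildcard CORS accepts requests from any site

# A hardened baseline flips every one of those choices.
SAFE_DEFAULTS = ServerConfig(
    bind_address="127.0.0.1",
    debug=False,
    verify_tls=True,
    allowed_origins="https://app.example.com",
)
```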
Fighting Fire with Fire: AI for the Good Guys
As you can see, malicious actors, whether insiders or outside attackers, can use AI to strike from a number of angles. For companies working in sensitive industries or handling valuable data, GenAI creates an urgent need for a comprehensive security strategy that stops serious, even existential, threats from entering your code base.
The only way to detect these AI-created issues at scale will be AI-based security software built to recognize the real-world ways AI threats manifest in application code. And because code is being created faster than ever, it’s all the more important to use security tooling with automated fixes that close the flaws attackers have deliberately introduced before they can be exploited.