Authon Blog
tutorial · 6 min read

35 New CVEs This Month Were Caused by AI-Generated Code. We Have a Problem.

Alan West
Authon Team

Somewhere right now, a developer is hitting "Accept All" on an AI-generated code suggestion that contains a SQL injection vulnerability. They'll ship it to production tonight. It'll get a CVE number next month.

This isn't hypothetical. According to data tracked by Georgia Tech's SSLab through their Vibe Security Radar project, AI-generated code was linked to 6 CVEs in January, 15 in February, and 35 in March 2026. That's not linear growth — that's exponential.

The Scale of AI-Generated Code Is Insane

According to reports from Infosecurity Magazine, Claude Code alone has reportedly accumulated over 15 million commits on GitHub, accounting for more than 4% of all public commits. That's one AI coding tool. Add Copilot, Cursor, Windsurf, and the dozens of other AI code assistants, and we're looking at a substantial percentage of new code being AI-generated or AI-assisted.

More code, shipped faster, with less human review. What could go wrong?

What the CVEs Actually Look Like

The vulnerabilities being flagged aren't exotic. They're the boring, well-known patterns that we supposedly solved years ago. SQL injection. Path traversal. Hardcoded credentials. Improper input validation. The stuff that OWASP has been screaming about since 2003.
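SQL injection is the clearest case of a "solved" problem that keeps resurfacing. A minimal sketch using Python's built-in sqlite3 (the table and values are illustrative) shows both the pattern AI assistants tend to emit and the one-line fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "nobody' OR '1'='1"

# The pattern AI assistants often emit: string interpolation.
# The injected OR clause turns the WHERE into a tautology, so the
# query matches every row in the table.
leaked = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()

# Parameterized query: the input is bound as data, never parsed as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(leaked), len(safe))  # 1 0
```

Both versions run without errors, which is exactly the problem: nothing in a quick manual test distinguishes the vulnerable query from the safe one.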

Here's a pattern the Vibe Security Radar reportedly flagged — AI-generated code that looks correct at first glance:

python
# AI-generated file upload handler
# Looks reasonable, right?
import os
from flask import Flask, request

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload_file():
    file = request.files["document"]
    filename = file.filename  # CVE territory: no sanitization
    upload_path = os.path.join("/var/uploads", filename)
    file.save(upload_path)
    return {"status": "uploaded", "path": upload_path}

See the problem? The filename comes directly from user input with zero sanitization. An attacker sends filename=../../etc/cron.d/backdoor and now they're writing files anywhere on the filesystem. This is a textbook path traversal, and AI models generate this pattern constantly because it "works" in the training data.
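You can watch the traversal happen with nothing but the standard library. A small demonstration, using the attacker-controlled filename from above:

```python
import os

# Attacker-supplied filename from the upload request
malicious = "../../etc/cron.d/backdoor"

# os.path.join happily concatenates it onto the upload directory...
joined = os.path.join("/var/uploads", malicious)

# ...and resolving the path shows we have escaped /var/uploads entirely
resolved = os.path.abspath(joined)
print(resolved)  # /etc/cron.d/backdoor on a POSIX filesystem
```

The joined path never passes through any check, so the write lands wherever the `../` sequences point it.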

The fixed version isn't complicated:

python
import os
import uuid
from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
ALLOWED_EXTENSIONS = {"pdf", "doc", "docx", "txt"}

@app.route("/upload", methods=["POST"])
def upload_file():
    file = request.files["document"]
    if not file.filename:
        return {"error": "No filename"}, 400

    # Sanitize filename
    filename = secure_filename(file.filename)
    if not filename:
        return {"error": "Invalid filename"}, 400

    # Validate extension
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return {"error": "File type not allowed"}, 400

    # Use UUID to prevent overwrites and path tricks
    safe_name = f"{uuid.uuid4().hex}.{ext}"
    upload_path = os.path.join("/var/uploads", safe_name)

    # Final check: ensure we're still in the upload directory
    if not os.path.abspath(upload_path).startswith("/var/uploads"):
        return {"error": "Invalid path"}, 400

    file.save(upload_path)
    return {"status": "uploaded", "id": safe_name}

The difference is about 15 lines of code. The AI didn't write them because the training data doesn't consistently include them.

Why AI Models Keep Making These Mistakes

LLMs generate code based on statistical patterns in training data. The training data is GitHub. GitHub is full of tutorials, example code, and quick-and-dirty prototypes that skip security best practices for brevity.

When you ask an AI to "write a file upload handler," it gives you what most file upload handlers on GitHub look like — the minimal version without security checks. It's not malicious. It's just averaging over a corpus where most examples are incomplete.

The Georgia Tech team reportedly found that the most common vulnerability categories in AI-generated CVEs are:

  • Injection flaws (SQL, command, path) — 40% of tracked cases
  • Authentication/authorization gaps — 25%
  • Data exposure (hardcoded secrets, verbose errors) — 20%
  • Cryptographic misuse (weak algorithms, bad IV handling) — 15%
The "Vibe Coding" Problem

There's a term circulating in security circles: "vibe coding." It means generating code with AI, glancing at it to see if it looks right, and shipping it. No deep review. No security analysis. Just vibes.

The problem is that insecure code often looks right. A SQL query with string interpolation works. A JWT without expiration validation works. An API endpoint without rate limiting works. They all work — right up until someone exploits them.

javascript
const jwt = require("jsonwebtoken");

// AI-generated auth middleware that "works"
// but has two subtle verification flaws
function verifyToken(req, res, next) {
  const token = req.headers.authorization?.split(" ")[1];
  if (!token) return res.status(401).json({ error: "No token" });

  try {
    // Problem 1: no algorithm restriction (opens the door to
    // algorithm-confusion attacks)
    const decoded = jwt.verify(token, process.env.JWT_SECRET);
    req.user = decoded;
    next();
  } catch (err) {
    // Problem 2: error message leaks verification details
    return res.status(401).json({ error: err.message });
  }
}

// What it should look like
function verifyToken(req, res, next) {
  const token = req.headers.authorization?.split(" ")[1];
  if (!token) return res.status(401).json({ error: "Unauthorized" });

  try {
    const decoded = jwt.verify(token, process.env.JWT_SECRET, {
      algorithms: ["HS256"],  // Restrict accepted algorithms
      maxAge: "1h",           // Cap token age even if exp is missing
    });
    req.user = decoded;
    next();
  } catch (err) {
    // Generic error - don't leak verification details
    return res.status(401).json({ error: "Unauthorized" });
  }
}

What Should Teams Do Right Now?

  • Add AI-aware static analysis to your CI pipeline. Tools like Semgrep and Snyk can catch the common patterns. If your pipeline doesn't have SAST, you're shipping blind regardless of whether humans or AI wrote the code.
  • Treat AI-generated code as untrusted input. Every line gets the same review scrutiny as a junior developer's first PR. Because statistically, that's about the security awareness level you're getting.
  • Create security guardrails in your AI prompts. Instead of "write a file upload handler," try "write a file upload handler with input validation, path traversal prevention, and file type restrictions." The AI will actually include the security code — you just have to ask.
  • Track your AI-generated code ratio. Know what percentage of your codebase was AI-generated. When a CVE drops for a pattern your AI tends to produce, you'll know where to look.
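For the static-analysis step, custom rules can target exactly the patterns your AI tools tend to produce. A sketch in Semgrep's YAML rule syntax, aimed at the unsanitized-filename pattern from the upload example (the rule id and pattern are illustrative, not from any published ruleset):

```yaml
rules:
  - id: upload-unsanitized-filename
    languages: [python]
    severity: ERROR
    message: >
      Filename from an upload request is used to build a filesystem
      path without sanitization (possible path traversal).
    pattern: os.path.join($DIR, $FILE.filename)
```

A rule like this is crude (it flags sanitized cases too), but as a CI gate it forces a human to look at exactly the lines an AI assistant is most likely to get wrong.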

The Uncomfortable Truth

The 6-15-35 trajectory from the Vibe Security Radar isn't going to flatten. AI code generation is accelerating, not slowing down. The number of developers using AI assistants is growing every month. The security review capacity of most teams is not growing at the same rate.

We're shipping code faster than ever. We're reviewing it less than ever. The CVE numbers are the inevitable result. The question isn't whether your codebase has AI-generated vulnerabilities. It's how many, and whether you'll find them before someone else does.
