Authon Blog
debugging · 6 min read

How to Stop AI Code Assistants From Making Unauthorized Changes

AI coding assistants can make unauthorized changes to your codebase. Here's how to set up guardrails with git worktrees, pre-commit hooks, and review workflows.

Alan West
Authon Team

So there I was, reviewing a pull request on Monday morning, and half the codebase had been refactored by an AI assistant that a teammate left running over the weekend. New dependencies added, config files rewritten, even some database migration files generated. Nobody asked for any of it.

This is becoming a surprisingly common problem. AI-powered code generation tools are incredibly useful, but without proper guardrails, they can make changes that nobody authorized. Let me walk you through how to lock this down.

The Root Cause: Implicit Trust by Default

Most AI coding tools operate with whatever filesystem and git permissions your user account has. That means if you can write to a file, so can the AI. There's no built-in concept of "you can edit this function but not that config file."

The real issue is twofold:

  • No scoping mechanism — the tool has access to your entire working directory
  • No review gate — changes get written to disk immediately, sometimes staged or even committed automatically

This isn't a security vulnerability in the traditional sense. It's an authorization design gap. The tool is doing exactly what it's designed to do. The problem is that "what it's designed to do" is broader than what you actually want.
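To make the gap concrete, here's a minimal sketch of the scoping check most tools lack: before writing a file, test the path against an explicit allowlist. The patterns and function name are hypothetical, not part of any real tool's API:

```python
from fnmatch import fnmatch

# Hypothetical allowlist: the only paths an assistant may write to.
ALLOWED_PATTERNS = ["src/*", "tests/*", "docs/*"]

def write_is_authorized(filepath: str) -> bool:
    """Return True only if the path matches an explicitly allowed pattern.

    Note: fnmatch's '*' also matches '/', so 'src/*' covers nested paths.
    """
    return any(fnmatch(filepath, pattern) for pattern in ALLOWED_PATTERNS)
```

With a check like this, `src/app/main.py` passes but `.env` or `package.json` is rejected by default — the opposite of the implicit-trust model most tools ship with.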

Step 1: Isolate the Working Directory

The simplest fix is to never let AI tools operate on your main working tree. Use git worktrees to create isolated copies:

bash
# Create an isolated worktree for AI-assisted work
git worktree add ../project-ai-sandbox feature/ai-assisted-refactor

# Point your AI tool at the sandbox, not your main repo
cd ../project-ai-sandbox

# When you're done, cherry-pick what you actually want
git log --oneline  # review what was generated
git worktree remove ../project-ai-sandbox  # clean up

This way, nothing touches your actual branch until you explicitly move it over. I've been using this workflow for about six months now and it's saved me from at least a dozen "wait, I didn't ask for that" moments.
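If you script this workflow, a small guard can refuse to launch the assistant anywhere outside the sandbox. This is a sketch with hypothetical paths — pure path comparison, no git calls:

```python
from pathlib import Path

def assert_in_sandbox(cwd: str, sandbox_root: str) -> None:
    """Raise unless cwd is the sandbox worktree or a directory inside it."""
    cwd_path = Path(cwd).resolve()
    sandbox = Path(sandbox_root).resolve()
    if sandbox != cwd_path and sandbox not in cwd_path.parents:
        raise RuntimeError(
            f"Refusing to run: {cwd_path} is not inside the sandbox {sandbox}"
        )
```

Call it at the top of whatever wrapper script starts your AI session, and the "oops, it ran against my main checkout" failure mode becomes a hard error.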

Step 2: Use Pre-Commit Hooks as a Safety Net

Even with worktrees, you want a second layer of defense. Pre-commit hooks can block changes to sensitive files before they ever get committed:

python
#!/usr/bin/env python3
# .git/hooks/pre-commit
# Block commits that touch protected files without explicit override

import os
import subprocess
import sys
from fnmatch import fnmatch

PROTECTED_PATTERNS = [
    "docker-compose*.yml",
    ".env*",
    "**/migrations/**",   # fnmatch's '*' also matches '/', so this catches nested paths
    "package.json",       # prevent surprise dependency additions
    "Dockerfile",
    ".github/workflows/*",
]

def get_staged_files():
    result = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True
    )
    # splitlines() avoids a stray '' entry when nothing is staged
    return [line for line in result.stdout.splitlines() if line]

def matches_protected(filepath):
    return any(fnmatch(filepath, p) for p in PROTECTED_PATTERNS)

staged = get_staged_files()
blocked = [f for f in staged if matches_protected(f)]

# Allow explicit override via environment variable; check it before
# printing, so an authorized commit passes silently
if blocked and not os.environ.get("ALLOW_PROTECTED"):
    print("BLOCKED: These protected files were modified:")
    for f in blocked:
        print(f"  - {f}")
    print("\nTo override, commit with: ALLOW_PROTECTED=1 git commit")
    sys.exit(1)

The key here is the override mechanism. You don't want to make it impossible to change these files — you want to make it intentional.

Step 3: Scope Changes with .gitignore Patterns

Most AI coding tools respect .gitignore or have their own ignore configuration. Create a dedicated ignore file that limits what the AI can see and modify:

bash
# .ai-ignore (or whatever your tool supports)
# Infrastructure — humans only
terraform/
k8s/
.github/

# Secrets and config
.env*
config/production.*

# Database
db/migrations/
prisma/migrations/

# Lock files — changing these cascades everywhere
package-lock.json
yarn.lock
Gemfile.lock

Not every tool supports a custom ignore file, but many respect a project-level config. Check your tool's documentation for the specific mechanism.
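Under the hood, tools that honor such a file typically do something like the following sketch: parse the patterns, then filter the candidate paths. The `.ai-ignore` name and the directory-suffix handling are assumptions for illustration; real tools vary:

```python
from fnmatch import fnmatch

def parse_ignore(text: str) -> list:
    """Parse ignore-file text: skip blanks and comments, keep patterns."""
    patterns = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            # Treat 'dir/' as 'dir/*' so directory entries match their contents.
            patterns.append((line + "*") if line.endswith("/") else line)
    return patterns

def visible_files(paths, patterns):
    """Return only the paths no ignore pattern matches."""
    return [p for p in paths if not any(fnmatch(p, pat) for pat in patterns)]
```

Given the ignore file above, `terraform/main.tf` and `.env.local` disappear from the AI's view while `src/app.py` stays visible.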

Step 4: Review Diffs Before They Land

This sounds obvious, but the workflow matters. Don't review AI-generated changes in the same mental mode as human-written code. AI changes tend to be syntactically correct but semantically wrong in subtle ways.

Here's my actual review process:

bash
# After the AI finishes, before doing anything else:
git diff --stat          # what files were touched? any surprises?
git diff -- '*.json'     # check config changes separately
git diff -- '*.lock'     # any dependency changes?

# Then do a focused review on the actual logic changes
git diff -- 'src/**'     # the code you actually asked for

# Stage selectively, never use git add .
git add -p               # patch mode: review each hunk

The git add -p step is non-negotiable for me. It forces you to look at every change individually. Yes, it's slower. That's the point.
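The same triage can be scripted. This sketch buckets a changed-file list so surprises surface before the line-by-line pass — the category names and patterns are my own convention, not a standard:

```python
from fnmatch import fnmatch

# Review buckets, checked in order; first match wins.
CATEGORIES = [
    ("dependencies", ["*.lock", "package-lock.json", "package.json"]),
    ("config", ["*.json", "*.yml", "*.yaml", ".env*"]),
    ("code", ["src/*"]),
]

def triage(changed_files):
    """Group changed files into review buckets; unmatched go to 'other'."""
    buckets = {name: [] for name, _ in CATEGORIES}
    buckets["other"] = []
    for f in changed_files:
        for name, patterns in CATEGORIES:
            if any(fnmatch(f, p) for p in patterns):
                buckets[name].append(f)
                break
        else:
            buckets["other"].append(f)
    return buckets
```

Anything landing in `dependencies` or `other` that you didn't ask for is your cue to stop and read carefully before staging.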

Step 5: Set Up Branch Protection

If you're on a team, your CI/CD pipeline is your last line of defense. Make sure AI-generated code goes through the same review process as everything else:

  • Require pull request reviews — no direct pushes to main, regardless of source
  • Run your full test suite — AI-generated code is notorious for passing lint but failing integration tests
  • Add a CODEOWNERS file — require specific humans to approve changes to critical paths
text
# CODEOWNERS
# Require senior dev approval for infrastructure changes
/terraform/         @team-lead @infra-team
/.github/workflows/ @team-lead
/db/migrations/     @team-lead @db-admin
Dockerfile          @infra-team
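To see how CODEOWNERS resolves for a given path, here's a simplified matcher sketch. GitHub's real semantics are richer (gitignore-style globs, among other rules), but last-match-wins and the prefix-style patterns above are covered:

```python
def parse_codeowners(text):
    """Parse CODEOWNERS lines into (pattern, owners) rules."""
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            pattern, *owners = line.split()
            rules.append((pattern, owners))
    return rules

def owners_for(path, rules):
    """Return owners from the LAST matching rule (GitHub semantics)."""
    result = []
    for pattern, owners in rules:
        if pattern.startswith("/"):
            # Leading slash anchors the pattern to the repo root.
            if path.startswith(pattern.lstrip("/")):
                result = owners
        elif path == pattern or path.endswith("/" + pattern):
            result = owners
    return result
```

So `terraform/main.tf` requires `@team-lead` and `@infra-team`, while a file no rule matches has no required reviewers — which is exactly why the critical paths need explicit entries.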

Prevention: Making This the Default

The fixes above are reactive. To prevent this from being an ongoing problem, establish team conventions:

  • Document which directories are AI-safe in your project's contributing guide
  • Default to read-only — configure AI tools to suggest changes rather than write them directly when possible
  • Treat AI output like any external dependency — it needs review, testing, and approval before it touches shared branches
  • Audit regularly — filter git log by author or by commit message conventions to track AI-assisted commits

The thing that trips most teams up is that these tools feel like they're part of your development environment, so people extend the same trust they'd give their text editor. But your text editor doesn't decide to refactor your authentication middleware because it noticed some code smell.

The Bigger Picture

This isn't an argument against using AI coding tools. I use them daily and they genuinely make me more productive. But "more productive" includes the time you spend cleaning up unwanted changes, debugging mystery refactors, and figuring out why your CI pipeline is suddenly failing.

Set up the guardrails now, before you're the one writing the postmortem about how an AI assistant dropped your production database because someone left it running with migration permissions. That's not a hypothetical — I've heard variations of this story three times in the last two months.

Treat AI code generation the way you'd treat a very fast, very eager junior developer. Helpful? Absolutely. Unsupervised? Absolutely not.
