Authon Blog

Why Your Python Scripts Fail in Self-Hosted n8n (And How to Fix It)

Why Python scripts fail in self-hosted n8n Docker containers and how to fix it with custom images, virtual environments, and sidecar patterns.

Alan West
Authon Team

If you've ever spun up n8n in Docker, connected a Python script, and watched it fail spectacularly — welcome to the club. I spent an embarrassing amount of time debugging what turned out to be a very common self-hosting gotcha, and I'm writing this so you don't have to.

Let's walk through exactly why Python breaks in a default n8n setup and how to fix it properly.

The Problem: "python: not found"

You set up n8n with a quick docker run or a basic docker-compose.yml. Everything works great — until you try to run a Python script using the Execute Command node. You get something like:

```text
Error: Command failed: python3 /tmp/script.py
/bin/sh: python3: not found
```

Or maybe Python runs, but import requests blows up because none of your pip packages are there. Or worse — everything works fine, you restart the container, and all your installed packages vanish.

This is the self-hosting rite of passage nobody warns you about.

Root Cause: The Default n8n Image Doesn't Ship Python

The official n8nio/n8n Docker image is built on Alpine Linux and is intentionally kept lean. It includes Node.js (because n8n is a Node app) and not much else. Python is not installed. Neither is pip. Neither are any system-level dependencies your Python scripts might need.

When you use the Execute Command node and call python3, the container's shell simply can't find the binary. It's not a permissions issue or a path issue — it literally isn't installed.

And even if you shell into the running container and install Python manually with apk add python3, those changes live in the container's ephemeral filesystem. Next restart, gone.

The Fix: Build a Custom n8n Image

The proper solution is to create a custom Dockerfile that extends the official n8n image and bakes Python into it. Here's what I use:

```dockerfile
FROM n8nio/n8n:latest

# Switch to root to install system packages
USER root

# Install Python, pip, and common build dependencies
RUN apk add --no-cache \
    python3 \
    py3-pip \
    python3-dev \
    gcc \
    musl-dev \
    libffi-dev

# Create a virtual environment so pip doesn't complain
RUN python3 -m venv /opt/python-env

# Install whatever Python packages you need
RUN /opt/python-env/bin/pip install --no-cache-dir \
    requests \
    pandas \
    beautifulsoup4

# Make the venv's Python the default
ENV PATH="/opt/python-env/bin:$PATH"

# Switch back to the n8n user for security
USER node
```

Then build and run it:

```bash
docker build -t n8n-python .
docker run -d --name n8n -p 5678:5678 n8n-python
```

Now when your Execute Command node calls python3, it finds the binary, and your packages are there too. Restarts won't wipe them because they're baked into the image.
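To confirm the image is wired up correctly, you can run a tiny sanity-check script through the Execute Command node. This is just an illustrative check I use; adjust the package list to mirror whatever you installed in your Dockerfile (note that beautifulsoup4 imports as bs4):

```python
import importlib.util
import sys

def check_packages(packages):
    """Return {package_name: importable?} without actually importing anything."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}

# Which interpreter is running? Inside the custom image this should be
# the venv's interpreter, e.g. /opt/python-env/bin/python3.
print("interpreter:", sys.executable)

for pkg, ok in check_packages(["requests", "pandas", "bs4"]).items():
    print(f"{pkg}: {'OK' if ok else 'MISSING'}")
```

If any line says MISSING, the package didn't make it into the image and you know exactly where to look.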

Setting It Up With Docker Compose

If you're running n8n with docker-compose (which you probably should be), here's a more complete setup:

```yaml
version: '3.8'

services:
  n8n:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=changeme
    volumes:
      - n8n_data:/home/node/.n8n
      # Mount a scripts directory if you want to edit without rebuilding
      - ./scripts:/home/node/scripts

volumes:
  n8n_data:
```

That scripts volume mount is a nice trick. You can put your Python scripts in a local ./scripts folder and reference them in n8n as /home/node/scripts/my_script.py. Edit locally, run in the container, no rebuild needed.
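As a concrete example of that pattern, here's a hypothetical ./scripts/my_script.py. Since the Execute Command node captures stdout, printing a single JSON object makes the result easy to parse in downstream nodes (the file name and output shape here are my own conventions, not anything n8n requires):

```python
import json
import sys

def build_result(args):
    # Keep the logic in a plain function so it's easy to test outside n8n
    name = args[0] if args else "world"
    return {"greeting": f"hello {name}", "arg_count": len(args)}

if __name__ == "__main__":
    # n8n's Execute Command node captures stdout, so emit one JSON object
    print(json.dumps(build_result(sys.argv[1:])))
```

In the Execute Command node you'd call it as `python3 /home/node/scripts/my_script.py n8n`, then parse the stdout field with a Code node.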

The Virtual Environment Thing

You might have noticed I used python3 -m venv in the Dockerfile. This isn't just best-practice pedantry. Recent Python versions on Alpine (and Debian-based distros too) implement PEP 668 and actively refuse to let pip install packages into the system Python. You'll get this error:

```text
error: externally-managed-environment

This environment is externally managed.
To install Python packages system-wide, try 'apk add py3-whatever'
```

The venv sidesteps this entirely. It also keeps your n8n Python environment isolated from any system Python packages, which prevents dependency conflicts down the road.
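If you ever need to verify which environment a script is actually running in, Python can tell you directly: inside a venv, sys.prefix points at the venv while sys.base_prefix still points at the system installation. A quick check:

```python
import sys

def in_virtualenv():
    # sys.prefix differs from sys.base_prefix only inside a virtual environment
    return sys.prefix != sys.base_prefix

print("virtual environment:", in_virtualenv())
print("prefix:", sys.prefix)
```

Run through the Execute Command node in the custom image, this should report True with a prefix of /opt/python-env.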

Common Gotcha: The USER Directive

One thing that tripped me up for longer than I'd like to admit: the official n8n image runs as the node user, not root. If you try to install packages without switching to USER root first, every apk add and pip install will fail with permission denied errors.

But — and this is important — you should switch back to USER node at the end of your Dockerfile. Running n8n as root inside a container is a security risk you don't need. The pattern is always: switch to root, install your stuff, switch back.

Alternative Approach: Sidecar Container

If you don't want to maintain a custom image, there's another pattern worth considering. Run Python in a separate container and have n8n call it over HTTP:

```yaml
services:
  n8n:
    image: n8nio/n8n:latest
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n

  python-worker:
    image: python:3.12-slim
    volumes:
      - ./scripts:/app
    command: python /app/server.py
    # Expose internally, not to host
    expose:
      - "8000"
```

Your server.py could be a simple Flask or FastAPI app that exposes endpoints for whatever your Python scripts do. Then in n8n, you use an HTTP Request node instead of Execute Command, pointing to http://python-worker:8000/your-endpoint.
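Here's a minimal sketch of what that worker could look like. I'm using only the standard library's http.server to keep it dependency-free; a real setup would more likely use Flask or FastAPI, and the echo logic is just a placeholder for your actual script:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WorkerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body sent by n8n's HTTP Request node
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")

        # Placeholder logic: echo the payload back. Swap in your real work here.
        result = {"status": "ok", "echo": payload}

        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep container logs quiet

def main():
    # Bind on all interfaces so the n8n container can reach it
    # over the compose network on port 8000
    HTTPServer(("0.0.0.0", 8000), WorkerHandler).serve_forever()

# In the container, docker-compose starts this with: python /app/server.py,
# which should end by calling main()
```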

This is more work upfront but has real advantages: you can scale the Python worker independently, your n8n image stays vanilla (easier upgrades), and you get proper error handling through HTTP status codes.

Prevention Tips for Self-Hosting Newbies

After going through this and a few other self-hosting adventures, here's what I wish someone had told me:

  • Never install things inside a running container and expect them to survive. If it's not in the Dockerfile or a volume, it's temporary.
  • Read the Dockerfile of any image you're using. Five minutes reading the base image's Dockerfile saves hours of debugging. Check what user it runs as, what's installed, and what the entrypoint does.
  • Use volumes for data, images for dependencies. Your n8n workflows live in a volume. Your Python runtime lives in the image. Don't mix these up.
  • Pin your image versions in production. Using n8nio/n8n:latest is fine for tinkering, but in production use a specific version tag like n8nio/n8n:1.34.0. An unexpected update shouldn't break your setup.
  • Check the logs first. Running docker logs n8n before searching the internet will answer most of your questions faster than any forum post.

Wrapping Up

The core issue here isn't really about n8n specifically — it's about understanding what's inside your Docker containers and what isn't. The default n8n image is a Node.js app, so it ships Node.js. If you want Python, you need to bring it yourself.

Once you internalize that Docker images are just carefully constructed filesystems, a lot of self-hosting headaches start making sense. Your container doesn't have Python for the same reason your brand new laptop doesn't have Python — nobody installed it yet.

Now go automate something.
