Ever looked at the padlock icon in your browser and felt safe? Yeah, me too. Then I actually traced what happens to my traffic when it passes through a reverse proxy, and let's just say the padlock started feeling more like a participation trophy.
Here's the uncomfortable truth that bit me during a security audit last year: if you're routing traffic through any TLS-terminating reverse proxy, your "encrypted" traffic is being decrypted, inspected, and re-encrypted at a point you may not control. Let's break down why this happens and what you can do about it.
The Problem: TLS Termination Breaks End-to-End Encryption
When you set up a reverse proxy in front of your application — whether it's for DDoS protection, caching, or load balancing — that proxy needs to decrypt your TLS traffic to do its job. It terminates the TLS connection from the client, reads the plaintext HTTP, and then opens a new TLS connection (hopefully) to your origin server.
This means there's a point in the chain where your data sits in plaintext on someone else's infrastructure.
Client (Browser)
│
├── TLS Connection 1 ──→ Reverse Proxy (TLS terminates here)
│ │
│ │ ⚠️ Plaintext HTTP visible here
│ │
│ ├── TLS Connection 2 ──→ Your Origin Server
│
└── You think this is one encrypted tunnel. It's not.
This isn't a bug. It's how TLS-terminating proxies fundamentally work. The proxy holds the private key for your domain's certificate. It has to decrypt the traffic to route it, cache it, apply WAF rules, or do anything useful at the HTTP layer.
Why This Matters More Than You Think
I used to shrug at this. "So what? The proxy provider is trustworthy." But then I started thinking about what's actually exposed:
- Authentication tokens — every JWT, session cookie, and API key passes through in plaintext
- POST bodies — form submissions, file uploads, API payloads
- Headers — including authorization headers, custom tokens, and internal routing info
- WebSocket frames — real-time data streams, chat messages, whatever you're pushing
And it's not just about trust. It's about attack surface. Any compromise of the proxy layer — a vulnerability, a rogue employee, a legal order — exposes everything flowing through it.
Step 1: Audit Your Current TLS Setup
First, figure out what's actually happening with your traffic. You can check whether your origin is being contacted directly or through a proxy:
# Check the certificate chain your browser actually sees
openssl s_client -connect yourdomain.com:443 -servername yourdomain.com 2>/dev/null | openssl x509 -noout -issuer -subject
# Check if the server IP matches your actual origin
dig +short yourdomain.com
# Compare with your actual server IP
curl -s ifconfig.me # run this on your origin server
If the IP from dig doesn't match your origin, something is terminating TLS before traffic hits your server. Check the certificate issuer too — if it's issued by your proxy provider and not your own CA or Let's Encrypt, the proxy is handling TLS termination.
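To make that comparison mechanical, here's a small helper — a sketch assuming a POSIX shell, with placeholder IPs. Feed it the outputs of the dig and curl commands from this step:

```shell
# Decide whether traffic is being terminated upstream: if the public DNS
# answer doesn't match the origin's own IP, something sits in front of you.
is_proxied() {
  public_ip="$1"   # e.g. $(dig +short yourdomain.com | head -n1)
  origin_ip="$2"   # e.g. $(curl -s ifconfig.me) run on the origin itself
  if [ "$public_ip" = "$origin_ip" ]; then
    echo "direct"
  else
    echo "proxied"
  fi
}

# Example with placeholder IPs:
is_proxied 104.16.0.1 203.0.113.7
```

Note this only catches DNS-level proxying; anycast setups and transparent middleboxes won't show up here, which is why the certificate-issuer check above matters too.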
Step 2: Enable Strict TLS to Your Origin
The most common misconfiguration I see is proxy-to-origin traffic running over plain HTTP. Don't do this. Even if the proxy-to-origin hop is "internal," enforce TLS on both legs.
If you're running nginx as your own reverse proxy, here's a proper configuration:
server {
listen 443 ssl;
server_name yourdomain.com;
ssl_certificate /etc/ssl/certs/yourdomain.crt;
ssl_certificate_key /etc/ssl/private/yourdomain.key;
# Strong TLS settings
ssl_protocols TLSv1.3; # TLS 1.3 only — no downgrade
ssl_prefer_server_ciphers off; # TLS 1.3 handles this
location / {
proxy_pass https://your-origin-ip:443; # NOT http://
proxy_ssl_verify on; # actually verify the origin cert
proxy_ssl_trusted_certificate /etc/ssl/certs/origin-ca.crt;
proxy_ssl_server_name on;
proxy_set_header Host $host;
}
}
That proxy_ssl_verify on line is critical. Without it, nginx will happily connect to any server presenting any certificate, which defeats the purpose entirely.
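Before reloading, it's worth grepping the config for the directives that make that second leg strict. A rough sketch — the config path and the directive list are assumptions, adjust both for your setup:

```shell
# Sanity-check an nginx vhost file for the directives that keep the
# proxy-to-origin leg strict. The default path below is an assumption.
conf="${1:-/etc/nginx/sites-enabled/yourdomain.conf}"

check_directive() {
  # $1 = config file, $2 = directive text to look for at line start
  if grep -Eq "^[[:space:]]*$2" "$1" 2>/dev/null; then
    echo "ok: $2"
  else
    echo "MISSING: $2"
  fi
}

for d in "proxy_ssl_verify on" "proxy_ssl_server_name on" "ssl_protocols TLSv1.3"; do
  check_directive "$conf" "$d"
done
```

This is a text check, not a semantic one — run nginx -t afterwards to confirm the config actually parses.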
Step 3: Consider Running Your Own Reverse Proxy
If the concern is a third party terminating your TLS, the most direct fix is to handle it yourself. Here are solid open-source options:
- Caddy — automatic HTTPS with Let's Encrypt, dead simple config, written in Go
- Traefik — great for containerized setups, auto-discovers services, handles cert renewal
- nginx — the workhorse, more config required but maximum flexibility
- HAProxy — excellent for pure load balancing with TLS passthrough support
Caddy in particular makes this almost embarrassingly simple:
yourdomain.com {
# That's it. Caddy handles TLS automatically.
reverse_proxy localhost:8080
}
No cert management, no renewal cron jobs, no OpenSSL configuration rabbit holes. Caddy requests a Let's Encrypt certificate and keeps it renewed.
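If Caddy is your edge but the origin is a separate machine, the same "TLS on both legs" rule from Step 2 applies. A sketch — origin.internal and the port are placeholders for your own origin:

```
yourdomain.com {
    # Second leg over TLS to a remote origin, not plain HTTP
    reverse_proxy https://origin.internal:8443 {
        transport http {
            tls_server_name origin.internal
        }
    }
}
```

By default Caddy verifies the upstream certificate against the system trust store, so the origin's cert needs to be valid for the name you give it.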
Step 4: Use TLS Passthrough When You Can
If you need a load balancer or edge proxy but don't want it reading your traffic, TLS passthrough is the answer. The proxy routes traffic based on SNI (Server Name Indication) — the hostname sent in plaintext in the TLS ClientHello, not an HTTP header — without ever decrypting the payload.
Here's how to set this up with HAProxy:
frontend https_in
bind *:443
mode tcp # TCP mode, not HTTP — no decryption
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
# Route based on SNI without decrypting
use_backend app1 if { req_ssl_sni -i app1.yourdomain.com }
use_backend app2 if { req_ssl_sni -i app2.yourdomain.com }
backend app1
mode tcp
server app1 192.168.1.10:443 check
backend app2
mode tcp
server app2 192.168.1.11:443 check
The tradeoff? You lose HTTP-layer features: no caching, no header manipulation, no WAF rules at the proxy level. But your TLS connection is genuinely end-to-end.
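One way to confirm passthrough is genuinely end-to-end: the certificate fingerprint seen from outside must be byte-for-byte the origin's own. A small comparison helper, with the network-dependent openssl commands left as comments — hostnames and paths are placeholders:

```shell
# True end-to-end means the certificate seen from outside IS the origin's
# cert. Compare SHA-256 fingerprints; a mismatch means something
# re-terminated TLS along the way.
same_cert() {
  # Normalize case so differently formatted fingerprints compare equal
  a=$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]')
  b=$(printf '%s' "$2" | tr '[:lower:]' '[:upper:]')
  if [ -n "$a" ] && [ "$a" = "$b" ]; then
    echo "match"
  else
    echo "MISMATCH"
  fi
}

# What to feed it (first on any client, second on the origin itself):
#   edge=$(openssl s_client -connect app1.yourdomain.com:443 \
#            -servername app1.yourdomain.com </dev/null 2>/dev/null \
#          | openssl x509 -noout -fingerprint -sha256)
#   origin=$(openssl x509 -noout -fingerprint -sha256 \
#            -in /etc/ssl/certs/origin.crt)
#   same_cert "$edge" "$origin"
```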
Step 5: Add mTLS for Origin Verification
Mutual TLS (mTLS) adds another layer: not only does the client verify the server, but the server also verifies the client. This is useful for ensuring only your authorized proxy can connect to your origin.
# Generate a client certificate for your proxy
openssl req -new -x509 -days 365 -nodes \
-keyout proxy-client.key \
-out proxy-client.crt \
-subj "/CN=my-proxy"
Then configure your origin to require client certificates. In nginx on the origin:
server {
listen 443 ssl;
ssl_certificate /etc/ssl/certs/origin.crt;
ssl_certificate_key /etc/ssl/private/origin.key;
# Require client certificate
ssl_client_certificate /etc/ssl/certs/proxy-client.crt;
ssl_verify_client on;
# Reject connections without a valid client cert.
# (With ssl_verify_client on, the handshake itself already fails for
# invalid certs; this guard is what saves you if it's ever relaxed
# to "optional".)
if ($ssl_client_verify != SUCCESS) {
return 403;
}
}
Now even if someone discovers your origin IP, they can't connect without the proxy's client certificate.
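You can rehearse the trust check the origin performs without touching a live server — a local sketch assuming openssl is installed, with all file paths throwaway:

```shell
# Local dry run of the verification ssl_verify_client does: mint a
# throwaway self-signed client cert, then verify it against itself as
# the trusted CA (the same pairing the origin config sets up).
tmp=$(mktemp -d)
openssl req -new -x509 -days 365 -nodes \
  -keyout "$tmp/proxy-client.key" \
  -out "$tmp/proxy-client.crt" \
  -subj "/CN=my-proxy" 2>/dev/null

openssl verify -CAfile "$tmp/proxy-client.crt" "$tmp/proxy-client.crt"

# A live check from the proxy host would look like:
#   curl --cert proxy-client.crt --key proxy-client.key https://<origin-ip>/
```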
Prevention: A Practical Checklist
After going through this exercise on my own infrastructure, here's what I check on every project now:
- Never run plain HTTP between proxy and origin. Not even "just in the internal network." Networks get compromised.
- Verify certificates on both ends. proxy_ssl_verify on in nginx, equivalent settings elsewhere.
- Use TLS 1.3 exclusively where possible. It's faster (1-RTT handshake) and has fewer footguns than 1.2.
- Monitor Certificate Transparency logs. If someone issues a cert for your domain that you didn't request, you want to know. Tools like certspotter can alert you.
- Evaluate whether you actually need HTTP-layer proxying. If all you need is load balancing, TLS passthrough gives you true end-to-end encryption.
- Keep your threat model honest. If you're handling medical records, financial data, or anything regulated, the "trust the proxy" model might not satisfy your compliance requirements.
The Honest Tradeoff
Look, I'm not saying every reverse proxy is evil. TLS termination exists because it enables genuinely useful features — caching, bot protection, request inspection. The problem isn't the technique; it's the blind trust.
The padlock in your browser means "encrypted to something." Your job is to make sure that something is as close to your actual server as your threat model demands. Sometimes a third-party proxy is fine. Sometimes it's not. The point is to make that decision consciously, not by default.
The tools to control your own TLS chain are all free, open-source, and battle-tested. You just have to decide to use them.
