Authon Blog
debugging · 7 min read

Why Your DNS Resolver Might Be Silently Blocking Legitimate Domains

Learn how to debug and fix DNS resolution failures when security-filtered resolvers silently block legitimate domains you depend on.

Alan West
Authon Team

So there I was, trying to pull up a cached page from a web archive service, and... nothing. No timeout, no error page, just a blank refusal to resolve. I spent a solid twenty minutes blaming my local network before I figured out what was actually happening.

Turns out, the DNS resolver I was using had quietly categorized the domain as a command-and-control botnet endpoint. A completely legitimate archival service, flagged as malware infrastructure. The domain simply stopped resolving through any DNS resolver that had security filtering enabled.

If you've hit something similar, here's how to debug it and make sure it doesn't silently break your workflows again.

Understanding the Problem: Filtered DNS Resolution

Most developers know that DNS translates domain names to IP addresses. What fewer realize is that many popular DNS resolvers offer "security-enhanced" variants that filter responses based on threat intelligence feeds.

These filtered resolvers will return NXDOMAIN or 0.0.0.0 for domains they consider malicious. The problem? Threat categorization isn't perfect. Legitimate domains get caught in the crossfire, and when they do, everything that depends on that resolution just quietly fails.

Here's what makes this especially frustrating:

  • There's no error message telling you the domain was blocked
  • Standard HTTP debugging tools won't help because the failure happens at the DNS layer
  • It can affect CI/CD pipelines, scripts, and automated workflows silently
  • Different machines on your team might behave differently depending on their DNS config

Step 1: Confirm It's a DNS Issue

Before you go down any rabbit holes, verify that DNS resolution is actually the problem. The dig command is your best friend here.

bash
# Query your system's default resolver
dig archive.today

# If that returns nothing useful, try a known unfiltered resolver
dig @9.9.9.9 archive.today

# Compare against a filtered resolver variant
dig @9.9.9.2 archive.today

If the unfiltered resolver returns an IP address but the filtered one doesn't, congratulations — you've found your culprit. The domain is being blocked at the DNS level by security filtering.

You can also use nslookup if you prefer:

bash
# Quick comparison between two resolvers
nslookup archive.today 9.9.9.9
nslookup archive.today 1.1.1.2  # filtered variant — may return NXDOMAIN

Pay attention to the response. A filtered block typically looks like an NXDOMAIN response or resolves to a sinkhole IP like 0.0.0.0. A genuine DNS failure looks different — you'll see timeouts or SERVFAIL responses instead.
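
This distinction can be checked programmatically. Here's a small sketch of a shell function that classifies a response given the answer and the dig status line — the function name and thresholds are illustrative, not a standard tool:

```bash
# Classify a resolver response: filtered block vs. genuine DNS failure.
# Takes the first A-record answer and the dig status (e.g. NXDOMAIN, NOERROR).
classify_response() {
    local answer="$1" status="$2"
    if [ "$status" = "NXDOMAIN" ] || [ "$answer" = "0.0.0.0" ]; then
        echo "blocked"        # typical security-filter block or sinkhole IP
    elif [ "$status" = "SERVFAIL" ] || [ -z "$status" ]; then
        echo "dns-failure"    # genuine resolution problem (server error/timeout)
    else
        echo "ok"
    fi
}

# Example wiring (requires network access):
#   status=$(dig archive.today @9.9.9.2 +noall +comments \
#            | awk -F'status: ' '/status:/ {print $2}' | cut -d, -f1)
#   answer=$(dig archive.today @9.9.9.2 +short | head -n1)
#   classify_response "$answer" "$status"
```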

Step 2: Understand Why It's Happening

DNS-level security filtering relies on categorization databases. These databases classify domains into buckets like "phishing," "malware," "command and control," and so on. The classification usually comes from automated scanning systems, third-party threat feeds, or user reports.

The catch is that these systems optimize for safety over accuracy. A false positive (blocking a legit site) is considered less harmful than a false negative (allowing actual malware through). That's a reasonable tradeoff for consumer protection, but it's a pain when it hits a domain you need.

Archival and caching services are particularly vulnerable to miscategorization because:

  • They serve cached copies of other sites, including potentially malicious ones
  • Their traffic patterns can resemble scraping or bot activity
  • They may host content that automated scanners flag without context
  • Their IP infrastructure sometimes overlaps with addresses that have mixed reputations

Step 3: Fix Your Local Resolution

The immediate fix is straightforward — configure your system to use an unfiltered DNS resolver, or set up domain-specific resolution overrides.

Option A: Switch Your DNS Resolver

If you don't need DNS-level security filtering (and honestly, most developers don't — you've got other layers of protection), switch to an unfiltered resolver.

On Linux, you can modify /etc/resolv.conf or use systemd-resolved:

bash
# Check your current DNS config
resolvectl status

# Temporarily override DNS for testing
sudo resolvectl dns eth0 9.9.9.9 149.112.112.112

# For a permanent change, edit your network config
# On systemd-based systems:
sudo nano /etc/systemd/resolved.conf
# Set DNS=9.9.9.9 149.112.112.112
# Then restart: sudo systemctl restart systemd-resolved

Option B: Override Specific Domains

If you want to keep filtered DNS for general browsing but need specific domains to resolve, the /etc/hosts file works in a pinch:

bash
# First, find the actual IP using an unfiltered resolver
dig @9.9.9.9 archive.today +short
# Let's say it returns 99.83.180.5

# Add it to your hosts file
echo "99.83.180.5 archive.today" | sudo tee -a /etc/hosts

Fair warning: this is brittle. If the site's IP changes, you'll need to update it manually. I'd only use this as a temporary workaround.
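
If you do go this route, you can at least automate the refresh. A rough sketch, assuming a helper function that rewrites the entry in a hosts file (the function and the cron wiring below are my own, not a standard utility):

```bash
#!/bin/bash
# Refresh a hosts-file override when the upstream IP changes.
# update_hosts_entry removes any stale line for $domain in $hosts_file,
# then appends the fresh mapping.
update_hosts_entry() {
    local domain="$1" ip="$2" hosts_file="$3"
    grep -v -w "$domain" "$hosts_file" > "$hosts_file.tmp" || true
    echo "$ip $domain" >> "$hosts_file.tmp"
    mv "$hosts_file.tmp" "$hosts_file"
}

# Typical cron usage (requires network and root):
#   ip=$(dig @9.9.9.9 archive.today +short | head -n1)
#   [ -n "$ip" ] && update_hosts_entry archive.today "$ip" /etc/hosts
```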

Option C: Run a Local Resolver with Selective Forwarding

This is the approach I actually settled on. Running something like unbound locally gives you granular control over which upstream resolvers handle which domains.

conf
# /etc/unbound/unbound.conf (simplified)
server:
    interface: 127.0.0.1
    port: 53
    access-control: 127.0.0.0/8 allow

# Default: use your preferred filtered resolver
forward-zone:
    name: "."
    forward-addr: 9.9.9.2  # filtered by default

# Exception: domains you know are miscategorized
forward-zone:
    name: "archive.today"
    forward-addr: 9.9.9.9  # unfiltered for this domain

This way you keep the safety net for 99% of your browsing but carve out exceptions where you know the categorization is wrong.
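
If your exception list grows, hand-editing the config gets tedious. One approach is a small generator that emits forward-zone stanzas from a list of domains — this helper is my own sketch, and the unfiltered upstream (9.9.9.9) is just an example:

```bash
# Generate unbound forward-zone stanzas for a list of exception domains,
# so the allowlist can live outside the main config.
gen_exception_zones() {
    local upstream="$1"; shift
    local domain
    for domain in "$@"; do
        printf 'forward-zone:\n    name: "%s"\n    forward-addr: %s\n' \
            "$domain" "$upstream"
    done
}

# Usage: append to the unbound config, then reload:
#   gen_exception_zones 9.9.9.9 archive.today | sudo tee -a /etc/unbound/unbound.conf
#   sudo unbound-control reload
```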

Step 4: Fix It for CI/CD and Automated Systems

This is where things get sneaky. Your local machine might be fine now, but what about your build servers, Docker containers, and deployment pipelines?

Check what DNS your containers are using:

bash
# Inside a running container
cat /etc/resolv.conf

# Or from outside
docker run --rm alpine cat /etc/resolv.conf

If your CI environment uses a filtered DNS provider by default, you might need to override it in your pipeline config or Docker daemon settings. I've seen builds fail silently because a dependency's domain got miscategorized, and nobody figured it out for days because the error message just said "network unreachable."
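
For Docker specifically, you can override DNS daemon-wide via /etc/docker/daemon.json (the resolver addresses below are the unfiltered Quad9 pair used earlier — substitute your own):

```json
{
  "dns": ["9.9.9.9", "149.112.112.112"]
}
```

Restart the Docker daemon after editing (sudo systemctl restart docker). For a one-off container, the standard --dns flag works without touching daemon config: docker run --rm --dns 9.9.9.9 alpine nslookup archive.today.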

Prevention: Don't Get Bitten Again

A few practices I've adopted after dealing with this:

  • Monitor DNS resolution in your health checks. If your app depends on external domains, add a DNS resolution check alongside your HTTP health checks.
  • Pin critical dependencies. If you rely on an external archival or caching service, consider mirroring what you need locally rather than depending on live resolution.
  • Document your DNS configuration. Future you (or your teammates) will thank you when something randomly stops resolving at 2 AM.
  • Set up alerts for DNS resolution failures. A simple cron job that runs dig against your critical domains and alerts on failure can save hours of debugging.
bash
#!/bin/bash
# Simple DNS health check — run via cron every 15 minutes
DOMAINS="archive.today example.com your-critical-api.dev"

for domain in $DOMAINS; do
    if ! dig +short "$domain" @9.9.9.9 | grep -q '.'; then
        echo "DNS resolution failed for $domain" | \
            mail -s "DNS Alert: $domain" ops@yourteam.dev
    fi
done

The Bigger Picture

This whole situation highlights a real tension in internet infrastructure. DNS-level filtering is a blunt instrument — it protects a lot of people from genuine threats, but when it misfires, it effectively removes a site from a chunk of the internet with zero transparency.

As developers, we need to be aware that DNS is not a neutral, reliable layer anymore. It's a policy enforcement point, and those policies don't always align with our needs. Building resilience into your DNS configuration isn't paranoia — it's just good infrastructure hygiene.

If you've run into similar issues with other domains being silently blocked, I'd be curious to hear about it. This seems to be happening more frequently as security filtering becomes the default on more networks.
