Authon Blog
debugging · 6 min read

How to Debug the Slow Frustrations Killing Your Dev Workflow

How to identify and fix the small workflow frustrations that silently eat hours of your dev time every week — with scripts to measure and fix them.

Alan West
Authon Team

Ever had one of those weeks where everything in your development setup feels like it's fighting you? Builds taking forever, cryptic error messages that send you on a 45-minute wild-goose chase, flaky tests that pass locally but explode in CI. None of these are showstopper bugs. They're small, persistent frustrations — and they compound.

I spent the last month auditing my own workflow after realizing I was losing roughly 90 minutes a day to accumulated friction. Not on hard problems. On waiting and re-running things. Here's how I tracked it down and fixed the worst offenders.

The Root Cause: Death by a Thousand Paper Cuts

The tricky thing about workflow friction is that no single issue feels worth fixing. A 12-second incremental rebuild? Fine. A test suite that occasionally needs a re-run? Annoying but whatever. A dev server that takes 8 seconds to reflect changes? Livable.

But multiply those across a full day and you're bleeding focus. Every interruption has a cognitive switching cost that research puts at anywhere from 10 to 25 minutes to fully recover from. That "quick" re-run isn't costing you 30 seconds — it's costing you the mental thread you were holding.

The fix starts with measuring.

Step 1: Actually Measure Where Your Time Goes

Before I started optimizing anything, I wrote a dead-simple shell wrapper to log how long my most common commands took:

bash
# Add to your .zshrc or .bashrc
# Note: %N needs GNU date — on macOS, install coreutils and use gdate
devlog() {
  local start=$(date +%s%N)
  "$@"  # Run the actual command
  local rc=$?
  local end=$(date +%s%N)
  local duration=$(( (end - start) / 1000000 ))  # Convert to ms
  echo "$(date +%Y-%m-%d\ %H:%M:%S),$duration,\"$*\"" >> ~/.devlog.csv
  return $rc  # Preserve the wrapped command's exit code
}

# Usage: wrap your common commands
alias build='devlog npm run build'
alias t='devlog npm test'  # don't alias 'test' itself — it shadows the shell builtin
alias dev='devlog npm run dev'

After a week of this, I imported ~/.devlog.csv into a spreadsheet and sorted by total time spent. The results were not what I expected. My test suite wasn't the problem — it was my dev server's hot reload that was silently taking 4-6 seconds per change, and I was making about 200 changes a day.
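A spreadsheet works, but the same ranking falls out of a few lines of JavaScript. This is a sketch that assumes the `timestamp,ms,"command"` format the wrapper above writes; `summarize` is just an illustrative name:

```javascript
// Rank wrapped commands by total time spent, from devlog CSV text
function summarize(csvText) {
  const totals = new Map();
  for (const line of csvText.trim().split('\n')) {
    // Each line looks like: 2024-01-01 10:00:00,4000,"npm run dev"
    const match = line.match(/^[^,]+,(\d+),"(.*)"$/);
    if (!match) continue;
    const [, ms, cmd] = match;
    totals.set(cmd, (totals.get(cmd) ?? 0) + Number(ms));
  }
  // Sort descending by total milliseconds
  return [...totals.entries()].sort((a, b) => b[1] - a[1]);
}

// Feed it the log, worst offender first:
//   const csv = require('fs').readFileSync(`${process.env.HOME}/.devlog.csv`, 'utf8');
//   for (const [cmd, ms] of summarize(csv)) console.log(`${(ms / 1000).toFixed(1)}s  ${cmd}`);
```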

Step 2: Fix the Hot Reload Problem

If your dev server's file watching feels sluggish, the culprit is almost always one of three things:

  • Watching too many files — your node_modules or build output directories are being watched
  • Polling instead of native filesystem events — common in Docker or VM setups
  • Expensive transforms running on every change — TypeScript type checking on every save
Here's how I diagnosed it. First, check what's actually being watched:

javascript
// debug-watch.mjs — run this in your project root (needs Node 18.17+ for recursive readdir)
import { readdir } from 'fs/promises';
import { join } from 'path';

async function countWatchedPaths(dir, depth = 0) {
  if (depth > 5) return 0; // Don't go too deep
  let count = 0;
  try {
    const entries = await readdir(dir, { withFileTypes: true });
    for (const entry of entries) {
      count++;
      if (entry.isDirectory()) {
        const fullPath = join(dir, entry.name);
        // These should NEVER be watched
        const skip = ['node_modules', '.git', 'dist', 'build', '.next'];
        if (skip.includes(entry.name)) {
          const sub = await readdir(fullPath, { recursive: true }).catch(() => []);
          console.log(`  ⚠ ${fullPath}: ${sub.length} files (should be excluded)`);
          continue; // don't descend — the recursive readdir above already counted these
        }
        count += await countWatchedPaths(fullPath, depth + 1);
      }
    }
  } catch (e) { /* permission denied, etc. */ }
  return count;
}

const total = await countWatchedPaths('.');
console.log(`\nPaths outside excluded directories: ${total}`);

In my case, a misconfigured bundler was watching 47,000 files when it should have been watching about 300. The fix was a two-line ignore pattern in my config.
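The exact syntax depends on your bundler. In a Vite project, for instance, the equivalent would look roughly like this (Vite forwards `server.watch` options to chokidar; adapt the globs to whatever tool you use):

```javascript
// vite.config.js — tell the watcher to skip directories that never need a reload
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    watch: {
      // chokidar ignore globs: dependencies and build output don't need watching
      ignored: ['**/node_modules/**', '**/dist/**'],
    },
  },
});
```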

Step 3: Kill the Flaky Tests

Flaky tests are workflow poison because they train you to distrust your test suite. Once you start habitually re-running failures, you've lost the entire point of having tests.

I wrote a small script to identify my worst offenders:

bash
#!/bin/bash
# flaky-finder.sh — run your test suite N times and track failures
RUNS=10
FAILURE_LOG=$(mktemp)

for i in $(seq 1 $RUNS); do
  echo "Run $i/$RUNS..."
  # Capture only the failing test names
  npm test 2>&1 | grep -E '(FAIL|✗|✘)' >> "$FAILURE_LOG"
done

printf '\n--- Flakiness report ---\n'
sort "$FAILURE_LOG" | uniq -c | sort -rn | head -20
rm "$FAILURE_LOG"

The usual suspects for flaky tests:

  • Shared mutable state between tests (global variables, database rows not cleaned up)
  • Timing-dependent assertions — anything with setTimeout or race conditions
  • Order-dependent tests — test B passes only when test A runs first
  • Hardcoded dates or timestamps that break at midnight or on weekends
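The hardcoded-date problem usually disappears once the clock is injectable. A minimal sketch — `makeInvoiceId` is a made-up example function, not from any library:

```javascript
// Accept a clock function instead of calling Date.now() directly
function makeInvoiceId(now = Date.now) {
  const d = new Date(now());
  return `INV-${d.getUTCFullYear()}-${String(d.getUTCMonth() + 1).padStart(2, '0')}`;
}

// Production uses the real clock; tests pin a fixed instant,
// so nothing breaks at midnight or on weekends:
makeInvoiceId(() => Date.UTC(2024, 0, 15)); // "INV-2024-01"
```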

For the timing issues specifically, replace arbitrary waits with proper polling:

javascript
// Bad: brittle, slow, and flaky
await new Promise(r => setTimeout(r, 2000));
expect(element).toBeVisible();

// Good: polls until true or timeout
async function waitFor(fn, timeout = 5000, interval = 50) {
  const start = Date.now();
  while (Date.now() - start < timeout) {
    try {
      const result = await fn();
      if (result) return result;
    } catch (e) { /* keep trying */ }
    await new Promise(r => setTimeout(r, interval));
  }
  throw new Error(`waitFor timed out after ${timeout}ms`);
}

await waitFor(() => element.isVisible());

Step 4: Make Error Messages Actually Useful

This one's less about fixing your workflow and more about fixing it for your whole team. If you maintain any internal tools, libraries, or scripts — invest 30 minutes in better error messages.

The difference between a 2-minute fix and a 30-minute investigation is usually one good error message. Instead of:

text
Error: ENOENT: no such file or directory

Give context:

text
Error: Config file not found at ./config/app.json
  Looked in: /Users/you/project/config/app.json
  Hint: Run 'cp config/app.example.json config/app.json' to create it
  Docs: https://your-project/docs/configuration

This takes almost no extra code but saves everyone who hits it.
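Here's one way to package that pattern as a helper. This is a sketch: the function name, hint text, and docs URL are placeholders for your project's own.

```javascript
// Sketch of a contextual error builder — names and hint text are illustrative
function configNotFoundError(relPath) {
  const lines = [
    `Config file not found at ${relPath}`,
    `  Looked in: ${process.cwd()}/${relPath.replace(/^\.\//, '')}`,
    `  Hint: Run 'cp config/app.example.json config/app.json' to create it`,
    `  Docs: https://your-project/docs/configuration`,
  ];
  return new Error(lines.join('\n'));
}

// Usage: throw configNotFoundError('./config/app.json') where the bare ENOENT used to surface
```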

Prevention: Build the Habit

The biggest thing I took away from this whole exercise is that workflow friction is invisible until you measure it. I now keep that devlog wrapper running permanently and check the CSV every couple of weeks. When something starts creeping up, I fix it before it becomes background noise I've learned to tolerate.

A few rules I follow now:

  • If I wait for something twice, I automate it. File watching, test reruns, environment setup — if it's manual and repetitive, it gets scripted.
  • If an error message sends me to Google, I fix the error message. Even if it's in a dependency, I'll submit a PR or fork it.
  • If a test flakes, it gets quarantined immediately. A flaky test is worse than no test because it teaches the team to ignore failures.

The frustrating thing about these problems is that none of them are interesting. Nobody wants to spend their afternoon debugging why file watching is slow or why a test fails on Tuesdays. But fixing the boring stuff is what separates a codebase that's a joy to work in from one that slowly drives everyone to resentment.

Your tools should get out of your way. If they aren't, measure it, fix it, and move on to the interesting problems.
