Author: John Crider
-
Harness Engineering Part 5 – The Big Picture
The Honest Truth About AI Coding Agents: What Nobody Wants to Admit
You’ve probably heard the pitch by now. AI coding agents are going to 10x your productivity. Three engineers can build a million-line product in five months. Stripe merges over a thousand agent-generated PRs per week. Individual developers ship thousands of commits per month…
-
Harness Engineering Part 4 – Scaling Up
Your AI Coding Setup Works for You. Here’s How to Make It Work for Fifty People.
You built a system that makes AI coding agents reliable. You’ve got the rules file, the architecture linters, the verification scripts, the whole harness that turns a chaotic AI session into something that actually ships production code. Your teammates…
-
Harness Engineering Part 3 – Real World Patterns
The Messy Reality of AI-Assisted Development: Patterns That Actually Work
You’ve got the tools. You’ve got the setup. Your AI coding agent can generate a CRUD endpoint faster than you can type the file name. And yet — somehow — your projects still go sideways. The agent builds the wrong thing. Or it builds the…
-
Harness Engineering Part 2 – Core Concepts
The Operating System Your AI Agent Is Missing
You’ve hit the wall. Maybe not today, maybe not on this project, but you’ve felt it. The AI coding agent that was brilliant for the first 200 lines starts producing garbage by line 2,000. It forgets the architecture you explained twenty minutes ago. It puts database queries…
-
Harness Engineering Part 1 – The Problem
Your AI Coding Agent Isn’t Broken. Your Setup Is.
You remember the moment it clicked. You opened Claude Code, described an app you’d been thinking about for weeks, and watched it materialize. Routes, database models, a clean React frontend — all from plain English. You shipped a working prototype in a weekend. You told your…
-
What I Learned and Where I Am Going
Part 10 of 10: What I’m Learning Now and What Comes Next
I’ve been working with Claude Code for a while now. I’ve built a working Pomodoro timer. I’ve developed practices that are validated by research and proven by retrospectives. I’ve learned what works, what doesn’t, and what I’m still figuring out. But this isn’t…
-
What I Wish I’d Known on Day One (A Letter to My Earlier Self)
Part 9 of 10: The Lessons That Took Time to Learn
My Pomodoro timer is live. Real users are using it (not many, but real ones). I’ve gone from “five minutes to a web app!” excitement to “nothing works and Claude keeps lying” despair to something that feels sustainable. If I could go back to…
-
When I Shipped to Real Users (And Learned Everything I’d Been Missing)
Part 8 of 10: When Theory Meets Reality
After building my Pomodoro timer for a while, I had: What I didn’t have: a single user who wasn’t me. It was time to deploy. Time to find out if everything I’d learned actually mattered. The Pre-Deployment Panic As deployment approached, I did something I hadn’t done…
-
The Difference Between Working and Understanding Why
Part 7 of 10: When Best Practices Met Real Practices
A book dropped that had a big impact on me: “Agentic AI Designs” from Google, fresh research on AI development patterns, team structures, and architectural approaches. I’d been building my Pomodoro timer for a while. I had practices that worked. Session retrospectives that showed improvement.…
-
The Two-Hour Session That Changed How I Think About AI Development
Part 6 of 10: Finding the Rhythm That Actually Works
Several weeks into building my Pomodoro timer, I’d accumulated a lot of practices: Session retrospectives. Code review. Verification checklists. Quick reference guides. Gap tracking. But I was about to discover that the most important practice was also the simplest: Stop when the timer goes off.…