Notes on building systems that have to keep working.
Working notes, not polished essays. I show how things were built, and why.
One bug, eight fixes, and the cost of cognitive debt
AI can accelerate implementation and repairs, but it does not replace the deliberate thinking needed to catch requirements that were never stress-tested.
How you talk to AI is itself a design decision
Spec-driven prompting and iterative prompting are both design choices. The useful question is where the quality judgment lives: in a contract you can define upfront, or in feedback you only get once something exists.
AI is a universal translator, but it still amplifies your intent
AI compresses the distance between intent and working software. That makes small tools worth building, but it also makes unclear intent show up faster and louder.
Before adopting AI, agree on what you mean by AI
"Adopting AI" is too broad to be useful as a directive. Teams need sharper language before they can make sensible decisions about tools, risks, workflows, and ownership.
The thinking you can't get back (and the hidden cost of AI)
Every level of AI assistance removes work from humans. The hidden question is which parts of that work were also building judgment, memory, and understanding.
Five levels of AI in software teams (and why the hardest part isn't the AI)
A practical ladder for how software teams change as AI takes on more of the work, from removing boilerplate to surfacing what humans should pay attention to next.
Why AI features need proving before they ship (and why demos aren't enough)
A demo proves that an AI feature can look convincing once. Shipping requires stronger evidence: whether it works repeatedly, under real load, for the people expected to rely on it.
Finding the unwritten context in code and documentation
Your AI agent can read your entire codebase, but can it read what your team never wrote down? Here's what that blind spot actually costs you.