When the bottleneck was never the code

The most common mistake I see teams make when they adopt AI coding tools is treating them as a faster typist. They plug in Copilot or Claude or Cursor, line output goes up, and everyone feels productive for about six weeks. Then the entropy arrives. The codebase accumulates patterns that don't quite align, the architecture documentation is already stale, and nobody can agree on what the thing is actually supposed to do. The AI didn't cause those problems. It amplified the ones that were already there.

Here's the uncomfortable truth underneath that experience: the bottleneck in software delivery has never been writing code. It has always been articulating intent.

That idea sits at the center of Spec-Driven Development, a practice that has been gaining real momentum as agentic coding tools become powerful enough to execute long autonomous runs without human intervention. A recent piece by Hari Krishnan on InfoQ, Spec-Driven Development: Adoption at Enterprise Scale, explores what it takes to move this from an individual developer practice into something that works across a real organization. It got me thinking hard about what this shift actually demands, not just technically, but organizationally.

The inversion worth taking seriously

Spec-Driven Development, at its simplest, inverts the traditional workflow. Instead of writing code that you hope matches the requirement, you write a specification that functions as the executable source of truth. The code gets generated from, or validated against, the spec. The spec gets version-controlled, reviewed, and continuously maintained just like production code. When something breaks, you don't just fix the implementation. You fix the spec that produced it.
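To make that inversion concrete, here's a minimal sketch of the "spec as executable source of truth" idea. This is a hypothetical illustration, not any particular SDD tool: the spec is structured data that lives in version control, and the validation checks are derived from the spec itself, so a spec change forces the implementation (or the spec) to be corrected rather than letting the two silently drift apart.

```python
# Hypothetical sketch: the spec is data, versioned alongside the code,
# and the implementation is validated against it rather than against
# assumptions baked into hand-written test cases.

SPEC = {
    # Behavior: orders at or above the threshold get the discount.
    "discount_threshold": 100.0,
    "discount_rate": 0.10,
}

def apply_discount(total: float) -> float:
    """Implementation -- generated from, or written to match, the spec."""
    if total >= SPEC["discount_threshold"]:
        return round(total * (1 - SPEC["discount_rate"]), 2)
    return total

def validate_against_spec() -> None:
    # Checks are derived from the spec's own values, so editing the
    # spec automatically changes what the implementation must satisfy.
    t = SPEC["discount_threshold"]
    assert apply_discount(t - 0.01) == t - 0.01  # below threshold: unchanged
    assert apply_discount(t) == round(t * (1 - SPEC["discount_rate"]), 2)

validate_against_spec()
print("implementation matches spec")
```

The point of the sketch is the dependency direction: the assertions read from the spec, not the other way around, which is what makes fixing the spec (rather than just the implementation) the natural response when behavior is wrong.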

If you've spent time debugging systems where the actual behavior diverged from what anyone thought the system was supposed to do, this probably sounds appealing. I've been in those rooms. I've spent three days tracking down a bug only to discover that the "bug" was the system doing exactly what the original ticket said, just not what the business actually wanted. The spec was wrong. Or rather, there was no real spec. There was a Jira ticket with a line or two and a link to a Figma mockup, and everyone filled in the gaps differently.

That gap between intent and implementation is not a new problem. It predates AI by decades. But something genuinely interesting happens when you introduce agents that can execute long autonomous runs from a single context window. The gap becomes load-bearing. An agent running 20, 50, 100 steps without human checkpoints doesn't just drift a little from the intent. It can produce something architecturally incoherent that compiles, passes tests, and looks fine until someone tries to extend it six months later. Spec quality now determines implementation quality in a direct, traceable way. That's a real shift in where engineering judgment needs to live.

This is why I think SDD deserves more attention than it's currently getting in most of the teams I talk to. It's not a methodology for its own sake. It's a structural response to a problem that agentic AI has made much harder to ignore.

Where senior engineers actually spend their time

The part I find most compelling about this approach is the cultural framing. Specification authoring as a first-class engineering surface. Not a precursor to the "real" work of coding. Not a product manager's artifact that gets handed over a wall. A living interface that the whole team owns and evolves together, with the same quality practices applied to it as to production code.

That framing resonates with me because of something I've observed consistently across engineering organizations: the most impactful senior engineers I've worked with spend a disproportionate amount of their time on clarifying work. Does everyone on this team understand what we're building and why? Is the shared mental model of the system accurate? Are we solving the right problem? They're the ones who slow down a conversation to ask "wait, what are we actually trying to accomplish here?" and quietly save the team two weeks of building the wrong thing. They do this instinctively, often invisibly, and it almost never shows up in a performance review.

SDD essentially says that work should produce an artifact. It shouldn't live only in Slack threads and the heads of the three people who were in the right meeting. It should be written down, reviewed, versioned, and used to drive everything that follows.

That maps to something I've always believed about what separates high-performing engineering organizations from struggling ones. The best ones have a shared, accurate, continuously updated understanding of what they're building. The struggling ones are constantly reconstructing that understanding from first principles, one meeting at a time, losing fidelity with each retelling. Everything else in delivery quality flows from that distinction.

Where this gets hard in practice

Writing good specifications is genuinely difficult. It requires the same kind of clarity and precision as writing good code, plus the ability to think at a higher level of abstraction and communicate across roles and functions. This is not a skill most teams have in abundance.

I've seen well-intentioned specification efforts die more often than I can count, not because people didn't want to write specs, but because nobody could agree on what a good spec looked like, the tooling made it feel burdensome relative to just opening a ticket, and the feedback loop between spec quality and delivery quality was too long to be motivating in a quarterly planning cycle.

The multi-repository problem is real and underappreciated. Most SDD tooling today co-locates specs with code in a single repository. That works fine for a small service with a single team. It breaks down fast in an enterprise environment where you have microservices spanning dozens of repositories, shared libraries, platform teams, and infrastructure-as-code living in entirely separate contexts. A spec that governs the behavior of an API consumed by five downstream services needs to live somewhere that all five teams can see, version, and contribute to. That is a governance and tooling problem that most organizations have not solved yet, and it's one of the honest gaps in the current SDD ecosystem.

The brownfield problem is equally thorny. The practical reality for most enterprise engineering teams is that the interesting, high-stakes code already exists. It was written before SDD was a concept, probably before half the current team joined. Retrofitting specs onto an existing system is not the same as building spec-first from scratch. It requires investment with no immediate delivery payoff, which makes it extremely difficult to prioritize against the feature work your stakeholders are asking about in every sprint review.

The honest qualification

Here is where I have to push back on my own enthusiasm.

SDD is, at its core, a bet on the idea that if you make intent explicit and enforce it continuously, you reduce the drift between what you meant and what you built. I think that bet is correct. But it doesn't eliminate the hard part. It relocates it.

The hard part was never writing code. It was never even writing specs. It has always been reaching genuine shared understanding among people with different mental models, different incentives, and different definitions of done. Writing a spec down doesn't resolve that conflict. It surfaces it, which is valuable, but it does not resolve it on its own.

I've been in enough design reviews to know that you can have a twelve-page specification document and still have two teams walk out of the room with completely different understandings of what was agreed. The document isn't the shared understanding. The conversation that produced the document is. And the depth and quality of that conversation depends entirely on the culture of the organization: whether people feel safe raising ambiguity early, whether senior engineers make time for the clarifying work, whether product and engineering actually collaborate on intent rather than throwing requirements over a wall and hoping for the best.

SDD gives you better tooling for capturing the output of that collaboration. It does not substitute for the collaboration itself. If your organization has a culture where requirements are vague by default, stakeholders are routinely misaligned, and the engineering team is expected to figure it out from a two-paragraph ticket and some wireframes, adding a specification layer on top of that process will not fix the underlying dysfunction. It might even obscure it by creating an artifact that looks comprehensive on the surface while the real ambiguities remain unresolved underneath.

The question worth sitting with

SDD is the right direction, particularly as agentic coding tools become the default mode of delivery for more teams. The case for treating specs as executable, version-controlled, first-class artifacts is going to become more obvious to more organizations over the next couple of years. The teams that are building this discipline now will have a structural advantage when it matters.

But the teams that will actually benefit are the ones who treat this as a cultural shift before they treat it as a tooling decision. The question worth sitting with is not "which SDD tool should we adopt?" It's "do we already have the shared clarity about what we're building that would make a specification meaningful?" If the answer is not clearly yes, start there. The tooling will be more valuable once the practice exists to support it.

The value of SDD at its best is that it makes the implicit explicit. It turns the mental models floating around in engineers' heads and product managers' decks into something testable and durable. That is genuinely powerful. But only if the organization is already doing the work of building shared understanding. If it isn't, SDD is just another artifact nobody reads.

If you want to go deeper on the enterprise adoption challenges and what the tooling landscape looks like right now, Hari Krishnan's piece on InfoQ, Spec-Driven Development: Adoption at Enterprise Scale, is a thorough and practical look at where this is headed.
