What good product specs look like now
The PRD format had everything except the thing that mattered. A good spec is a precisely framed human problem, with the context to act on it immediately.
I have written some of the worst product specs in existence.
Dozens of pages. Structured sections. Stakeholder context. Market background. Detailed user stories. Edge cases mapped three levels deep. Success metrics defined for every possible interpretation of done. Nothing left to ambiguity, because ambiguity was the enemy.
I was very proud of them. I also suspect almost nobody read them in full.
That's not a criticism of the people I worked with. It's a criticism of the format. Long, comprehensive product requirement documents were never really written to be read end to end — they were written to exist. To demonstrate that thinking had happened. To give everyone something to point to when questions arose. To cover, in advance, every possible discussion that might otherwise slow down the build.
What they actually produced, most of the time, was more discussion. The meetings stopped being about the prototype and the experience. They became about the document. Whether the requirements were right. Whether the scope was correct. Whether the success metrics captured what the business actually needed.
The spec became the product, before the product existed.
What a spec is actually for
A product spec has one job: get the right thing built.
Not align stakeholders. Not demonstrate thoroughness. Not create a paper trail. Get the right thing built, by the right people, with enough context that they don't have to stop and ask questions that slow everything down.
Everything in a spec that doesn't serve that job is overhead. And for most of the history of product management, we created an enormous amount of overhead.
The PRD format made sense in its context. Engineering teams were large. Build cycles were long. The cost of misalignment was high — if a team of ten engineers spent three weeks building the wrong thing, that was a significant loss. The document was insurance against that outcome. It forced clarity upfront, even if it generated its own kind of friction in the process.
That context changed. The audience for specs changed with it.
How AI coding tools changed the audience for product specs
Here's the shift that most spec frameworks haven't caught up with.
For most of the history of software development, specs were written for humans — engineers, designers, QA teams, stakeholders who needed to understand what was being built and why. The document was a communication tool. Its job was to align people.
Now, increasingly, specs are being consumed by AI coding tools. Cursor. Claude Code. Tools that turn a well-formed brief into working code in hours. The audience isn't a team of engineers who need to be aligned. It's a system that needs to understand the problem clearly enough to produce a solution.
Those are different readers. They need different documents.
A human reader benefits from context, narrative, stakeholder framing. They need to understand the why in a way that lets them make good judgment calls when the spec doesn't cover an edge case. They need enough background to be invested in the outcome.
An AI coding tool needs one thing: a clearly defined human problem.
Not the market context. Not the stakeholder alignment. Not the three-level-deep edge case mapping. A clear description of what a real person is experiencing, why it matters, what done looks like — and enough codebase context to understand where the solution lives.
Give an AI tool a thirty-page PRD and it will find the two paragraphs that matter and work from those. Give it a precisely written five-section brief and it will build something that solves the actual problem.
What the five sections actually are
I've landed on five sections for every spec Circuit generates. Not because five is a magic number, but because these five things are the minimum viable context for getting something built well — whether the builder is a human or an AI. Everything else is optional. These five things aren't.
1. What to build
A precise description of the feature or change. Not the solution in exhaustive detail — the outcome. What should exist that doesn't exist now. Concrete, specific, not abstract.
2. Why it matters
The customer problem being solved. Written from the customer's perspective, not the business's. This section is where most specs fail — they describe the business rationale rather than the human experience. The AI coding tool doesn't care about the business rationale. It needs to understand what a real person is struggling with.
3. Customer voice
The actual words customers used when they described the problem. Not a paraphrase. Not a synthesis. The direct quote from the feedback, preserved exactly. This is the section most specs omit entirely — and it's the one that most changes how the builder approaches the work. When an engineer or an AI tool reads what a customer actually said, they understand who they're building for in a way that no amount of structured requirements can replicate.
4. Files to touch
For codebases connected via GitHub, the specific files and components relevant to this feature. File paths. What gets created, what gets modified, what needs to be aware of the change. This section is what makes a spec immediately actionable rather than just comprehensible. The builder doesn't start with a blank page — they start with a map.
5. Done criteria
How you know it's working. Not abstract success metrics — concrete, testable conditions from the customer's perspective. The feature is done when a customer can do X without Y happening. Written as observable outcomes, not business KPIs.
Five sections. Fit on one screen. Everything the builder needs, nothing they don't.
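To make the shape concrete, here is a hypothetical spec in this format. The product, the quote, and the file paths are all invented for illustration — the point is the proportions: short everywhere, specific in the customer voice and files sections.

```markdown
What to build
A "Resend invite" action on the team members page, so an admin can
re-send a pending invitation without deleting and recreating it.

Why it matters
Admins who invite a teammate and hear nothing back have no way to nudge
them. Today they delete the invite and create a new one, which resets
the invite link and confuses the invitee.

Customer voice
"I invited my designer last week, she lost the email, and the only way
I could find to send it again was to remove her and start over."

Files to touch
- src/components/TeamMembers.tsx — add the resend action to the
  pending-invite row
- src/api/invites.ts — client call for the new resend endpoint
- server/routes/invites.js — handler that re-issues the existing token

Done criteria
- An admin can click "Resend" on a pending invite and the invitee
  receives a fresh email
- The original invite link keeps working; no duplicate invite rows
  are created
```

Everything a builder — human or AI — needs to start is on one screen, and the done criteria double as acceptance tests.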
A good spec is not a PRD. It's a precisely framed human problem, with the context to act on it immediately.
Why the PRD format failed at the most important thing
This is the part I find hardest to say, having written so many of them.
The PRD format optimised for comprehensiveness. It covered everything because anything not covered was a potential point of failure. Market context: covered. Competitive landscape: covered. Technical constraints: covered. Rollout plan: covered. Localisation considerations: covered.
What it frequently didn't cover well — despite all the pages — was the human problem at the centre of the feature. The actual customer experience that was broken or missing. The words real people used to describe what they needed.
Those things got compressed into user story format — as a [persona] I want to [action] so that [outcome] — which is better than nothing, but loses most of the texture of what customers actually said and how they said it.
The spec was comprehensive about everything except the thing that mattered most: grounding the build in real human signal.
The modern spec inverts this. Thin on everything else. Precise and rich on the human problem.
What I actually do now
When feedback surfaces a priority in Circuit, the spec that gets generated looks nothing like the documents I used to write.
It's short. One screen, maybe two for a complex feature. The five sections, populated with specifics. The customer voice section has real quotes from the feedback that surfaced this priority — the exact words people used, not my interpretation of them.
The files to touch section comes from GitHub — Circuit reads the codebase and identifies the actual files relevant to this feature. Not generic guidance, not "you'll probably need to look at the frontend components." Specific file paths in the actual codebase, with context about what they contain and what needs to change.
I've set Claude up with guidance on edge cases and testing approach. The spec describes the problem and the outcome. Claude works out the how — the implementation detail, the technical approach, the edge cases worth handling. The division of labour is clean: I know what the customer needs, the AI knows how to build it.
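That standing guidance lives outside the spec, in the project itself. A hypothetical sketch of what it might contain (the wording and rules here are invented for illustration, not a copy of my actual setup):

```markdown
# Standing guidance for building from specs (illustrative example)

When working from a spec:
- Treat each line of "Done criteria" as an acceptance test; write
  automated tests that mirror the criteria before marking work done.
- Handle the obvious edge cases (empty states, permissions, network
  failure) without asking, and list what you handled in the summary.
- If "Files to touch" conflicts with what's actually in the codebase,
  trust the codebase and flag the mismatch rather than forcing the spec.
```

Because this guidance is standing, every spec stays short: the per-feature document carries only the problem, and the rules of engagement carry over between builds.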
What changed isn't that specs got shorter. It's that the spec now does its actual job — getting the right thing built — rather than its secondary job of aligning everyone around a shared understanding of what's being built.
Those used to be the same document. Now they're different things.
The question specs should answer
Every spec, regardless of format, should answer one question clearly:
What is a real person experiencing — and what needs to change?
If the spec answers that question clearly — with enough codebase context for the builder to know where to work — everything else is optional. The market background, the competitive analysis, the stakeholder alignment, the rollout considerations — these might be useful documents. They're not specs.
The test I use: could the builder start building immediately after reading this, without asking me a single clarifying question? If yes, the spec is doing its job. If no, something essential is missing — and the answer is almost always in the customer problem section, not the technical requirements section.
The documents I used to write would have failed that test. Not because the engineers weren't skilled enough to work from them — but because the most important information was buried in a format designed for comprehensiveness rather than clarity.
A good spec is not a PRD. It's a precisely framed human problem, with the context to act on it immediately.
That's it. Everything else is overhead.
Try it in Circuit: Working with specs →
Catherine Williams-Treloar is the founder of Circuit — the AI product system that turns customer feedback into scored priorities and build-ready specs for Cursor and Claude Code. She has 20+ years leading product, insights, strategy and GTM at scale-ups and enterprises. Circuit was founded in Sydney in November 2025 and launched in February 2026.