You spent twenty-eight weeks wishing your tools were better. Now you build the tool you wished you had — starting with a one-page document that says exactly what it does and why.
Phase 4 · Bonus Build · Week 29 of 32

The screenplay is finished. You've spent twenty-eight weeks inside the screenwriting process — drafting, revising, formatting, restructuring, compressing, polishing. Along the way, you encountered the tools of the trade: screenwriting software that handles formatting, outlining tools that organize beats, AI prompts that simulate readers. Some of those tools worked well. Others didn't. Some workflows felt natural. Others felt like wrestling a piece of software into doing something it was never designed to do. This phase takes all of that — the friction, the workarounds, the moments where you thought "why doesn't this tool just do X?" — and turns it into a product. You're not switching from writer to engineer. You're becoming a writer who builds. The screenplay you finished is the primary deliverable of this curriculum. The tool you build in the next four weeks is a bonus — a bridge from the craft of writing to the craft of making things that help writers write.
From frustration to feature. The most useful software tools are born from specific frustrations experienced by the people who will use them. You are that person. For twenty-eight weeks, you've been a screenwriter using tools — and you've accumulated a precise, experience-based understanding of where those tools fail. That understanding is your product spec's raw material. Not hypothetical user needs. Not market research. Not "what would be cool." What did you actually need, when did you need it, and what did you have to do instead because no tool provided it?
The exercise this week is deliberate archaeology: you're going back through your curriculum experience and mining it for friction. Not the creative friction of writing (that's productive and desirable) — the tool friction that slowed you down, broke your focus, or forced you into workarounds. Every moment of tool friction is a candidate feature for your app. The trick is separating the friction that a tool can solve from the friction that's inherent to the creative process. "I couldn't figure out what my midpoint should be" is a creative problem — no software fixes it. "I wrote three midpoint variants in three separate documents and couldn't compare them side by side without switching windows" is a tool problem — software can fix that.
The frustration audit. Walk through each phase of the curriculum and list the tool-level frustrations you encountered. Here's a framework for the audit, organized by workflow stage:
Outlining (Weeks 11, 15). Where did your outlining tool fail you? Could you see the beat-by-beat structure and the 8-sequence architecture simultaneously? Could you tag beats with metadata (promise-map item, escalation level, character-arc landmark) and filter by those tags? Could you test the causality chain without manually checking each junction? Could you rearrange beats and see the downstream effects immediately?
Drafting (Weeks 16–21). Where did the writing software create friction during page generation? Could you toggle between your outline and your draft without losing context? Could you see your scene engine notes (goal, friction, turn) alongside the scene you were writing? Could you track your daily page count and see your progress against the target? Did the formatting ever fight you — margins, character names, scene headings requiring manual correction?
Revision (Weeks 23–26). Where did the revision process exceed your tools' capabilities? Could you see the scene list with purpose tags, page counts, and test scores without building a separate spreadsheet? Could you mark scenes for cut, merge, or protect and then execute those changes without manually copying and pasting across the document? Could you run the four tests (purpose, redundancy, pace, engine) against the scene list automatically? Could you track revisions across multiple passes without losing earlier versions?
AI integration (Weeks 5–28). Where did the AI workflow create friction? Did you have to re-paste context (theme sentence, character dossier, escalation ladder) into every prompt? Could the AI access your draft directly, or did you have to copy sections manually? Could you store and compare Reader A and Reader B responses? Could you maintain the Disagreement Log within the same tool as the draft?
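The re-pasting friction above hints at a candidate feature: a stored project context that every AI prompt inherits automatically. A minimal sketch, with all names and strings hypothetical placeholders, might look like this:

```python
# Hypothetical sketch: a reusable project-context block so the theme sentence,
# character dossier, and escalation ladder are prepended to every AI prompt
# instead of being re-pasted by hand each time.
PROJECT_CONTEXT = {
    "theme": "Ambition corrodes loyalty.",          # placeholder theme sentence
    "dossier": "PROTAGONIST: MARA, 34, ...",        # placeholder dossier excerpt
    "ladder": "1. stakes personal -> 2. stakes public -> ...",
}

def build_prompt(task: str) -> str:
    """Assemble a full prompt: labeled context blocks, then the task."""
    header = "\n".join(f"[{k.upper()}]\n{v}" for k, v in PROJECT_CONTEXT.items())
    return f"{header}\n\n[TASK]\n{task}"

print(build_prompt("Act as Reader A and react to Scene 12."))
```

This is exactly the kind of "I had to do X manually" observation that belongs in the frustration audit: the workaround (copy-paste) names the feature (persistent context).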
What a PRD is. A PRD — Product Requirements Document — is a one-page blueprint for a software product. It's the product equivalent of a concept document: it defines what the product does, who it's for, what problem it solves, and what features constitute the minimum viable version. A PRD is not a technical specification (it doesn't say how the features are built) and it's not a business plan (it doesn't say how the product makes money). It's a decision document: it defines the boundaries of what you're building so that every subsequent decision (what to code, what to skip, what to defer to v2) can be evaluated against the spec.
The one-page PRD format. Your PRD should fit on a single page and contain five sections:
1. Problem statement (2–3 sentences). What problem does this tool solve? State it from the user's perspective: "Screenwriters working through a structured curriculum need a tool that [specific capability the writer lacked during Weeks 1–28]." The problem statement should be specific enough that someone who hasn't taken this curriculum can understand the gap.
2. Target user (1–2 sentences). Who is this tool for? Not "all screenwriters" — that's too broad to design for. Your target user is someone like you: a writer working through a structured drafting and revision process, probably using AI as a diagnostic tool, who needs a workspace that connects the outline to the draft to the revision passes.
3. Core features (3–5 bullets). The features that constitute the MVP — the minimum set of capabilities that make the tool usable. Each feature should be one sentence describing what the user can do, not how the software implements it. "The user can tag each scene with a purpose label and filter the scene list by tag" is a feature. "The app uses a React component with a dropdown menu" is implementation — save that for Week 30.
4. Out of scope (2–3 bullets). The features you're explicitly NOT building in the MVP. This section is as important as the features list because it prevents scope creep — the tendency to add "one more thing" during the build sprint until the project becomes unfinishable. Common out-of-scope items for a screenwriting tool MVP: full screenplay formatting engine (use an existing library or export to standard tools), collaboration features (multi-user editing), and mobile support (build for desktop first).
5. Success criteria (2–3 sentences). How do you know the tool works? Define specific, testable outcomes: "A writer can create a scene list, tag each scene with a purpose, and filter the list to see only scenes tagged 'complication' — in under thirty seconds." Success criteria keep the build honest: if the tool can't do what the criteria describe, it's not done.
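A success criterion like "tag each scene with a purpose and filter by tag" implies a small, concrete data model, which is worth sketching before the build sprint so the criterion stays testable. A minimal sketch (scene names, tags, and page counts are invented examples, not part of the curriculum):

```python
from dataclasses import dataclass

@dataclass
class Scene:
    number: int
    slug: str        # scene heading, e.g. "INT. DINER - NIGHT"
    purpose: str     # purpose tag, e.g. "setup", "complication"
    pages: float     # estimated page count

def filter_by_purpose(scenes: list[Scene], tag: str) -> list[Scene]:
    """Return only the scenes carrying the given purpose tag."""
    return [s for s in scenes if s.purpose == tag]

scenes = [
    Scene(1, "INT. DINER - NIGHT", "setup", 2.0),
    Scene(2, "EXT. PARKING LOT - NIGHT", "complication", 1.5),
    Scene(3, "INT. CAR - NIGHT", "complication", 1.0),
]
print([s.number for s in filter_by_purpose(scenes, "complication")])  # [2, 3]
```

Note that the sketch stays at the feature level: it says what the user can retrieve (scenes by tag), not how the eventual app renders it, which is the same discipline the PRD's core-features section demands.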
Assignment: Review 2–3 existing screenwriting tools — not as a user, but as a product thinker.
Tools to evaluate (choose 2–3): Highland (Mac), WriterSolo (cross-platform), Fade In (cross-platform), or any screenwriting tool you used during the curriculum. If you used a plain-text Fountain workflow, review the Fountain specification itself as a product design. If you used a general-purpose tool (Google Docs, Notion, Obsidian) with screenplay plugins or templates, review the plugin/template as a product.
Review lens — evaluate each tool along these dimensions:
1. What does it get right? Identify the two or three features that work best — the capabilities that removed friction from your writing process. Why do they work? What design decisions make them effective?

2. What's missing? Identify the two or three capabilities you needed during the curriculum that the tool doesn't provide. Are these missing features that the tool's designers didn't think of, or deliberate out-of-scope decisions?

3. Where does it create friction? Identify moments where the tool actively interfered with your workflow — where the software's design forced you into a process that doesn't match how you think. A formatting tool that requires you to label every element manually when auto-detection would work. An outlining tool that can't connect to the draft.

4. What's its philosophy? Every tool embodies a theory about how writers work. What theory does this tool embody? Does it assume writers outline first and draft second? Does it assume formatting is the primary need? Does it assume the writer works alone? Name the philosophy and evaluate whether it matches your actual process.

5. What would you steal? If you could take one feature from this tool and put it in yours, what would it be? Why?
Journal Prompts:
1. Across all the tools you reviewed, what's the single feature that's most universally well-implemented? What makes it work across different tools — is it a design pattern that all screenwriting tools converge on?

2. What's the biggest gap in the existing tool landscape — the capability that no tool you reviewed provides? Is that gap in your PRD's feature list?

3. If you could describe your ideal screenwriting tool in one sentence — "a tool that ___" — what would the sentence be? Does your PRD's problem statement match that sentence?

4. Which reviewed tool is closest to what you want to build? What's the smallest number of changes that would make it adequate? If the answer is "two or three changes," consider whether building from scratch is the right approach or whether a plugin or extension would serve better.

5. What design mistake do the reviewed tools have in common? Is there a pattern of friction that the industry hasn't solved? Can your tool address it?
Deliverable: 1-page PRD + prioritized feature backlog.
Constraints: Produce two artifacts:
(a) One-page PRD. Follow the five-section format: problem statement, target user, core features (3–5 bullets), out of scope (2–3 bullets), and success criteria (2–3 testable outcomes). The PRD must fit on a single page — if it runs longer, compress. Every feature in the core list must trace back to a specific frustration from the curriculum (you should be able to say "I needed this during Week X when I was doing Y"). Features that don't trace to a real frustration are speculative — move them to out of scope or a v2 list.
(b) Prioritized feature backlog (1–2 pages). A ranked list of every feature you considered — the core features from the PRD plus all the features you deferred or cut. Rank them by impact (how much friction does the feature remove?) and effort (how complex is the feature to build?). High-impact, low-effort features go to the top. Low-impact, high-effort features go to the bottom. The top of the backlog defines what you'll build in Weeks 30–31. The bottom defines what you'd build with unlimited time — the v2 roadmap.
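The impact/effort ranking above can be made mechanical: score each feature on both axes and sort by the ratio, so high-impact, low-effort features surface automatically. A minimal sketch (the feature names and 1-to-5 scores below are illustrative, not prescribed values):

```python
# Hypothetical backlog scoring: impact = friction removed (1-5),
# effort = build complexity (1-5). Rank by impact-per-unit-effort.
backlog = [
    {"feature": "scene list with purpose tags", "impact": 5, "effort": 2},
    {"feature": "side-by-side outline/draft view", "impact": 4, "effort": 3},
    {"feature": "full formatting engine", "impact": 3, "effort": 5},
    {"feature": "multi-user editing", "impact": 2, "effort": 5},
]

ranked = sorted(backlog, key=lambda f: f["impact"] / f["effort"], reverse=True)
for f in ranked:
    print(f"{f['impact'] / f['effort']:.2f}  {f['feature']}")
```

The top of the sorted list is your Week 30–31 build plan; the bottom is the v2 roadmap. A spreadsheet works just as well — the point is that the ranking rule is explicit, not vibes.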
Quality bar: The PRD must be specific enough that someone else could read it and begin building without asking you clarifying questions. The problem statement must describe a real problem you experienced. The features must be stated as user capabilities ("the user can..."), not as technical implementations ("the app uses..."). The out-of-scope section must contain at least two features you genuinely wanted but chose to defer — if nothing is out of scope, the MVP is too ambitious. The success criteria must be testable — you should be able to sit in front of the tool in Week 32 and verify whether each criterion is met.
Estimated time: 6–10 hours (frustration audit: 2–3 hours; PRD writing: 2–3 hours; backlog prioritization: 1–2 hours; tool reviews: remaining time).
The AI Workshop shifts entirely this phase. No screenplay prompts. No Two Readers. No Producer Pass. From here forward, AI is a collaborator on the product — helping you think through product decisions, evaluate technical approaches, and review code. This week's prompt focuses on PRD review: using AI to pressure-test your product spec before you start building.
The hardest transition for a writer becoming a builder is learning that the product doesn't need to be complete to be useful. A screenplay with missing scenes doesn't work — the arc is broken. But a tool with missing features can still be useful — if the features it does have solve a real problem well. Your MVP doesn't need a formatting engine, a collaboration system, and an AI panel. It needs one thing that works better than anything you used during the curriculum. If the scene navigator is that one thing — if it removes enough friction that you'd use it during your next screenplay — the tool is a success, even if every other feature is missing. Build the one thing. Make it work. Everything else is v2.
You've spent twenty-eight weeks as a consumer of tools — using software that other people built to serve workflows those builders imagined. Now you're on the other side. Write about the shift in perspective. When you review existing screenwriting tools, do you see them differently than you did in Week 1? Can you name the design decisions behind the features — the "why" behind the "what"? And more importantly: what does your frustration audit reveal about the gap between how tool builders imagine writers work and how you actually work? That gap is your product's reason for existing. The narrower and more specific the gap, the more useful the tool.
By the end of this week you should have:
• Conducted a frustration audit across all four phases of the curriculum, identifying tool-level friction
• Reviewed 2–3 existing screenwriting tools as a product thinker
• Written a one-page PRD (problem statement, target user, core features, out of scope, success criteria)
• Produced a prioritized feature backlog ranked by impact and effort
• Run the PRD through the AI product spec review prompt
• Classified frustrations as tool problems vs. creative problems
• Identified the simplest viable version of each core feature
Week 30: Build Sprint (Editor + Scene Navigator). The spec is written. The backlog is prioritized. Next week, you build. The focus is the minimum viable editor — a text editing surface with scene-aware navigation, the core feature that makes the tool a screenwriting workspace rather than a text file. You'll use AI-assisted coding (vibe-coding) to generate the initial codebase, then iterate toward the success criteria defined in the PRD. No screenplay work. No revision. Just building. The craft is different — code instead of scenes, components instead of sequences, debugging instead of revising — but the discipline is the same: start with a clear objective, build toward it, and know when to stop.