THE MOVIE ON THE PAGE · WEEK 29 OF 32 · BONUS BUILD
SCREENWRITING STUDIO

Product Spec
Your Screenwriting App MVP

You spent twenty-eight weeks wishing your tools were better. Now you build the tool you wished you had — starting with a one-page document that says exactly what it does and why.

The Movie on the Page Phase 4 · Bonus Build · Week 29 of 32
Commitment
8–12 hours
Craft Focus
Translating writer frustrations into builder features
Cinema Lens
N/A — this phase is about building, not writing
Page Craft
N/A — no screenplay formatting this phase
Exercise Output
1-page PRD + prioritized feature backlog
Budget Dial
N/A — build phase

The screenplay is finished. You've spent twenty-eight weeks inside the screenwriting process — drafting, revising, formatting, restructuring, compressing, polishing. Along the way, you encountered the tools of the trade: screenwriting software that handles formatting, outlining tools that organize beats, AI prompts that simulate readers. Some of those tools worked well. Others didn't. Some workflows felt natural. Others felt like wrestling a piece of software into doing something it was never designed to do. This phase takes all of that — the friction, the workarounds, the moments where you thought "why doesn't this tool just do X?" — and turns it into a product. You're not switching from writer to engineer. You're becoming a writer who builds. The screenplay you finished is the primary deliverable of this curriculum. The tool you build in the next four weeks is a bonus — a bridge from the craft of writing to the craft of making things that help writers write.

The best product specs don't come from imagining what users want. They come from remembering what you needed and didn't have.

Craft Lecture

From frustration to feature. The most useful software tools are born from specific frustrations experienced by the people who will use them. You are that person. For twenty-eight weeks, you've been a screenwriter using tools — and you've accumulated a precise, experience-based understanding of where those tools fail. That understanding is your product spec's raw material. Not hypothetical user needs. Not market research. Not "what would be cool." What did you actually need, when did you need it, and what did you have to do instead because no tool provided it?

The exercise this week is deliberate archaeology: you're going back through your curriculum experience and mining it for friction. Not the creative friction of writing (that's productive and desirable) — the tool friction that slowed you down, broke your focus, or forced you into workarounds. Every moment of tool friction is a candidate feature for your app. The trick is separating the friction that a tool can solve from the friction that's inherent to the creative process. "I couldn't figure out what my midpoint should be" is a creative problem — no software fixes it. "I wrote three midpoint variants in three separate documents and couldn't compare them side by side without switching windows" is a tool problem — software can fix that.

The frustration audit. Walk through each phase of the curriculum and list the tool-level frustrations you encountered. Here's a framework for the audit, organized by workflow stage:

Outlining (Weeks 11, 15). Where did your outlining tool fail you? Could you see the beat-by-beat structure and the 8-sequence architecture simultaneously? Could you tag beats with metadata (promise-map item, escalation level, character-arc landmark) and filter by those tags? Could you test the causality chain without manually checking each junction? Could you rearrange beats and see the downstream effects immediately?

Drafting (Weeks 16–21). Where did the writing software create friction during page generation? Could you toggle between your outline and your draft without losing context? Could you see your scene engine notes (goal, friction, turn) alongside the scene you were writing? Could you track your daily page count and see your progress against the target? Did the formatting ever fight you — margins, character names, scene headings requiring manual correction?

Revision (Weeks 23–26). Where did the revision process exceed your tools' capabilities? Could you see the scene list with purpose tags, page counts, and test scores without building a separate spreadsheet? Could you mark scenes for cut, merge, or protect and then execute those changes without manually copying and pasting across the document? Could you run the four tests (purpose, redundancy, pace, engine) against the scene list automatically? Could you track revisions across multiple passes without losing earlier versions?

AI integration (Weeks 5–28). Where did the AI workflow create friction? Did you have to re-paste context (theme sentence, character dossier, escalation ladder) into every prompt? Could the AI access your draft directly, or did you have to copy sections manually? Could you store and compare Reader A and Reader B responses? Could you maintain the Disagreement Log within the same tool as the draft?

What a PRD is. A PRD — Product Requirements Document — is a one-page blueprint for a software product. It's the product equivalent of a concept document: it defines what the product does, who it's for, what problem it solves, and what features constitute the minimum viable version. A PRD is not a technical specification (it doesn't say how the features are built) and it's not a business plan (it doesn't say how the product makes money). It's a decision document: it defines the boundaries of what you're building so that every subsequent decision (what to code, what to skip, what to defer to v2) can be evaluated against the spec.

The one-page PRD format. Your PRD should fit on a single page and contain five sections:

1. Problem statement (2–3 sentences). What problem does this tool solve? State it from the user's perspective: "Screenwriters working through a structured curriculum need a tool that [specific capability the writer lacked during Weeks 1–28]." The problem statement should be specific enough that someone who hasn't taken this curriculum can understand the gap.

2. Target user (1–2 sentences). Who is this tool for? Not "all screenwriters" — that's too broad to design for. Your target user is someone like you: a writer working through a structured drafting and revision process, probably using AI as a diagnostic tool, who needs a workspace that connects the outline to the draft to the revision passes.

3. Core features (3–5 bullets). The features that constitute the MVP — the minimum set of capabilities that make the tool usable. Each feature should be one sentence describing what the user can do, not how the software implements it. "The user can tag each scene with a purpose label and filter the scene list by tag" is a feature. "The app uses a React component with a dropdown menu" is implementation — save that for Week 30.

4. Out of scope (2–3 bullets). The features you're explicitly NOT building in the MVP. This section is as important as the features list because it prevents scope creep — the tendency to add "one more thing" during the build sprint until the project becomes unfinishable. Common out-of-scope items for a screenwriting tool MVP: full screenplay formatting engine (use an existing library or export to standard tools), collaboration features (multi-user editing), and mobile support (build for desktop first).

5. Success criteria (2–3 sentences). How do you know the tool works? Define specific, testable outcomes: "A writer can create a scene list, tag each scene with a purpose, and filter the list to see only scenes tagged 'complication' — in under thirty seconds." Success criteria keep the build honest: if the tool can't do what the criteria describe, it's not done.
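A testable success criterion can often be expressed directly as an automated check. Here is a minimal sketch in Python of the tag-and-filter criterion above; the `Scene` record and `filter_by_tag` helper are hypothetical names invented for illustration, not part of any existing tool:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    heading: str
    purpose_tag: str
    page_count: float

def filter_by_tag(scenes, tag):
    """Return only the scenes whose purpose tag matches."""
    return [s for s in scenes if s.purpose_tag == tag]

scenes = [
    Scene("INT. KITCHEN - DAY", "setup", 1.5),
    Scene("EXT. PARKING LOT - NIGHT", "complication", 2.0),
    Scene("INT. OFFICE - DAY", "complication", 1.0),
]

# The success criterion, stated as a check the build must pass.
complications = filter_by_tag(scenes, "complication")
assert len(complications) == 2
assert all(s.purpose_tag == "complication" for s in complications)
```

The thirty-second part of the criterion is a human-interaction measure you verify by hand; the filtering behavior itself is the part a check like this keeps honest.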

Craft Principle: The best tool is the one that removes specific friction from a specific workflow — not the one that tries to do everything for everyone.
MICRO-EXAMPLE 1: FRUSTRATION AUDIT — MINING THE CURRICULUM

PHASE 1 FRUSTRATIONS (Weeks 5–14):
- Kept the concept doc, character dossier, antagonist plan, and world rules in four separate files. Couldn't see them side by side while working on the outline.
  FEATURE CANDIDATE: A reference panel that pins foundation documents alongside the active workspace.
- Wrote the promise map as a numbered list in a text file. Had no way to link each item to the outline beat where it would be delivered.
  FEATURE CANDIDATE: Promise-map items that can be dragged onto outline beats, showing which promises are placed and which are still floating.

PHASE 2 FRUSTRATIONS (Weeks 15–22):
- During drafting, constantly switched between the step outline and the screenplay file. Lost focus every time. Couldn't see the next 3 beats while writing.
  FEATURE CANDIDATE: A split-pane view — outline beats on the left, screenplay draft on the right — with the outline scrolling to match the current draft position.
- Had no way to track daily page count or see progress across the 7 drafting weeks.
  FEATURE CANDIDATE: A simple progress tracker showing pages written per session and cumulative total.

PHASE 3 FRUSTRATIONS (Weeks 23–28):
- Built the scene list with purpose tags in a spreadsheet, separate from the draft. Cut/merge decisions required manual copy-paste between the spreadsheet and the screenplay file.
  FEATURE CANDIDATE: A scene navigator built into the editor — each scene heading generates a card in a sidebar, with fields for purpose tag, page count, and test scores.

AI WORKSHOP FRUSTRATIONS:
- Re-pasted context (theme sentence, character dossier) into every prompt. 28 weeks of copying the same paragraphs.
  FEATURE CANDIDATE: A "context dock" that stores persistent reference text and auto-appends it to AI prompts.
MICRO-EXAMPLE 2: ONE-PAGE PRD — SAMPLE FORMAT

PRD: SCENE ENGINE — A Screenwriting Workspace
Version: MVP (v0.1)
Date: March 2026

1. PROBLEM STATEMENT
Screenwriters working through structured drafting and revision processes need a tool that connects their outline, draft, and revision diagnostics in a single workspace. Existing tools handle formatting well but don't support the structural workflow: scene-level tagging, causality tracking, pass-based revision, and AI-assisted diagnostics with persistent context.

2. TARGET USER
A screenwriter developing a feature screenplay through a multi-phase process (outline → draft → revision passes), using AI as a diagnostic reader, who wants structural tools integrated with the writing surface rather than spread across separate documents.

3. CORE FEATURES (MVP)
• Scene-aware text editor: screenplay text with auto-detected scene headings that generate a navigable scene list in a sidebar.
• Scene metadata panel: each scene card shows purpose tag, page count, and engine status (goal/friction/turn — checkboxes). Filterable by tag.
• Split-pane outline view: step-outline beats visible alongside the draft, scrolling in sync. Beats can be marked as drafted, in-progress, or pending.
• Reference dock: persistent storage for foundation documents (theme sentence, character dossier, etc.) accessible from any view without opening separate files.
• Export to PDF: standard screenplay formatting (Courier 12pt, industry margins, title page).

4. OUT OF SCOPE (v1)
• Full formatting engine (rely on Fountain markup or existing parser libraries for formatting).
• Collaboration / multi-user editing.
• Mobile or tablet interface.
• AI integration beyond the reference dock (the Two Readers panel is a v2 feature — see Week 31).

5. SUCCESS CRITERIA
• A writer can draft a scene and see it appear as a card in the scene navigator within 1 second.
• A writer can tag all scenes with purpose labels and filter the scene list to show only "complication" scenes in under 30 seconds.
• A writer can view their step outline alongside the draft in split-pane without switching windows.
• The editor exports a correctly formatted PDF that passes a professional formatting check.
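The first core feature in the sample PRD, auto-detected scene headings feeding a navigable scene list, is small enough to sketch. A minimal Python illustration, assuming Fountain-style headings (lines beginning with INT., EXT., INT./EXT., or I/E.); the `HEADING` pattern and `scene_list` function are hypothetical names for illustration, not part of any existing library:

```python
import re

# Fountain-style scene headings begin with INT., EXT., INT./EXT., or I/E.
HEADING = re.compile(r"^(INT\./EXT\.|INT\.|EXT\.|I/E\.)\s+.+", re.IGNORECASE)

def scene_list(screenplay_text):
    """Return (line_number, heading) pairs for every detected scene heading."""
    scenes = []
    for i, line in enumerate(screenplay_text.splitlines(), start=1):
        if HEADING.match(line.strip()):
            scenes.append((i, line.strip()))
    return scenes

draft = """\
INT. COFFEE SHOP - DAY

MAYA sits alone, laptop open.

EXT. STREET - NIGHT

Rain. MAYA runs for the bus.
"""

for line_no, heading in scene_list(draft):
    print(line_no, heading)
```

Note what the sketch does not do: it never asks the writer to label a heading manually. Detection drives the navigator, which is exactly the "auto-detection would work" point from the tool reviews.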

Core Reading

Tool Review — Week 29

Assignment: Review 2–3 existing screenwriting tools — not as a user, but as a product thinker.

Tools to evaluate (choose 2–3): Highland (Mac), WriterSolo (cross-platform), Fade In (cross-platform), or any screenwriting tool you used during the curriculum. If you used a plain-text Fountain workflow, review the Fountain specification itself as a product design. If you used a general-purpose tool (Google Docs, Notion, Obsidian) with screenplay plugins or templates, review the plugin/template as a product.

Review lens — evaluate each tool along these dimensions:

1. What does it get right? Identify the two or three features that work best — the capabilities that removed friction from your writing process. Why do they work? What design decisions make them effective?

2. What's missing? Identify the two or three capabilities you needed during the curriculum that the tool doesn't provide. Are these missing features that the tool's designers didn't think of, or deliberate out-of-scope decisions?

3. Where does it create friction? Identify moments where the tool actively interfered with your workflow — where the software's design forced you into a process that doesn't match how you think. A formatting tool that requires you to label every element manually when auto-detection would work. An outlining tool that can't connect to the draft.

4. What's its philosophy? Every tool embodies a theory about how writers work. What theory does this tool embody? Does it assume writers outline first and draft second? Does it assume formatting is the primary need? Does it assume the writer works alone? Name the philosophy and evaluate whether it matches your actual process.

5. What would you steal? If you could take one feature from this tool and put it in yours, what would it be? Why?

Journal Prompts:

1. Across all the tools you reviewed, what's the single feature that's most universally well-implemented? What makes it work across different tools — is it a design pattern that all screenwriting tools converge on?

2. What's the biggest gap in the existing tool landscape — the capability that no tool you reviewed provides? Is that gap in your PRD's feature list?

3. If you could describe your ideal screenwriting tool in one sentence — "a tool that ___" — what would the sentence be? Does your PRD's problem statement match that sentence?

4. Which reviewed tool is closest to what you want to build? What's the smallest number of changes that would make it adequate? If the answer is "two or three changes," consider whether building from scratch is the right approach or whether a plugin or extension would serve better.

5. What design mistake do the reviewed tools have in common? Is there a pattern of friction that the industry hasn't solved? Can your tool address it?

Writing Exercise

Your Project Progress

Deliverable: 1-page PRD + prioritized feature backlog.

Constraints: Produce two artifacts:

(a) One-page PRD. Follow the five-section format: problem statement, target user, core features (3–5 bullets), out of scope (2–3 bullets), and success criteria (2–3 testable outcomes). The PRD must fit on a single page — if it runs longer, compress. Every feature in the core list must trace back to a specific frustration from the curriculum (you should be able to say "I needed this during Week X when I was doing Y"). Features that don't trace to a real frustration are speculative — move them to out of scope or a v2 list.

(b) Prioritized feature backlog (1–2 pages). A ranked list of every feature you considered — the core features from the PRD plus all the features you deferred or cut. Rank them by impact (how much friction does the feature remove?) and effort (how complex is the feature to build?). High-impact, low-effort features go to the top. Low-impact, high-effort features go to the bottom. The top of the backlog defines what you'll build in Weeks 30–31. The bottom defines what you'd build with unlimited time — the v2 roadmap.
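One simple way to make the impact/effort ranking concrete is a numeric score: rate each feature 1–5 on both axes and sort by impact minus effort, descending. A minimal sketch in Python; the feature names and ratings below are illustrative, not prescribed:

```python
# Each backlog item: (feature, impact 1-5, effort 1-5). Values are illustrative.
backlog = [
    ("Scene navigator sidebar", 5, 2),
    ("Split-pane outline sync", 4, 3),
    ("Reference dock", 4, 1),
    ("PDF export", 3, 4),
    ("Real-time collaboration", 2, 5),
]

# Score = impact - effort: high-impact, low-effort features rise to the top.
ranked = sorted(backlog, key=lambda item: item[1] - item[2], reverse=True)

for feature, impact, effort in ranked:
    print(f"{impact - effort:+d}  {feature}")
```

Any scoring rule this simple is a conversation starter, not a verdict: two features can tie on score while one is clearly riskier. Use the numbers to draw the line between the Weeks 30–31 build list and the v2 roadmap, then sanity-check the line by hand.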

Quality bar: The PRD must be specific enough that someone else could read it and begin building without asking you clarifying questions. The problem statement must describe a real problem you experienced. The features must be stated as user capabilities ("the user can..."), not as technical implementations ("the app uses..."). The out-of-scope section must contain at least two features you genuinely wanted but chose to defer — if nothing is out of scope, the MVP is too ambitious. The success criteria must be testable — you should be able to sit in front of the tool in Week 32 and verify whether each criterion is met.

Estimated time: 6–10 hours (frustration audit: 2–3 hours; PRD writing: 2–3 hours; backlog prioritization: 1–2 hours; tool reviews: remaining time).

AI Workshop

Phase 4: AI as Collaborator

The AI Workshop shifts entirely this phase. No screenplay prompts. No Two Readers. No Producer Pass. From here forward, AI is a collaborator on the product — helping you think through product decisions, evaluate technical approaches, and review code. This week's prompt focuses on PRD review: using AI to pressure-test your product spec before you start building.

Product Spec Review
Prompt
I'm building a simple screenwriting tool as a solo project over 3 weeks (design + build + ship). Here's my PRD:

[Paste your one-page PRD]

And here's my prioritized feature backlog:

[Paste the backlog]

I have experience writing screenplays but limited coding experience — I'll be using vibe-coding tools (AI-assisted code generation) to build this. Evaluate along these lines:

1. SCOPE CHECK: Is the MVP achievable in 3 weeks by a solo builder using AI-assisted coding? Be honest — if any core feature would take a professional developer more than 2–3 days, it's probably too complex for this timeline. Flag anything that's more ambitious than it looks.

2. MISSING FEATURE: Based on the problem statement and target user, is there a high-impact, low-effort feature I haven't included that would significantly improve the tool? Sometimes the most useful feature is the one the builder forgets because they've been doing it manually for so long it feels normal.

3. SIMPLIFICATION: For each core feature, suggest the simplest possible version that still delivers value. The scene navigator doesn't need to be a full outlining tool — maybe it's just a list of scene headings extracted from the text with click-to-jump. What's the minimum version of each feature?

4. TECH STACK SUGGESTION: Given the features described, suggest a simple tech stack — what language, framework, or approach would let me build this fastest? Assume I'm using AI-assisted coding and prioritize simplicity over scalability.

5. RISK ASSESSMENT: What's the single most likely reason this project fails to ship in 3 weeks? Is it scope creep (too many features), technical complexity (one feature that's harder than expected), or motivation (the builder loses steam)? How do I mitigate that risk?

Help me build something I can actually finish.
Frustration-to-Feature Translation
Prompt
I just completed a 28-week screenwriting curriculum. Here are my top frustrations with the tools I used during the process:

[Paste your frustration audit — the specific tool-friction moments from each phase]

For each frustration, help me:

1. Classify it: Is this a TOOL problem (software can fix it) or a CREATIVE problem (no software fixes it — the writer has to do the thinking)?

2. Name the feature: For each tool problem, state the feature that would solve it in one sentence, as a user capability.

3. Estimate effort: For each feature, rate the implementation complexity as LOW (a text field, a checkbox, a filter), MEDIUM (a panel, a sync mechanism, a data structure), or HIGH (a parser, an editor engine, a real-time collaboration system).

4. Suggest alternatives: For each HIGH-effort feature, suggest a simpler alternative that delivers 80% of the value at 20% of the effort.

I'm trying to build the smallest tool that removes the most friction. Help me find the highest-leverage features.

Student Self-Check

Before You Move On
Does every feature in your PRD trace back to a specific frustration you experienced during the curriculum — a real workflow friction, not a hypothetical one?
Is the MVP achievable by one person in three weeks using AI-assisted coding? If you're uncertain, have you flagged the risky features and identified simpler alternatives?
Does the out-of-scope section contain at least two features you genuinely wanted but chose to defer? If nothing is out of scope, the project is too ambitious.
Are your success criteria testable — specific enough that you could sit in front of the tool and verify whether each criterion passes or fails?
Have you reviewed at least two existing screenwriting tools and identified what they get right, what's missing, and what design philosophy they embody?

Editorial Tip

The Builder's Eye

The hardest transition for a writer becoming a builder is learning that the product doesn't need to be complete to be useful. A screenplay with missing scenes doesn't work — the arc is broken. But a tool with missing features can still be useful — if the features it does have solve a real problem well. Your MVP doesn't need a formatting engine, a collaboration system, and an AI panel. It needs one thing that works better than anything you used during the curriculum. If the scene navigator is that one thing — if it removes enough friction that you'd use it during your next screenplay — the tool is a success, even if every other feature is missing. Build the one thing. Make it work. Everything else is v2.

Journal Prompt

Reflection

You've spent twenty-eight weeks as a consumer of tools — using software that other people built to serve workflows those builders imagined. Now you're on the other side. Write about the shift in perspective. When you review existing screenwriting tools, do you see them differently than you did in Week 1? Can you name the design decisions behind the features — the "why" behind the "what"? And more importantly: what does your frustration audit reveal about the gap between how tool builders imagine writers work and how you actually work? That gap is your product's reason for existing. The narrower and more specific the gap, the more useful the tool.

Week Summary

What You've Built

By the end of this week you should have:

• Conducted a frustration audit across all four phases of the curriculum, identifying tool-level friction
• Reviewed 2–3 existing screenwriting tools as a product thinker
• Written a one-page PRD (problem statement, target user, core features, out of scope, success criteria)
• Produced a prioritized feature backlog ranked by impact and effort
• Run the PRD through the AI product spec review prompt
• Classified frustrations as tool problems vs. creative problems
• Identified the simplest viable version of each core feature

Looking Ahead

Next Week

Week 30: Build Sprint (Editor + Scene Navigator). The spec is written. The backlog is prioritized. Next week, you build. The focus is the minimum viable editor — a text editing surface with scene-aware navigation, the core feature that makes the tool a screenwriting workspace rather than a text file. You'll use AI-assisted coding (vibe-coding) to generate the initial codebase, then iterate toward the success criteria defined in the PRD. No screenplay work. No revision. Just building. The craft is different — code instead of scenes, components instead of sequences, debugging instead of revising — but the discipline is the same: start with a clear objective, build toward it, and know when to stop.

Your Portfolio So Far
Weeks 1–28: Complete screenplay + packaging ✓ PHASES 0–3 COMPLETE
Week 29: Product Spec — PRD + feature backlog (THIS WEEK)
Week 30: Build Sprint — editor + scene navigator
Week 31: AI Features — Questions-Only Script Doctor
✦ ✦ ✦