AI has read more writing than any human alive. That makes it powerful. It also gives it very strong opinions about what writing is supposed to sound like — and those opinions skew toward the average.
February 26, 2026

There is a particular kind of workshop feedback that every writer has received at least once. It arrives in a reasonable tone, from a reasonable person, and it is completely technically correct. The sentence it flags is grammatically unusual. The rhythm it questions is deliberately broken. The word it suggests replacing is, objectively speaking, a perfectly good synonym for the one you chose. And yet, if you followed every note, the piece would be worse. Not different — worse. Smoother, safer, less itself.
AI gives this feedback constantly. Not because it is wrong, exactly. Because it is right in the way that averages are right. It has been trained on an ocean of human writing and has learned, with frightening precision, what most good writing looks and sounds like. The problem is that your writing — if it has a voice worth keeping — is almost certainly not most writing.
This is what we mean by the flattening problem. It is not that AI will ruin your voice in one dramatic intervention. It is that it will nudge you, gently and persistently, toward the mean. And if you're not paying attention to which notes to take and which to refuse, you can revise your way into something technically polished and utterly anonymous.
The feedback that flattens doesn't feel like flattening when it arrives. It feels like an improvement. That's what makes it worth understanding.
Consider a writer whose voice is built on long, winding, clause-stacked sentences — sentences that accumulate like weather, that make the reader lean in before they know why. An AI trained on writing advice, style guides, and workshop feedback will flag these sentences every time. "Too long." "Consider breaking this up." "Readers may lose the thread." The note is not wrong. Most readers do prefer shorter sentences. Most writing does benefit from tighter syntax. But most writing is not this writer.
The same is true of unconventional punctuation, deliberate repetition, abrupt tonal shifts, fragments used for rhythm, the refusal to resolve what the reader expects to be resolved. All of these are potential fingerprints — the things that make a voice a voice — and all of them will trip the AI's pattern-matching in the direction of "this could be cleaner."
Remember what we covered in Issues 1 and 2: the AI processes tokens, and it predicts what comes next based on statistical patterns in everything it's been trained on. This means that when it reads your deliberate fragment, it is not asking whether the fragment is intentional. It is asking whether, in the vast landscape of writing it has seen, a fragment in this position is more likely to be intentional or a mistake. And the honest answer, statistically, is: usually a mistake.
The model isn't wrong to flag it. It just cannot, on its own, distinguish between an error and a choice. That distinction requires something the AI doesn't have access to: your intention, your awareness of the pattern, and your knowledge of whether this is the third time you've used this device or the first.
Here is what the AI sees: a sentence fragment where a complete sentence is statistically expected. A word repeated where variety is the norm. A comma splice where most style guides say no. A tonal shift where consistency is more common.
And here is what you know: that the fragment is doing the emotional work of a full stop. That the repetition is a drumbeat. That the splice is the breathless pace of panic. That the tonal shift is the point. The AI is missing the intention. Only you have it.
This is not an argument against using AI for revision — it is an argument for using it with your eyes open. The flattening problem is real but entirely manageable once you know it's there, and the practice that keeps your voice intact is simpler than it sounds.
Underneath all of this is a single distinction worth carrying into every revision session: the difference between a mistake and a choice.
A mistake is a place where the writing failed to do what you intended. A choice is a place where the writing is doing exactly what you intended, and the question is only whether it's succeeding. These require completely different responses. Mistakes should be fixed. Choices should be interrogated — pushed on, defended, occasionally abandoned — but always from a position of awareness.
AI feedback cannot make this distinction for you. It doesn't know what you intended. It only knows what landed. Which means that when you bring your work to an AI for revision, the most important thing you can do is know, before you start, which category each element of your prose falls into. Armed with that knowledge, the feedback becomes genuinely useful — a skilled outside eye on the places where your choices aren't yet succeeding, and a prompt to look harder at the places where they are.
The flattening problem is real. But it only has power over writers who have forgotten the difference between what they chose and what happened by accident. Know your own work that well, and no amount of helpful suggestions can sand you down.
END OF ISSUE NO. 4 — AI & THE CRAFT