unwrite.it

An experiment against AI slop

We built a prototype using structured feedback and read-only editing to improve AI content. The result: workflow matters more than the model.

Question

AI content is easy to generate but often generic, predictable, and lacking depth. We wondered: is it the AI, or how we use it?

Method

We built unwrite.it as an experiment—a prototype combining structured input, inline feedback comments, and read-only editing to test whether a better workflow produces better content.

Discovery

A working prototype that demonstrates the problem isn't the AI but the process: structured feedback and iterative refinement beat endless regeneration.

The AI slop problem

You scroll through LinkedIn and see article after article with the same predictable opening lines. “In today’s rapidly evolving landscape…” “Let’s delve into…” It’s AI slop—content so generic it hurts to read.

The irony? Everyone uses AI for content, but nobody’s truly satisfied. We thought: what if the problem isn’t the AI, but the process? So we built unwrite.it—an experiment showing that workflow matters more than the model.

The predictable patterns

Open LinkedIn. How many articles do you see that are clearly AI-generated?

“In today’s rapidly evolving landscape…” “It’s important to note that…” “Let’s delve into…” “Navigate the complexities…” “Revolutionize your approach…”

This content is flooding the internet: technically correct, but with no personality, no depth, and no unique insights.

Why does this happen?

AI models are powerful. GPT-4, Claude, Gemini—they can produce impressive text. The problem? How we use them. Most people open ChatGPT, type a vague prompt, and hope for the best.

Our experiment

What if we fundamentally rethink the process? Not improving the AI, but changing how people work with it.

The result is unwrite.it: a workflow engine combining structured input, inline feedback, and read-only editing. AI writes, we “unwrite”—we break down the process and rebuild it with human expertise baked in.

Why does AI content feel so predictable? And more importantly: can we fix it?

The hypothesis: the process is the problem

We thought about how people use ChatGPT. Open ChatGPT, type “Write an article about X”, copy the output. Not satisfied? Click “regenerate” and hope for better. Repeat until “good enough”.

This process has three fundamental problems:

1

Too little context

The AI gets no tone of voice guidance, no brand voice, no audience information. Just a title or vague prompt. How can the AI create something unique without context?

2

No expert validation

The people who truly know the subject aren't involved. Feedback happens ad-hoc via email or comments: "This is wrong" without explaining why. Subject matter experts see the draft and have to rewrite everything because the AI misses crucial nuances.

3

Regenerate ≠ iteration

Every time you click "regenerate", the AI throws everything away and starts over. Good parts that were correct get lost. You don't learn from previous versions. It's content roulette: keep spinning until you get lucky.

What if we redesign the process? What if we keep the AI model the same, but improve the workflow?

The experiment: building unwrite.it

We decided to build a prototype. No product, no business plan—just an experiment to test: does a structured workflow make AI content better?

The three-step workflow

1

Settings phase (v0)

Configure your standards once: tone of voice (describe your writing style or paste examples), content type (blog, case study, technical article), model selection, desired word count. And crucially: forbidden phrases—a list of words and phrases the AI must avoid. AI models have patterns they like to use. By explicitly saying "don't use these phrases", you force the model to be more original. No "delve into", no "it's important to note", no "revolutionize".
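
The settings phase can be sketched as a small configuration object that is compiled into a system prompt once and then reused for every article. This is a minimal TypeScript sketch under stated assumptions; the names (`ContentSettings`, `buildSystemPrompt`) are illustrative, not unwrite.it's actual code.

```typescript
// Hypothetical sketch of the settings phase. The shape of the config and the
// prompt wording are assumptions, not unwrite.it's real implementation.

interface ContentSettings {
  toneOfVoice: string;          // free-text description or pasted examples
  contentType: "blog" | "case study" | "technical article";
  model: string;                // e.g. "gpt-4o"
  targetWordCount: number;
  forbiddenPhrases: string[];   // phrases the model must avoid
}

// Turn the one-time settings into a reusable system prompt.
function buildSystemPrompt(s: ContentSettings): string {
  return [
    `You are writing a ${s.contentType}.`,
    `Tone of voice: ${s.toneOfVoice}`,
    `Target length: about ${s.targetWordCount} words.`,
    `Never use these phrases: ${s.forbiddenPhrases.join("; ")}.`,
  ].join("\n");
}

const settings: ContentSettings = {
  toneOfVoice: "direct, conversational, no filler",
  contentType: "blog",
  model: "gpt-4o",
  targetWordCount: 900,
  forbiddenPhrases: ["delve into", "it's important to note", "revolutionize"],
};

const systemPrompt = buildSystemPrompt(settings);
```

Because the forbidden phrases live in the system prompt, every generation and every feedback round is bound by the same constraints without restating them.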

2

Input phase (v0.1)

Dump your raw material: article title, keyword (optional), and especially your input notes. Meeting transcripts, interview notes, bullet points, raw ideas—anything goes. You don't have to clean it up. The AI will organize it. This is when you give the AI the context it needs.
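
The input phase boils down to wrapping your raw material in a prompt that asks the model to do the organizing. A hedged sketch; `ArticleInput` and `buildUserPrompt` are hypothetical names, not the real code.

```typescript
// Illustrative sketch of the input phase: raw, uncleaned notes go in as-is
// and the prompt asks the model to organize them.

interface ArticleInput {
  title: string;
  keyword?: string;   // optional SEO keyword
  rawNotes: string;   // transcripts, bullets, raw ideas - no cleanup needed
}

function buildUserPrompt(input: ArticleInput): string {
  const lines = [
    `Write an article titled: "${input.title}"`,
    input.keyword ? `Primary keyword: ${input.keyword}` : "",
    "Organize and draw on the raw material below; do not quote it verbatim:",
    "---",
    input.rawNotes,
    "---",
  ];
  return lines.filter(Boolean).join("\n");
}
```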

3

Feedback phase (v1+)

The editor is read-only—you can't type. You select text and add a comment: "This is wrong because X" or "Missing nuance about Y". Then click "Apply Feedback". The AI reads ALL comments, understands the context, makes targeted adjustments, and creates v2. Repeat until satisfied. Full version history.
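
The feedback step can be modeled as collecting every comment into a single revision request, which is what lets the AI make targeted edits instead of starting over. A minimal sketch, assuming a comment carries the highlighted span plus the reviewer's explanation; the names are hypothetical.

```typescript
// Hypothetical sketch of "Apply Feedback": every inline comment is bundled
// into one revision prompt so the model edits in place rather than
// regenerating from scratch.

interface FeedbackComment {
  selectedText: string;  // the span the reviewer highlighted
  comment: string;       // what is wrong and why
}

function buildRevisionPrompt(draft: string, comments: FeedbackComment[]): string {
  const feedback = comments
    .map((c, i) => `${i + 1}. On "${c.selectedText}": ${c.comment}`)
    .join("\n");
  return [
    "Revise the draft below. Change ONLY what the feedback asks for;",
    "keep everything else intact.",
    "Feedback:",
    feedback,
    "Draft:",
    draft,
  ].join("\n");
}
```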

[Screenshot: the unwrite.it settings phase, where you configure content standards, tone of voice, and forbidden AI phrases]

[Screenshot: the unwrite.it input phase, where you dump raw material such as meeting transcripts without cleaning it up]

The controversial choice: read-only editing

This was intentional. After generation, you can’t edit the text. Only via comments. Why? Because it enforces structured feedback.

If you can just type, you fix things ad-hoc. “Oh, this word fits better”—type, done. The AI learns nothing. Next article: same problems. It’s a quick fix without learning effect.

If you have to write comments, you have to articulate WHAT is wrong and WHY. “This sentence is too formal for our tone” is more valuable than just changing the word. The AI can recognize that pattern and apply it to the entire article. And to all future articles. That feedback compounds.

Read-only feels frustrating at first. “Why can’t I just change this word?!” But it forces you to think: why do you want to change it? That context is more valuable than the change itself.

What we discovered while building

1

Workflow matters more than the model

GPT-4 with a good workflow performs better than GPT-4o with vague prompts. Through structured input, forbidden phrases, and iterative feedback, the output consistently improves. Not because the model changes, but because the process is better. The model gets better instructions, more context, and structured feedback.

2

Constraints force better feedback

Read-only editing irritates people. "Why can't I just fix this?!" But it works. When you can't quickly type something, you have to think: what exactly is wrong? Why is it wrong? What should be different? That feedback is ten times more valuable than "fix this word". It forces you to identify the underlying problem.

3

Streaming feels faster

Text coming in real-time during generation feels much faster than waiting for a block of text. Even if the total takes just as long. Psychological effect, but important for user experience. You see the AI "thinking" and that builds trust that something is happening.
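
The OpenAI chat completions API streams its output as Server-Sent Events whose data payloads carry incremental "delta" tokens. The parser below is a simplified sketch (it ignores JSON fragments split across chunk boundaries) of how a client can pull those deltas out and append them to the editor as they arrive.

```typescript
// Extract the text deltas from a chunk of raw SSE lines, in the format the
// OpenAI streaming API uses: `data: {"choices":[{"delta":{"content":"..."}}]}`
// terminated by `data: [DONE]`. Simplified sketch, not production parsing.

function extractDeltas(sseChunk: string): string[] {
  const deltas: string[] = [];
  for (const line of sseChunk.split("\n")) {
    if (!line.startsWith("data: ") || line === "data: [DONE]") continue;
    try {
      const payload = JSON.parse(line.slice("data: ".length));
      const text = payload.choices?.[0]?.delta?.content;
      if (typeof text === "string") deltas.push(text);
    } catch {
      // ignore partial JSON at chunk boundaries in this sketch
    }
  }
  return deltas;
}
```

Appending each delta to the DOM as it arrives is what produces the "AI is thinking" effect, even though the total generation time is unchanged.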

4

Version history is gold

Being able to see what v1, v2, v3 were provides context. You see what feedback did. "Oh, after that comment about tone, the entire next section was written differently." That helps with the next iteration. You learn which feedback is most effective.
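
A version history like this needs very little machinery: each "Apply Feedback" round stores the new draft together with the comments that produced it, so you can trace what each piece of feedback changed. A minimal sketch with hypothetical names, not unwrite.it's actual data model.

```typescript
// Each version records the draft text plus the feedback that led to it,
// so "what did that comment about tone actually change?" is answerable.

interface Version {
  number: number;     // v1, v2, ...
  text: string;
  feedback: string[]; // comments that produced this version (empty for v1)
}

class VersionHistory {
  private versions: Version[] = [];

  commit(text: string, feedback: string[] = []): Version {
    const v: Version = { number: this.versions.length + 1, text, feedback };
    this.versions.push(v);
    return v;
  }

  get(n: number): Version | undefined {
    return this.versions[n - 1];
  }

  get latest(): Version | undefined {
    return this.versions[this.versions.length - 1];
  }
}
```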

The limits: it’s a prototype

Let’s be clear: this is not a product. It’s a working experiment.

Structured feedback workflow

Comments as feedback units, targeted adjustments, and iterative refinement instead of endless regeneration.

Read-only editing principle

Enforces structured feedback and prevents ad-hoc changes without learning effect.

Version control

Complete history of all drafts and feedback for transparency and analysis.

Streaming generation

Real-time text rendering for better user experience and sense of speed.

Forbidden phrase enforcement

Prevent generic AI language by explicitly blocking words and phrases.
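
Prompts alone don't guarantee the model avoids its pet phrases, so enforcement can also include a post-generation scan that flags anything that slipped through. This is a hypothetical sketch of such a guard, not necessarily how unwrite.it implements it.

```typescript
// Case-insensitive scan of generated text for phrases on the blocklist.
// Any hits can be surfaced to the user or fed back as an automatic comment.

function findForbiddenPhrases(text: string, forbidden: string[]): string[] {
  const lower = text.toLowerCase();
  return forbidden.filter((p) => lower.includes(p.toLowerCase()));
}
```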

What it doesn’t have: collaboration features, team workflows, automatic expert assignment, auto-research for SEO, CMS integrations, or cross-device synchronization. Enough to validate: does this approach work? Answer after a few weeks of testing: yes.

We deliberately chose a fully client-side app. Everything runs in the browser with your own OpenAI API key. Advantages: quickly built, privacy by default (your data stays local), no server scaling issues. Disadvantages: no cross-device sync, no real-time collaboration. For a prototype we wanted to validate quickly: worth it.
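
In a fully client-side setup, persistence typically means the browser's localStorage: settings and drafts stay on the machine, and only the OpenAI API calls leave it. In this sketch the storage interface is injected so the logic runs outside a browser too; the function names and key scheme are assumptions, not the real code.

```typescript
// Hypothetical client-side persistence: projects are serialized to a
// Storage-like key-value store (localStorage in the browser).

interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function saveProject(store: KeyValueStore, id: string, data: unknown): void {
  store.setItem(`unwrite:${id}`, JSON.stringify(data));
}

function loadProject<T>(store: KeyValueStore, id: string): T | null {
  const raw = store.getItem(`unwrite:${id}`);
  return raw === null ? null : (JSON.parse(raw) as T);
}
```

In the browser you would pass `window.localStorage` as the store, which is also why there is no cross-device sync: the data never leaves that one browser profile.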

The bigger picture: the fight against AI slop

We believe AI slop doesn’t come from bad AI. It comes from bad process design. The standard workflow—prompt, generate, publish or regenerate—is too simple. It lacks structured input, context, expert validation, and iterative refinement. If we want to make AI content better, we need to fix the process. Not wait for GPT-10.

The future isn’t “AI replaces people”. It’s: AI does the heavy lifting, people provide expertise and judgment. For content, that means: AI writes and structures, people validate and refine through a structured feedback loop, iterate until it’s right. Nobody gets replaced. Everyone becomes more effective. Domain experts don’t need to be writers—they only review accuracy. Writers don’t need to be experts—they orchestrate the process.

Why “unwrite”?

We deconstruct the entire writing process and rebuild it with human knowledge as the foundation. It’s not AI versus people, but AI with people. That’s the only way to truly fight AI slop: not wait for GPT-10, but create better workflows now.

Try it yourself

unwrite.it is a working prototype showing that workflow matters more than the model. Curious how structured feedback improves AI content?
