— Founder, Errata Labs
There is a voice of writing that you recognise instantly now. It appears in LinkedIn posts, student essays, marketing emails, blog drafts; the list goes on. By 2025, anyone who used it could expect to be clocked immediately. The voice is not bad, exactly. It is beige. Competent. Frictionless. And completely, unmistakably, not human.
The models learned this from us, which is the part nobody wants to say out loud. They were trained on the internet's most upvoted, most edited, most optimised writing—and they reproduced it faithfully. The result is text that matches the statistical signature of "good writing" without any of the mess that makes it true.
We are not trying to make AI writing better. We are trying to make it worse, in the specific ways that matter. The run-on sentence that happens when a thought is still forming. The aside that shouldn't be there but is. The word choice that's slightly wrong but feels more honest than the correct one.
What we leave unedited is not a bug. It is the thing the edit pass always removes—and it is, we'd argue, where the human author actually lives.
Three systems, each serving a different kind of imperfection.
Generative Neurosis. A fine-tuned model trained not on what humans publish, but on how they think before they edit. It produces text that is flawed by design and readable because of it.
The Rewriter Engine. You give it something sterile. It gives you something that sounds like it was written at 11pm by someone who actually has opinions about the subject.
Intrinsic Latency. No human in the loop. The hesitation is generated. Which is either a remarkable technical achievement or a deeply strange thing to have built, depending on your disposition. Probably both.
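One way to picture what "generated hesitation" could mean mechanically: a schedule that assigns each character a synthetic delay, stalling longer at word and sentence boundaries the way a writer does mid-thought. This is a hypothetical toy sketch, not Errata Labs' actual system; the function name and delay ranges are invented for illustration.

```python
import random

def hesitation_schedule(text: str, rng: random.Random) -> list[tuple[str, float]]:
    """Assign each character a synthetic 'thinking' delay in seconds.

    Longer pauses before word gaps and at sentence ends mimic a writer
    stalling; a playback layer would sleep on each delay before emitting
    the character.
    """
    events = []
    for ch in text:
        delay = rng.uniform(0.03, 0.12)          # base keystroke jitter
        if ch == " ":
            delay += rng.uniform(0.0, 0.2)       # brief pause between words
        if ch in ".?!":
            delay += rng.uniform(0.4, 1.2)       # long stall at sentence ends
        events.append((ch, round(delay, 3)))
    return events

events = hesitation_schedule("It stalls. Like we do.", random.Random(3))
```

Because the delays are drawn fresh each time, no two playbacks hesitate in quite the same places.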
Before: "Certainly! Delving into the multifaceted landscape of artificial intelligence, it's worth noting that there are several key considerations to keep in mind. First and foremost, AI has the potential to revolutionize industries across the board. However, it's crucial to approach this transformative technology with a balanced perspective."
After: "I'm not sure I'm convinced that AI is going to revolutionize industries across the board—people keep saying that, but I've seen some pretty underwhelming results in my own experience with automated customer service chatbots. I mean, sure, they can process a lot of information, but when it comes to anything remotely complex, they just kind of... stall."
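A before/after transformation like the one above can be sketched, very loosely, as a post-processing pass that swaps confident boilerplate for hedged, personal phrasing and injects a random hedge and aside. This is a toy illustration under invented assumptions (the substitution table, hedge list, and `humanize` function are all hypothetical), not the Rewriter Engine itself.

```python
import random

# Toy substitutions: confident boilerplate -> hedged, personal phrasing.
SWAPS = {
    "It's worth noting that": "I think",
    "it's crucial to": "you probably want to",
    "has the potential to revolutionize": "might actually change",
}

HEDGES = ["I'm not sure, but ", "Honestly, ", "For what it's worth, "]
ASIDES = [" (at least in my experience)", " - or so people keep saying", ""]

def humanize(text: str, rng: random.Random) -> str:
    """Roughen sterile prose: swap boilerplate, then add a hedge and an aside."""
    out = text
    for sterile, human in SWAPS.items():
        out = out.replace(sterile, human)
    # A random hedge up front and an aside before the first period mean
    # each pass produces slightly different output.
    out = rng.choice(HEDGES) + out
    head, sep, tail = out.partition(".")
    return head + rng.choice(ASIDES) + sep + tail

sterile = "It's worth noting that AI has the potential to revolutionize industries."
print(humanize(sterile, random.Random(7)))
```

Run it twice with different seeds and the hedges land differently, which is the point: the variance is the feature.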
The output changes with each pass. The imperfection is not a bug; it is a different architecture entirely. When labs optimize for consistency, factual accuracy, helpfulness, and harmlessness (almost every AI lab does, mind you, and they can hardly be blamed), they produce something that performs intelligence without possessing it in any recognizably human sense.
Ask Claude, or even Gemini, what it thinks of a political figure, and you will get a paragraph most editors would approve of: unbiased, neutral, immaculate. Ask a human being the same question, and you get something immediate, partial, biased, and possibly wrong.
Researchers who think seriously about what AI is doing to language. Writers who noticed something went wrong and want to understand why. Practitioners who need the output to actually sound like a person wrote it.
Not for those who want to scale. Not for those who want to automate. For those who want the machine to hesitate.
Admission is determined by professional affiliation and background review. The vetting process requires fourteen days.
Invalid responses will not be evaluated. Full membership details →