Hacker News

It's pretty boring, but to each their own. Continue enjoying boredom, my lad. :)


Let me explain what happened here, because this is very human and very stupid, and therefore completely understandable.

We looked at documentation and thought, "Ah yes. Words." And then we looked at AI and thought, "Oh wow. It makes words." And then we did what humans always do when two things look vaguely similar: we declared victory and went to lunch.

That’s it. That’s the whole mistake.

Documentation looks like writing the same way a police report looks like justice. The writing is the part you can see. The job is everything that happens before someone dares to put a sentence down and say, “Yes. That. That’s what this thing really does.”

AI can write sentences all day. That’s not the problem. The problem is that documentation is where software stops flirting and starts making promises. And promises are where the lawsuits live.

Here’s the thing nobody wants to admit: technical writers are not paid to write. They are paid to be annoying in very specific, very expensive ways. They ask questions nobody likes. They slow things down. They keep pointing at edge cases like a toddler pointing at a dead bug going, “This too? This too?”

Yes. Especially this too.

When you replaced them with AI, nothing broke. Which is why you think this worked. The docs still shipped. They even looked better. Cleaner. Confident. Calm. That soothing corporate voice that says, “Everything is fine. You are holding it wrong.”

And that’s when the rot set in.

Because AI does not experience dread. It does not wake up at 3 a.m. thinking, "If this sentence is wrong, someone is going to lose a week of their life." It does not feel that tightening in the chest that tells a human writer, "This paragraph is lying by omission."

So it smooths. It resolves. It fills in gaps that should stay jagged. It confidently explains things no one actually understands yet. It does what bad managers do: it mistakes silence for agreement.

Over time, your documentation stops describing reality and starts describing a slightly nicer alternate universe where the product behaves itself and nobody does anything weird.

This is how you get users “misusing” your product in ways your own docs taught them.

Then comes my favorite part.

You notice the AI is hallucinating. So you add tooling. Retrieval. Semantic layers. Prompt rules. Context hygiene. You hire someone with “AI” in their title to fix the hallucinations.

What you are rebuilding, piece by piece, is technical writing. Only now it’s worse, because it’s invisible, fragmented, and no one knows who’s responsible for it.

Context curation is documentation. Instruction hierarchies are documentation. If your AI is dumb, it’s because you fired the people who knew what the truth was supposed to look like.
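To make that concrete, here is a minimal sketch of the kind of retrieval pipeline those "AI" hires end up building. Everything in it is illustrative (the names, the topics, the toy keyword matching standing in for a vector store) — the point is just that the context the model actually sees is hand-written, hand-verified text:

```python
# Illustrative sketch of a RAG-style pipeline. All names and content
# here are made up; only the structure matters.

# The knowledge base: someone has to write, verify, and maintain these
# snippets. That job already has a name: technical writing.
KNOWLEDGE_BASE = {
    "timeouts": "Requests time out after 30s. Retries are NOT automatic.",
    "auth": "API keys rotate every 90 days; old keys fail silently.",
}

# The "instruction hierarchy": rules about what the model may claim.
# Also hand-written. Also documentation.
SYSTEM_RULES = (
    "Answer only from the provided context. "
    "If the context does not cover the question, say so."
)

def retrieve(question: str) -> list[str]:
    """Toy keyword retrieval standing in for a real vector store."""
    return [text for topic, text in KNOWLEDGE_BASE.items()
            if topic in question.lower()]

def build_prompt(question: str) -> str:
    """Assemble the prompt the model actually sees."""
    context = "\n".join(retrieve(question)) or "(no relevant docs)"
    return f"{SYSTEM_RULES}\n\nContext:\n{context}\n\nQ: {question}"

print(build_prompt("Why did my auth request fail?"))
```

Notice that if nobody curates `KNOWLEDGE_BASE`, the model answers from nothing — which is exactly the gap the writers used to fill.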

And don’t worry, accountability did not get automated away while you weren’t looking. When the docs cause real damage, the model will not be present. You cannot subpoena a neural net. You cannot fire a prompt. You will be standing there explaining that “the system generated it,” and everyone will hear exactly what that means.

It means nobody was in charge.

Documentation is where software admits the truth. Not the aspirational truth. The annoying truth. The truth about what breaks, what’s undefined, what’s still half-baked and kind of scary. Marketing can lie. Interfaces can hint. Documentation has to commit.

Commitment requires judgment. Judgment requires caring. Caring is still not in beta.

This is not an anti-AI argument. AI is great. It writes faster than any human alive. It just doesn’t know when to hesitate, when to warn, or when to say, “We don’t actually know yet.” Those are the moments that keep users from getting hurt.

The future that works is painfully obvious. Writers with AI are dangerous in the good way. AI without writers is dangerous in the other way. One produces clarity. The other produces confidence without consent.

Technical writers are not a luxury. They are the people who stop your product from gaslighting its users.

AI can generate language forever.

Truth still needs a human with a little fear in their heart and a pen they’re willing to hesitate with.

Hire them back.


Brilliant. Thank you.


Thank chatgpt lol.


Ah, now it reads like LinkedIn viral slop. Done with the toy yet?



