
Oversimplifying, there are two sorts of writing out there: the sort from people who put a lot of focus on developing writing and language skills, and the sort from people who don't.

For the folks who aren't writing with care, does a language model dressing up a message like "be there tomorrow to paint walls 7am" reduce any thinking effort? There are people making their way in the world without a detailed, writing-driven thought process; I don't see how this harms them. (If anything, an inquisitive, precision-driven LLM bot could help, by asking clarifying questions when it detects ambiguity in the original message - but that would be a specific product tailored toward that kind of help, which I have rarely seen out of ChatGPT's default "write me a..." behavior.)

And on the other hand, if you care about the LLM-generated message's clarity, accuracy, etc., you're gonna have to proofread it, you might go back and forth, and you will still be going through that process of asking yourself "does this actually say what I mean? Did I fully know what I meant when I started?"

I guess the suggestion is that this is going to push people who today struggle through writing because they care about the accuracy of the result into a lower-struggle process where they might unconsciously get worse results... but I'm not sure I agree. If I'm anxious about what the final text I send looks like, I'm anxious regardless of whether I used a bot, asked a friend for help, or whatever.



