It sounds like your big objection here is to using ChatGPT to generate entire paragraphs of text.

I agree: I think it's rude to expect people to spend more time reading something than you spent writing it!

But... there are still plenty of ways to integrate ChatGPT-like tools into a writing process that avoid disrespecting your audience like that. For example, I use it as a thesaurus, to suggest ways my writing could be improved, or as a brainstorming companion.



I think we're close enough to an understanding that I'll let you have the last word, except to correct one more misunderstanding.

My objection to ChatGPT is, yes, about generating full sentences.

I object to it for many reasons, not just because it's rude. My biggest concern is that it takes out the voice of the human.

I have a hard enough time figuring out what people are thinking in person (I'm on the spectrum) that doing it through text is like trying to sense the movement of fish with my eyes closed on shore. Making everybody have the same bland voice would turn that up to 11.

I want the human, even if the human isn't perfect.


> figuring out what people are thinking in person

Have you seen the latest research, where GPT-4 is shown to have "Theory of Mind"-like ability (inferring what people are thinking) at the level of a human 7-year-old?

https://arxiv.org/abs/2302.02083
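For anyone wondering what those tests look like: here's a rough sketch of a Sally-Anne-style false-belief task posed through the OpenAI Python SDK (v1+). The prompt wording is my own illustration, not taken from the paper:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # An "unexpected transfer" false-belief task. Answering correctly
    # requires tracking what Sally believes, not where the ball is.
    task = (
        "Sally puts her ball in the basket and leaves the room. "
        "While she is away, Anne moves the ball to the box. "
        "When Sally comes back, where will she look for the ball?"
    )

    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": task}],
    )
    print(reply.choices[0].message.content)  # expected answer: the basket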


Didn't see it, but they only use one type of task to test theory of mind. Excuse me if I'm skeptical that that proves it has theory of mind.

And that certainly doesn't help me with theory of mind problems.


> Didn't see it, but they only use one type of task to test theory of mind. Excuse me if I'm skeptical that that proves it has theory of mind.

Well, it looked convincing to me. But sure, this is a new field; maybe they made a mistake. That's what science is for: skeptics need to review the claim and try to poke holes in it.

I just can't see how a "stochastic parrot" could get anywhere near generating such accurate answers to theory-of-mind questions.

And we see this in the development of LLMs over the last few years. The earlier ones were much closer to stochastic parrots, but once the models got big enough, something happened: performance on these types of questions jumped dramatically.

It was as if training had forced the LLMs to evolve processing-like capabilities, not just pattern-matching from inputs to outputs.

Can you think of counter-examples: theory-of-mind questions that many people could answer but that you think GPT-4 couldn't?

> I have a hard enough time figuring out what people are thinking in person (I'm on the spectrum) that doing it through text is like trying to sense the movement of fish with my eyes closed on shore.

> And that certainly doesn't help me with theory of mind problems.

Sorry to hear that! I have similar issues, though not nearly as bad. I sometimes catch myself, after responding to an email, realizing I only wrote basic answers about how I'm doing, without reciprocating with similar questions.

If you get an email where it is hard to understand what the person is thinking, wouldn't GPT-4 help by giving you a list of possible alternatives?
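Something like this, for example (same SDK as above; the prompt is just one way to phrase it, and it's a sketch, not a polished tool):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    email_text = "Thanks for the update. Let's circle back next week."

    # Ask for several candidate readings of the sender's intent,
    # so you can weigh them instead of guessing from scratch.
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": (
                "List 3-5 plausible interpretations of what the sender "
                "of this email is thinking or feeling, with a short "
                "rationale for each."
            )},
            {"role": "user", "content": email_text},
        ],
    )
    print(reply.choices[0].message.content)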



