
ChatGPT doesn't "recommend" anything. It just recombines text based on statistical inferences that appear like a recommendation.

It could just as well state that humans have 3 legs, depending on its training set and/or time of day. In fact, it has said similar BS.




> ChatGPT doesn't "recommend" anything. It just recombines text based on statistical inferences that appear like a recommendation.

I think that’s a bit pedantic and not very helpful… I’m not typing this comment; my brain is just sending signals to my hands, which cause them to input data into a device that displays pixels that look like a comment


>I think that’s a bit pedantic and not very helpful… I’m not typing this comment; my brain is just sending signals to my hands, which cause them to input data into a device that displays pixels that look like a comment

Well, if you're just fed a corpus, with no real-time first-person stream of experience that you control, no feedback mechanism, no higher-level faculties, and you're not a member of a species with a proven track record of state-of-the-art semantic understanding in nature, then maybe...


Does YouTube recommend you videos to watch? Does Amazon recommend you products to buy? Or do they just recombine text based on statistical inferences that appear like a recommendation?


Obviously they "just recombine text based on statistical inferences that appear like a recommendation".

And even that, they do badly.


>ChatGPT doesn't "recommend"

I mean, you could say that about a person too, as you don't know how much of what they are saying is bullshit.

For one, you are technically correct about ChatGPT not recommending. It cannot perform such action. On the other hand, from the POV of the questioner, it's hard not to feel being recommended something when you ask "What do you recommend" and it says "I recommend that...". You are, for some intents and purposes, being recommended something at that point.


What would you call it instead?


"Makes stuff up." And it's us, the users, who have to realize this. I mean, I wouldn't blame OpenAI for this, at least not at this point, and the company will have to live with it, look how it can turn it into something useful instead, since there's no one to complain to.


> I wouldn't blame OpenAI for this

They're offering the tool, it's at least partially their responsibility to tell people how it should and should not be used.


Why wouldn't you blame OpenAI for creating a harassment campaign against the business based on nonsense?


A glorified Markov chain generator.

Now, humans could very well also be statistical inference machines. But they have way more tricks up their semantic-level understanding sleeves than ChatGPT circa 2023.


Markov chains are great for modeling human language and human decision making. ChatGPT demonstrates this, and the results are not trivial. I don't see it being glorified beyond what it plainly does.
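For anyone unfamiliar with what "Markov chain generator" means in this context, here's a toy word-level sketch (a deliberately simple illustration of the technique being named, not a description of how ChatGPT actually works — transformer LLMs condition on far longer contexts with learned representations):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` consecutive words (the state) to the
    list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, length=10, seed=0):
    """Sample a sequence by repeatedly picking a random successor of
    the current state; stop early if the state was never seen."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    order = len(state)
    out = list(state)
    for _ in range(length):
        successors = chain.get(tuple(out[-order:]))
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, length=5))
```

Because more frequent successors appear more often in each list, sampling uniformly from the list reproduces the corpus's conditional word frequencies — the "statistical inference" the thread is arguing about, in its crudest form.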



