
> But to boil it down to autocomplete is just totally disingenuous.

It is, though, in Ilya's own words: "We weren't expecting that just finding the next word will lead to something intelligent; ChatGPT just finds the next token"

Ref: https://www.youtube.com/watch?v=GI4Tpi48DlA




If you'd like to take a quote out of context in order to equate two different technologies, then by all means, go to town.


Ilya says it's pretty clear that a very very smart autocomplete leads to intelligence.

OP was using the word "autocomplete" as a pejorative, but it isn't one, and it really is the strategy LLMs are pursuing.

The example Ilya was using: if you feed an LLM a murder mystery novel and it is able to autocomplete "The killer is" from the clues alone, you have achieved intelligence.

Nothing wrong with autocomplete being AGI.
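
For what it's worth, the loop behind "just finds the next token" really is that bare. A rough sketch using the Hugging Face transformers library (gpt2 and the 20-token cap are just placeholders for illustration, not what OpenAI actually runs):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The killer is", return_tensors="pt").input_ids
    for _ in range(20):                      # append 20 tokens, one at a time
        logits = model(ids).logits           # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()     # greedy "autocomplete": take the most likely one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)
    print(tokenizer.decode(ids[0]))

The whole debate is about how good those scores have to get before the last line names the right killer.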


Yeah, when I think of autocomplete I definitely think of predicting the next thing a person is going to type, which is related, but the comparison is basically "a horse and a plane both get to a destination, what's the difference" in my mind.

If you're going to speak that abstractly, you really lose the forest for the trees, IMO. I do like the murder mystery example, though.


> "a horse and a plane to both get to a destination, whats the difference"

That's actually a good analogy for autocomplete. A jet engine still measures its output in horsepower; it's technically a very, very powerful horse. Trying to get better and better horses is what got us here. Likewise, GPT-4 is a very, very powerful Gmail-style autocomplete. :)



