This makes sense. Ilya can probably raise practically unlimited money on his name alone at this point.

I'm not sure I agree with the "no product until we succeed" direction. I think real-world feedback from deployed products is going to be important in developing superintelligence. I doubt it will drop, fully formed, out of an ivory tower. But I could be wrong. I definitely agree that superintelligence is within reach and now is the time to work on it. The more the merrier!


I have a strong intuition that chat logs are actually the most useful kind of data. They contain many LLM outputs followed by implicit or explicit feedback: from humans, from the real world, and from code execution. Having that feedback at OpenAI's scale, 180M users and 1 trillion interactive tokens per month, is a big deal.
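
To make that concrete, here's a toy sketch of mining such signals from logs. The schema and heuristics are my own invention, not anything OpenAI has described:

    # Hypothetical chat-log schema: each turn is {"role": ..., "text": ..., "rating": ...}.
    # The field names and heuristics are illustrative guesses, not a real pipeline.
    def extract_feedback(conversation):
        """Yield (llm_output, feedback_signal) pairs from one chat log."""
        turns = conversation["turns"]
        for i, turn in enumerate(turns):
            if turn["role"] != "assistant":
                continue
            if turn.get("rating") is not None:
                # Explicit feedback: a thumbs up/down attached to the turn.
                yield turn["text"], 1.0 if turn["rating"] == "up" else -1.0
            elif i + 1 < len(turns) and turns[i + 1]["role"] == "user":
                # Implicit feedback: does the user immediately push back?
                follow_up = turns[i + 1]["text"].lower()
                pushback = any(p in follow_up
                               for p in ("try again", "that's wrong", "doesn't work"))
                yield turn["text"], -0.5 if pushback else 0.1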


Except LLMs are a distraction from AGI


If a brain without language sufficed, a single human could rediscover everything we know on their own. But it's not like that: brains are feeble individually; only in societies do we get cultural evolution. If humanity lost language and culture and had to start from scratch, it would take us another 300K years to rediscover what we lost.

But if you train a random-init LLM on the same data, it responds (almost) like a human across a diversity of tasks. Does that imply humans are just language models on two feet? Maybe we are also language-modelling our way through life: a new situation comes up, we generate ideas based on language, select among them based on personal experience, then act and observe the outcomes to update our preferences for the future.
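
Something like this toy loop, where world, propose, score, and preferences are hypothetical stand-ins for the environment, language generation, and learned taste:

    def live_one_step(world, propose, score, preferences):
        """One pass of the generate -> select -> act -> update loop."""
        situation = world.observe()
        # Generate candidate actions in language.
        candidates = propose(situation)
        # Select the one that past experience scores highest.
        best = max(candidates, key=lambda c: score(c, preferences))
        # Act, observe the outcome, and update preferences for next time.
        outcome = world.act(best)
        preferences.update(best, outcome)
        return outcome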


That doesn't necessarily imply that chat logs are not valuable for creating AGI.

You can think of LLMs as devices to trigger humans to process input with their meat brains and produce machine-readable output. The fact that the input was LLM-generated isn't necessarily a problem; clearly it is effective for the purpose of prodding humans to respond. You're training on the human outputs, not the LLM inputs. (Well, more likely on the edge from LLM input to human output, but close enough.)
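
Roughly like this (same made-up turn schema as the sketch upthread; it's just to illustrate the framing, not anyone's actual training setup):

    def human_output_pairs(turns):
        """Build (context, target) pairs where the targets are human turns only.

        The LLM-generated turn is kept as conditioning context, so we model
        the edge from LLM input to human output, never the LLM text itself.
        """
        pairs = []
        for i in range(1, len(turns)):
            if turns[i]["role"] == "user" and turns[i - 1]["role"] == "assistant":
                context = "\n".join(t["text"] for t in turns[:i])
                pairs.append((context, turns[i]["text"]))
        return pairs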


Well, Ilya doesn't think that. He's firmly in the Hinton camp, not the LeCun camp.


Yeah, similar to how Google's clickstream data makes their lead in search self-reinforcing. But chat data isn't the only kind of data. Multimodal will be next. And after that, robotics.


Who would pay for safety, though?


His idea is that only corporations and governments should have access to this product. He doesn't think people should have access even to ChatGPT or LLMs. The goal is to build companies with valuations of tens or hundreds of trillions of dollars and make sure only the US government has access to superintelligence, to surpass other countries economically and militarily, ideally to solidify US hegemony and undermine other countries' economies and their progress toward superintelligence.

I mean, who wouldn't trust capitalists who are laying off people by the thousands just to please investors, or a government that is "under-intelligent" and hasn't brought anything but pain and suffering to other countries.


Personally I wouldn't trust OpenAI to work on superintelligence; it could indeed cause mass extinction. Government is a completely different story: they will specifically train AI to develop biological, chemical, and other weapons of mass destruction; train it to strategize and plan how to win armed conflicts, do social engineering and manipulation, and hack; and obviously let it control drones, tanks, and artillery, and give it access to satellites, and so on. Nothing can go wrong when jarheads are at work :). Maybe it will even find the trillions of dollars the Pentagon can't account for in the audits it keeps failing.


And obviously one of his points is that people don't need AI tools, because corporations need agent-like AIs that can quickly replace all the staff.

