
>IMO if anything I am coming to the opposite conclusion. Yud and his entire project failed to predict literally anything about how LLMs work.

Or you could take that as evidence (and there's a lot more like it) that AGI is a phenomenon so complex that not even the experts have a clue what's actually going to happen. And yet they are barrelling towards it. There's no reason to expect that anyone will be able to control a situation that nobody on earth even understands.



After watching virtually every long-form interview of AI experts, I noticed they each have some glaring holes in their mental models of reality. If even the experts run up against such severe limits on their bounded rationality, then lay people pretty much don't stand a chance at reasoning about this. But let's all play with the shiny new tech, right?


What have they been wrong about?


What they have been right about is a much shorter list.



