
At this point you have to start entertaining the question of what is the difference between general intelligence and a "sufficiently complicated" next token prediction algorithm.





A sufficiently large lookup table in a DB is mathematically indistinguishable from a sufficiently complicated next-token prediction algorithm, which in turn is mathematically indistinguishable from general intelligence.

All that means is that treating something as a black box doesn't tell you anything about what's inside the box.


Why do we care, so long as the box can genuinely reason about things?

What if the box has spiders in it

:facepalm:

I ... did you respond to the wrong comment?

Or do you actually think the DB table can genuinely reason about things?


Of course it can. Reasoning is algorithmic in nature, and algorithms can be encoded as sufficiently large state transition tables. I don't buy into Searle's "it can't reason because of course it can't" nonsense.

It can do something, but I wouldn't call it reasoning. IMO a reasoning algorithm must be more complex than a lookup table.

We were talking about a "sufficiently large" table, which means it can be larger than realistic hardware allows for. Any algorithm operating on bounded memory can ultimately be encoded as a finite state automaton, with the table defining all valid state transitions.
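To make the "bounded memory means it flattens into a transition table" point concrete, here's a minimal sketch. The example (a bit-string parity checker) and all names are illustrative, not anything from the thread; the point is just that the whole "algorithm" lives in a lookup table, with no computation beyond table indexing.

```python
# Toy finite state automaton encoded purely as a state-transition table.
# States: "even" / "odd" count of 1-bits seen so far.
# Transitions: (current_state, input_symbol) -> next_state.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def parity(bits: str) -> str:
    """Run the automaton over the input; every step is a pure table lookup."""
    state = "even"
    for b in bits:
        state = TRANSITIONS[(state, b)]
    return state

print(parity("1011"))  # three 1-bits -> "odd"
```

Any algorithm with a bounded amount of state can in principle be unrolled this way; the table just gets astronomically large, which is the "sufficiently large" caveat doing the work.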

This is such a confusion of ideas that I don't even know how to respond any more.

Good luck.



