
I see a lot of people dismiss the intelligence of the AI by claiming it's doing nothing but predicting the next token. I'm not claiming it's sentient by any means, but rather that humans themselves don't do much more than our equivalent of predicting the next token for most intelligent tasks.

For example: when you speak, do you choose each and every word, or do they more or less "come to you" and you choose/veto them based on some preference (or probability)? When you program, do you really think "hm, yes, I should put a for loop here," or do you just look at the code you've written and your fingers start tapping away? It feels like you're choosing what to do since you're "overseeing" the whole process, but another part of you is also involved in producing the actual implementation. It feels very probabilistic -- the kinds of things you said yesterday are probably going to be similar to the ones you're saying today. Most people have a coding style.
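To make the "come to you, then choose/veto" picture concrete, here's a minimal Python sketch of one common way next-token sampling works (the candidate words, scores, and function name here are invented for illustration, not taken from any real model): the model assigns scores to candidates, a filtering step vetoes all but the top few, and a preference knob decides how strongly to favor the most likely ones.

    import math, random

    def sample_next_token(scores, k=3, temperature=0.8):
        # "Veto": keep only the k highest-scoring candidate tokens.
        top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]
        # "Preference": a softmax with temperature turns scores into
        # probabilities; lower temperature favors the top picks more strongly.
        weights = [math.exp(s / temperature) for _, s in top]
        total = sum(weights)
        probs = [w / total for w in weights]
        # Sample one token according to those probabilities.
        return random.choices([tok for tok, _ in top], weights=probs, k=1)[0]

    # Made-up scores a model might assign to the next word while coding:
    scores = {"for": 3.1, "while": 2.2, "if": 1.5, "return": 0.4}
    print(sample_next_token(scores))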

When you have a cold, for example, you can find that you're perfectly able to think of what should be done but have no ability to do it. The part of you that does the coding is too "murky" and can't generate tokens anymore, and the overseeing part is too tired to motivate it back into action. A lot of programmers get ideas in the shower, in bed, after a run, etc., further suggesting that there's a certain circuit we're unaware of doing some sort of work that lets us program.

In effect, what the AI is missing is the "overseer" part, which perhaps is what we would identify as our own consciousness (the ability to think about the process of thinking itself). Given the incredible progress, I think it's fair that people have all kinds of SF ideas nowadays. I would never have thought something like ChatGPT would be achieved in my lifetime. I've been playing with this thing for a while, and it's amazing how well it can envision the physical world and interactions between objects it cannot possibly have examples of in its training data.




> In effect, what the AI is missing is the "overseer" part, which perhaps is what we would identify as our own consciousness (the ability to think about the process of thinking itself).

The term is "metacognition," and it's an interesting concept.


It’s essentially two parts, right? I feel like we now have something that I would call cognition. They can probably add math and physics knowledge as their own models. Now you “just” need the part that drives it towards some goals, and then you essentially have a bug or, at the least, an animal. It can also have scary goals that living creatures can’t - like trying to grow exponentially bigger.


In retrospect, I realized that it's missing not just the higher part of human consciousness but the lower one too (survival, approach/flee), which is what you mentioned. One wonders whether it might actually be better if AIs don't have those circuits in the first place.

> It can also have scary goals that living creatures can’t - like trying to grow exponentially bigger.

This is funny, given that it's exactly what humans have done. I don't think you can calculate it, but when you go from flying an airplane to flying to the moon within a single lifespan, that's definitely steeper progress than anything that's ever come before. Population growth has only recently stopped being (truly) exponential too.


Maybe we were the GPT guessing the next token all along.


Really makes me wonder: if a fellow human didn't have consciousness, would anyone notice?


That is an age-old question: https://plato.stanford.edu/entries/zombies/



