
The stateless/timeless nature of LLMs comes from the rigid prompt-response structure. But I don't see why we can't, in theory, decouple the response from the prompt and have them constantly produce a response stream from a prompt that can be adjusted asynchronously, both by the environment and by the LLMs themselves through the response tokens and the actions therein. I think that would certainly simulate them experiencing time, without the hairy questions about what time is.
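To make the idea concrete, here's a minimal asyncio sketch of that architecture. Everything in it is hypothetical: next_token() is a stand-in for a real model call, and the "environment" is just a scripted task. The point is only the shape of the loop: generation never blocks on a turn boundary, the context is mutated mid-stream by the environment, and the model's own emitted tokens feed back into the same context.

    import asyncio
    import random

    async def next_token(context: list[str]) -> str:
        # Hypothetical stand-in for a real LLM forward pass.
        await asyncio.sleep(0.1)
        return random.choice(["think", "act", "observe"])

    async def generate(context: list[str]) -> None:
        # Endless response stream: each emitted token is appended
        # back into the same context the environment is editing.
        while True:
            token = await next_token(context)
            context.append(token)
            print("model:", token)

    async def environment(context: list[str]) -> None:
        # Asynchronous prompt adjustment: new observations are
        # injected mid-stream, decoupled from any prompt-response turn.
        for event in ["sensor: door opened", "user: hello?"]:
            await asyncio.sleep(0.35)
            context.append(event)
            print("env:  ", event)

    async def main() -> None:
        context = ["system: you are always on"]
        stream = asyncio.create_task(generate(context))
        await environment(context)
        stream.cancel()  # end the demo once the scripted events run out
        try:
            await stream
        except asyncio.CancelledError:
            pass

    asyncio.run(main())

In a real system the context would need some eviction or summarization policy, since an always-on stream grows without bound, but the decoupling itself is just two tasks sharing mutable state.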




It is not about the stateless nature of LLMs. The problem of the A-series versus the B-series of time is that our mathematical constructions simply cannot describe the perception of time's flow; for over 100 years nobody has managed to express it mathematically. As such, any algorithm, LLMs included, remains just a static collection of rules for a Turing machine. Everything that consciousness perceives as change, including state transitions or prompt responses in computers, is not expressible on its own, without reference to conscious experience.

All of us trained our human "LLM" in the same environment (a human baby body), so it's easy for us to agree. I think once we have LLM-like entities that are always on and output a constant stream of thoughts, lines are going to get real blurry. Things that were always coupled, and so shared one name, might need to be split. I think consciousness is one of those.

Consciousness doesn't have a single definition as far as I'm aware, but one definition is something like the feeling of a potential future I am passively predicting happening and becoming the past: riding that "now" wave. This definition seems extremely substrate-specific. What if this sensation is just an implementation detail of an evolved intelligence in an Earth animal, the feeling of information being processed? I suspect this is just what consciousness feels like, not what it is.

I don't know what you're feeling, but from observing and interacting with you I assume and act like you are conscious. You are "functionally conscious." I don't see why AIs couldn't be functionally conscious. I further assume that you are human, and so I extend even more consideration to how I talk to you: I assume you have feelings you like to feel and feelings you don't, and I prefer to trigger the former and avoid the latter, not simply because I don't want to take the conversation there but because, as a fellow animal, I care about your feelings.

But I can see how there could be entities in the future that are "functionally" conscious but don't have the accompanying human feelings. They would speak human, since that's useful to humans, but they wouldn't "be" human. I don't think we need to understand how or why humans feel conscious for that to happen.



