
All of us trained our human "LLM" in the same environment (a human baby body), so it's easy for us to agree. I think once we have LLM-like entities that are always on and output a constant stream of thoughts, the lines are going to get really blurry. Things that were always coupled, and so shared one name, might need to be split. I think consciousness is one of those.

Consciousness doesn't have a single definition as far as I'm aware, but one definition is something like the feeling of a potential future, one I am passively predicting, happening and becoming the past. Riding that "now" wave. This definition seems extremely substrate-specific. What if this sensation is just an implementation detail of an evolved intelligence in an Earth animal? The feeling of information being processed. I suspect this is just what consciousness feels like, not what it is.

I don't know what you're feeling, but from observing and interacting with you I assume, and act as if, you are conscious. You are "functionally conscious." I don't see why AIs couldn't be functionally conscious. I further assume that you are human, and so I extend even more consideration to how I talk to you. I assume you have feelings you like to feel and feelings you don't, and I prefer to trigger the former and avoid the latter, not simply because I don't want to take the conversation there but because, as a fellow animal, I care about your feelings.

But I can see how there could be entities in the future that are "functionally" conscious yet do not have the accompanying human feelings. They would speak human, since that's useful to humans, but wouldn't "be" human. I don't think we need to understand how or why humans feel conscious for that to happen.



