
> By that definition, a GPS module with an SSD is an "I".

No it's not. I didn't explain what I meant by "knowing your place in space and time", sorry about that, but this needs elaboration.

I think we'll agree that consciousness (self-awareness) is not binary: it can be present in varying degrees in computer systems, animals and humans. The degree of self-awareness though I think is a function of how rich the model of the world is in your mind. A GPS module can have its own coordinates and a precise UTC clock, but its model of the world is the most primitive.

Thus knowing your place in space means some model of the world and your relative place within it. Domestic cats and dogs have their own models - of the house and possibly some surroundings, but they are very limited. Humans typically have the most elaborate and the highest level abstract model of the world, therefore our self-awareness is the highest among the creatures.

Time is a trickier one though. Our sense of time seems to be the function of the chain of here-and-now moments. You sense your place now, but you also have a memory of the previous moments as one seemingly uninterrupted flow. This I think is what creates an illusion of being, though obviously "I" doesn't exist in the same sense as a physical object does.

Then there's a whole part of how the model of the world that we build in our minds should also be practical, i.e. help us achieve our goals efficiently, but I'll skip it for now as it gets too far from the original question.




Is your definition actually accurate enough to capture instances of intuitive judgements that "this is a conscious being" without being overly broad to include "obviously non-conscious" things?

By calling it a spectrum, the already fuzzy definition becomes so broad as to be almost useless. Your spectrum and "knowing" model admits a conscious GPS, albeit one with "low consciousness." What about spiders, or GPT-3, or suitably chosen digits out of an RNG? At best these all fail the intuitive answer test for most people.

Perhaps you have a mental model of Yudkowsian self-optimizers? Or maybe you are thinking more classically along the lines of Kant?

The whole problem I am trying to communicate is that you are playing with and refining a model without defining exactly what you are modeling.


Self-awareness is paradoxical, I am aware of that. What I'm trying to model at least mentally for now, is something that has a chance of becoming self-aware unless we are missing something else.

In other words, let's say these are the necessary but not necessarily sufficient conditions for the emergence of self-awareness: (1) having some model of the world (MOTW), (2) maintaining a relative position of Self in the MOTW, (3) maintaining a history of positions over time.

The level of self-awareness then becomes a function of the complexity of the MOTW and the complexity of the system's interfaces. For example, is the model good enough to help it survive or reach whatever goals it's programmed to achieve? Is the system flexible and adaptable enough? Can it discover and learn by itself?
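The three conditions can be sketched as a toy data structure. This is purely illustrative: the names (`WorldModel`, `Agent`, `tick`) are hypothetical, invented here to make the structure concrete, not anyone's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """(1) Some model of the world (MOTW): entities and where they are."""
    entities: dict  # e.g. {"charging_socket": (3, 4)}

@dataclass
class Agent:
    world: WorldModel                             # condition (1): a MOTW
    position: tuple                               # condition (2): Self relative to the MOTW
    history: list = field(default_factory=list)   # condition (3): past positions in time

    def tick(self, new_position):
        # Each here-and-now moment: remember where we were, update where we are.
        self.history.append(self.position)
        self.position = new_position

agent = Agent(WorldModel({"charging_socket": (3, 4)}), position=(0, 0))
agent.tick((1, 0))
agent.tick((1, 1))
print(agent.position)  # current place in the MOTW
print(agent.history)   # the "uninterrupted flow" of previous moments
```

On this framing, a GPS module is an `Agent` whose `WorldModel` contains nothing but a coordinate space, which is why it scores so low on the proposed spectrum.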

So again, in this regard a GPS system is so primitive that its level may be comparable to an amoeba's, or even lower. A coordinate space alone is not a useful or interesting MOTW that would help your GPS box achieve its goals.


I don't think it's paradoxical. I just get the impression you may have jumped to building models without carefully considering the exact phenomenon you claim to deconstruct. "Self-awareness" or "consciousness" are just words. What do they point to in the world?


I experience some sort of an illusion of existence. Then I look at other creatures that look like me and act like me and I generalize; I say: maybe they exist the same way I do. Then I hear from others that the generalization seems to be called "consciousness" or sometimes "self-awareness". It's paradoxical because "I" doesn't exist the same way a rock or any object does, and yet its continuity creates a resemblance, an illusion of existence.

Try to think of your exact quantum copies - what happens the moment you are cloned? What happens to your "I"? Nothing. The "I" never existed in the first place and therefore nothing happens to it when it's physically copied ;) That's the paradox.


Hi, can you explain what a ' Yudkowsian self-optimizer ' is? I've googled it and the only page that it comes up with is this one! :-)


Looks like I misspelled the name. I was simply referring informally to Eliezer Yudkowsky and the LessWrong-style thoughts on optimization processes. You can read the initial idea here[0].

If you haven't heard of LessWrong before, then you have hit the motherlode of all rabbit holes! That site is definitely worth the dive.

[0]:https://www.lesswrong.com/posts/HFTn3bAT6uXSNwv4m/optimizati...


Thanks!

I know about lesswrong, and you’re not wrong


"Thus knowing your place in space means some model of the world and your relative place within it." Soooo you are saying a GPS module plus a GIS database? :)

If someone puts me in a pitch-dark barrel, and then carries the barrel to a location unknown to me, will I cease to be conscious by that definition, even if I'm kicking and screaming and yelling for help?

I'm working on self driving cars. They know very precisely where they are, and they know very accurately what is around them. (They even have a sense of where things will be!) Does that make them self-aware?


The barrel: no, you won't lose your self-awareness, because you have a memory and a whole history of your self-aware moments before the barrel. However, staying in the barrel for long enough (say 30 years?) might affect your consciousness, though I wouldn't recommend experimenting with it :)

The self-driving cars are interesting in this regard, but in terms of the complexity of their model of the world they are still way behind even the most primitive insects, I think. This is due to their limited capability for discovering the world and learning about it. A car that can also sense surfaces and hear things might have a chance to build a more complex model. Add the ability to park and connect to a charging socket by itself and you get a very, very basic organism, possibly a bit self-aware, too.

But just because a self-driving car has a model of a macro-world (roads, buildings, other cars) doesn't make it any more self-aware than bacteria.



