While it does make for some entertaining fiction, and may provide some benefits to the living, I do not believe that 'uploading' is a desirable experience.
In fact, I don't believe that one can experience it at all. Imagine the procedure (for a conscious person): your brain is connected to a computer interface and a copy of your mind is taken. Great. Now there is a digital copy of your mind. So what? You still get to die.
Consciousness is mortally bound to the physical body, and will die along with it regardless of how many mental copies are made.
I much prefer the idea of human metamorphosis. I want to inject nanorobots into my body which will transform it gradually, cell by cell, into an improved synthetic one. In this way, immortality and superhumanity can be achieved without loss of continuity. I imagine the experience would be one of waking up a little better (stronger, smarter, etc.) every morning until the cells are all upgraded.
I don't read enough these days because of chronic eyestrain, but if you are aware of anything dealing with this concept of human metamorphosis, fiction or otherwise, I would appreciate a link. It's something I would like to read about (if it's even been dealt with), someday when I have time (and don't have a headache).
I agree that human metamorphosis is extremely interesting, but this line I really don't get:
> Consciousness is mortally bound to the physical body, and will die along with it regardless of how many mental copies are made.
I don't understand how you can put out a statement like that, like it's taken for granted or something. It's far from granted, and I've never heard of something remotely approaching a proof for this type of assertion. To my mind it's the same kind of reasoning which proved we would never fly, nor go to space.
It's an open problem. Please keep it that way until we know more.
We don't have the proper tools to understand consciousness in a scientific way yet (consider the extreme limitations of EEG and MRI). For those interested in this philosophical question, there's an area called Philosophy of Mind [0] which deals with this and related questions.
Granted that consciousness is not fully understood, can we not agree that it is entirely manifest in the brain? I mean, you don't believe that human consciousness somehow survives the body after brain death, do you?
Even if the cloned memories could be streamed from your brain into a running simulation, allowing a memory of crossing the threshold from the physical world to the digital one to be created in the simulation, the 'patient' will not have this experience. The patient will still have to experience death.
Going further, let's imagine that we want it really, really badly. Let's destroy each neuron immediately after taking its state and pushing it into the stream, so that uploading and death are completely synchronized and nothing of the patient's consciousness remains afterward.
I'm sorry, but I still do not think that the patient will experience being uploaded into a computer. It will only experience having its brain fried until death. For the clone, the memory might just be glorious, like some Hollywood special-effects sequence, but personally I find this wholly unsatisfying.
I'm not saying it shouldn't be done, either. I'm only saying that I suspect some people may have the wrong perspective about it. If you sign up for this procedure, you are not signing up for immortality, but merely to donate a copy of your memories to whomever is running the simulation.
You would be like an organ donor, but instead of saving a life, you'd be feeding your mind to the new Zombie Second Life.
It would not benefit you, but it might benefit someone else.
As for me, I'm a selfish, arrogant bastard with a desire for physical immortality and superhuman capabilities.
I'm not sure. Your body changes all the time. Your mind is always running on different hardware than it was just a few moments before. The changes are just small at each step.
>I'm sorry, but I still do not think that the patient will experience being uploaded into a computer.
If the copy/delete process was (perceptually, hence effectively) instantaneous, what is the difference?
Let's say your biological body was going to die, in a day.
But you have the option of being perfectly copied into a new biological body now, with the current body being instantly destroyed. Would you take the option? I think I would.
What's the problem with that?
How is it any different than what happens to your body on a day-to-day basis, now, anyway?
I don't see the problem, when we put it in these terms.
The question goes to consciousness, and our special identification with our own perspective.
We can posit that our "consciousness", as we experience it, resides entirely in our physical bodies, and that exact duplication of some subset of those bodies can create a consciousness exactly the same as ours. So any observer outside our body will be indifferent to whether they deal with the original or the copy.
But unless there is some communication between the two consciousnesses, you will experience the copy as something outside yourself. You won't have its memories after the moment of duplication, and you won't experience continuation. If your body dies, your consciousness dies. So while the world can have endless copies of you, and perhaps each copy feels like you, you still experience mortality. Your consciousness ends with the particular physical construct sustaining it.
At least, in the posited model.
The gholas of Dune are aware of this, and don't imagine themselves immortal, even though they know copies of themselves will be made after their departure.
It's the same problem as teleportation (Star Trek teleportation, specifically).
They scan you, break you down, then transmit and create a new you.
Now if you re-order this a little.
Scan, transmit, create and then break down the original.
There are two of you, then the original is killed.
This shows that during the original process the first is scanned and killed, then a new person created. This is exactly how mind uploading would work in terms of continuity for the person being uploaded. (It's also why I would never ever use a teleporter.)
This is exactly what mind uploading is, they scan you and upload you. Creating a new you. Then you die and the other you lives on.
This is very clearly shown if you scanned and activated the copy before the first person died. Are you trying to imply that you would suddenly, as a single entity, be experiencing two perspectives at once? Because that just wouldn't be possible.
It is technically immortality for your memories, but not the specific physical instance of you.
Going to sleep does not cause the brain to actually power off. It's still running, your consciousness is just different. You still wake up from loud noises and light, and external stimuli can still influence your dreams.
>Going to sleep does not cause the brain to actually power off. It's still running, your consciousness is just different. You still wake up from loud noises and light, and external stimuli can still influence your dreams.
Anesthesia, then, or fainting. You cannot wake up from loud noises, and you have no consciousness.
One can also imagine a transfer process that is just like what naturally happens with regular body cells changing every 7 years or so.
E.g., your brain is slowly, over a period of years, replaced cell by cell with new compatible neurons. One by one, your old neurons are replaced with the new, designed ones.
Those can read the state of the neuron they exchange AND exchange signals with your regular neurons, so before and after one is replaced, your brain is still functioning normally. No "power off" phase.
That is what the person above describes when they say they want to be injected with nano-bots to be converted. And that would, I believe, give a seamless transition with no break in the stream of consciousness.
However this is completely different from the concept of mind uploading where your brain is scanned and simulated separately.
> can we not agree that it is entirely manifest in the brain?
This is the old materialist vs idealist, monist, dualist etc. debate in the philosophy of mind. It isn't as much of a slam dunk as you think it may be, and the gulf in opinion is broad and branches out into many tributaries.
Materialism is the belief that everything in the world, including mind, is material, but it has progressed into physicalism since late-nineteenth-century physics showed that not every force is made up of matter.
I used to be very interested in this topic and leaned towards the idealist argument of mind being made up of more than material, but I lost interest as the debate on both sides tends to spiral and involve religion and spirituality. But I still tend to believe that if we completely cloned a brain materially, we would still not have a mind, that it would be missing something.
My own conclusion is that we simply do not know, and that to me is more interesting than knowing as the pursuit is more rewarding than the end goal. There will always be open and unresolved questions in science and philosophy. We have yet to explain seemingly more simple phenomena such as gravity, so explaining the mind (or indeed altering it or cloning it) seems so far out of reach.
You express views and attitudes very similar to mine.
For people who find these things interesting, I recommend checking out the so-called hard problem of consciousness.
James Trefil, physicist - so they aren't all dopey philosophers ;) - notes that "it is the only major question in the sciences that we don't even know how to ask."
> I still do not think that the patient will experience being uploaded into a computer
Creating a copy is like having a Siamese twin you didn't know about, because it was sharing 100% of your body and mind. But with the copy you are split apart for the first time.
If you are being uploaded at the time of the split, then one of you experiences the seamless first-person transformation into digital form. The other you stays biologic and dies, immediately or eventually depending on the procedure.
The irresistible urge is to mope that "you" are the biologic one, the one that dies. But this isn't true, you really are both. The one that became digital is you in every sense: it woke up in your shoes, it took your date to the prom, it has your personality, it will make the decisions you would make going forward.
If you think the copy is inferior or inconsequential by virtue of not being the original, then consider: what if we right now are all copies? Would our lives and first-person experiences be less genuine and less valuable if it were revealed that the true original versions of us existed elsewhere? In an upload scenario the copy is you.
But why bother? Unless you really think the world is worse off without you and your contributions, why would you participate in a service like this? It's not going to do you any good or let you live a day longer. Your copy will go on having a perfectly happy life being the new you, but why should any of us be excited about that?
You're discounting the experience of the copy and identifying solely with the original biological human. Instead you have to really internalize that both experiences are equally yours.
That said, I wouldn't get excited about it. Uploading is a bald afterlife myth with the same capacity to beguile as the religious versions. Better to focus on the here and now.
I may as well internalize that Warren Buffett's experiences are equally mine, or a strain of bacteria. If we're not defining "self" through any real continuity and instead just making it an arbitrary label, there's no reason everybody can't be me. It becomes a bit meaningless.
Sleep and anesthesia break continuity, as has been reported elsewhere in this thread, so self is not about moment-to-moment continuity.
The reason your copy is equally you vs. Warren Buffett is that your copy shared every molecule in your body and every thought in your head for your entire life up until the moment of splitting off.
Let's imagine you run the simulation when you are still alive. Are both copies you? Wouldn't each copy believe it's a full consciousness and not half a consciousness?
Assume you do a regular (or even automatic and continuous) sync between the copies. Which copy would you prefer survived, if one of them were to die?
Both copies would claim to be you, and would have equal right to that claim. Yet each copy would be a fully conscious person and would immediately diverge into its own individual from the shared point on.
This of course sounds like a contradiction: both are you and both are individuals? That's because our language and concepts just don't have the muscle for this situation. "Both are you" is shorthand, since the word "you" becomes ill-defined or at least radically transformed. Consider Hofstadter's "twin world" concept from his "I Am a Strange Loop" for how a single "individual" could really be made up of multiple individuals.
From an objective point of view it's much better if the digital version survives, because we're assuming the biological original has a shorter life span.
You avoided the question: what would you prefer? There would exist two (or n) instances of you, with a shared history, but no shared present. Maybe you posit the question is moot because the concept of you dilutes at the point where a copy is made? I can't imagine how it would dilute enough for the physical you to change its self-preservation instinct, though.
On the other hand, I'm not so sure that the digital version is preferable; there are lots of maladies that would be trivial to inflict on digital versions, for example destroying the being, controlling it, and altering it in any way. All that stuff is (so far) harder in the biological world.
(PS: Thanks for the Hofstadter pointer, I stopped following him at The Mind's I)
I don't like "would you kill X or Y" questions. For one details matter, and we have no details. For two it's just impossible to say sitting here in my comfy chair how I would react in some dire life-or-death situation.
Let's go over this carefully, as the concepts are slippery.
Lack of continuous experience of an event does not mean you died, it is merely memory loss.
Assuming the very reasonable theory that consciousness is classical and emergent from the network structure and interactions of neurons and glia, then it should be possible to encode this consciousness on a Turing machine. If we go a step further and replace certain collections of simulated cells with black boxes that behave the same way given an isomorphic set of inputs, then we can have consciousness even more cheaply.
Your idea is not necessarily more grounded. Assumptions you make are: that the substrate does not matter as long as it is not digital; that the replacement material will not itself have effects that result in vastly divergent actions in the long run (yet the perturbations a neuron suffers will differ from those of nanodiamonds; will this be significant?). You also take for granted that the new replacement brain will not allow a very large viewpoint shift due to the new complexities and speeds of thought possible. That some invariance of self remains from morphism to morphism. That there is more resemblance between you and the final being than between you and a lemur-like ancestor.
It is likely that the beings of the future will have far more sophisticated definitions of self and identity.
I personally think that physical bodies will give way to digital minds and its only a matter of time. Whether linear time or log time I can't say. But physical bodies are resource hogs. Progress is energy intensive and so is ever more complex thinking. Imagine a being whose memory is so dense that its thoughts have a gravitational pull of their own and its mind risks collapsing into a blackhole... Eventually theres going to be a lot of pressure to compress thinking beings and squeeze as much thinking capacity from matter as efficiently as possible.
You not only misstate my assumptions, but seem to miss my point entirely.
Here is my point: the original human will not experience continuity, therefore its instinct for self-preservation will not be satisfied. The best it can hope for is to take comfort in knowing that a copy will survive. Personally, this does not comfort me.
I don't doubt the possibility of a very high fidelity copy and supporting simulation, for all intents and purposes. I also think there may be sound reasons to pursue it. But I don't think it will benefit those who are copied, beyond any positive thoughts and feelings it may give them before they die.
Being selfish and programmed for self-preservation, I desire physical immortality instead. I have no interest in donating a copy of my memories to a simulation project.
I hope this clears it up; my interest is quickly waning, and I have work to do.
The claim to the identity is not the issue, the issue is that the person who got copied will die and experience that death.
The existence of a copy does not make the original person resuscitate or otherwise keep perceiving and thinking.
To express it in a bad analogy: you can have a bit-by-bit backup copy of a hard drive, but when a power surge burns the CPU and the disk, you have to throw both away. You can buy a new CPU and restore the backup, but the hardware is different, there is a shutdown moment, and when you power back up, the continuity is lost; it's a different entity that gets booted up.
To preserve the consciousness of a person between the "hardware change", I see no option other than the existence of something like a central repository of consciousness, outside both the body and the computer hosting the simulation, that gets automatically attached to a particular set of memories/perceptions/experiences (and whatever else defines a consciousness), so that when you die it stores your consciousness, and when the simulation is booted up the continuity is triggered. I find that far-fetched.
In abstract, philosophical terms, it might not be. Even in policy terms it's probably not (other than it might be cheaper to store people in SANs.) In practical, day to day terms, of course it is! My own instance of self doesn't want to cease to exist.
> Granted that consciousness is not fully understood, can we not agree that it is entirely manifest in the brain?
I don't think we can agree on that. There certainly are many claims of consciousness existing beyond the human body. For example, see a "Unified Field" of consciousness.
> I mean, you don't believe that human consciousness somehow survives the body after brain death, do you?
Literally billions of people believe it does (see: pretty much all major religions). Maybe human consciousness doesn't survive brain death, but some other form of consciousness does?
I think it's possible to contend that our physical body is sufficient to provide a framework for our human consciousness to emerge, but it may not be necessary for maintaining it.
Think of a stroke patient. Often stroke patients lose their memories / personality temporarily, but then regain it some time later. The injured portion of the brain heals in some respect, but the memories/neural functions of the injured area often move / reorganize to a different location in the brain (which is why sometimes a stroke patient can feel sensation in say their hand when someone touches their face).
I think this goes to show that our consciousness/memories/etc. are not necessarily tied to a specific source of physical matter, as even the brain can reorganize the "coded" functions to different areas inside it.
I believe that human consciousness is entirely manifest in the brain (and, of course, it's affected by other nerves elsewhere in the body). But moreover, I believe that the brain is performing "normal computation," by which I mean Turing-complete computation. Because of that, there is no real reason why the same computation responsible for human consciousness couldn't run equivalently on other Turing-complete hardware.
I could imagine how a sufficiently advanced machine could simulate one or more consciousnesses.
What I can't imagine is how you, the person sitting in front of the computer will somehow wake up in the computer. I can only see copies waking up. Even if you are dead. For the people who love you, having a copy would be great, but you will still be dead.
I can't believe I'm bringing this up, but the Arnold Schwarzenegger movie "The 6th Day" really matches my views on the subject, and also matches your views.
(Obvious spoilers for the movie)
In the movie Arnold is cloned, wakes up later, and goes about finding out what's going on. At one point he finds out HE is the clone! Back to our discussion. From the point of view of the copy, the procedure was successful: the consciousness was transferred to the machine and the copy continues its existence. From his POV he does wake up in the computer after the procedure. For you, though, it's a copy that wakes up. Perhaps your brain is destroyed during the upload process and you cease to exist. For the copy, you (that is, him) continue to exist and avoid brain death.
If a scientist asks you, "Was the upload successful?", you (the original) will answer "No". The copy you will answer "Yes".
For the digital copy to not notice the transition, there would probably have to be more going on—like a Matrix-esque simulation of the physical world or a realistic android to house the copy.
I do know that if you made a bunch of copies of me right now, I wouldn't expect to walk around seeing a superimposed image of my original self in Chicago, a copy walking around in New York, etc. I'd be totally disconnected from my copies, and that is reasonable proof to me that I should not expect to ever have a continuity of experience through a backup copy after my own death.
Are you making the unlikely assumption that you would actually be the original? More likely you'd black out during the brain scan and wake up as one of the copies. You have no good reason to simply assume that you are going to be the original process right after making that fork() call. How might your thoughts about this process change if you woke up as a copy?
If a scientist asks you, "Was the upload successful?", you (the original) will answer "No". The copy you will answer "Yes". That's my view, at least. Same answers for teleportation Star Trek style.
What is fascinating is how strongly and without proof many people believe the argument that "Consciousness is mortally bound to the physical body". If anything, this fixation seems like both an interesting study of the blind-spots of human mental process and as evidence that the situation is, in fact, rather unlikely.
This particular, uh, delusion, seems similar to the recurrent theme that an "intelligent" enough computer would/could "develop self-consciousness" (whatever that is) and start "thinking for itself". Here, we have strong ideas about the properties of "self-consciousness", without, as you say, having a ghost of an idea of what it is.
this goes back to the ages-old problem of "In Star Trek, the transporter killed you every time." We have this almost social idea of Body Continuity: as long as there is the same body the entire time, you are still you and still "alive". A scarier truth is that all it takes is a decent change to how your brain is structured (bullet, railroad spike, fork through the skull) and you can very suddenly, for all intents and purposes, be a different person, with the same or similar memories but essentially different patterns of behavior. Metamorphosis may not save you; it might kill the person you were just as surely as if you made a copy and burned the original. Further down the rabbit hole, just how different is your brain now from an instant ago, and your brain versus your brain with the addition of a few ball bearings or other random matter? The person you were an instant ago is just as dead as the original who gets destroyed, and you are the copy.
> I want to inject nanorobots into my body which will transform it gradually, cell by cell into an improved synthetic one. In this way, immortality and superhumanity can be achieved without loss of continuity.
Just last night I thought about this and went to sleep in despair :>
If someone scans my brain, builds an artificial copy, instantly cuts out my brain and inserts the new one, I clearly die (because I got my brain/me cut out) and my copy lives on. From the outside, I'm the same, but the first me died.
So instead, I get my neurons replaced with artificial ones, one at a time. If I drink a few beers, brain cells die but I'm still conscious. So gradually replacing my neurons one by one should be no problem; there clearly is no loss of continuity, I'm still me.
But what's the difference? Let's assume one neuron is replaced every second. No problem, nice and gradual. What if it's one every millisecond, every picosecond? Should make no difference, because it's still one by one. But, the faster you do it, the more it approaches "instant". And instantly replacing all my brain cells is the same as replacing my whole brain. Thus, I'm dead and my copy lives on...
> Let's assume one neuron is replaced every second. No problem, nice and gradual. What if it's one every millisecond, every picosecond? Should make no difference, because it's still one by one.
It should make a difference how long it takes, though.
Assume that consciousness is a physical process. If new neurons take part in this process after the old ones die (so that we have continuity as part of normal "healing"), there must be some time scale over which those neurons become an effective part of the consciousness process.
At minimum, if consciousness is a "network effect", this would be on the order of the latency of inter-neuronal communications. If the cells are doing some processing which forms part of consciousness, the time scale would increase.
To throw out some wild numbers: neuronal latency seems to vary, but I found a number for visual-cortical neurons of about 80 ms. (http://redwood.berkeley.edu/bruno/npb261b/whitney-reading/kr...) And you can't replace more than a few cells at a time (I wonder what fraction?) if you want to avoid impairing consciousness during the process.
(This also doesn't include whatever time it takes to make an effective copy of a given neuron and its state; we're assuming instant, perfect copying.)
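Just to make those wild numbers concrete, here's a rough back-of-the-envelope sketch; every input is an assumption (the common ~86 billion neuron estimate, a made-up batch size, and the ~80 ms latency mentioned above), not an established figure:

```python
# Rough back-of-the-envelope estimate of a gradual, neuron-by-neuron replacement.
# All inputs are assumptions for illustration, not established figures.
NEURONS = 86_000_000_000   # ~86 billion neurons (common estimate)
BATCH_SIZE = 1_000         # hypothetical: how many cells can be swapped at once
SETTLE_SECONDS = 0.08      # ~80 ms for a new cell to join the network

batches = NEURONS / BATCH_SIZE
total_seconds = batches * SETTLE_SECONDS
print(f"{total_seconds / 86_400:.0f} days")  # ~80 days with these numbers
```

With these made-up inputs the process takes on the order of months rather than an instant, which is the point: the rate of replacement matters.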
>If someone scans my brain, builds an artificial copy, instantly cuts out my brain and inserts the new one, I clearly die (because I got my brain/me cut out) and my copy lives on. From the outside, I'm the same, but the first me died.
If you look at the physics the right way, your brain is cut out and a copy inserted billions of times a second already.
You have an old car, and you want to renew it. If you buy it brand new and toss the old one, you are destroying the old one. If you keep changing pieces of it, testing them to see they work, and keep going until you have replaced the whole car, it's still the old car, just brand new.
There's not much difference at the end, except that you can consider that the second way of doing things will let you still use the old car, only it'll be new.
I largely agree that gradual replacement has the far better claim to preservation of identity. I would have hardly any desire to be uploaded any other way.
However, the car of Theseus example allows for a worrisome wrinkle. What if you kept all the old parts during the process you describe, and then after, you re-assembled them? Which car has the superior claim to being identical with the old car? I think it obvious that the reassembled old parts have the superior claim over the car produced by part-by-part substitution of new parts.
I find that first intuition odd. What if we alter the situation to not involve the new parts? Simply disassemble the old car parts and reassemble them; is that a brand-new old-car?
Or how about we make all the new parts plastic so that no functioning "car" is produced by the gradual part-for-part substitution. We end up with two cars, one an all-plastic car model, and one consisting of all the old parts assembled and functioning. Which has the better claim to being the old car?
Yeah, part exchanges quickly leave me with even less salient intuitions about the proper use of old identity terms than these more "splitting" oriented cases. What should we say if a lion talked and all that.
> Great. Now there is a digital copy of your mind. So what? You still get to die.
You get to do both. The concept of "you" changes once there is a copy, see the Ship of Theseus [1]. Each post-you entity has a legitimate claim to being pre-you. The copy woke up that morning, ate your breakfast, went to the uploading clinic, and went to sleep as a digital being. The copy has perfect continuity. The biological you, on the other hand, stayed biological and will die.
A gradual replacement doesn't change the fundamentals, the biological you has to die. The gradual replacement just makes for an unusual twilight, its mind commingled with an arising digital being.
You experience discontinuities in consciousness every day: when you sleep.
Why is it that we feel like the same person after we wake up from sleep, from anesthesia, or even a prolonged coma? You could go to sleep, have arbitrary operations done to the neurons in your brain while asleep, and presumably will awake into the "same body".
This thought experiment leads me to believe that the most important consideration for mind-uploading is to ensure that the original body dies/is killed before bringing the copy online.
Considering the metamorphosis idea, by the time your whole brain was replaced, piece by piece, by devices simulating its behavior, you became an upload - it's just that all of your data is running inside your head (or that this illusion is maintained by very low-latency links to your root-self running elsewhere).
Still, the moment you move its dataset to a new device and deactivate the original, aren't you killing the original?
From a body-centric point of view the original dies either way, whether neuron-by-neuron or all at once. From a consciousness point of view it's less clear. Firstly because we don't understand consciousness.
But if we believe consciousness is some subjective experience that results from the functioning brain or similar physical device, what can we say?
Think of a vinyl record and a digital music player. The vinyl record is playing along when it is stopped suddenly, the digital player is started at exactly the right time to produce no break in the song.
If you insist that you are the record player, then you die, you stopped. If you think that you are the song, it continued without missing a beat. Instead of an abrupt stop you can cross fade the record out and the digital one in over a few minutes. Does this make a difference?
Greg Egan's short story Closer has people implanted with a device, a "jewel", that learns to mimic the behaviour of their brain - to the point where people actually choose to switch from their biological brain controlling their body to their jewel....
Egan's story 'Learning to Be Me', available in the same collection as 'Closer' (Axiomatic), is also an exploration of this idea with relevance to sodiumphosphate's fear.
> Consciousness is mortally bound to the physical body, and will die along with it regardless of how many mental copies are made.
Why? Can you explain more? I would believe that you can be a fully functioning human living in a simulated reality if we ever managed to fully simulate the human brain..
However, since our brain modules are so messily coupled I don't know if we can easily "delete" memories and not impact anything else.
Unless you accept dualism (that there is some 'you' separate from the wetware in your head) you are always going to have this problem. A copy of 'you' would only sync with 'you' for an infinitesimal amount of time before you both diverged. The two 'yous' both have valid claim to everything before the Split but afterwards you are effectively two different people.
This is where the following thought experiment comes in:
Imagine science advances to the point where medical science can replace individual brain cells with perfect robot replicas. These robot cells interact with surrounding cells in exactly the same way as the real ones and can be initialized with the state of the real cell they are replacing. They're small though, so all the real computation work they may need to do is offloaded to a computer (over radio signals perhaps, or whatever.) Perhaps we can even optimize their operation such that communication with other robot cells that may happen to be next to it takes place inside the computers modelling both cells, instead of actually firing off the connection "for real".
Perform this operation once, and few would argue that the mind in question has died, been replicated, ceased to operate and been replaced with another... anything. It's just the same mind.
If you are willing to accept that (and if you're not, we'll just have to disagree), then perform the operation again. And again. And again.
This is what sodiumphosphate was referring to by "metamorphosis", and it does not require dualism.
The difference is that the original you exists on a consistent space-time arc. The copy of you diverges from that, and becomes not-you. To external observers, not-you appears to be exactly like you, but it isn't you, because your mind is a process that is tied to that arc. Not-you starts as you but then operates within an alternate space-time arc.
In the case of metamorphosis there is only one "you" at any point in time. The concept of divergence doesn't make any sense if there is only one you. Diverge from what?
We all really want to accept this idea. It is inviting. Not accepting it has a lot of scary implications. If small changes in the matter that makes up my mind "kill" me, then what happens when my body naturally replaces parts? In neurons, the whole cell is rarely replaced, but all of its components are; none of the atoms are there forever, so it is only a matter of scale to replace neurons with "working replicas". If you accept that somewhere along the way of "perfect replicas" you stop being the same person for some rational reason, then by scaling that reason down, simply changing out atoms in the natural process of life kills "me" just as surely.
Replacement with the subtly different is interpreted by our minds as "change". You no more "die" when the cells or atoms in your brain are replaced than a river dries up when all the water molecules you could see at one point in time flow away.
Exactly. "You no more "die" when the cells or atoms in your brain are replaced than a river dries up when all the water molecules you could see at one point in time flow away." when we find a way to measure this little death relative to the bigger more permanent one we can make interesting arguments. My advice, assume the transporter/uploading-machine will kill you. You will use it anyway for the same reason you use a credit card at the grocery store or corner market, you will be worried that the people behind you will be annoyed by the delay caused from your hesitation and the person who comes out the other side will not begrudge you for creating them.
If you want to define change to be a type of death, go ahead. As far as I can tell your position is philosophically sound, though I disagree that it makes for interesting discussion.
It would need to be really exactly the same, otherwise it is actually just a damaged neuron and the rest of the brain will continue normally, working around it just as it would a normally dying cell. Until some tipping point was reached, and what would you be - perhaps a read-only version of yourself?
It hinges not on engineering (the mechanism for inserting an artificial cell and it communicating with an external computer) but whether consciousness is computable in a Turing machine. The new "you" might be deterministic.
Can a human solve the Halting Problem (without necessarily being able to express that solution as a program)? Does even recognizing the Halting Problem take us out of what is computable?
Show that a human can decide if an arbitrary program terminates. You are the one making an extraordinary claim (and make no mistake about it, the suggestion that humans can be used as oracle machines is an extraordinary claim.)
Simply assuming it is the case because you want it to be the case is not science, it is religion.
I'm just a bit skeptical since I have never encountered a program so far which I couldn't decide whether it would halt or not. You would completely convince me if you could describe such a program. Also, this is not an extraordinary claim. According to Wikipedia, it is an open question:
> It is an open question whether there can be actual deterministic physical processes that, in the long run, elude simulation by a Turing machine, and in particular whether any such hypothetical process could usefully be harnessed in the form of a calculating machine (a hypercomputer) that could solve the halting problem for a Turing machine amongst other things. It is also an open question whether any such unknown physical processes are involved in the working of the human brain, and whether humans can solve the halting problem.
When you do that you are doing nothing more than applying heuristics, performing limited simulation, looking for repeated states. All of these are things that may be performed by algorithmic computation. That we don't have IDEs already doing all of this for us is a reflection of nothing more than the current state of machine learning and the relative computational densities of today's electronic computers and the human brain.
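As a concrete illustration of what "limited simulation and looking for repeated states" amounts to, here is a minimal sketch (the function name and step budget are made up for the example): a bounded simulator that decides only the easy cases and gives up on the rest.

```python
def guess_halts(step, state, max_steps=10_000):
    """Bounded-simulation heuristic. `step` maps a state to the next state,
    or to None when the program halts. For a deterministic step function,
    a repeated state proves an infinite loop; otherwise we just give up."""
    seen = set()
    for _ in range(max_steps):
        if state is None:
            return True      # halted within the budget
        if state in seen:
            return False     # provably loops forever
        seen.add(state)
        state = step(state)
    return None              # undecided: exactly the interesting cases

# e.g. the Collatz step starting from 27 -- this instance happens to halt
collatz = lambda n: None if n == 1 else (3 * n + 1 if n % 2 else n // 2)
print(guess_halts(collatz, 27))  # True
```

Nothing in that is non-algorithmic, which is the point: when a human "just sees" that a program halts, they are running some version of the same bag of tricks.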
Being able to solve the halting problem for programs generally seen in real life programming does not imply in the slightest that you can solve the halting problem for arbitrary programs. Can you look at arbitrary Turing machine tapes (large enough to eliminate the possibility that you are simulating it without realizing it of course) and non-algorithmically decide if they will terminate? Can you in fact do anything non-algorithmically? That is in essence what you are suggesting, the non-algorithmic nature of the human mind.
You are of course not the first to suggest a non-algorithmic mind. The likes of Roger Penrose have also suggested similar things (see his The Emperor's New Mind and later Shadows of the Mind). His reputation is enough to give me serious pause, and the fact that he does not seem to be a dualist further inclines me to listen to what he has to say.
If you want to delve further into this subject I suggest you check out Penrose's work, then read the criticism of it from his peers. Penrose has mounted the most qualified defence of the non-algorithmic mind of which I am aware, but even so acceptance of his thesis is rather rare in academia. It is not light subject matter but the general consensus among experts in the fields which he touches is that in order to support his thesis he made numerous errors. In absence of their expertise, I must defer to their assessment of Penrose's work.
> I have never encountered a program so far which I couldn't decide whether it would halt or not. You would completely convince me if you could describe such a program
Does this halt? Assume all variables are arbitrary precision integers.
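A sketch of the kind of program presumably intended (the original snippet isn't reproduced above, so this is only an illustration): it searches for a counterexample to Goldbach's conjecture, so it halts if and only if the conjecture is false, and nobody currently knows whether that is the case.

```python
# Halts iff some even number >= 4 is NOT the sum of two primes,
# i.e. iff Goldbach's conjecture is false. Whether that ever happens
# is an open question, so whether this program halts is unknown.
def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

n = 4
while any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
    n += 2
print(n)  # reached only if a counterexample is ever found
```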
"Consciousness is mortally bound to the physical body, and will die along with it"
I take it you would find it very hard to meet your copy, then? If for some reason your copy goes live and you don't die, I bet you two would get into a fight :-).
As it is now, if I'm alive and I meet my exact copy, he'd agree to die ("just do it quickly") until I really die.
And then the copy would become me - it's that simple. Maybe later it'd be possible to download myself into a body again, in which case I would retain all the knowledge that I acquired inside the virtual world.
Um, that would be murder. Furthermore, it wouldn't even be necessary: just go your separate ways, and when you die and "become" your (second) copy--this time, you time it correctly--you will still live on, even though you have a clone out there.
This was my favorite subject for a while and I arrived to a similar idea of transformation, except without nanobots: the computer becomes a continuation of your brain and then your biological brain is gradually killed cell-by-cell.
However, the idea that you still get to die after backup is based on an assumption that I find neither obvious nor true: that the stream of consciousness is fully continuous and never stops. I've recently seen a documentary about a guy who was declared brain dead and yet recovered into being himself again. So it might still be possible to "upload" a dead brain, in which case your experience is that of a resurrection, which doesn't sound so bad.
I'm wholly relieved that someone else thinks this way too (still aware of the fact that most thoughts aren't unique). I never came to the conclusion of metamorphosis, however.
Just felt a need to express my experience of camaraderie on the subject.
Your body is transforming at this very moment through mitosis. Gradually, cell by cell, and depending on what kind of activity you are performing, you are becoming stronger, smarter, weaker, or dumber. Regarding consciousness, my personal belief is that there is no continuity and it's more like the frame rate of a video or the sample rate of a song.
You could read about the Buddhist concept of impermanence and the five aggregates of being which sounds similar to the idea of a metamorphosis, only not just human but everything changing all the time.
In your metamorphosis example, is the finished product essentially a robot? Would you have the same problem when a software agent is transferred from one computer to another?
Assume a negative answer. Then the interesting implicit belief is that there is something special about body composition. Whereas in physical law, there is nothing unique in principle about the body. So the laws would have to be wrong or incomplete (failing to include consciousness) in an important way.
An interesting thought: there is a difference between gradually replacing your cells with mechanical/electrical equivalents, and copying your consciousness to a machine.
How? Well, think of replacing a hard drive under a running process. Compare that to migrating a VM to another machine. In one, the process is truly continuous, it just begins using new hardware; in the other, the process is cloned and the original process killed.
I disagree. Your individual consciousness will diverge of course (you are just a meat machine) but your computer copy will continue on and your ideas, ideals, and personality would continue.
Writing this I just had a major existential crisis because I realized that at the exact point of divergence there is a point where 'you' are both alive and dead at the same time, Schrodinger's Tomb.
I agree with you in that I think a gradual change is less crisis inducing even though it equals out to the same thing. I think that I'd get a brain backup anyway, just in case.
There is a lot of the classical discussion on personal identity here.
The argument about a copy being "identical to me but it's not me" is a very natural and intuitive reaction, but it's probably wrong. A very illuminating explanation is http://lesswrong.com/lw/pm/identity_isnt_in_specific_atoms and related posts.
I'll try a personal approach: imagine a country with no computers, no photocopiers, no presses, and a very strict law that forbids making copies of books which is indoctrinated from birth. There is only one copy of each literary creation, guarded in the National Library.
We can imagine that the people of that country have a very hard time to distinguish between "The Lord of the Rings" in an abstract sense and material object #1434 stored in the second floor, section Fantasy. For them LOTR is just that set of paper and ink. Even if someone secretly made a copy, letter by letter, it will be "just a copy", not the "true" LOTR. That's quite obvious to them.
And in a sense that's a bit our current situation with people (not exactly, because books are static and people aren't). We can't scan people-as-information out of people-as-protein-body and store them as people-as-silicon-body, so we mix the concepts. But it's just confusion due to technological limitations.
So in a sense we DO have souls, but it's a mathematical or information-theoretical soul, not a metaphysical one. It's always bound to a material instantiation, but that concrete physical manifestation is not the same as the dynamic, interactive information process a person is.
So if technology advances enough we'll be able to copy a person, and it'll make no sense to ask which of them is really "you". Both are.
Yes, of course both are. But still, you're only ever going to be conscious from the perspective contained within the original you. The copy may be mathematically identical, but it does not make sense that there would be any kind of a magical transition of consciousness from one to the other. The other "you" would have its own consciousness entirely separate from yours. I.e., if you copy yourself and kill the original, you will lose consciousness. You will not wake up.
Imagine I have a magical device with which I can "stop time", or more precisely, stop the movement of all particles except mine. In the beginning I am at your right. I stop time, move leisurely to your left and resume time.
What you see is me teleporting from your right to your left. You don't feel "trapped in a body that can't move", because your neurons are also magically frozen. Your consciousness feels continuous, even if you have been "suspended" for a while, because your memories are properly structured. Consciousness is not "something else"; it is a property of your memories. (The blog post on lesswrong.com about timeless physics is important to see this point of view, I think, even if you don't buy the theory.)
So if my copy is mathematically identical then there is nothing else left. There is no extra consciousness stuff that the "original" has and the "copy" does not.
Sorry, I really can't explain myself better. I'd suggest reading the link provided.
I will read what you suggest, but perhaps you can illustrate how the sensation of identity would transfer from the physical body to the digital body? Because your example does not explain it.
Let's suppose we believe that everything about me can be contained in the physical processes that happen in my body.
Now suppose that we have the ability to digitally simulate those processes perfectly. Since my existence is purely physical, the "sensation of identity" you describe is perfectly captured by this digital simulation, by definition.
It could be duplicated, but then it would be a copy and not the one you currently have. I expect you can easily imagine this by picturing your copy uploaded to the simulator while you live and then running the simulation.
Whatever the copy does after that you won't experience and it won't be part of your memories. If you say that's not the case and that you will experience and be aware of what the copy is doing, then you are proposing some sort of metaphysical connection between the two beings, which I find hard to swallow.
I like this definition of identity: "A person's identity is defined as the totality of one's self-construal, in which how one construes oneself in the present expresses the continuity between how one construes oneself as one was in the past and how one construes oneself as one aspires to be in the future"
I don't know where you're getting the idea about a psychic connection. That's a strange idea, and I don't see anyone suggesting that.
What I see you doing is presupposing the "I" and "it" beforehand, as if the future clone does not share all your memories and experience. Right now the clone and you are one and the same. Just as you can imagine a copy uploaded to a simulator and run, it is equally valid for you to imagine blacking out during the brain scan and waking up in a virtual world. Before you undergo the brain scan it would be wise to prepare yourself for possibly waking up as the clone.
I and my copy are the same only in the instant the copy is made 'alive', the second after that we will not be equal, and every second that passes we will diverge more, unless there is some kind of metaphysical connection.
The virtual world copy will of course be (just like) me the instant it wakes up, but I (the original) won't experience the virtual world. I don't see how the original could wake up in the virtual world.
There are three states of being here, and you're confusing two of them. First, there is pre-you: the you before the cloning point. After that there is the original and the clone, both of which share the exact same pre-you experience. Just because you are pre-you doesn't mean you will be the original after the cloning point, since both the original and the clone branch from pre-you. If you are the copy, your experience is that you were pre-you first and then suddenly you woke up in a virtual world. This is illustrated simply by looking at the fork() command. You cannot make a fork() call and then have the code afterwards presuppose that it is the parent process without checking the return code. In the same way, you cannot be sure that you are the original after the cloning point unless you have solid evidence of it.
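For what it's worth, here is a minimal sketch of that fork() analogy (Unix-only, purely illustrative):

```python
import os

pid = os.fork()  # one process goes in, two come out, sharing all prior memory
if pid == 0:
    print("child: I have every memory from before the call, yet I'm the copy")
else:
    print(f"parent: I'm the original; the copy is pid {pid}")
```

Neither branch gets to assume it is the parent; the only way to know is to check the return value after the fact, which is exactly the position the pre-scan person is in.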
It would be an exact copy of your mind. Think about that. All of your experiences, memories, thought-patterns, etc., including all of your presuppositions that you will be the original. You sit there and smugly tell yourself that you will not be the clone. You couldn't possibly wake up as a clone, right? That the clone is going to be this "other" thing over there that has nothing to do with you. Guess what? The clone wakes up having had all of those exact same thoughts and experiences. The clone is you. I wouldn't recommend that anyone in this state of mind go through a cloning process because it would just end up with a confused, depressed and generally fucked up clone.
I fully agree that the copy will rightly believe it's the original, and for all practical matters to the rest of the world he can very well be considered an original, if he can at least communicate with the outside.
But this just doesn't consider the fact that in the real world, there was a real original who went to a copying facility and then went home, in the physical world. This person does have the return value of fork() [$] and does not experience the virtual world.
It's in this sense I'm saying I can't imagine going to a copying facility and waking up in a virtual world. I can perfectly imagine a copy doing that, but it won't have my future experiences. In fact, going further, given my beliefs, were "I" to wake up in a virtual world I'd be sure I'm a copy, because I'm certain the original could not wake up in a virtual world.
[$]: As long as we don't get fancy with psychothriller manoeuvres where the original is drugged and the copy has a body clone that returns home to his wife, while everyone tells the real original he's in a virtual world.
Yes, there is a "you" who went to the facility and then went home. There is also a "you" who went to the facility and subsequently blacked out only to wake up later somewhere else. An outside observer sees you walk in, a scan made, and you walk out. From the perspective of the post-original, you walked in, scanned, then walked out. From the perspective of the copy, you walked in, blacked out, and then woke up later.
I guess my point is just that when you refer to yourself pre-cloning, you have to realize that you're speaking (and thinking) for the copy as well. It's fun to think about. Makes for great sci-fi.
Yes, but it will create a new sensation of identity on the copy, which would be equal but separate to yours. That's the gist of the point. See my other answers around this subthread.
I only object to the privileging of "your" consciousness. I don't know how to express it. Yes, of course when the copy is made each person-thread evolves on its own. My point is that you can't distinguish one being "the original" and the other "the copy".
Just imagine that, in the normal course of things, a person at time t is constantly being copied into a slightly different new person at t+dt and then destroyed. There is a causal connection and thus an inheritance of memories, but no single "essence of yourself" being conserved.
With current technology we only have a directed linear graph, like o -> o -> o -> ... . With uploading tech this graph will fork into two paths, but neither of them can claim to be "the original", because there is no such thing.
I mean, maybe this concept of "personal identity" is simply wrong. It's an illusion because currently threads of causality on people are linear, but that's just a technological limitation.
It's very rational for you to avoid privileging any single one of my copies, but for me it's very rational to privilege my own instance. And likewise, I wouldn't care which one of your copies thrive, but I would expect that you would hope your instance is the one that survives.
And I don't see how linearity has to do with self-preservation instincts. Maybe you are arguing that once we have several diverging copies the self-preservation instinct will change and be content as long as one copy still survives? I just can't imagine it.
Let's say we can copy me. As in, we make another me (let's call him me* ) that is physically identical to me in every possible way.
I believe your argument is that I will continue to be me, and me* will essentially be a different person. If I die, that's it. me* lives on, but I am not me* , I am me.
The interesting question, which your parent (in the thread) posits an answer to, is what is the actual difference between me and me* ?
If you believe that humans exist entirely within the physical world (as in, there's no mystical/religious/spiritual element to our existence--we're just matter and energy), then me and me* are no different. Suppose we go back to your idea, that we kill me after creating me* . Since I am purely a physical being, I have no way of distinguishing between me and me* . In particular, if you never told me that I am me* , I would have absolutely no way of distinguishing the difference between me and me* .
Wouldn't me and me*'s consciousness diverge after a few hours experiencing different things in different places? If so, would me agree to suicide knowing that me* is still alive?
If by consciousness you mean introspection or self-modeling or something like that then consciousness is definitely real. But it's just a property that some minds have, and that someday will probably be analyzed and replicated in silico.
But if you mean some kind of special conserved "essence" that makes you "you" in addition of your memories... then yes, it's probably just an illusion.
The "rate limited" option in the video reminds me of slow clubs from that book.
For people who haven't read it, in the novel there are virtual parties called slow clubs where poor consciousnesses go to meet. Everyone agrees to limit themselves to the processing speed of the slowest (poorest) person there, so that people who can only afford the occasional spare clock cycle can have meaningful human contact.
One idea that I found quite haunting from Permutation City is the idea that you could set up a higher-level controller for your running Copy mind that would periodically choose a new obsessive interest and then provide you with a rich environment to support your obsession.
As someone who is prone to rather obsessive interests (as I suspect many are on HN), I'm increasingly conscious of the fact that I can't directly control what I find interesting.
Under current laws, wouldn't the way the Terms of Service are presented be considered a contract signed under duress? I mean, they are saying that if you don't sign the contract, they'll kill you, which is kind of the textbook definition of duress.
However, before we get to a point where people are being uploaded into a system, we have a lot of legal precedent to set. The legal rights of the bits that are copied from your consciousness, for one.
To be boringly practical: The contract would almost certainly be determined when you first signed up for the service (i.e., when you were still alive).
It's tough not to immediately draw the many parallels to what we see happening in society right now, and the scary part is that this is so easily conceivable as a realistic future. Well done!
Let's just hope the use of "traffic accident" as a euphemism doesn't increase in order to end a trial prematurely. I would prefer to use my trial version for as long as possible (or would I really? Look at all those nice plugins!).
Some (rather lame) lingo for the coming pay model of life:
Cheapskate, you're such a trialler
You haven't really lived until you lived with Life TM.
He hasn't upgraded in a long, long time
Do this for me and I'll make you live forever
You gotta commit suicide at least once, the rush beats base jumping hands down, and the white tunnel effect is pure bliss - and it's legal
Well you know, I could sponsor your life subscription if that would help the situation
Great video; I find it amusing but also quite depressing at the same time. If the original premise were possible, I would be utterly unsurprised if this is what would happen.
BUT.. are people really so pessimistic about technology and the future that the main thing that comes to mind when they think of digital immortality is.. this?
Your virtual body can be anything you want. You can't die as long as your digital code is backed up somewhere.
If you can't see the bright side of that stuff, there is some kind of psychological issue.
It has both effectively infinite possibility for goodness and infinite possibility for badness. Your virtual body can be anything you want... as long as you control the execution parameters. This is not guaranteed! What if $YOUR_LEAST_FAVORITE_COMPANY is the one in control? Or worse? Even the option of suicide can be removed from you. If Hell does not exist, there is a non-zero chance Man will make it for himself.
Indeed, if things are just left alone I would not even say the default state is that you'll be in control of your own code. It will be a company that develops this technology first, since the alternative is inconceivable, and they will have their own agenda. There are a lot more possibilities that could emerge than just a happy utopia.
Iain M. Banks's most recent Culture novel, Surface Detail, dealt with the idea of digital Heavens and, of course, digital Hells. They are run by religious high-technology civilizations.
A virtual hell is a nasty place that sinners are copied into after death, for eternity. Sometimes young sinners are given day trips there to straighten them out, before death.
>Indeed, if things are just left alone I would not even say the default state is that you'll be in control of your own code. It will be a company that develops this technology first, since the alternative is inconceivable, and they will have their own agenda. There are a lot more possibilities that could emerge than just a happy utopia.
One might argue that these technologies are so advanced that they will not be developed until the current resource scarcities which govern our lives have been removed, and as such we wouldn't be subject to the same degrading economic pressures.
This makes some sense to me, but maybe I've just been watching too much Star Trek.
I'm pessimistic about this type of technology because it isn't "you" that we are talking about; it's a copy of you; you still get to die, and I don't think that knowing a copy of your mind exists will be of any real comfort.
Exactly. It baffles me that even very smart people can't see this bug. If it can theoretically exist separately while you are alive, it can't be you.
For example, if You and Digital You (tm) were alive at the same time, I could put you in one room, and Digital You in another, and smash Digital You without You having any idea. You could live your whole life not knowing if Digital You was "alive."
Whatever your definition of me is, it seems like me dying should have an effect on me.
In "Altered Carbon", a character that has lived for hundreds of years destroys an instance of his self in order to prevent the upload of a mind virus into a backup copy of his self. In doing so, he loses a few days of experience.
Is that situation utterly contrived because he destroyed his mind, or are you insisting on a limited definition of self?
(Let's set aside the part where there is mind replicating technology but no snapshots or version control)
The ability to see a bad side, and the inability to see the bright side are two different things.
Of course digital immortality is inestimably cool. There are so many amazing and obvious positives that we can even imagine on this side of the singularity.
However, if we only think of the positives, without considering the negatives, then we run the risk of stumbling blindly into a dystopia created by the few who did think of the negatives (and how to profit from them), or by optimists who failed to plan for a suboptimal outcome.
From another point of view, fiction about a perfect utopia is boring. You need some imperfection in order to provide the requisite tension to make a story.
I'm not excited about this future. Whatever is uploaded isn't me, it is merely like me--a copy. Anything that is theoretically capable of running while I'm still alive is just a copy--it isn't me.
The "copy" becomes the "real" you because it has extended life. Both you and the "copy" consider yourselves to be the "real one" (or maybe you accept the coexistence), but you are going to die and the copies will live forever.
You think: "That copy isn't me, ericb!" * dies *
Copy thinks: "Great! I, ericb, just got a new lease on life!"
That is great for the copy, but not helpful for me. Death of my consciousness can't inflict a new identity on the copy. The copy is a copy--he's awesome, just like me, but not me because I am dead.
It wouldn't be me that woke up as a copy. It would be a copy that feels just like me. However, since he would have my intellect, he would reason that he is not the original, and if the original had died, he would feel sad. On the other hand, he might be excited if he were in a machine body, as that fate might not befall him for a very, very long time.
Here is where you're wrong. You presuppose the "he" and "I" when they are perfectly interchangeable right up until the cloning process. You can't presume that because you're the original now that you will be after the cloning process, because that has a 50% chance of being wrong. Right now, you and the clone are one and the same. Any reasoning you do about "me versus him" before the cloning is shortsighted. You have to reason like this: "If I wake up as the clone, then X. If not, then Y." It's like calling fork() in your code and then writing the code after that to presuppose that you're the parent process without even checking the return code. You don't know that you're the original after the cloning unless you see evidence that you are.
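To make the fork() analogy concrete, here's a minimal sketch (in Python, with os.fork() standing in for the hypothetical cloning step): before the call there is one process, and afterwards neither branch gets to presuppose which one it is -- it has to check the return value.

    import os

    # os.fork() is the stand-in for the cloning step: one process before,
    # two identical processes after. Neither may presuppose it is the
    # "original" -- it has to inspect the return value.
    pid = os.fork()

    if pid == 0:
        # This branch runs in the child: the "copy" wakes up here.
        print(f"I am the copy (pid {os.getpid()})")
    else:
        # This branch runs in the parent: the "original" continues here.
        print(f"I am the original (pid {os.getpid()}); my copy is pid {pid}")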
Think of it this way: let's say instead of a brain copy it is a full body copy. You walk into a dark room where you can't see anything. They knock you unconscious, and then you wake up lying next to what appears to be you. How do you know if you're the original? Because you were the original before? Sorry, but no.
Each running instance of me is a separate, alive person. Forget about the memories for the moment. If you kill that person, they are dead. It doesn't matter who is the copy. I could be a clone for all I care. Doesn't matter. If you kill this body--the one I'm IN--then my stream of thoughts ends. Other similar streams of thought offer me no comfort.
True, but in that case, the motivation for having a copy in the first place gets a lot lower.
It isn't immortality. At most it's another version you create so your friends and family don't have to give you up. Which I can see being pretty motivating... but on the other hand, I can see the friends and family being totally unwilling to have a meaningful relationship with a second, slightly-diverged person after the first one died. And then the copy would be lonely. :-(
If I don't have access to their mental state, they are not me. I think I am like an instance of an object, and my identity is related to my path through space and time. To be me, they'd have to be standing where I stand, and have stood where I stood.
A 100% copy of the Mona Lisa is still not the Mona Lisa--it lacks the history.
You don't always get what you want. When the Mona Lisa gets destroyed, the 100% copy becomes just as good, with its own history of being a perfect copy of something that had a long history.
I like the idea I read somewhere about "Spinning off" versions of "you" to go do/learn something, then re-integrating that version of you back into yourself! I still agree with sodiumphosphate, however, in that I prefer immortality and dodging the "end" of this mind. I MUCH prefer the idea that nanites can make us better, rather than "uploading" ourselves totally.
This is also why I don't like the idea of a Star Trek style transporter. At what point do you stop being you? I'd much rather stay alive and in one piece, thank you. 8)
Hmm, if this were possible, I'd rather rig up an autonomous backup machine that I would periodically back up to, and which would start automatically if I miss a scheduled backup (which would mean I'm dead). It would be like using hosted vs. self-hosted, or buying CTO vs. pre-built computers.
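A rough sketch of what that dead-man's switch might look like (all names, paths, and numbers here are hypothetical, purely to illustrate the idea): the living original touches a heartbeat file after every successful backup, and a watcher only boots the restored copy if the heartbeat goes stale.

    import time
    from pathlib import Path

    # Hypothetical values, adjust to taste.
    HEARTBEAT = Path("/backups/last_heartbeat")
    GRACE_PERIOD = 7 * 24 * 3600  # one missed weekly backup => assume I'm gone

    def original_still_alive() -> bool:
        """The original touches HEARTBEAT after every successful backup."""
        try:
            return time.time() - HEARTBEAT.stat().st_mtime < GRACE_PERIOD
        except FileNotFoundError:
            return False

    def boot_restored_copy() -> None:
        # Stand-in for "start the backup instance from the latest snapshot".
        print("No heartbeat within the grace period; starting the restored copy.")

    if not original_still_alive():
        boot_restored_copy()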
Great book, but I don't think the treatment of virtual hells was very convincing. First of all, the characters in hell would have gone insane with agony, but they just kind of wince it off and keep chatting. It reminded me of Niven and Pournelle's awful 1976 adaptation of Dante's Inferno, in which, for example, the macho protagonist grits his teeth and swims across a boiling lake.
Anyway, secondly I don't understand where all the devil-with-pitchfork stuff comes from in Banks's hells. Why not just have everyone floating in blackness and experiencing the maximum possible amount of pain at all times? To prevent them from becoming catatonic you just keep resetting their brain state to the moment of their death, so they're always feeling the first shock of agony. And it scales fantastically; just copy that small simulation a billion times and run them concurrently.
Of course, there will be some philosophical problems (are two identical concurrent brain simulations really two distinct subjective consciousnesses? If so, what are the implications of error-correcting server memory that stores everything twice? And does resetting someone's memory so that the same simulation plays out repeatedly actually cause the person to experience things multiple times, or is it immaterial how many times you replay it?), but that will be of no comfort to you when you wake up in the pain-box hell.
"I don't understand where all the devil-with-pitchfork stuff comes from"
I assumed that it was all cultural (note the lowercase c there) - societies would construct, using technology, the Heavens and Hells that their religions had told people already existed.
Other writers have covered the idea that higher powers might use some of their resources tormenting virtual copies of lesser beings - I'm sure this is mentioned as something that some Transcendent Powers do in A Fire Upon the Deep. Not to mention Charlie Stross's excellent A Colder War:
You can wander in purgatory forever as a row in the database, or you can go to Hell, where you will be forced to hack on the codebase for Life - in Brainfuck. Fortunately you may be able to scrape together your wages after a while to purchase your way to a Tier 1 Life. Some find it worth the trouble, just to enjoy the real sky.
This. It's all but explicitly stated on the screen. Notice the notice informing you that your "simulated personality" will be terminated upon refusal to accept the TOS.
Not that it matters, clones don't have souls, like twins. [0]
Thinking about it, OP's concept may have potential for a game. Something in the style of Introversion's "Darwinia" (I highly recommend it, btw), but with first-person gameplay. The linked video could perfectly serve as an intro to the game. It could have a very immersive effect, e.g. no menus or splash screen or anything. The gameplay could adopt some elements from the "Myst" series. The setup for these games could easily be a passage of the player's character to the netherworld anyways.
sigh damn. Too many ideas, not enough time. Imagine if you could program in your afterlife. Imagine all the projects you could do with infinite amounts of time. Humanity's progress would get another exponential factor attached.
That's why some people can recite whole poems and other people can play songs from memory while singing them too. That's why you know just when to pause when you're singing to a song on the radio and the singer stops, and you know when the pitch changes, and what that particular beat is in the background that you follow.
There are more than enough details for it to be considered a violation of copyright.
Only a few people remember whole poems, and a minority of people remember whole songs.
In any case, the number of fully copied items would almost always stay below one hundred (not hundreds of thousands).
And even those "copies" would not really be considered a copy by our contemporary judge (circa 2012). These are "backups" at best.
As the embodied soul continuously passes, in this body, from boyhood to youth to old age, the soul similarly passes into another body at death. A sober person is not bewildered by such a change.
The change of body is like changing your clothes. Does anyone really think consciousness is limited by matter?
This is a side point, but to me, it was amazing how well the video captured the usability frustration of our time.
Every time the video jars us out of the 'future' (an unexpected Rejected, being offered options but only able to choose one with the others greyed out, etc.), it feels so much like the awful experience we all have using just about anything today.
I mean, even down to the error BEEP jarring you out of the faux-pleasant music. Maybe the video creator didn't have this in mind at all, but just wanted to make it seem 'realistic' - but they got the usability nightmare we live in spot-on!
Very cool. There were a lot of parallels to a handful of different things. I loved it, but if you're looking for some feedback, I'd suggest maybe a bit more focus on any one subject. If I didn't know better I'd guess this was inspired by Vanilla Sky. But back to focus: I saw references to Vanilla Sky, tiered subscription services, and copyright. While the message of each gets across fine, they don't all seem to fit together really well. It was kind of awkward.
If this is an after-death experience then it really wouldn't matter if you couldn't remember copyrighted works. But beyond that I don't think copyright holders would care if you remembered their works because you're no longer able to be a customer. Same with the ads. If the service works like it did in Vanilla Sky then advertising is also pointless. Dead people don't buy things again. They pay for the after death experience and that's it.
But otherwise it's cool and interesting. Maybe I'm reading into it too much. Maybe there was no message and it was just a fun joke. Or maybe it wasn't based on Vanilla Sky. I don't know; I feel bad offering that criticism, but while I was watching it, that's what popped into my head.
The copyright issue that they've raised is actually one of the parts of copyright that I consider mind-blowing. The idea that memory is copying changes a great deal of the discussion -- for example, when the movie industry says "oh man, we're losing profits because all these people are getting free copies from the Internet," there is a direct parallel to them saying, "oh man, we're losing profits because all these people are remembering." Imagine, they spend all those millions of dollars on new films when they could just spend that as a one-time cost and zap us all with a Forgetting Ray as we leave the theater. Did you feel entitled to keep those memories? Ugh, kids these days, feeling entitled to copy our movie and share it with their friends.
What is not really addressed here, and you're hinting at it, is whether your after-death experience requires working for a living. The premium service mentions a 'subscription' so I assume it does. It's not strictly true that "Dead people don't buy things again." I mean, you might not buy physical clothes again, but you might pay the Life Store for trendy clothing-data that someone has coded for them.
I especially liked the "you can either agree to our Terms of Use or die, your choice."
This seems to be based on the assumption that you continue consuming content and services in the Life network. That way the advertisements and copyright parts make a lot of sense.
Yeah, but I think it's still something to keep in mind, because it calls into question the rationale for the 3 "tiers" in the first place. The anti-features involved in making tiers 2 and 3 probably have higher costs than the supposedly "open" model in tier 1. Also, I would really be keen on having the option of looking at the source code of something that can mess with my consciousness.