At least you're openly admitting that's your plan. So, you know, we can drag you behind the chemical sheds and shoot you for Criminally Irresponsible Use of Applied Phlebotinum.
In other words, make a doomsday robot that remembers people before it kills them.
As they die, some will take solace in a religious belief that numbers in the machine represent everything they were and ever will be. Others will just die.
In his defense, if you're dying anyway, you might as well leave a "ghost" behind. The ghost might not be you, and it will certainly have some psychological issues to deal with due to knowing that it's one ontological level "down" from a real, flesh-and-blood person, but you were going to die anyway.
I assume that ontological security matters. If I know my consciousness runs on meat, I know that I have my own personal substrate. If I know I'm in the Matrix, I know that whoever has `root` access can alter or deceive me as they please.
The one thing nobody ever specifies about these crazy schemes, which would otherwise be a great way for humanity to get the hell off of Earth and leave the natural ecosystem to itself in our absence, is who will be root, and how he's going to forcibly round up everyone who doesn't like your crazy futurist take-over-everyone's-minds scheme. Hell, what's going to stop him from rampaging across the real Earth and universe, destroying everything in sight, while everyone else fucks around having fun in VR?
I'm really wondering why this nasty, insane idea has been cropping up more frequently lately in geek circles.
And that's not even starting into the sheer ludicrousness of claiming people's consciousness is pure software when we know that all kinds of enzymes and hormones affect our personalities!
> And that's not even starting into the sheer ludicrousness of claiming people's consciousness is pure software when we know that all kinds of enzymes and hormones affect our personalities!
That's a bug to fix in implementation accuracy. I'd obviously prefer more accuracy, but if it comes down to a choice between a less-than-perfect available implementation and dying of old age, I'll happily take the less accurate implementation, especially one that preserves enough information to fix the issue later.
The much more serious bug I am concerned about is the continuity flaw: a copy of me does not make the original me immortal. I'd like the original me to keep thinking forever. Many proposals exist for how to ensure that. The scary problem that needs careful evaluation before implementing any solution: if you do it wrong, the copy will never know the difference, but the original will die.
No human should ever be root. But we might just trust a Friendly AI. Well, provided we manage to make the AI actually Friendly (as in, one that does exactly what's good for us, rather than following whatever imperfect idea of what's good for us we might be tempted to program into it).
The question is not really whether such-and-such implementation is best. The question is: does changing the implementation preserve subjective identity?
I bet many people here would not doubt the moral value of an emulation of a human (whose feelings and such are simulated to the point of being real), but would highly doubt that it is, well, the "same" person as the original.