Will You Ever Be Able to Upload Your Brain? (nytimes.com)
102 points by andres on Oct 11, 2015 | 154 comments



The author seems to have written the article on the assumption that it will be humans figuring out a brain upload mechanism.

While his time frame estimates might be sound given that context, I'd argue they're wildly inaccurate given a more likely scenario: such things will be brought into existence first via a super-intelligent agent, if at all.

If that proves to be the case, the time frame for brain uploading becomes roughly bound to the advent of AGI. When that happens, the same entity capable of creating our brain upload mechanism would likely be capable of endowing humans with effectively immortal bodies just the same.

In other words, technology like this is almost certainly going to be part of a post-singularity world (assuming there is a singularity in the first place), and who knows what that will look like.


There is a huge range between slightly-smarter-than-human AI and the kind of 'magic' AI it would take to make the singularity happen. An AI that's 10% smarter than any human would probably not be that noticeable outside of testing, as it's not obvious who the smartest person in a room is. And it would take a decade of study to catch up to the cutting edge in just one field. 10x that and you're doubling the rate of progress that a single person was making, but that's not really going to change much.

Suppose we develop an AI that's 100x better than that, i.e. 10 times as fast as any person, in that it can do in 1 hour what would take a highly trained and educated person 10 hours. Well, after a year or two it could probably be roughly as capable as a really good 20-person team due to lower communication overhead, but adding one more 20-person team is not going to change the world much. A million of them might double the rate at which the world progresses, but that's both a long time coming and probably not going to change things much.
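A quick back-of-envelope for those numbers (every figure below is one of the assumptions above, except the size of the world's research workforce, which is a hypothetical order-of-magnitude guess):

    # Rough arithmetic for the scenario above; all numbers are assumptions.
    team_equivalent = 20             # one 10x-speed AI ~ a really good 20-person team
    num_ais = 1_000_000
    world_researchers = 20_000_000   # hypothetical order-of-magnitude guess

    added_capacity = num_ais * team_equivalent   # 20M person-equivalents
    print(added_capacity / world_researchers)    # ~1.0, i.e. roughly doubling progress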

To really get singularity-type progress, you need an AI that can make itself smarter through better software and hardware, which seems like a far harder problem than just building a working AI. After all, what if the first thousand just want to sit around and read fan fiction all day?


Most fast-takeoff AGI scenarios involve a singular intelligence created from an algorithm that scales easily with how much computing power is allocated to it.

Combined with the notion of computing overhang[0], it's possible that our very first AGI could far exceed human intelligence, such that it would have capacity for rapid self-improvement right from the start.

I'm not saying this will necessarily be the case; slower takeoff scenarios are certainly a possibility, albeit an unlikely one (IMHO).

Another point to consider is that computing hardware is vastly superior to the human brain both in its capacity for raw calculation and in its sheer speed in terms of latency.
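For a sense of scale on the latency point (both figures below are commonly cited approximations, not measurements):

    # Order-of-magnitude latency comparison; rough, commonly cited figures.
    neuron_step_s = 1e-3         # neurons fire at most ~1000 times per second
    cpu_cycle_s = 1 / 3e9        # a ~3 GHz processor cycles every ~0.3 ns

    print(neuron_step_s / cpu_cycle_s)   # ~3e6: millions of cycles per neural step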

Keeping that in mind, assume we have created an AGI simulation that's approximately equal in complexity to the human brain. Not intelligence, just complexity of the simulation itself.

Now consider that AGI is not bound by biological constraints, and would not necessarily be structured in similar fashion to the human brain. This lends itself to potentially far more efficient architectures, and with that, the realization of AGI at a computational cost far below that of simulating a complete human brain.

Moreover, given a completely alien architecture, it's not hard to imagine the cognitive side of such a simulation interfacing rather directly with traditional modes of computation.

To illustrate, imagine comparing a human being to an identical human being that has a microprocessor integrated into their brain. Both humans may have nearly identical capacity for abstract intelligence, but the one with the microprocessor is going to be effectively far more intelligent, simply because their capacity for dealing with raw data and calculations is vastly superior. It's not hard to imagine a similar dynamic coming into play with AGI.

[0] http://wiki.lesswrong.com/wiki/Computing_overhang


> Most fast-takeoff AGI scenarios involve a singular intelligence created from an algorithm that scales easily with how much computing power is allocated to it.

Still, it's pretty easy to imagine this not resulting in the sort of "runaway" intelligence most often associated with the notion of singularity. What if general intelligence is inherently extremely computationally complex? Imagine that the level of intelligence is represented by a number n, and say that human intelligence is n=100. If the computational complexity of intelligence is, say, O(1000^n), we won't expect to see hardware that's twice as intelligent as humans any time soon even if we know the algorithm.
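To put numbers on that, here's a sketch using only the hypothetical O(1000^n) cost model above, plus the common assumption of one hardware doubling every ~2 years:

    import math

    # Hypothetical cost model from above: each +1 "intelligence point"
    # multiplies the required compute by 1000.
    doublings_per_point = math.log2(1000)       # ~9.97 hardware doublings
    years_per_point = doublings_per_point * 2   # assuming one doubling per 2 years

    # Going from human-level (n=100) to "twice as intelligent" (n=200) would
    # then take on the order of 100 * ~20 = ~2000 years of hardware growth.
    print(years_per_point, 100 * years_per_point)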


If that were the case, I would agree we would be quite far away from simulated human-level cognition.

My main point was that either we have the computational capacity, or we do not. If we do, the chances of overshooting the human-level mark by a large margin are quite high, hence the risk.

Another angle on this is that perhaps human cognition has an unnecessarily high degree of computational complexity, a degree that's far above the threshold required for an AGI to be dangerous.

The Paperclip Maximizer[0] is an excellent example of this. Perhaps due to low computational complexity of its architecture, the thing wouldn't even be sentient in any sense of the word, and yet it'd still be perfectly capable of destroying the world.

Why? Because its initial intelligence would be determined by the amount of computing power allocated to it. Allocate enough computing power, and assuming you've passed the magic threshold, it would start allocating itself computing power in short order.

Perhaps what we should fear most is not sentient AGI capable of thought similar to humans, but AGI that's a structured amalgamation of existing AI technology—if only because the latter is far more likely to be feasible first.

[0] http://wiki.lesswrong.com/wiki/Paperclip_maximizer


Latency and cross-node bandwidth are major issues with just adding computing power. At some point you're going to be better off adding new AIs vs. trying to make the first AI better.


Once the first human-level or better AI is out the door, engineers are really good at mass-producing and optimizing it.

I could see someone investing in a factory to manufacture a billion of them within 10 years, once someone has demonstrated that it's possible.


You're implicitly assuming the first AI runs on cheap hardware. It seems more likely the first general AI will take far more than $1 million in hardware and require a lot of hands-on interaction. This makes pumping out 1 billion of them far more difficult than you're suggesting.


> who knows what that will look like

Some brains will want to run on some sort of trusted infrastructure and be stored in an immutable datastore. Mine for example. Or...

Consider for a moment that there will be a singularity in the future. What if it's a critical, recursive part of existence? If there is a singularity coming, it's likely to have happened before now. I'm not saying we're not experiencing it for the first time here, but maybe this is the nature of things: to be recursive in the creation of a thing, and then for that thing to eventually discover its true nature of oneness. Maybe we're the big uploaded uni-brain already, but a sliver of it chose this reality for ourselves because we can all come here and exist together separately on a somewhat level playing field. Some may choose something else for themselves eventually, like uploading their brains to... wherever this is running.

Me? I sorta like it here and want to stick around for a while.


It's certainly possible that our existence is the result of one or more ancestor simulations.

My personal cracked-out theory is that time and space are both infinite, and thus computation is infinite. As such, our observable universe may be the result of an infinite number of ancestor simulations.

Another facet of that theory which I enjoy pondering is the notion that the supernatural may exist (souls, ghosts, reincarnation, etc)—just in a simulation layer that exists above our own, but one that is still tied to our physical universe. Inaccessible but real.

A more grim yet equally interesting twist on that would be the supernatural existing within our current physical universe, quantifiable and ultimately accessible (some day), but presently undiscovered by science. All things supernatural or spiritual often have a sense of morality imposed upon them, but I imagine such a finding would strongly suggest that the supernatural, if it existed, would be just as uncaring and cold as the physical universe, since it would itself be part of the physical universe.


Remember, bandwidth doesn't increase on the same curves as compute and storage. That means, regardless of where you are in this universe, you can't upload everything from one spot to another. This works well as an argument for a recursive universe model. Compute may be unlimited, but you are limited in your ability to hog all of it at once.


>The author seems to have written the article on the assumption that it will be humans figuring out a brain upload mechanism.

I don't see that assumption anywhere in his text, nor in any of his arguments, and I don't see how a non-human mechanism is not susceptible to any of the physical requirements that he lays out.


> technology like this is almost certainly going to be part of a post-singularity world
That's some magical thinking, right there.


While I'm flattered you created this account just to troll me, I can't help but agree.

Perhaps you've heard of Clarke's Third Law?


I think discussions around this are ill-served by the "upload" metaphor, as it somewhat implies that the new brain, the original brain, and the uploading process are all largely separate things. It seems much more likely to me that the new brain will take the form of an augmentation that survives the death of the original brain. The personality would be "uploaded" not by deliberately sampling parameters to feed a model, but by the new brain organically (heh) becoming a component and embodiment of the existing personality. I'm sure this isn't a new idea, but I don't know what it's called.


This is similar to the Moravec Transfer [1], a proposed method of (slowly) transitioning to a non-biological brain. Indeed the whole concept of "the self" is important to maintain, even if it is to some degree an illusion.

1. http://everything2.com/title/Moravec+Transfer


There is no guarantee that the subject will not lose a bit of consciousness on every neuron that is being replaced (even if, to outsiders, it appears not to be the case).


I would expect a hybrid meat-computer personality to notice that at some point in the transition. If not -- if consciousness is so profoundly ethereal that a person's own consciousness can be gradually replaced with a simulated one without the person themselves even noticing -- then what's the difference between the "real" consciousness and the simulation?


>then what's the difference between the "real" consciousness and the simulation?

Well, isn't that the whole idea? Objectively, there is no difference. If there was, uploading wouldn't ever be possible.

The only potential difference is the subjective impression of "this consciousness that I am now is the same consciousness that I was yesterday" versus "this new consciousness is an exact clone of mine, but is not really 'me'".

This incremental upload technique simply tricks your subjective self into thinking there is absolutely no change in continuity.


> This incremental upload technique simply tricks your subjective self into thinking there is absolutely no change in continuity.

In other words, your subjective consciousness is somehow changed but you can't subjectively tell? What does that even mean?


No - rather there is no change in consciousness but if you do the upload in one jump, your subjective self sees the discontinuity and mistakenly thinks there was a change in consciousness. You have to do the process gradually to trick it out of its error.


These are tiny, atomic, piecemeal changes that happen slowly over time.

If one of your neurons randomly died right this instant, you would probably not subjectively notice, even though that neuron is part of the substrate of your subjective consciousness. Your neural network is highly redundant and fault-tolerant.
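A toy illustration of that fault-tolerance point (a statistical cartoon, not neuroscience):

    import numpy as np

    rng = np.random.default_rng(0)
    neurons = rng.normal(loc=1.0, scale=0.1, size=100_000)  # stand-in "activations"

    full_readout = neurons.mean()
    after_one_death = np.delete(neurons, 0).mean()  # one unit drops out

    # The aggregate readout shifts by roughly one part in 100,000: far too
    # small for any downstream process to register.
    print(abs(full_readout - after_one_death))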


Yes, neurons in our brains are dying every day, but it doesn't affect our subjective consciousness. Are you simply saying that uploading, if it were done carefully, would not make any more difference to our subjective consciousness than neurons dying every day?


Well, suppose the entire self-awareness part of your brain simultaneously dies. You wouldn't notice.


You wouldn't notice because "you" would no longer exist, by hypothesis. But I would expect that there would also be a huge objective difference in your behavior, easily noticeable by others. The "self-awareness part of your brain" is not disconnected from the rest of your brain and body; if it dies, the rest of your brain and body is going to be drastically affected.


What if you kept all the original neurons and re-assembled them into your original brain? Who would be the real "you"?


Well, you'd have 2 mental clones of yourself, essentially.

It depends where those neurons were sent off to in the meantime and what they were doing, but with such a big jolt, it's quite possible they would perceive themselves as "clone" (after understanding the situation at large) while you in your new machine self would be the "original you".


Why would it be a clone, and why would it perceive itself as a clone, considering that it is made up of the original matter in the original configuration? I would assume that a technology powerful enough to re-assemble disintegrated matter into something as complex as a brain would also restore the state identical to its original form. Assuming that thoughts and perceptions are completely physical, and the brain is in the same state as before, there is no reason for it to experience a jolt or anything else that indicates something special occurred.


The difference would be that, in the case of a simulated consciousness, I would be dead and wouldn't get to experience any of the pleasure that fake-me experiences, for making a simulated consciousness is the same as making a philosophical zombie.


A philosophical zombie is an incoherent concept. The idea that your consciousness could disappear while leaving all of your external behaviors unchanged makes no sense. Your consciousness affects your external behaviors; if you were not conscious, your external behaviors would be different.


There is no reason to think that it would, either. If you did a Moravec transfer with real neurons that were grown from your own stem cells, such that the resulting brain is more-or-less physically identical to the original brain, do you still think the subject would "lose" consciousness?


I rather think the appeal of a Moravec transfer is that a single result-mind with an upgraded substrate is preferable to two result-minds, one on the superior substrate and one still on the original substrate. Doing it that way is cruel since the original will still decay and eventually die.

(Thanks for this by the way, I've long thought about this idea but didn't know it had a name.)


Wow! And here I thought Greg Egan invented this [1]. Thank you so much for the reference.

[1] Greg Egan - The Jewel


Nitpick: Egan's design is different in principle; it's a self-contained device running a bog-standard neural network that, over a number of years, trains itself on the brain's inputs and outputs. So the actual mechanics of how it operates don't necessarily reflect the structure of the biological brain.
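For intuition, a minimal sketch of that train-on-observed-inputs-and-outputs idea; everything in it (the black-box stand-in for the brain, the single-layer model) is a hypothetical illustration, not Egan's actual design:

    import numpy as np

    rng = np.random.default_rng(0)
    W_true = rng.normal(size=(8, 4))

    def black_box_brain(x):
        # Stand-in for the biological brain: a fixed input->output mapping
        # the device can observe but never inspect.
        return np.tanh(x @ W_true)

    # The "jewel": a bog-standard model fit purely to observed behaviour.
    W_jewel = np.zeros((8, 4))
    for step in range(20_000):
        x = rng.normal(size=(32, 8))   # sensory inputs streaming past
        y = black_box_brain(x)         # the brain's observed responses
        pred = np.tanh(x @ W_jewel)
        grad = x.T @ ((pred - y) * (1 - pred**2)) / len(x)
        W_jewel -= 0.1 * grad          # nudge the copy toward the original

    # Behaviourally close, yet nothing forces W_jewel's internals to mirror
    # the original brain's structure.
    print(np.abs(np.tanh(x @ W_jewel) - y).max())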

I found that really fascinating as an idea. Being identical is one thing, but does being arbitrarily close to your behaviour make it you?


That's all just boiling frogs. Why should we consider a structure that subsumes the space of an organic human brain to be a living continuation of the original brain?

The deconstructionist concept that our constituent atoms cycle through a complete change-over in less than a decade does not equal total organ replacement with a non-biological surrogate. Those sorts of ideas sweep a grand swath of what-it-is-to-be-alive under the carpet with a sentence-long sound bite.

Replace a knee or a hip with a comparable structure of equivalent practical necessity, and that person will still be missing those body parts.

Claims that, simply because the replacement is complex down to the finest detail, it sufficiently satisfies the requirements of "being a living human" will still be wrong.

You might enjoy a puppet that acts just like your dog, or your grandma, but it won't bring them back, or keep them alive beyond their expiration date. Put your grandma's Moravec replacement brain inside your dog's body, and tell me she's still alive.

A human without a human brain is not a human, but instead a soothing, reassuring puppet, perhaps hoisted aloft by hypothetical autonomous, and nigh-imperceptible nano-cyber-strings.

But still a lifeless corpse all the same.


>Put your grandma's Moravec replacement brain inside your dog's body, and tell me she's still alive.

That's an ontological conundrum that's far from settled. I'd argue the information we have points to our biological intelligence not being particularly special or more "real" than other forms.

What I can tell you is that your hypothetical monstrosity there would probably play fetch and bake some really good cookies. Heck, I'd probably name it Dogma.


> But still a lifeless corpse all the same.

How can you possibly know this? And I'll point out that no one but you has posited that the result would still be a human brain.

What do you think happens if you freeze a brain in liquid nitrogen, then safely thaw it out and restart it?


Agreed. Especially given our limited (read: virtually nonexistent) understanding of how, exactly, consciousness is created, it seems to me that this gradual transition through an organically integrated component is the most likely scenario by which a digital consciousness could be created.


Or perhaps a Siri-like AI could try to imitate someone, or 'be' them the way an actor does. That seems like the kind of tech we are closest to actually being able to try to build.


I'd say that's certainly what we're closest to, but isn't the point to actually experience the transition? If all you have is an imitation, then you're still dead. Though, it might be nice to create duplicates of yourself to get more done in the day.


You could argue that when you wake up in the morning it's a new you anyway.


I thought it was mutually agreed that consciousness is a while loop that needs to be paused every once in a while for garbage collection. ;)


IMHO, making a static full copy of my current personality is useless to me. There is no immortality of myself, just a copy of me, and we already have that in a somewhat different fashion: kids (I would say that's an even better approach compared to a 100% copy).

Facing my copy, there would be no reason to feel immortal. I would die anyway, and the other guy would continue to live. So immortality for me means preserving my own instance of me; anything else just isn't it. It seems to me this would be much harder to implement than creating a copy, so it would be available much later, and who knows with what consequences.

This is not possible in our lifetimes, and frankly, I don't mind. Kids are a good enough solution, and as previous generations made space for ours, I will make some space for the next ones, hoping they'll get a bit closer to the stars than we did. Once we settle throughout the Milky Way, our extinction on a short or medium time scale should be less probable compared to the single-planet situation.


>Facing my copy, there would be no reason to feel immortal. I would die anyway, and the other guy would continue to live. So immortality for me means preserving my own instance of me; anything else just isn't it. It seems to me this would be much harder to implement than creating a copy, so it would be available much later, and who knows with what consequences.

Do it while you're unconscious. You go to sleep, then wake up in a different body. No existential crises needed.



They should just call it copying/copies. The prospect of having your copy managing your assets could be desirable, for example.


When I look at efforts to synthesize the human brain within a computer I see an effort to make a synthetic bird rather than a machine that flies.

If this analogy has any use as a model beyond a cursory observation, then a focus on machines that solve hard problems will bear fruit earlier (and perhaps is the only approach that ever will) than attempts to fully replicate the human brain.

Today, is it possible to build an (outwardly) anatomically correct and functioning sparrow (in all respects) out of synthetic materials? I don't know, but surely it is hard. Making one certainly would have been impossible for the Wright brothers.

Today we have things like the F22, A380, and little drones. Each is quite complex but the complexity accreted over time, each layer a pragmatic solution to a problem at hand.

If we take the same approach what kind of "thinking machine" might we end up with in 100 years time?


Thinking (or thoughts) is different from consciousness. A simple experiment can show you this. Sit silently and observe your mind. You'll see thoughts keep popping up; if you don't indulge them, each fades away quickly. If you do indulge, you get taken on a ride through a chain of similar thoughts. This riding is what we term "thinking".

IMO the neural pathways that exist in our physical brain are determined by the "rides of thoughts" we have taken in our lives so far. But there's no visual or physical proof to locate the consciousness that does this observation and takes the ride.

So uploading brains means uploading only "the experiences" you have had in your life, but as long as the rider isn't there, these are mere memories with no further rides possible.


Are you advocating for an immaterial soul?


Is there any other kind of soul? It sounds like kr4 is arguing for consciousness as a higher order emergent phenomenon separate from (but built atop) thoughts.


I mean, it sounds like they're saying that the physical (and thus uploadable) brain is not itself conscious, but only a highway for thoughts to ride on and a storehouse for memories. That clearly implies that there's a non-physical thing that is actually conscious, whose existence kr4 takes on "gut feeling" despite lack of evidence.

If consciousness is an emergent phenomenon of the physical properties of the brain, then surely it will also emerge from a sufficiently accurate simulation of those physical properties.


> If consciousness is an emergent phenomenon of the physical properties of the brain, then surely it will also emerge from a sufficiently accurate simulation of those physical properties.

It's possible, but it may not be the "same" consciousness, and indeed, may not even consider itself to be. Perhaps better terminology to explore this line of theory would be to replace "thoughts" with "memories" and "consciousness" with "self".


The primary problem with the phenomenological approach to AGI is that we have so little understanding of the mechanism of general intelligence. Some aspects of intelligence we understand, but to continue the flight analogy, our understanding is nowhere close.

Whereas with flight the Wright brothers, and before them Richard Pearse, understood it well enough to make an approximation and thus working implementations, we don't have anything close to that kind of understanding.


This article is not about how to create a strong AI from a brain simulation, but about simulating our brain to make our consciousness and mind immortal, hence all the discussion about death.


> We all find our own solutions to the problem death poses.

No! Nobody does this. People make up excuses. People spin death to be a positive, often under some guise of Deep Wisdom. People come up with all sorts of ways to cope with death, but calling any of them a solution is just false. Death is a vile, atrocious thing, the biggest enemy humanity has.

We don't say that slaves all found solutions to slavery. Or that everyone finds solutions to domestic abuse. Or solutions to dementia or Alzheimer's. Death is a far greater evil.[1] So how disgusting is it to say everyone finds a solution.

I admit the author probably didn't intend to imply this, but it's exactly that kind of thinking that we should be aware of and fight. Just because it seems inevitable, we should not make it socially acceptable to give up and view death as anything but the wickedness it is.

1: Yes, there are atrocities worse than death, but many of those involve death, or are worse because of the limited timespans caused by death. Apart from having your mind destroyed, I'm guessing most harms would be healed by rather long periods of time, and that sufferers would prefer suffering plus a long OK life over suffering plus death.


A world without death is a world where flowers never change into fruit, a world where fruit never changes into a new plant. Remember there's a theory of the universe ending in heat death, where entropy is at its maximum, where there is no more change possible. To prevent death is to prevent change, and you bring about a world that, in the long term, becomes changeless, like a giant museum: flowers and fruit and plants stay in the same state and never age, caterpillars never turn into butterflies, squid never mate and reproduce, everyone immersed in their VR, day after day, for eternity. Philosophically, what difference is there between that and the heat death of the universe?

The experience after death is exactly the same as the experience before birth. Was it a vile, atrocious thing you weren't born until you were? Death is about returning to that state. The disintegration of your mind is the mirror image of the creation of it. A world where you have left without right, top without bottom, beginning without end is like the universe before the big bang, or the universe after the heat death, where we, you, me, and everything else, are returned to being one.

Lengthening life span? That's a different matter. But I can tell you, you will never stop death from happening. It's perfectly acceptable to accept death will happen, and more isn't always better. ;)


If death is so noble and necessary, why should anyone have a problem with murder? Why should we bother trying to cure diseases? Do you think the world was twice as dynamic and full of vitality 100 years ago, when the global average life expectancy was half what it is today?

I don't think making people immortal would "prevent change" because there are plenty of kinds of change that don't involve ending a person's life without their consent. Is that not obvious?


I feel that you're strawmanning the original post by changing it from something like "death is necessary" to "death is good". Clearly there's a difference between a body becoming frail vs. somebody intentionally fatally injuring it. There's also a difference between stopping disease and stopping aging (well, at least if you define aging as not a disease, which I think is fair since AFAIK it has a different mechanism than disease.)

> there are plenty of kinds of change that don't involve ending a person's life without their consent

Is "without consent" another reference to murder, or just the fact that most people don't want to die? I assume the latter. How do you know that the kinds of change that do happen, aren't mostly because death as we know it exists and subtly influences all our decisions? Maybe there really would be less change without death.


This is an interestingly twisted line of thinking. Re: aging vs. disease, how are the two different? Aging looks exactly like a genetic disorder that slowly kills you, the only difference being that it's the single disease we all share.

As for "death is good" vs. "death is necessary", I disagree and it looks to me that "necessary" comes out of trying to come to terms with the fact it's inevitable. You could argue in the past in the same way that diseases are necessary for various reasons, and yet we decided to cure them instead, invented antibiotics and hygiene and other medicine.

> Clearly there's a difference between a body becoming frail vs. somebody intentionally fatally injuring it.

Comparing to murder may not be the best. Try comparing to accidents. Both dying of old age and dying in an accident are unintentional. Yet as a society we accept the former while rejecting the latter. This is strange and IMO happens only because we know we can prevent accidents, but we don't know how to prevent aging. Yet.

We'd already know if more people cared instead of delegating the problem to deities or thinking it's Good to die because it's the Natural Order of Things.


> We'd already know if more people cared instead of delegating the problem to deities or thinking it's Good to die because it's the Natural Order of Things.

And it's socially acceptable for people to choose that. People ought to be allowed to delegate the problem to deities, and they ought to be allowed to believe in a myth about death, and allowed to believe it's part of the natural order of things.

We should never make it socially unacceptable.


I'm not advocating banning religion here, I'm only expressing my wish more people cared about solving death - especially those who just give up and accept it as "natural order". But still, this is just a social attitude that people acquire as they grow up, and with contemporary science and technology we should revisit that attitude, since we're finally capable of making concrete progress towards life extension.


Until we can easily send billions (at least) to other worlds, be it in the Solar system or further away, death is more or less necessary, like it or not.

This planet can feed many more people than it does now, without famines. But there is a limit, and looking at how most people just don't care about these things as long as they get their paycheck and can watch their favorite TV show, let's be realistic.

Honestly, I don't see what all the fuss with death is about. I am atheist/agnostic, so no hope for some eternal afterlife (but happy to be proven wrong :)). But in no way do I find it disturbing or worrying; it's just human business as usual (there have been roughly 120 billion human deaths already, and I am not special in any way). It's actually quite nice to be perfectly OK with death internally; it brings peace with oneself and the world on more than one level.


I suppose I mean "necessary" more as "necessary to the way life works right now." Existence with immortality or very-long life expectancy could be completely different in ways we can't fathom, so I'm opposed to the idea of automatically accepting it as more desirable than what we have now. (Yes, this is a conservative and somewhat closed-minded position. If nothing else I hope people will come up with a detailed vision of what conquering mortality would look like, rather than just assuming it will be better.)

On the disease comparison, admittedly I was only thinking of viral and bacterial disease. Death is much like a genetic disorder we all have. On the other hand, isn't "health" usually defined as how a "normal" person's body works? If death is normal, then it can't be a disease. OK, enough semantic games. ;) I still see a qualitative difference between keeping a body healthy enough that it doesn't die until that universal process wears it down vs. something earlier.

I'm not sure I agree with the idea that society accepts death and rejects accidents. Is the grief of accidental death really universally distinguishable from that of old age or illness? On both sides, you'll have anger/sadness it happened so soon, or reluctant acknowledgement that it was going to happen some day and we can't predict it or prepare.

(The following is somewhat OT in the context of "uploading the brain":) On grievous injury, I don't know if commenters have been including vastly improved ability to recover from those in "immortality." I wonder though, if fatal injury were the only way to die, would it cause people to lock themselves away in a bubble to live forever, rather than risk dying? Maybe in the first stages of the science, the body will still get frailer as the years pass, so that sudden accidental death becomes more likely the longer you live. Would such an eternal sheltered existence, lived in constant fear of losing it one day, be worth having? If not, why do you want it? If so, why is it really that different than what you already have? After 1,000 years are you going to say "I've accomplished enough, experienced enough, that I am now more comfortable with possibly dying than I was when I was 25."?

Presumably the likelihood and number of possible accident scenarios will decrease over time, so maybe most people won't worry too much. Murder could still be a big fear/threat though.


I can understand that conservative view, and while I personally feel that "not dying" is obviously better than dying, one can't deny that a lot of systems that underpin our civilization depend, implicitly or explicitly, on the fact that people generally don't live longer than 100 years. Therefore just suddenly making people not age (or age much slower) would likely cause a very bad disruption and possibly lots of suffering. This topic needs to be discussed and "detailed vision of what conquering mortality would look like" needs to be specified, but personally I don't think those issues are a showstopper, or something that should discourage us from fighting death.

> I'm not sure I agree with the idea that society accepts death and rejects accidents. Is the grief of accidental death really universally distinguishable from that of old age or illness? On both sides, you'll have anger/sadness it happened so soon, or reluctant acknowledgement that it was going to happen some day and we can't predict it or prepare.

The difference is that an accident is treated as an unnecessary death. After it happens, changes are introduced to improve safety and to make sure such an accident doesn't happen again. Yet in the case of aging, we're just accepting death and moving on, and only a few of us ask how we could make it so that such deaths don't happen anymore.

As for your last paragraphs: I don't think longer lifespans would necessarily make people more risk-averse than they already are. Why would someone choose a sheltered life with 1000 years in front of him and not choose such a life with the prospect of only 50 more years? I would expect people to start caring about (what we now call) long-term effects of decisions, because the consequences would come within the lifetime of the decision maker.


Death happens, if not now then when entropy is at maximum, and birth happens too. Murder happens, and people angry at murders happen too. Diseases happen, and cures for diseases happen also. People going to work late happens, and people going to work on time happens too. People falling in love and being rejected happens, and people falling in love and being reciprocated happens also. If you were to tell me what life is, that's life. That's vitality. You can have both left and right, good and bad, or nothing. There is duality, and then there is oneness, and you can't have one without the other. Life is duality, non-life is oneness. Maximum oneness is maximum entropy.


That's the current unavoidable state, yes. What if diseases could all be prevented though? Is life no longer "life" if an all-cure is produced?

>The experience after death is exactly the same as the experience before birth. Was it a vile, atrocious thing you weren't born until you were? Death is about returning to that state.

This argument never really reassured me. I know it's most likely the case that that's what happens, but it brings me no peace of mind. The issue is not the state transition or the fear of "where I will be". My issue is the permanent loss of all potential future utility to oneself, to loved ones, and to the world at large. Before I was born, there was no concrete potential. Once you are alive, death forever erases proven potential.

This is why it can be ethical to kill an early-stage zygote, but it isn't ethical to kill an infant.

It is important to accept what is and isn't possible, but this discussion hypothesizes a world where what isn't possible can actually be possible. And in that world, I think biological immortality (no death by aging or disease) is far preferable to mortality. I think death is a powerful evil in this world, except in cases where someone chooses to die of their own free will.


> If you were to tell me what life is, that's life.

That boils down to "life is stuff happening", which is true, useless, and does nothing to answer the question posed. It is trivially obvious that it's impossible to stop all death, or even all human death, so I think it's also obvious that what's being talked about is not the elimination of death, but the elimination of the current main cause of death, which is aging.


I was replying to MichaelGG's statement:

> Just because it seems inevitable, we should not make it socially acceptable to give up and view death as anything but the wickedness it is.

As I said before,

    Lengthening life span? That's a different matter.
EDIT: Reply to below, that's a good explanation, thanks. I don't think I could come up with it so clear and succinct, if I could I would have!


Yes, I was trying to address specifically that I think you are arguing a point that is somewhat orthogonal to the current discussion, because both sides are using similar terms to mean different things. Your terminology is more correct (you are addressing death as a specific concept, and its inevitability), while they are addressing age extension, possibly to its logical conclusion at the end of the universe, but using "death" to denote that, when it's obvious there are other causes of death that cannot be stopped.

In other words, I think there is no real difference of opinion, just a difference in terms making it seem so. This appears to have been exacerbated slightly by your flowery description of your point, which I take to be: "Death is impossible to stop in the end, and giving the false impression that it is may be harmful. That said, if we can retard it in humans to a large degree, that's useful. Additionally, death serves a useful purpose in many systems, so we shouldn't lose sight of this."


Sorry if it wasn't clear I was referring to death of intelligent life.

Heat death of the universe is definitely a problem but for a later time. I think having lifespans on the terms of millions of years or longer is qualitatively different than what we have now.

I don't find the before-birth argument very compelling. I just don't see how it's relevant.


You want society to view death as wicked rather than acceptable. Since death is inevitable (whether on a scale of decades or millions of years), and it isn't that horrible a state to return to anyway, viewing death as a kind of wickedness is like telling a Christian to accept he is going to hell, making life miserable. I think it's important for people to be allowed to accept the inevitable, to accept they will return to a state that isn't that vile or wicked.


I just shake my head at:

> Since death is inevitable (whether on a scale of decades or millions of years)

Think of the difference even a single order of magnitude would make. The Founding Fathers would still be here to settle the arguments about what exactly they meant in this or that amendment when the Supreme Court argues guns or personhood, for instance.

The fact that you completely disregard the difference between decades and millions of years because someone is eventually going to die anyway, so it's all the same... just wow.


> The Founding Fathers would still be here to settle the arguments about what exactly they meant in this or that amendment when the Supreme Court argues guns or personhood, for instance.

Would we really want that? They would be elevated to an even greater stature than they are now when they're dead, greater than what they probably wanted individual men to be in the government they devised[0]. They could become corrupted, or change their minds, so that "what the Founding Fathers wanted" becomes an actual moving target, rather than just something that can appear to change by us debating it. What's the point of writing a law if you plan to indefinitely query the writer on what they "really meant"? Why have a representative government at all, if the Founding Fathers are so great that they did everything perfectly and they are still around to run it for us?

Maybe it's better that people die, and all we're left with is our memories of them, or written statements of their ideas or accounts of their deeds. We pick up what we find useful from them, and have the freedom to discard what isn't. Every political movement has a built-in time limit based on the deaths of its most powerful leaders. Greater diversity in people and thoughts grows, because we don't have the luxury of keeping them the same forever. Many mistakes will die with their creator, and everyone else can move on.

Longer lives would amplify the negative effects of human frailties, just as you hope it would amplify the positive effects of smart and noble people. It's possible the Founding Fathers' ideas were only good for 50-100 years around their lives, and that as rigid adults they would be unable to formulate better versions for the changing world.[1]

The world would be a vastly different place without death, or with much longer lives. I think it's naive to simply assume it would be better. I've explained some reasons it could be worse, but I'm open to more detailed analysis of why it would be better (for humanity collectively, not just the recipients of longer lives.)

[0] Though to be fair, maybe they would have thought differently if they planned to live hundreds, thousands of years.

[1] Assuming immortal or very long-lived people become "set in their ways" as we do now. How much of this is internally self-generated, based on watching our "biological clock" tick away? To be fair, maybe we would behave differently in this respect if we had more time. Or, maybe we would just reach peak open-mindedness (or whatever the metric) at 300 years instead of 30.


I understand the practical issues with older generations lasting forever, but I think it's better for society to place limitations on these matters, rather than nature placing the permanent limitation of death.

Presidents have 4 year terms. Why can't people have 80 years where they're allowed to vote, and then lose that right forever? Or maybe they can be sent to a separate society where they're permitted to vote and debate things that apply to their own sphere, but must still follow universal laws set by a "central society", to prevent deeply immoral actions. For example, maybe alcohol is permitted to be permanently illegal in their society, but they still aren't allowed to lynch people or prosecute people without due process.

These are certainly very difficult problems to solve, and there are many counter-arguments to that particular one. But I think there are solutions to these issues other than death, and I think many of those solutions are much preferable to death.

A society that adapts to become biologically immortal will find their own ways to adapt to dealing with zeitgeists and progress.


>A society that adapts to become biologically immortal will find their own ways to adapt to dealing with zeitgeists and progress.

I don't think so. That is just wishful thinking. Our current society cannot even produce leaders who are remotely capable of solving current problems. And a group of immortals, once in power, cannot be trusted not to try to remain in office indefinitely using the powers they already have. They can subtly influence society and people with their power so that they continue in power without actually rigging the elections.


True, it's a bit idealistic. I absolutely agree there is serious potential for mis-use here.

However, I think the benefits of life longevity are very serious, and if we can overcome some of the issues they cause, we should do everything in our power to find new ways to extend life longer and longer (or at least provide it as an option for people).

In the next 100+ years when this starts looking a little more feasible, hopefully people will have better perspective and insight into these issues, and will establish ways of allowing society to progress and not be anchored by a group of early-immortal voters, legislators, or perhaps tyrants.


Hopefully their immortality would put in perspective the usual bullshit most politicians care about. Short-term decisions and wealth-seeking is in a big way driven by both short life and short office terms.


>A society that adapts to become biologically immortal will find their own ways to adapt to dealing with zeitgeists and progress

I think the adaptation phase would be a very turbulent one, because we would experience the giant gap between reality and our heritage. Most of our motivators, values, and culture in general are founded on the idea that we do not have that much time (think about things like taking care of others, parenthood, work-life balance, achievements, etc.). Without the time constraints, many of these would become obsolete. But they're "hard-wired" into our organisms, and would cause problems just like adrenaline rushes and other (once useful) reactions to stress cause problems today.


>Maybe it's better that people die, and all we're left with is our memories of them, or written statements of their ideas or accounts of their deeds. We pick up what we find useful from them, and have the freedom to discard what isn't. Every political movement has a built-in time limit based on the deaths of its most powerful leaders. Greater diversity in people and thoughts grows, because we don't have the luxury of keeping them the same forever. Many mistakes will die with their creator, and everyone else can move on.

Fantastic points. I was about to type something along the same lines when I saw your reply. You said it much better than I could have.


That's exactly the Deep Wisdom. Ask yourself - do you want to die? We'll all have your memories. Do you want your father or your mother to die, and if they already did, are you happy about it? Why were you mourning?

Death is bad, it's scary and meaningless, and we construct really complex and elaborate excuses to deal with it.


You're conflating wanting death and accepting that it will happen, in both the case of oneself and family/friends. You think that "acceptance" is some BS we cooked up because we had no other choice, and that's a fine position, but don't caricature acceptance as actively wanting death.

> do you want to die?

No. I also want an infinite supply of resources (for me, and for everybody else, why not), the ability to experience any scenario I can dream up with any people I want even if they don't want it, omniscience, omnipotence, and omnipresence. The inevitability of death is perhaps a classic denial, but there are infinite other ways you can't always have what you want. (At the risk of sounding defeatist, trite, and in favor of The Order of Things.) If you could wave a wand and force any person to love you, would it be meaningful? If you were guaranteed to get whatever you wanted in any other sphere? Maybe I'm rambling here, but the point is, how will removing death solve all the other problems that come with billions of autonomous agents with generally incompatible wills existing together in a single, inherently meaningless universe with (probably) immutable physical laws and finite time, matter, and energy?

OK, so solving death is an incremental improvement, and thus may be worth it to some... but perhaps I've described a reason not everybody is going to be obsessed with it -- it's just one form of a vastly large general problem.

Also, I'm wondering what "more people car[ing]" about eradicating death would look like to you. The article discusses "digitizing the brain", there are thousands of scientists and dollars spent on eradicating certain cancers and other diseases, and many other sciences that could be applied to this problem. You're welcome to fund/conduct your own research, but apparently other people have more pressing concerns.


> Maybe I'm rambling here, but the point is, how will removing death solve all the other problems that come with billions of autonomous agents with generally incompatible wills existing together in a single, inherently meaningless universe with (probably) immutable physical laws and finite time, matter, and energy?

> OK, so solving death is an incremental improvement, and thus may be worth it to some... but perhaps I've described a reason not everybody is going to be obsessed with it -- it's just one form of a vastly large general problem.

You answered it yourself. It won't. Removing death will just solve the problem of death. But having more time to live and study would probably be a big step in the direction of solving other problems.

> I also want an infinite supply of resources (for me, and for everybody else, why not), the ability to experience any scenario I can dream up with any people I want even if they don't want it, omniscience, omnipotence, and omnipresence.

Well, you can't get absolutely everything you imagine, but that doesn't mean it's not worth going after things you can. Getting rid of death is something we could get. Full omniscience and omnipotence maybe not, but a subset of it is something worth aiming for, and personally I am going to try. That's what science and technology is for - multiplying our power, both collective and individual.


I think you are overestimating the importance of the biological nature of life and underestimating the nature of intelligence.


This article talks about brain simulation, where bodily death is implied, so bodily-able people could still communicate with their bodily-dead friends.


No. Death is really not that bad; to be dead is no more evil than not to exist, because to be dead is to not exist. Dying itself is evil in that it causes grief to others still living; but while grief is painful, I do not think anyone would compare the evil of suffering grief to the evil of suffering slavery.

One might argue that death is bad on counterfactual grounds: if a person had not died, they could still be alive and happy, and isn't that better than them not existing? But this argument leads to the Repugnant Conclusion (https://en.wikipedia.org/wiki/Mere_addition_paradox)! So be very careful if you take this line of argument that you understand what you are saying.

I happen to be a total-happiness utilitarian, and do accept the repugnant conclusion. But ultimately all this means is that death is a mirror-image of birth - it ceases one hopefully-happy life; birth begins one. There is grief attached to death and joy attached to birth, but apart from that there is just as much imperative to maximize births as there is to prevent deaths - and one of these is far easier. This is, however, a deeply counterintuitive position that I don't expect many others to share.

This isn't Deep Wisdom. It's just utilitarianism, taken seriously.


If you can hypothetically prevent death and also increase birth rates (without causing problems resulting from overpopulation), wouldn't you be increasing both average and total happiness?


Sure. I'm not opposed to ending death, and certainly not to work on extending healthy lifespan. I just don't think it qualifies as humanity's greatest problem.


>Death is a vile, atrocious thing, the biggest enemy humanity has.

You have a time on this earth. When it is up, you make way for new generations. What is atrocious about that? What is atrocious and vile is the longing of some people to live indefinitely. We have a name for things that do not die and multiply uncontrollably: cancer. Human beings are already cancerous in nature in their disregard/disrespect for everything that has been given by nature. Now you want to perpetuate that indefinitely by not dying?

>Just because it seems inevitable, we should not make it socially acceptable to give up and view death as anything but the wickedness it is.

Socially acceptable? Death? You put it as some kind of social custom that we obediently follow.

>People spin death to be a positive, often under some guise of Deep Wisdom.

Deep wisdom? How much wisdom do you need to see that the resources are limited? Human beings have no right to even want to live indefinitely until they have figured out how to live the limited time we got here without fighting and killing each other, how to live and expand without reckless destruction of nature/environment, how to expand life to other worlds, planets etc etc.


> Socially acceptable? Death? You put it as some kind of social custom that we obediently follow.

Because we do. We mourn and go on, and we expect to obediently die when our time is up. Your whole comment is the reflection of that custom.

If it weren't for it, then any reasonable human would be doing everything in their power to get rid of death entirely. As a civilization we'd probably already be done, but we aren't, because almost everyone is used to death as a "natural order of things" and/or delegates ending it to a deity.

> Human beings have no right to even want to live indefinitely until they have figured out how to live the limited time we got here without fighting and killing each other, how to live and expand without reckless destruction of nature/environment, how to expand life to other worlds, planets etc etc.

Human beings have all the rights they give themselves; there's no external entity to tell us what we're allowed or not allowed to do. Maybe you want to die. Go ahead then, nobody will stop you. But I, and many others, don't. Living is better than being dead. And it doesn't mean being in conflict with the limits of our planet; maybe if people lived longer than ~80 years they'd actually start thinking about the long run, since they'd have a stake in it. Part of the reason we're exploiting the planet so much is that we're not going to be around to see the consequences.


>because almost everyone is used to death as a "natural order of things"

It is. That you don't like it doesn't make a difference.


It is. But so is polio, and so was smallpox. So was dying of the common cold. And so is warfare, since living things are in a constant state of war with each other.

We are the first beings on this planet with the capability to transcend it all. We invented aspirin, got rid of smallpox, are in the process of eliminating polio, and we do our best to limit war and destructive competition. We're the first living things to decide what's Good and what's Bad, and we know that the natural order of things is not Good, it's just the default.


> natural order of things is not Good, it's just the default.

Ok. But human beings are horribly short-sighted. Something may appear beneficial in the short term and on a small scale, but turn out to be horrible in the long term and on a larger scale.

We have eliminated some diseases. But we have also created new, far more horrible and powerful ones. We have discovered antibiotics, but that has also created disease-causing agents that are resistant to them.

Whenever we change something fundamental, it has resulted in the creation of a much bigger evil. I think this is because we don't have the capability to see the whole picture. We look at a small portion of it and think, "hey, we can make it better", but end up messing up the bigger picture, making it grotesque. So my point is, this is not an easy decision that should be made lightly.


> We have eliminated some diseases. But we have also created new, far more horrible and powerful ones. We have discovered antibiotics, but that has also created disease-causing agents that are resistant to them.

> Whenever we change something fundamental, it has resulted in the creation of a much bigger evil.

I disagree with the assertion. Sure, sometimes we screwed things up big time (e.g. the climate), but sometimes the change really was for the better. For instance, even though we're facing super-germs, life is much better for everyone than it was before we had antibiotics. Sure, we may end up defeated by the resistant germs, but that's not certain - which is why we have to push science and technology further and develop new methods like bacteriophages, so that we can keep up with nature and have things our own, better way.

> I think this is because we don't have the capability to see the whole picture. We look at a small portion of it and think, "hey, we can make it better," but end up messing up the bigger picture, making it grotesque. So my point is, this is not an easy decision and it should not be made lightly.

That I will agree with. We often suck at seeing the bigger picture. That doesn't mean we're screwed, because a lot of things can be dynamically adjusted and fixed as needed, but it does mean we should be careful with significant interventions into established systems. Death itself, for instance, underpins a lot of our social structures and systems, so it is something we need to be extra careful about - but that doesn't mean we can't, or shouldn't, change it.


Longer lives would help people be more sustainable because the problems of future generations would become problems of this generation.


I have thought about this. How do people at the heads of large corporations okay the destruction and pollution of the environment? Is it because they don't care about the world their kids will inhabit? Or do they just not understand the seriousness of it?

If it is the former (which I really doubt), then longer lives can help. Otherwise, it will be just the same.

A more realistic answer might be that publicly traded companies are answerable to the masses (the people who buy shares), and the masses only care about the quarterly reports. And again, longer lives will not help much here, because you buy a share to make a profit out of it. Even if the public is fully aware of the environmental issues, I am not sure people will take that into consideration while buying shares.

Again, longer lives won't help us there.


Life is filled with suffering... death is an end to that. The end of suffering is a vile evil? I know this logic is common, but it always confuses me.


"Life is filled with suffering" - and joy. Sure, if you only take one half of what life is then ending that miserable experience sounds like a perfect idea, but there's the other half too and that's the reason most people prefer life to death.


Is it really a 50/50 balance for you between joy and suffering? Do you think that's the case for most people? Also, my claim was a weak one - simply that death is not the most vile thing imaginable or humanity's greatest enemy, because, at the very least, it does represent an end to suffering. In fact, I believe suffering is our biggest enemy.


What about "non-consensual death is the most vile thing"? Or maybe not "the most vile", but "an extremely vile thing". In this hypothetical new world, people may still choose to die if they will it.

Also, presumably in this future world, most suffering will be greatly, greatly reduced through other advances.


I agree that suffering is a bigger enemy than death.

Unfortunately any viable technology for brain uploading would also allow bad actors to simulate huge amounts of suffering and conceal it from detection. I guess some technologies really are too scary to be released into the world, unless we can create a central benevolent overseer first.


So, my concern about uploading my brain, is whose cloud service do you trust to run your consciousness? Google? Microsoft? Amazon? Facebook? Apple?

The level of trust that I'd require is pretty high. You could imagine it once again being relevant to check the pedigree of a company. Ideally there'd be a cloud provider already running now that is known for its trustworthiness. 'established 2013' might actually mean something one day.

My main hope is for indistinguishability obfuscation to reach the level where I wouldn't really have to trust the provider, but as long as you pay a significant performance penalty you're going to end up massively disadvantaged.

https://www.youtube.com/watch?v=IFe9wiDfb0E


"When the construct laughed, it came through as something else, not laughter, but a stab of cold down Case's spine. 'Do me a favor, boy.' 'What's that, Dix?' 'This scam of yours, when it's over, you erase this goddam thing.'"


It's been almost two decades since I read Neuromancer. I think it's time to revisit it, since I barely remember it. Amazingly enough, though, I still have very distinct memories of the CGA graphics and some situations from the game: the coffin hotel, the diner, pawning organs. And I played that a decade before I read the book. How has nobody remade this game yet, given the indie resurgence of adventure games?


If you want a hard science fiction novel about digital humans, you could read Greg Egan's Permutation City; it's a masterpiece.


Greg Egan is fantastic; I particularly liked his short stories.

One should note that he says people shouldn't expect to just sit down and read the books, but should use a pencil and paper to help figure things out. Though that doesn't apply so much to Permutation City, it does to other works like Orthogonal and Incandescence (and, from what I've heard, Schild's Ladder too).


While I have no hard evidence for this, I expect we will eventually find that human intelligence and consciousness depend heavily on quantum effects. Thus it will always be impossible to scan and upload a human brain in a way that captures the essence of a person's mind.

Even though I don't think it will ever happen, I enjoyed reading the hard science fiction novel "Hegemony" by Mark Kalina. It presents an interesting vision of what life would be like in the far future with mind uploading.

http://www.projectrho.com/public_html/rocket/atomicnovel.php


> expect we will eventually find that human intelligence and consciousness depend heavily on quantum effects. Thus it will always be impossible to scan and upload a human brain in a way that captures the essence of a person's mind

And you have a strong reason to believe that emulating or implementing those quantum effects in a structure other than the human brain is impossible?

As a secondary argument - if an upload reacts to all stimuli and behaves in all ways exactly as the corresponding human would, claiming that it "lacks the essence of a person's mind" prompts a pretty obvious (and well-considered) question: how do I know you're not just an essence-less entity that acts exactly like a Real Human would act?

Or are you claiming that correct emulation of a human's behavior is impossible, because 'quantum'?


At some point in the far future we may be able to implement an AGI as a quantum computer. However I don't think we will ever be able to scan and upload an existing human mind into that computer; there isn't even any theoretical way to measure and store the whole quantum state.


I suspect you aren't all that familiar with quantum mechanics - you seem to think that an object as complex as a brain could have a single quantum state (true) that cannot be decomposed into many orthogonal localized systems (extremely unlikely).

There isn't a theoretical way (yet) to measure and store the whole electrical state either - why bring quantum mechanics into it?


There's a hint of "god of the gaps" about that argument.

I don't think we know enough about consciousness to even speculate usefully about it.

See e.g.

http://www.telegraph.co.uk/news/science/science-news/1114444...

And I think uploading has become a bit of a cliche now. If brain copying were possible, then brain merging and direct experience/memory sharing would likely be possible too.

What if instead of worrying about uploading brains for individual survival, we started turning into a connected colony organism? We already are in physical ways, but our consciousness is lagging behind.


>What if instead of worrying about uploading brains for individual survival, we started turning into a connected colony organism?

Many (maybe most, maybe even all) agents will still desire fundamental autonomy and privilege/consciousness separation, though. Perhaps humans may eventually all be set up with a "dual-homed" network, but there will be many humans who won't want to disconnect from their original LAN. And to do that, there needs to be a way of uploading what they were initially given and subjectively "switching" to that upload.

At the least, the option needs to be there. I think entirely merging your mind into a "hivemind" is a form of death, even if it's less bad than conventional death. People can choose to do that if they wish, but those who do not want to should still have other alternatives.


I don't think they will.

Firstly, our perceived autonomy and freedom of choice is an illusion. Humans are more similar than different, and if we were genuinely individualistic we wouldn't be nearly as primed to create collaborative social structures such as countries and corporations - structures so anti-individualistic that they regularly cause significant individual harm or death.

Secondly, there seems to be a trend in evolution towards greater collaborative complexity at the expense of individual organisms.

So I think we're at the pre-Cambrian soup phase of consciousness, with a few random and not very coherent evolutionary structures emerging from a mess of interactions and acting in rather blind and unsentient ways.

A conscious network would be a reasonable extrapolation, and if it was stable it would have the potential to be self-aware at a higher level.


I can think of only two explanations for what brains manage to do in such small volume and on such meager power:

1) P = NP or something almost equally amazing is possible algorithmically, and nature found it.

2) quantum computing

So I definitely concur.


> While I have no hard evidence for this, I expect we will eventually find that human intelligence and consciousness depend heavily on quantum effects.

Agreed - seemingly simple processes like photosynthesis cannot be as effective as they are without considering the quantum weirdness at play. Certainly the human brain, one of the most complex systems known to us, would also heavily rely on something as foundational as quantum magisteria.



So, to justify their "just accept death, guys" conclusion, the author makes broad sweeping statements like "details quite likely far beyond what any method today could preserve in a dead brain" without providing any specific arguments about the limitations of vitrification whatsoever. Not even a bald assertion that "cryonics fails to preserve structure X". Yup. Okay.


Assuming that the ability to upload our entire brain to a computer will not be available during our current lifetime, what about the idea of mass data collection of our experiences, storage of this data, and then periodically running improved algorithms that combine all our experiences into a consciousness?

This would maybe involve wearing a camera 24/7 to record everything we see and hear, and also some feedback on our own inner thoughts and our own recording of our emotional responses to things. When we finally die, a computer crunches all this data using neural networks to create a consciousness based on our life experiences and emotional responses.

The initial results would maybe not be fantastic, but as technology progresses the crunching of the source data improves, and every decade a new iteration of our consciousness can be produced, hopefully coming closer and closer to the real us.


Permutation City by Greg Egan explores what it would mean if you could easily upload your consciousness but wealth meant access to better, faster, hardware to store it.


His short story "Learning to Be Me" is a really great piece exploring a related technology. Recommended!


An additional complication is the role of the billions of glial cells that are present in the brain. Their function of supporting neurons is quite well characterised, but they can also more specifically modulate neuronal function, so their place in a comprehensive connectome model shouldn't be ignored.

Then there is the network of vasculature, the flow of cerebrospinal fluid, interactions with hormones, and generally all the interfacing with other bodily systems. All this would have to be measured and modelled somehow too, in addition to all the billions of neurons.

I agree with the author; there's no way all this is going to be solved any time soon.


What would I do with an uploadable version of my brain? I'd send it off to an accelerated university where it could learn at a rate far faster than I ever could. Plus, if we can upload a brain, there's a good chance we'll understand how to manipulate the state of an existing brain. So after my virtual brain has graduated, we'd simply upload its state back into my skull! All this would take about 2-3 minutes, tops? But somehow I imagine this will still cause university enrollment prices to climb. It's student loans all the way down.


It's actually interesting to consider the divergent paths in the merge scenario presented here. If we assume that uploaded-you is put through a simulated "going to university" experience, then it's a few years older than you are. Even if it was _only_ simulating classes and studying, somehow turning off any "human" needs for social interaction and entertainment, it's had years of time to think new thoughts, assimilate new ideas, and become a fundamentally different person.

If it's completely overwriting your brain's current state, then you're replacing yourself with someone similar to you but who'll be _obviously_ different to anyone who knows you.

If it's more a merge scenario, then you're (effectively) killing yourself and the copy, and maybe the copy won't want to rejoin you if it means losing its own distinct state.

Continuity of consciousness is weird to think about.


A couple of quibbles with the article:

>While progress is swift, no one has any realistic estimate of how long it will take to arrive at brain-size connectomes. (My wild guess: centuries.)

It's not so hard to estimate. Just extrapolate progress on scanning, computing, etc.: about 2050, plus or minus a couple of decades. We had a 20um scan in 2013, and you'd probably want to get that down to 20nm for a connectome, so if you assume resolution doubling every couple of years, that would be about 2035.

(2013 scan: http://io9.com/see-the-first-ultra-high-resolution-3d-scan-o...

Images showing neural connections: http://book.bionumbers.org/how-big-is-a-synapse/)
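A back-of-envelope version of that extrapolation (a sketch using the assumed numbers above - 20um in 2013, a 20nm target, a doubling every two years - not an established projection):

    import math

    start_year = 2013
    start_res_nm, target_res_nm = 20_000, 20             # 20 um down to 20 nm
    doublings = math.log2(start_res_nm / target_res_nm)  # 1000x finer ~= 10 doublings
    print(start_year + 2 * doublings)                    # ~2033, i.e. "about 2035"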

Of course as the article points out a connectome misses a lot of chemical detail.

>It will almost certainly be a very long time before we can hope to preserve a brain in sufficient detail and for sufficient time that some civilization much farther in the future, perhaps thousands or even millions of years from now, might have the technological capacity to “upload” and recreate that individual’s mind.

Or quite possibly we can do it right now for $30k or so by sticking the body in liquid nitrogen (http://www.cryonics.org/membership/). Maybe that won't work, but maybe it will.


A couple of quibbles with this: microscopy technology doesn't advance that quickly, certainly not doubling every few years.

A new, entirely different technology is far more likely than creeping improvements, as in semiconductors. However, these are far harder to predict.


I was thinking about the details. We have good enough microscopes already, resolution-wise, but cutting up a brain finely enough and imaging it with existing electron microscopes would take ages - probably centuries with current tech - so it needs a speed improvement more than anything. Also, a 20nm scan could produce an awful lot of data, ~a billion TB, which could be an issue even allowing for Moore's law. Still, there's quite a lot of research money going into this stuff.
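A rough sanity check on that data estimate (a sketch assuming ~1.2e6 mm^3 of brain volume, isotropic 20nm voxels, and 1 byte per voxel - all assumed round numbers):

    brain_mm3 = 1.2e6                         # approximate human brain volume
    voxels_per_mm3 = (1e6 / 20) ** 3          # 20nm voxels along each mm, cubed
    total_bytes = brain_mm3 * voxels_per_mm3  # ~1.5e20 bytes
    print(total_bytes / 1e12, "TB")           # ~1.5e8 TB; with overhead, order of a billion TB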


With modern ML techniques it looks feasible to recreate at least the online behavior of an individual - Facebook likes, comments, etc. The datasets are there, in Facebook/Google datacenters, and DL models are already used to model conversation. It would be interesting to know just how many megabytes of logs of your online activity are really necessary to extrapolate your behavior into the future.

Facebook AI research is probably playing with such models right now.


Eh? Humans are not fixed entities; they adapt and learn. You can't extrapolate into the future based on my previous anything. Six-year-old me can't tell you who thirty-six-year-old me is.


Given enough data and a good model (a recurrent neural network can model arbitrary algorithms, for example), you can learn algorithmic regularities in data, including learning itself. There is a paper about just that: http://link.springer.com/chapter/10.1007%2F3-540-44668-0_13#...

Whether this can be scaled to human-like learning remains to be seen, but training a conversational RNN model that remembers some details of past conversations and acts on them should be possible.
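For a sense of scale, the core of such a model is only a few lines - the hard part is the data and the training. A minimal vanilla-RNN step (toy sizes, purely illustrative, not any production system):

    import numpy as np

    V, H = 50, 64                        # assumed toy vocabulary and hidden sizes
    Wxh = np.random.randn(H, V) * 0.01   # input-to-hidden weights
    Whh = np.random.randn(H, H) * 0.01   # hidden-to-hidden weights (the "memory")
    Why = np.random.randn(V, H) * 0.01   # hidden-to-output weights

    def step(token_id, h):
        x = np.zeros(V)
        x[token_id] = 1.0                # one-hot encoding of the current token
        h = np.tanh(Wxh @ x + Whh @ h)   # state carries details of past inputs
        return Why @ h, h                # logits for the next token, new state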


There's a Black Mirror episode built around that very idea.


TL;DR: The brain is very complex, so it will take a long time. I don't know how long, and I don't dare to make an estimate, but as I'm getting older I am more and more at peace with dying.


I sense this will be one of those articles that becomes a perfect example of someone with a very firm position against progress being clearly proven wrong. I wonder if we could automatically flag articles like these based on the number of tautologically negative arguments that appear...


Upload my brain, so people in the future can use it as a DIY kit for Artificial Intelligence?

This article sounds a lot like SOMA, the video game - brain scans, uploading your brain, etc. http://somagame.com/


I find it hard to imagine how to replicate the smaller connections of my brain, and the inter-connections they all have. That being said, I hope it can be done, because I would be super down, and would want to be uploaded into the net.


A small volume of mouse cortex has already been successfully scanned at 3x3x20nm voxel resolution, with the smallest synaptic details visible: http://www.cell.com/abstract/S0092-8674(15)00824-7 (the tool is called ATLUM). The process could be scaled up.


Yeah, it's fundamentally not "this is impossible" but "this is very hard and expensive", and very hard and expensive tasks have a remarkable way of becoming cheaper and easier over the long term.


Great Mambo Chicken and The Transhuman Condition

http://www.amazon.com/Great-Mambo-Chicken-Transhuman-Conditi...

Very interesting book.


How would you program a mind if you were writing a simulation?


Seems hardly worth the trouble. Don't flatter yourself that your brain needs to stay around - highly unlikely, and very un-ecological to power up a machine to maintain the presence of virtualized shit-for-brains. Just saying. Next question?


So you don't use prescription drugs of any kind or ever visit a doctor? Since it would be hypocritical of you to waste those resources on yourself.


> seems hardly worth the trouble

Assuming it works well, why not keep in touch with your loved ones? Funerals are so gloomy. Why not an upload party? You could make the virtual environment like the versions of heaven in the various religious books and have that stuff for real rather than make believe.


Not just heaven. If the right religious nutjobs get ahold of an upload...

https://en.wikipedia.org/wiki/Surface_Detail


You need not look to fiction in order to be completely terrified:

https://en.wikipedia.org/wiki/Mormon_Transhumanist_Associati...


>You could make the virtual environment like the versions of heaven in the various religious books and have that stuff for real rather than make believe.

Have you seen religious beliefs about the afterlife? Actually attempting such a virtual environment will wind up being the nail in the coffin for religious belief in a bodily afterlife, because it will be unenjoyable and ridiculous.


What's the matter? (Tastefully) nude harpistry not your thing?


Tastefully nude harpistry isn't even what's actually believed about the afterlife. It's the nicer version they use for folk-religion. The truth is more like spending eternity as a disembodied voice singing Psalms to the glory of God's name -- or in other words, not something that anyone ever actually thought would be a good idea.


The beliefs I have would not be imitable by such methods, even if brain uploading is achieved.

It could not properly deal with the problem of my moral imperfection.

Also, a computer would have only finitely many states, so it would only delay death, not prevent it, as having the same exact mental state an infinite number of times would, I think, be indistinguishable from having it only once.


> The beliefs I have would not be imitable by such methods, even if brain uploading is achieved.

Feel free to skip it, then, but many others want the option.

> It could not properly deal with the problem of my moral imperfection.

Morals depend on the ability to empathize, evaluate the effects of your actions on others, and act accordingly. So, you can indeed give yourself the tools to become far more moral.

> Also, a computer would have only finitely many states, so it would only delay death, not prevent it, as having the same exact mental state an infinite number of times would, I think, be indistinguishable from having it only once.

There's a huge amount of theory on how fun and novelty work with enhanced cognitive ability, with many potential solutions to that problem.

Beyond that, long before we deal with the finite number of interesting mental states, we'll have to deal with the heat death of the universe. Given a few billion years to work on both problems, I think we'll manage a solution. I'm far more concerned about whether we can solve mortality before the next existential threat to humanity than about whether we can solve problems like the heat death of the universe in billions of years.


Far more moral, perhaps. Perfectly moral? I doubt it. I don't trust us to be able to perfectly resolve the tension between having free choice and never making an immoral choice.

Also, I'm not a consequentialist; I don't think morality is entirely determined by what impacts the choices have on others.

I'm not saying that it would be bad to extend life in this way, only that it would not be the same as what I believe is after life here.

And it would not be an end to death.

> There's a huge amount of theory on how fun and novelty work with enhanced cognitive ability, with many potential solutions to that problem.

I am not speaking of becoming merely bored. Mere entertainment is nothing much.

I am speaking of one's experiences ceasing to have an after which one can distinguish from one's before.

I am speaking of death, of ceasing to be * .

A finite state machine must either reach the same exact state more than once (in which case it cannot distinguish whether it is the first time it has entered that state or not, as that would be part of the state), or it must halt. And if it does reach the same exact state more than once, there is no difference for it whether it reaches a state a third time or not. If its experiences map 1 to 1 with its state, then its experiences have a finite length corresponding to something not more than its total number of states.

(If you don't find this convincing, I expect I could provide a more convincing argument based on secret sharing schemes.)
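The pigeonhole step of the state-machine argument above, in code (a toy deterministic system, nothing more):

    from itertools import count

    def first_repeat(step, state):
        seen = {}
        for t in count():
            if state in seen:
                return seen[state], t    # when the state first occurred, and when it repeated
            seen[state] = t
            state = step(state)

    # A deterministic machine with 100 states must revisit one within 101 steps:
    print(first_repeat(lambda s: (3 * s + 7) % 100, 0))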

Current understanding of physics includes the Bekenstein bound. I see no justification for your apparent belief that a problem with the Bekenstein bound will be found. If the understanding of the Bekenstein bound is correct (and I see no reason to assume it is not), then to have an unlimited lifespan in the universe, one's volume must increase without bound. This is, of course, a very slow rate of expansion, with the lower bound being something like: your radius has to be at least sqrt(log(age × K_1) + K_2), where K_1 and K_2 are some constants.
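To illustrate just how slow that growth is (taking the arbitrary constants as K_1 = 1 and K_2 = 0 purely for illustration):

    import math

    for exp10 in (2, 10, 100, 1000):
        # radius bound ~ sqrt(log(10^exp10)) = sqrt(exp10 * ln 10)
        print(f"age 1e{exp10}: radius >= {math.sqrt(exp10 * math.log(10)):.1f}")

    # 10^998 times more lifetime costs only ~22x more radius.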

I see no reason to expect that the Bekenstein bound will be broken, and not replaced with something at least similar.

* rather, ceasing to be/live in the universe as it is, which I think you would consider to be what death is.


> Also, I'm not a consequentialist; I don't think morality is entirely determined by what impacts the choices have on others.

This goes beyond consequentialism. Purely saying that the ends justify the means can lead to some really broken morality, mostly because people don't normally count everything as "ends" that they should.

Of course, knowing all the consequences of an action doesn't prevent you from knowingly doing something wrong anyway. And I certainly agree that increased cognitive function does not automatically imply increased morality, let alone perfect morality. (Assuming the value system in use can usefully define the latter at all.)

> A finite state machine must either reach the same exact state more than once (in which case it cannot distinguish whether it is the first time it has entered that state or not, as that would be part of the state), or it must halt. And if it does reach the same exact state more than once, there is no difference for it whether it reaches a state a third time or not. If its experiences map 1 to 1 with its state, then its experiences have a finite length corresponding to something not more than its total number of states.

I understand your argument. First, however, note that that applies to an entirely self-contained system, devoid of external stimulus; you can't necessarily evaluate a single mind in isolation, so you'd have to account for the state of the entire state machine that handles all consciousnesses and their interactions. Second, your consciousness and continued awareness/existence does not depend on having novel states in a state machine; if you've ever re-read a book purely for fun, without expecting to get anything new out of it, you can perhaps imagine finding the same state satisfying more than once. And third, the sheer number of states here dwarfs the age of the universe by myriad orders of magnitude; beyond just "age of the universe" problems, you'd have to start asking "what form does life take when it spans the course of that many eons?".

While I can see a parallel between this problem and the existential failure mode of becoming a wirehead, I see massive and fundamental differences there as well.

In short, I don't find this even remotely problematic compared to not existing at all. And given the current rate at which we permanently lose sapient minds, I see a strong argument for fixing that now and taking eons to think about some of the other problems, rather than using it as a reason to not improve the situation due to lack of perfection in the solution.

> Current understanding of physics includes the Bekenstein bound. I see no justification for your apparent belief that a problem with the Bekenstein bound will be found.

That a finite space can contain only a finite entropy seems obvious. We don't, however, yet know that the universe is finite, or self-contained, in either space or time.


If we ever do get to the point of even attempting this, it won't be because the source brain is in any way special. They'll just be medical volunteers, and their best hope is complete failure, because any success will (I think) result in complete insanity.


>because any success will (I think) result in complete insanity.

Why's that?


Because the brain is a highly nonlinear system, and those are hard to model. You cannot model every metabolic process in every neuron and astrocyte; you have to take computational shortcuts and use approximations. These approximations may have severe global effects.

To demonstrate the nonlinearity of these models: in one experiment (http://www.pnas.org/content/105/9/3593.full), researchers excluded a single spike from a large-scale brain simulation. The global state of the simulation diverged after 100 timesteps, compared to the version where the spike was present.
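The same qualitative behavior shows up in any chaotic toy system - this is not a brain model, just an illustration of how nonlinear dynamics amplify tiny perturbations:

    x, y = 0.4, 0.4 + 1e-12              # two runs differing by one part in ~1e12
    for t in range(100):
        x = 4 * x * (1 - x)              # logistic map at r = 4 (chaotic regime)
        y = 4 * y * (1 - y)
        if abs(x - y) > 0.1:
            print("diverged at step", t) # typically around step 40
            break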


I'm trying to integrate your linked paper (thank you for that) into a discussion about philosophical zombies and parallel-yet-alternate behavioral heuristics, but I'm afraid I don't quite have the energy.

Thank you for your response.


If someone can pay for these computing resources, why shouldn't they do it? Also, there is no physical law that forbids cheap brain-scale computing: 1 exaflops could easily fit in less than 1 cm^3, given sufficiently advanced logic technology.
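A Landauer-limit sketch backs that up (assuming room temperature and a rough ~100 bit-erasures per floating-point op - both assumed round numbers):

    k_B, T = 1.38e-23, 300     # Boltzmann constant (J/K), room temperature (K)
    e_bit = k_B * T * 0.693    # ~2.9e-21 J minimum to erase one bit (ln 2 ~ 0.693)
    watts = 1e18 * 100 * e_bit # 1 exaflop/s at ~100 erasures per op
    print(watts)               # ~0.3 W - thermodynamics is not the obstacle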



