H5N1 (samaltman.com)
394 points by olivercameron on Dec 11, 2013 | 184 comments



>We now have the tools to create viruses in labs. What happens when someone creates a virus that spreads extremely easily, has greater than 50% mortality, and has an incubation period of several weeks? Something like this, released by a bad guy and without the world having time to prepare, could wipe out more than half the population in a matter of months. Misguided biotech could effectively end the world as we know it

Sam is a smart guy, so I really don't want to come off as sounding like a jerk here, but this grossly underestimates the technical feasibility of creating such a virus. Computer folks routinely overestimate how much biologists actually know about the systems we study. We know jack about how the vast majority of biology works. We have the most fleeting glimpses of understanding that are regularly crushed by the complexity of dynamic systems with nested feedback loops and multiple semi-overlapping redundancies. I won't say it's impossible, but we don't even know enough to know whether the three things (high mortality, long incubation, and ease of transmission) are even possible. While we can imagine it, there might be biological and epidemiological factors that prevent such a thing from existing.

This also commits the logical fallacy of ascribing superpowers to the bad guys cooking up viruses while assuming the good guys are sitting on their duffs letting bad things happen. H5N1 was a pretty good example of international collaboration. There were academic competitors and industrial labs working around the clock collaboratively on it in the early days before much was known. Whole vaccine divisions at pharmas were all over it. If we're instead talking about a mythical time in the future when we do understand enough biology to engineer something like this, one would have to assume the good guys possess the knowledge to develop countermeasures.

I'm not arguing that pandemics aren't something we should worry about. Europeans were almost wiped out by the plague and in modern times Africa has been decimated by HIV. These are real problems that the human race has faced and will likely face again, irrespective of lab-created stuff. Biotechnology is the primary mechanism by which we're going to be able to survive when the next one comes, wherever it comes from.

EDIT: Fixed wrong word usage in 2nd sentence.


       Sam is a smart guy, so I really don't want to come off
       as sounding like a jerk here, but this grossly 
       underestimates the technical feasibility of creating 
       such a virus.
Nature already created viruses like this [1]. Granted, the mortality rate was "only" 30%, but no biotech was required. Also, people are seriously studying what would happen if H5N1 were released into the wild [2].

That being said, "Don't Panic (tm)". The likelihood of any of these scenarios is extremely, extremely low, and people have been thinking about and preparing for them for decades.

As for what is much more likely to happen with naturally occurring viruses like the H7N9, natural variants of the H5N1, etc... see my comment from a few days ago [3].

Disclaimer: I'm the first author of [1] and a collaborator of the two first authors of [2].

[1] http://www.nature.com/srep/2013/130717/srep00810/full/srep00...
[2] http://www.biomedcentral.com/1741-7015/11/252
[3] https://news.ycombinator.com/item?id=6839147


You are right that no biotech is required to find terrible viruses from history. Access to those viruses is pretty well controlled though: you can get a stern letter from the CDC if you try to order DNA that looks like smallpox or RNA that looks like the 1918 Spanish flu. It's happened to my roommate during his virology research--they do monitor these things[1].

BUT, more importantly--you don't even need fancy biotech to engineer a terrible virus. No sequencing, recombinant DNA, fancy BSL-4 labs, well-educated virologists, none of that. All you need is a captive population, something any self-respecting evil warlord should be able to get. (Certainly the North Koreans, the Taliban, etc. have access to plenty of prisoners.)

The same simple mechanism used by Fouchier (the guy who created the controversial bird/ferret superflu) can be applied to humans: serial passage. Put any moderately bad flu virus in a certain number of prisoners, then expose them via air circulation to other prisoners. Take the sickest people from the second group and expose them to another uninfected group via air circulation. After five or six passages the virus in the last group will show extraordinary virulence and transmissive capabilities because you've applied artificial selection.

To seed the virus and start an attack, you take a few of these prisoners, expose them to the worst virus, and put them on planes to your target country while they are still in the incubation/transmissive period.

Obviously, this is a nightmare scenario that I hope never happens, but the idea that terrorists or evil dictators need fancy science to engineer superbugs is false. The same methods farmers have used for centuries to grow taller corn and leafier lettuce can be applied to viruses by anybody with enough prisoners and moral depravity.

[1]: It should be noted that in-house synthesizing costs are coming down, though, and we won't be able to rely on the safeguard of companies automatically BLASTing ordered sequences against a CDC blacklist for much longer.


I have been thinking exactly this. And one can't help wondering... hasn't this already been attempted?


Because it's probably more effective just to use chemical weapons (or other WMD tech) that already exist.


I'd also add that the risk of "friendly fire" with bioweapons is very high.


Thanks for injecting some actual peer-reviewed research into this discussion. As I mentioned elsewhere, I was mostly responding to the notion that one can engineer a virus with specific properties as opposed to relying on natural pathogens. No argument that there is no shortage of naturally occurring bugs that can kick our butts.

Can you comment on the models that are used for simulating outbreaks? Do they factor in natural evolutionary pressure and change of the virus? In the case of smallpox, you have something fairly stable, but influenza is highly variable over time. I can't imagine how one does that but it would be cool to know if it's possible.


        Do they factor in natural evolutionary pressure and change of the virus?
Not at all. Metapopulation models use some type of stochastic discrete PDEs, and agent-based models use just infection probabilities. This is an oversimplification, but it should give you an idea. You can get the gritty details in the papers above.
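
To make that more concrete, here is a minimal toy sketch (mine, not from either paper) of the kind of stochastic, discrete-time compartmental update a metapopulation model would iterate for each subpopulation; all parameter names and values are made up for illustration:

    import random

    def sir_step(S, I, R, beta=0.3, gamma=0.1):
        # One stochastic discrete-time SIR update for a single subpopulation.
        # beta (transmission) and gamma (recovery) are illustrative values only.
        N = S + I + R
        p_inf = 1.0 - (1.0 - beta / N) ** I   # chance a susceptible is infected this step
        new_inf = sum(random.random() < p_inf for _ in range(S))
        new_rec = sum(random.random() < gamma for _ in range(I))
        return S - new_inf, I + new_inf - new_rec, R + new_rec

    S, I, R = 9990, 10, 0
    for day in range(100):
        S, I, R = sir_step(S, I, R)
    print(S, I, R)

Note that beta and gamma stay fixed: there is no term for the virus itself evolving, which is exactly the simplification being discussed.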


Starting from scratch, sure (everyone always seems to think of DNA and RNA as super-Legos; I blame Hollywood), but I think it's worth being at least slightly afraid of things like lab-engineered influenzas, given that flu is a very small, well-studied, and easily mutated RNA virus. Cooking up a more deadly (and, yeah, I recognize that we haven't proven that the Unholy Grail of "high mortality, long incubation, and ease of transmission" even exists in any single wild-type bug) influenza -- even if we don't necessarily know the methods of action -- isn't a big lift as far as major bioengineering projects go.

If you're not trying to create a "supervirus," but just shotgun a series of viruses to create maximum health disruption, then (as you know!) it's already within our technical grasp. Expensive, time-consuming, and probably failing 90% of the time, but accessible nonetheless. Considering how bad our hit rate is on the trivalents, I could see a plague of multiple, high-lethality strains with a wide variety of hemagglutinin and neuraminidase antigenic shifts as a plausible bioterror scenario.

It's not a doomsday scenario, since we have such good reporting and analysis infrastructure, and much of the heavy engineering on the problem (like alternate vaccine production methods) was done during the earlier bird flu outbreaks, but it would be a very nasty kind of terror attack.


You are correct, I was mostly interpreting his post to be about a purposefully engineered virus with specific properties. One could throw darts and hope to get "lucky" by recombining and tweaking existing pathogens. That individual would be wrong a lot, but with concerted effort might find something. I'm not sure that is new technology though. This could have been done at least 15 years ago.

One point that I didn't have time to make was that high mortality is not generally evolutionarily advantageous. Even if you cooked up a strain that was especially nasty, it could take considerable effort to prevent it from mutating in the wild into something less so, since the mutated virus would have a survival advantage of not killing its host. Unlike machines, a creator can't really control what happens to a biological system in the wild as it interacts with the environment. In the scenario you describe, there is considerable uncertainty as to whether it would actually spread as engineered.
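
A toy way to see that pressure (my own illustrative numbers, not an actual epidemiological model): if a milder mutant keeps more of its hosts alive and transmitting, it out-grows the engineered hyper-lethal strain even when both start out equally common.

    def grow(strains, generations=25):
        # strains: name -> (transmissions per infected host per step,
        #                   fraction of infected hosts that die per step).
        # A dead host stops transmitting, so high mortality cuts a strain's own growth.
        counts = {name: 1000.0 for name in strains}
        for _ in range(generations):
            for name, (transmit, mortality) in strains.items():
                counts[name] *= (1 - mortality) * (1 + transmit)
        total = sum(counts.values())
        return {name: round(c / total, 3) for name, c in counts.items()}

    # made-up parameters: same transmissibility, very different lethality
    print(grow({"engineered_lethal": (0.5, 0.6), "milder_mutant": (0.5, 0.1)}))
    # -> the milder mutant ends up as essentially 100% of circulating virus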

This is the crux of what I've clearly done a poor job of saying: we know so little that all these scenarios still rely on incredible amounts of luck more than technology.


"the mutated virus would have a survival advantage of not killing its host."

Not terribly reassuring :) Perhaps 1918 wasn't a very high-fitness virus, but it still got ~5% of us before burning out.

That leaves the threat level >> (conventional) terrorism.


His scenario is hypothetical with quick estimates of mortality and incubation period. This virus has been created in the lab and the formula is known. How is he then underestimating the technical feasibility?

>If we're instead talking about a mythical time in the future when we do understand enough biology to engineer something like this, one would have to assume the good guys possess the knowledge to develop countermeasures.

This is naive.

Understanding of weapons != Knowledge of countermeasures (since we're talking about logical fallacies)


E.g. Atom bombs

It's a weapon that has no countermeasures. None that I'm aware of, except prohibition.


but... lasers? ;)


So you mean "overestimate" in the second paragraph right? Because the rest of it reads that way.


Yes, thanks, edited.


The first one is still wrong. "Underestimate the technical feasibility" means, "They think it is less feasible (harder) than it actually is." I think you mean the opposite: they think it is easier than it actually is. I.e. you're trying to say it's harder than we think.

You can say instead: "underestimates the difficulty" or "overestimates the feasibility."


James, you are correct but I think JunkDNA's point is still being made effectively. In fact this was likely an intentional statement meant to illustrate the difficulty involved in intentionally creating a thing--if an error could be easily introduced in a handful of words, how likely is it that a malicious DNA creator will get their supervirus working exactly right?


That is very optimistic of you. Personally I think it was more likely that it was simply a typing error, as he has acknowledged. That's fine, we all make them. I had some difficulty parsing what he was trying to say, which is the only reason I brought it up.

The rest of your argument is very far fetched. One person's difficulty formulating a sentence could not be less related to biologists' collective capabilities in tailoring viruses.


"The rest of your argument is very far fetched."

I have apparently underestimated my ability to craft a joke.


Brevity is the soul of wit.


You are right. I chose my words poorly. This is what I get for trying to make a complex point in a quick post between meetings.


specifically the first "underestimate" is wrong, the second is correct. :-)


The scenario might seem far-fetched today, but what if biotech made the same kind of progress over the next 50 years as computer technology has over the last 50? A human-engineered super virus might seem as unlikely to a virologist today as an iPhone would seem to Alan Turing.


I address this in my second paragraph. You can't assume the advances all happen on the negative side (the ability to perfectly engineer a deadly virus) without corresponding advances on the positive side (enough understanding of biology to combat new viruses).


I think you have a mistaken assumption, though, namely that advances in CREATING dangerous things will be paralleled by advances in the ability to prevent bad things.

Nuclear weapons have been around for over 50 years. We do not yet have __in place__ any ability (other than treaties and fear) to prevent a nuclear holocaust. Missile shields, "Star Wars" -- all of those are of questionable capability, and none of them are deployed.

Given that our only way of preventing nuclear winter is to agree not to launch (and go to war to prevent Bad Guys from getting them?), an option which is not available when dealing with a disease, I'm not optimistic about our future ability to prevent a superbug from wiping out humans.


Certainly, I will grant that it is possible to blindly shoot in the dark and get lucky creating something that we don't understand, but is nevertheless deadly. Absolutely that could happen, and another commenter upthread gives some very plausible scenarios for this that I had not considered. But we've had the technology for blindly shooting in the dark in the lab for probably 20 years at least.

But that's not the point I think Sam was making. I read the article as discussing a purposeful designing of a virus with specific properties. My contention is that the knowledge required to engineer something that can evade the immune system, spread easily, and cause high mortality is very likely the same mechanistic knowledge that would help you to defeat such a virus.


The ballistic technology that can deliver a warhead to a target 12,000 miles away can also deliver a constellation of remote sensing satellites into orbit.

Remote sensing is what held off nuclear holocaust during the Cold War. The ability to reliably and quickly detect and respond to nuclear first strike creates the "mutually assured destruction" strategic framework aka deterrence.


That still boils down to "let's agree not to launch". It doesn't force anyone not to launch.


I think the analogy is broken. In the case of the nuclear arms race it's a different technology that destroys (bombs), and a different one that saves (missile protection?). In the case of genetics it's one and the same - engineering organisms.


The thing that defeats viruses is evolutionary pressure, not "engineering organisms".

At this point in time I'd still say that we only have the ability to reintroduce old disappeared pathogens. We do not actually have the ability to design new effective viruses.


For example, in a test case scientists have gone from flu sequence to midscale production (i.e. enough for first responders) of flu vaccine within something like 4 days. The backstory is great - the lead researcher arranged for FedEx to pick up the finished DNA sequences at his home at midnight (I don't remember why they couldn't be picked up at the lab, but there was a reason) and he was worried that his neighbors would think that he was a drug dealer. This means one could go to large scale production (enough for a population) within weeks to months.

Unlike the seasonal flu shot, this vaccine is tailored to the emergent strain.


Frank Herbert wrote this story 30 years ago (The White Plague, 1982). I'd be surprised if he were the first to worry about it. So we may be well along in the 50-year progression is all.


Europeans were almost wiped out by the plague

Uh, nope. What does "almost wiped out" mean to you? 30% of the population dead is "almost wiped out"? It's horrible, of course, but there were still 70% who stayed alive. Well, I could say that I can jump almost three meters high with that logic. :-)


What if we lost 50% of the world's population?

We would be knocked back to the ... 1970s!?!?

In the 1970s the world's population was about 3.5 billion and today it is double that. Sure it would be very disruptive, but there would still be plenty of creative people around to pick up the pieces.


There is a point where the breakdown in social order is going to lead to a much greater problem. If the food and energy systems we rely on to supply cities break down then you'll be looking at problems other than the outbreak causing deaths.

If society breaks down then you'll suddenly be looking at countries being unable to support anything close to that number of people.


"Almost" means that the "30%" could easily have been 70% or even 80% were the thing any more potent.


I think I'd also like to add: where's the motivation? Basically, someone who did this would have to be what, a serial killer with a biotechnology fetish? I'm sure a profile could be constructed of someone who would have the motivation to do it, but that of course already limits the pool of possible people, and then out of that group you have to limit it to the people capable of doing it, and so forth. Would it really be likely that someone with the skill to do it would also be in the group motivated to do it? 12 Monkeys aside, I actually think not.


ascribing superpowers to the bad guys

Grab a copy of The White Plague. It's an entertaining read.


There's a science fiction novel waiting to be written that incorporates the ideas from The White Plague and Ribofunk and the 21st century IT world.

Imagine the genetically-engineered counterpart to the CryptoLocker ransomware. Private and public antivirus research. McAfee for your white blood cells. Imagine ad-supported biotech.

The internet has created a sort of commoditization of fraud. Scam spam, adware, spyware, etc., often run by companies operating right out in public. Professionals openly discuss security from both sides of the line.

Imagine this sort of bizarre professional attitude in a low-cost engineered-virus biotech field. What would the Windows of this world be - a suite of programmable bacteria with a solid API?


Imagine ad-supported biotech.

Don't you dare.


Too late - Paul McAuley's _The White Devils_ has ad-covered GM butterflies in it, for example...


>If we're instead talking about a mythical time in the future when we do understand enough biology to engineer something like this, one would have to assume the good guys possess the knowledge to develop countermeasures.

That presupposes that the "good guys" would be the victims here. How about the opposite?

What about regular "democratic" superpowers doing it to poorer countries they want to control, as they have done similar things throughout the colonial and post-colonial eras?

The way dictatorships in Latin America and the Middle East got weapons, supplies and a helping hand from the US, for example, against their own people or neighboring countries.


> This also commits the logical fallacy of ascribing superpowers to the bad guys cooking up viruses while assuming the good guys are sitting on their duffs letting bad things happen

It's the other way around. The "good guys" are cooking up viruses and publishing them as science (in this case at least)


> We know jack about how the vast majority of biology works. We have the most fleeting glimpses of understanding that are regularly crushed by the complexity of dynamic systems with nested feedback loops and multiple semi-overlapping redundancies.

Proof of evolution


The author is only arguing for ramping up funding of defensive biotechnologies. It is much easier to create a new virus than to find a cure for an existing one. The immune system is simply vastly more complicated than a virus.


> This also commits the logical fallacy of ascribing superpowers to the bad guys cooking up viruses while assuming the good guys are sitting on their duffs letting bad things happen.

[...]

> If we're instead talking about a mythical time in the future when we do understand enough biology to engineer something like this, one would have to assume the good guys possess the knowledge to develop countermeasures.

------------------------

The amount you need to destroy is more or less a constant. When the efficacy of your technologies is limited, then a technology that only gives you a small percentage edge over your opponent's technology - say Iron vs Bronze - is survivable for the defender provided they have an edge in some other area, though not necessarily pleasant. However, as the efficacy of technology increases, you only need a small percentage edge over your opponent in terms of relative efficacy of technologies to have more than enough power to destroy all that you need to in order to remove them forever.

Consider that armies could be separated by hundreds of years of technology in the past, and still fight on a roughly equal footing. Technology did not move very fast, nor was it very powerful. Then imagine what an army of today would do to an army of a hundred, or even fifty years ago. In the Iran-Iraq war two armies with Cold-War level technology faced off against each other for eight years. The Iraqi army was, however, swept aside very quickly by a more advanced force.

The timespan in which there's a rough parity in power is shortening. Even a small difference in development with respect to time can rapidly become insurmountable when you're dealing with a high rate of change and powerful technologies. What you're defending is more or less static: people, land, resources; they're not getting any more durable, while weapons are always becoming more powerful.

You only need one world-ending plague. The defender has to be on top every time; the attacker only has to surpass them once. There is no chance to adapt to what they make, or to try again, any more than biological evolution can adapt people to a bullet in the head - because any minor adaptation in that direction makes no difference when compared to the sorts of forces that are imparted.

And I don't think we should have much confidence in the idea that the defender is going to be on top every time.

The questions seem to me to be ones of whether a reluctance to destroy the world is characteristic of organised systems, and whether organised systems will always have the edge over individual effort. If we get to a level where someone can create a suitably devastating bio-weapon in their garden shed, will we also be in a position where that's effectively analogous to creating any other outdated weapon in your shed?

I don't know, computer viruses don't lead me to much hope on that point. Enormous energies are being expended to restrain the energies of a few, with no clear victory in sight. The playing field is not always slanted the way we might like.


> But another possibility is that we engineer the perfect happiness drug, with no bad side effects, and no one wants to do anything but lay in bed and take this drug all day, sapping all ambition from the human race.

Preface: What we're talking about is probably biochemically impossible (truly no bad side effects, no tolerance, etc.). So, everything that follows is a fun thought experiment, and should be taken as nothing more.

Let's say someone produces a true wonder drug that is relatively easy to produce and produces extreme happiness 100% of the time, with no side-effects, and no diminishing returns due to drug tolerance. This drug produces more happiness than any other activity that we could be pursuing with our time. As a result, all anybody wants to do is take this drug all day.

The author presumes that this is a bad thing, but let's question that assumption.

If everyone is completely happy 100% of the time, and - more importantly - happier than they would be if they were doing whatever it is they would be doing with that extra ambition, why should we assume that this is a bad thing?

Of course, somebody would need to maintain production of the drug. This means that people either would take it only part of the time, to maintain enough ambition, etc. to produce the drug on their own, or (more likely) we would have some lucky people who take it all the time and are always happy, and a few people who are tasked with producing all the joy for the rest of the world.

This exact premise (the second version) has already been explored, in short story form. http://en.wikipedia.org/wiki/The_Ones_Who_Walk_Away_from_Ome...

(I agree that this situation sounds bad - most people would have a negative emotional reaction to it, but it's fun to explore why we have an aversion to the thought of pure, unmoderated happiness.)


I always hear the situation put this way, but it rings false to me. If you gave me a perfect happiness drug, I wouldn't want to sit in bed all day and take it. I would want to take it and go about my normal day only without the burden of misery — debug Java without wanting to claw my eyes out, help out people I meet because I don't feel stressed over my own schedule, etc. Being happy naturally has never driven me to sit in bed and has only ever made me a better person; I don't know why we assume drug-induced happiness would do the opposite. I suspect this is just because many of our current drugs do this as a side effect of inducing euphoria, and we're imagining that instead of a real happiness drug.


"Sit in bed all day" maybe isn't the best way to describe the problem with such a drug. "Never change anything in your life" might be better. People don't like change, and they generally only make serious life changes when they're unhappy with the way their lives are going currently. If you're never unhappy, you'll never change anything.

The negative consequence there is that oftentimes we don't learn about ourselves until we attempt to make such a change. If you hate debugging Java, and you can take a pill that makes you not mind it, you might never discover that you could actually be a much better Ruby programmer than you ever were a Java programmer -- or that you really were never cut out to be a programmer at all and should pursue something totally different. You'd just chug along debugging Java your whole life and never realize your full potential.


Great point.

> If you gave me a perfect happiness drug, I wouldn't want to sit in bed all day and take it. I would want to take it and go about my normal day only without the burden of misery

I've been reading some meditation related books lately, and this was one of the ideas they were trying to get across. That meditating for a sense of fulfillment doesn't replace your desire to go about and do good work, it merely changes your perspective on how you feel while doing those things. Additionally, it (allegedly, convincingly argued) enhances your ability to do the things you like well, and to make better choices in general.

> I suspect this is just because many of our current drugs do this as a side effect of inducing euphoria, and we're imagining that instead of a real happiness drug.

A useful way to contrast that is with uppers. I took my friend's Adderall a few times in college - in addition to staying awake, I felt profoundly happy - yet I didn't give up my goals (study all night) - I merely had a good time doing them.

I know there's some science around parts of that - far too lazy atm to search and link it (sorry!). But perhaps it is a bit counter-intuitive that feeling deeply happy and satisfied could also drive a person to do more meaningful work at the same time.


Heroin is as close to "a perfect happiness drug" as anything human ingenuity has ever produced. My experience of its habitués tends to suggest that, while it's not impossible you'd become a better person under its effects, it is quite unlikely.


Alternately, I could make a solid case for MDMA. A huge rush of serotonin is closer to 'happiness' in my book; tickling the mu-opioid is more of a 'bliss' kinda thing.

If there were a beverage that worked like coffee except was more MDMA like, that our physiology could sustain in daily use, I dare say a lot of us would drink it.


If you're not asleep, you might be a better person. The problem would be when you ran out.


It can be tricky from the outside to distinguish someone who's nodding from someone who's sleeping, but from the inside the two states aren't all that similar.


> I always hear the situation put this way, but it rings false to me. If you gave me a perfect happiness drug, I wouldn't want to sit in bed all day and take it.

Reason is a poor way of estimating behavior, especially your own.


But this isn't reason divorced from reality — it's my experience that happiness does not have the effects postulated here. As I said, I have been happy on numerous occasions before. The result was not that I sat in bed all day with zero productivity.


The thought experiment is proposing a drug that promotes sustained, unconditional high levels of happiness, with zero drawbacks.

This doesn't compare with anything a normal person experiences during life. There's no rational expectation to be had of how anyone's behavior would be if such a drug existed.


I disagree with the assumption in the original article that a perfect happiness drug would be used by everyone. It is evident from even casual observation that many (I would say most) people would rather have a meaningful life than a happy life.

Sure, once you started the drug, you wouldn't care, you'd be hooked, but why assume everyone would take it in the first place? There's no chance I would try a drug that I had observed make someone just sit in their bed doing nothing all day, even if it was clear that they were ecstatically happy beyond anything I had ever experienced. In reality, I think very few people would take it.


Assuming you find it more moral to "do something for society" with your life, than to lounge around on drugs, what if there were nothing for you to do? What if all social ills and problems were taken care of by technology? Or what if there was definitively no appreciation for what you do? This could take the form of zero demand for your skills, or cultural change in which mainstream society ostracizes those that try to do good for others (perhaps recasting them in a different moral light: "trying to meddle in others' affairs").

I guess you could still go off and live in the woods with those that do not subscribe to a drugged life, and build a separate society. But even that still means the original society is doomed.


I'm not really talking about doing something for society. I'm finding it difficult to put into words what I'm thinking.

Pursuing pleasure as an end goal rarely leads to pleasure. A lot of addictions seem to stem from the pursuit of pleasure as an end. We treat people for doing something that makes them "happy" to the exclusion of all else (whether that be gambling, alcohol, drugs, etc.). I honestly think if there were a happiness pill we would consider treating people for taking it.

When I talk about doing something, I'm talking about what you do with what life throws at you. Everyone has different circumstances, but what is common is the ability to choose how we respond to those circumstances. Whether we make something (whatever that means to us) of what we are given or not.

Taking a happiness drug and checking out from the rest of life is deciding to do exactly nothing with what life has given us. And I honestly don't think most people would want that.

I'll end with a quote from Viktor Frankl's excellent book Man's Search For Meaning:

"By declaring that man is responsible and must actualize the potential meaning of his life, I wish to stress that the true meaning of life is to be discovered in the world rather than within man or his own psyche, as though it were a closed system. I have termed this constitutive characteristic "the self-transcendence of human existence." It denotes the fact that being human always points, and is directed, to something or someone, other than oneself--be it a meaning to fulfill or another human being to encounter. The more one forgets himself--by giving himself to a cause to serve or another person to love--the more human he is and the more he actualizes himself. What is called self-actualization is not an attainable aim at all, for the simple reason that the more one would strive for it, the more he would miss it. In other words, self-actualization is possible only as a side-effect of self-transcendence."

I am sure I have butchered my own thoughts, but hopefully this helps make some sense of what I'm trying to say.


I believe I understand what you're saying, and in my gut I feel the same. But I wonder if that is something learned, something relative to the present condition of humanity. Civilization seems to hold itself together by a thread. But I think as long as resources are limited this is necessary, because the engine of evolution pushes individuals to be as efficient as possible with resource consumption and competition. In the struggle to survive, this is tempered only by the personal value of a civil society (e.g. less threat of violent death). So we live in constant tension between satisfying ourselves and making sure just enough is done to keep the whole race from self-destructing.

There are of course many who don't subscribe to these traits (at least not consciously). I believe most of them are subject to cultural belief systems that promote social cohesion and cooperation: the nobility (and intellectualism) of altruism, the teachings of compassion by many religions, the promise of a release from guilt by donating to charity.

But if society were certainly stable, then what is there for one to actualize toward? Perhaps it would be that other engine that propels us: Curiosity. But isn't the pursuit of one's curiosities primarily selfish?


Even if we just look at things that benefit society, stable is a long way from perfect. We would still need people dedicated to curing disease, improving education, etc.

Beyond that, there is creating art, raising children, forging and strengthening romantic bonds (it's very difficult to phrase the pursuit of love in a way that emphasizes its potential other-centeredness), making people laugh, advancing knowledge in some field. All these things still have meaning in a stable society.


What is a meaningful life?


Let me put it another way: I think most people would rather do than be. What's meaningful obviously varies from person to person. What is much more constant is a dissatisfaction with a life devoid of impact on anything outside of oneself.


> "I think most people would rather do than be" I'm not too sure about this. I think it varies from culture to culture. There are some schools of thought where "being" is most exalted of all.


Whenever this topic comes up, I point out that addiction is merely an undesirable side effect. A drug can be extremely enjoyable, but not cause a compulsion to take it again and again. And conversely, a drug can be neutral happiness-wise, but cause an addiction. We all know examples of similar things in real life: a creative flow state is very enjoyable but hard to get into, and Pringles are kind of disgusting but you don't want to stop until you've finished the pack.

My term for this is "the want-like distinction", and the best reference is Yvain's "Are Wireheads Happy?": http://lesswrong.com/lw/1lb/are_wireheads_happy/


It's not a new idea; everyone from Homer, with the lotus-eaters of the Odyssey, to Larry Niven, with his wireheads, has explored it in detail. I don't really count Le Guin's work among those explorations; The Ones Who Walk Away From Omelas is more in the vein of a parable or morality tale than that of the sort of consideration you describe -- as with The Cold Equations, the usual result tends to be two camps, diametrically opposed for reasons which neither can satisfactorily explain.

The question of why healthy people tend to be opposed to lotus-eating is also not difficult to answer, especially given that what TV Tropes calls the "Lotus-Eater Machine" has been constructed several times in reality, and its addictive nature and harmful effects well documented [1]. Both in individual life and on the scale of societies, what we typically call "civilization" is nothing more or less than an unending war against entropy; this is frequently, if imprecisely, formalized in the common dictum that "it's easier to destroy than to create" -- and easier, too, to sit in one's own filth than otherwise, in the absence of some motivation to do otherwise. Unhappiness serves to provide this motivation, and it is therefore precisely that motivation which the lotus-eater machine removes. After all, if sitting in your own filth doesn't make you unhappy, because you have a wire in your brain which ensures sublime delight no matter your circumstances, why bother to do anything about it? -- and, as we see from one of the cases described in the page I linked, indeed you will not.

Consider, too, that the English language already has a well-known and clearly defined word for the concept of "pure, unmoderated happiness". That word is heroin, and while I recognize the modern unpopularity of the proposition among those who've never seen firsthand what it does to its unfortunate devotees, I maintain most firmly that that damned drug is illegal for excellent reason.

[1]: http://mindhacks.com/2008/09/16/erotic-self-stimulation-and-...


> After all, if sitting in your own filth doesn't make you unhappy, because you have a wire in your brain which ensures sublime delight no matter your circumstances, why bother to do anything about it? -- and, as we see from one of the cases described in the page I linked, indeed you will not.

Right, but... why is sitting in your own filth objectively "bad" if you're happy? There could be negative externalities (e.g. maybe your unsanitary conditions are facilitating the spread of disease to others) but I don't think it's clear what about the filth itself is objectively bad.


Simply sitting in your own filth while being unconditionally happy will kill you from dehydration or infection unless someone else (who is not 'happily sitting') helps you.

The 'negative' emotions are very important for the functioning of any animal or mind - they are the drivers that motivate us to do stuff instead of happily dying.

I could write a long essay on how pain, fear, frustration, boredom and burnout are totally necessary to implement even for a simple non-human artificial autonomous agent, in order to prevent it from being stuck in 'broken' loops forever. TL;DR: pain is needed to ensure that the 'mind' avoids damage; fear is needed to ensure that the 'mind' avoids serious risk of damage; frustration is needed to ensure that the brain doesn't try the same thing over and over again if it doesn't achieve the goal; boredom is needed to ensure that the brain isn't stuck in repetitive loops unless they bring great rewards; and burnout is needed to ensure that the brain at least takes a chance at escaping poor conditions, instead of happily suffering through them to death.

Put a brain in unconditional happiness - and it will be so useless and vulnerable, that you'll soon find out why evolution (or designer, if you so fancy) implemented the conditions on happiness.
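
A minimal sketch of that idea (purely illustrative, not based on any real agent framework): an agent whose 'frustration' counter forces it to abandon an action that keeps paying off poorly, versus an 'unconditionally happy' agent that never re-evaluates and loops forever.

    import random

    def run_agent(frustration_limit=3, steps=20, always_happy=False):
        # Toy agent choosing among hypothetical actions with made-up payoffs.
        actions = ["poke_lever", "walk_away", "rest"]
        rewards = {"poke_lever": 0.0, "walk_away": 1.0, "rest": 0.1}
        current, failures, total = "poke_lever", 0, 0.0
        for _ in range(steps):
            r = rewards[current]
            total += r
            if always_happy:
                continue                          # no negative signal: never reconsiders
            if r <= 0.1:
                failures += 1                     # frustration/boredom builds on poor outcomes
            if failures >= frustration_limit:
                current = random.choice(actions)  # negative signal forces trying something else
                failures = 0
        return total

    print(run_agent(always_happy=True))    # stuck on the useless action forever
    print(run_agent(always_happy=False))   # frustration eventually drives it to a better one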


Society deems it "unproductive", in the sense that society is a team sport at a high level. Not being productive implies being a net source of entropy (neutral is not really an option, as society views you as under its umbrella of resource expenditure). [edit: put forth only as devil's advocate position]

The other and perhaps different explanation is that, philosophically, man is happy only when exercising his faculties. This has been said as either his 'reason' or his 'power' depending on your taste in philosophers. But in either case the notion of perfecting some craft-work. In particular, this should further some functional inter-relationship with the outside world (not to mention family/informal political skills). The idea one could be both 'vegetative' and 'happy' simultaneously tends to imply 'ok, but that's not a human life...that is a vegetable one', or something similar.

These are good arguments, in many ways. Enough so that the burden of proof should be on those putting forth completely contrary ideas, e.g.: "why is sitting in your own filth objectively "bad" if you're happy."

IMHO, it's not the filth, or the sitting, but the lack of anything evidently positive. Sitting in filth might be, after all, a corollary to "getting shit done" with limited resources. For instance, sitting is rest from other work; and 'filth' is just debris that has yet to be cleared away or moved away from (the latter being imminent).


No, by "sitting in one's own filth", I meant "sitting in one's own filth", i.e., neglecting hygiene in a particularly revolting fashion, but not bothering to do anything about it because one's ability to experience revulsion has been artificially suppressed. Such inaction also, of course, will if protracted enough certainly lead to various skin conditions, verminous infestations, &c. -- but our notional lotus eater won't be bothered by those, either, any more than by the conditions which produce them.


> Philosophically, man is happy only when exercising his faculties

> But in either case the notion of perfecting some craft-work. In particular, this should further some functional inter-relationship with the outside world

> These are good arguments, in many ways. Enough so that the burden of proof should be on those putting forth completely contrary ideas

I respectfully disagree. My instinct is to question your premise that "man is happy only when exercising his faculties." I'm assuming that this has roots in ancient Greek philosophy, and I'm not well versed enough in that area to refute the reasoning that leads to that premise, but I will say that I'm hesitant to accept it at face value. Even if scientific studies indicated that man is happy only when exercising his faculties, it would only indicate that exercising your faculties has instrumental value in providing us with satisfaction, not intrinsic value.

To me, this idea that productivity is an objective Good seems like a statement of personal values rather than anything fundamental to the Universe. Of course, personal values matter; I'm just trying to draw a line here between productivity as a means to an end vs. productivity as intrinsically necessary. If someone is able to achieve happiness without "exercising his faculties," I don't think we can assume that their happiness must be impoverished compared to those pursuing more mainstream paths, even if our gut instinct tells us otherwise.

And to reiterate: this is ignoring, for the sake of argument, the topic of negative externalities. Obviously, if unproductive behavior has a detrimental effect on others, the issue becomes far more complicated.


Skin breakdown/decubitus ulcers, fungal infections, fecal-oral auto-transmission, attraction of parasite-carrying organisms, etc.


Yeah, you can argue these as points in the sense that they may lead to a decreased life span, which means less time enjoying your state of bliss. There's a possible utilitarian angle here.


I wonder whether the field of philosophy has a term of valence equivalent to that of "architecture astronaut".


Why would a perfect happiness drug make one lazy and less engaged?

I guess it depends on how we define happiness.

In my view the perfect happiness drug would have the exact opposite effect, i.e. make people more functional, less prone to stress and anxiety.

Realizing happiness that is independent of conditions may result in diminished drive for activities that are driven by the seemingly insatiable ambition for greater perceived social status that is currently the normative state of the human race.

But that would be compensated for by a greater portion of our drive coming from compassion and curiosity.

So more curiosity, and more solving of useful problems.

Some evidence of this can be had from the fact that contemplative activity can dismantle the mechanism in the brain that inhibits happiness without the aid of drugs.

This sort of expertise is difficult and requires extended practice, so it can be confused with disengagement. Many practitioners later go on to have considerable impact on human culture and values.

This is both subjectively verifiable, and objectively verifiable via brain scans; one example is outlined below. http://brainimaging.waisman.wisc.edu/press/NCCAMOct08.pdf


I've read up quite a lot on drug side effects and as far as I know, creating a happiness drug without side effects is not really possible, because drugs make your body release happiness hormones artificially and your hormone deposits take time to refill.

As well, your hormone receptors can't take hormones all the time. They take time to recover, otherwise they will go numb.

That's exactly what meth does, for instance: it releases all your happiness hormones at once, which is why you feel so suuuper amazing. However, once the drug subsides, you're super hungover and feel like shit, because you're all out of happiness hormones.

Even worse, even once you stop doing Meth or whatever, your receptors have already become numb, so events that made you happy before you used drugs, such as getting a raise, raising investment, getting married, don't do anything to you anymore.

If you stop these drugs your receptors will become more sensitive with time again, but that takes a couple of years and might never go back to normal. So, once you take drugs, your happiness from certain events will always be lower than it was before.

That's why it is so hard to get off drugs.


I get the impression you don't play chess?

Let's say Paul takes artificial hormones to increase his level of happiness, numbing his receptors. What stops Paul from also taking a drug which improves the recovery time of his receptors?

Edit: In a galaxy far, far away. Where E.T. comes from and science is rockin'.


Well, if nanobots can do that, we're talking on a whole different level, where flesh wounds could be healed, organs could be grown back, etc., but then it might be possible.


This is where nanobots come into play. These nanobots carry the hormone past the blood-brain barrier, and deposit it directly in the receptors. Now the key is to have the bots only put the hormone in some of the receptors, keep track of which ones were used, then rotate through them so they have time to recover.


Seems like some of the other commenters missed the "thought experiment" disclaimer... obviously there are practical aspects that make this technically impossible.

I think what chimeracoder is trying to point out is our underlying assumptions about what individuals and humanity should be aspiring to. I'm somewhat of a nihilist, so I consider satisfaction with one's existence as the closest we can get to an "objective" good. I see "ambition" as a personal choice, something that provides satisfaction to those that desire it, but not necessarily a prerequisite for happiness.

I don't think chimeracoder is suggesting everyone should want to take a happiness drug. Rather, the question is: does it matter if someone does choose to live under the influence of such a drug? Would we consider their choice to be objectively "wrong", or just another way to live?


What we're talking about is probably biochemically impossible (truly no bad side effects, no tolerance, etc.).

Eh, I don't know about that. Opiates are pretty close, and their most harmful side effect is the legal environment which proscribes insane punishments for involving oneself with them. Said another way, even if it's biochemically possible, it would likely be legally impossible for this happiness drug to be distributed at all. Best to point one's thought-experiment apparatuses in that direction, I think.


Yeah, legally impossible indeed. It's always helpful to check the practicality of one's thought experiments by asking, "Will legislative change X help or hinder the unearned incomes of the upper class?" In this case, probably it would hinder, by de-motivating their cheap labor. And presto, we've discovered one of the major reasons for the drug war.


Sure, if you're satisfied to look at everything in the world through the lens of Marxist economic theory, I suppose that makes sense.


what's going on here? just yesterday i had to inform someone here on hn about the fact that heroin isn't as harmless as milk. opiates can be pretty damn dangerous. they can have quite bad side effects, and you build up a tolerance pretty quickly with regular use, which can easily lead to accidental overdosing.


Reminds me of this short film.

http://vimeo.com/7306050


I saw this exact question on Quora recently (if there was a "happiness pill" would you take it).

It seemed that most answers mistakenly assumed that "Happiness" was something you can define for everyone, rather than something that individuals have to pursue for themselves.

Happiness is a catch-all term for various "good" feelings, such as joy, satisfaction, contentment, bliss, victory etc.


This is also the subject of a Kurt Vonnegut short story, "The Euphio Question", from Welcome to the Monkey House.


These characterizations of a happiness drug contradict the forces of evolution.

If any such drug were to be developed, it would indeed sap ambition from many people. Even assuming it accounts for all existing relevant genetic variation, it wouldn't prevent genetic mutation. So those born with mutations that make the drug less effective will procreate with relative ease, eventually correcting for the drug's adverse effect on the survival of the species.

Of course if cultural (political?) forces also established that any genetic variation was bad, then we would indeed be in trouble. Why? With no threat of pain, and the absence of pleasure replaced by unlimited pleasure, what goals would exist? What would motivate action? Or given a relativistic perspective, what would just motivate procreation?


http://en.wikipedia.org/wiki/Experience_machine

Philosophy has been thinking about this for a while.

In my opinion, happiness is like life, it evolves. Targeted short-term pleasure is possible through extraordinary drugs, maybe in the near future, but probably the brain re-levels. It doesn't approach a stasis. At least, not completely (though happiness gets increasingly more refined over human history, so what seems like happiness now may be this sweet spot in the future that is its own ballpark which we largely reside in and explore -- again, I think happiness gets more and more pinpointed, compared to say Neanderthal times, but it doesn't necessarily ever approach a constant).


> Why should we assume that this is a bad thing?

Assuming the entire world can be maintained perfectly and automatically, nothing, but that's a bigger assumption than the hypothetical wonderdrug.

What happens when the Hoover Dam starts cracking? Or when a new disease is found and a new cure needs to be found? Or when we run out of a resource (oil, helium, it doesn't really matter which)?

We don't live in a world of static status quos, we live in a world of equilibria. Many of those equilibria factor in the ambition of individuals to maintain, to discover, and to solve problems.


> What happens when the Hoover Dam starts cracking? Or when a new disease is found and a new cure needs to be found? Or when we run out of a resource (oil, helium, it doesn't really matter which)?

Then we'll be happy about it


And be dead? Is that OK?


Presumably. Besides if all we do is sit around taking a happy drug, we'd stop reproducing. We wouldn't need the Hoover Dam or a new virus to wipe us out.


In most systems of metaethics, "years of life" and "number of people alive" are coefficients or integrands. 50 years spent happy--and then dying happily--is less good than 100 years spent happy. Which is less good, in turn, than 100 years spent happy, and creating two more people who also spend 100 years happy. And so on.

However, the adaptations we execute, as biological beings, don't really care about the health and welfare of their own far-future selves; they're more concerned about the Net Present Value of different choices they can make. So, one hour spent Really Happy outweighs a year spent Just Happy, because that Really Happy is all received by your present self.

So we can do the math as rational beings, and as much as we want to be rational (which is itself a function of our biologically-trained impulses), we can look out for our future selves and keep the world running. Or we can accept our nature as natural beings, and wirehead. It's really the explicated meaning-of-life question, going forward.
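
A rough way to see that tension (purely illustrative numbers, not anyone's actual utility function): a total-utility view just sums happiness over years lived, while the 'Net Present Value' view discounts the future, so the present self's experience dominates.

    def total_utility(happiness_per_year, years):
        # Aggregative view: more happy years is strictly better.
        return happiness_per_year * years

    def discounted_utility(happiness_per_year, years, discount=0.3):
        # NPV-style view: each future year counts for less than the present one.
        return sum(happiness_per_year * (1 - discount) ** t for t in range(years))

    print(total_utility(1.0, 50), total_utility(1.0, 100))            # 100 happy years clearly beats 50
    print(discounted_utility(1.0, 50), discounted_utility(1.0, 100))  # nearly identical: the far future barely registers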


>So, one hour spent Really Happy, outweighs a year spent Just Happy, because that Really Happy is all received by your present self.

I find making decisions that end up along these lines really, really difficult. The rational part of my brain knows very well which one I should choose, but it has a really, really hard time arguing with that more... I don't know, primal? part of me.

I think it's really fascinating, how relatively weak our conscious self is in arguments with our short term desires.


There's something even more fascinating you can learn if you introspect on this topic during an experience with a dopaminergic compound (e.g. cocaine, Adderall.)

Exerting exactly this type of willpower (setting long-term goals before short-term rewards) is what "spends" dopamine in the brain. The more dopamine you have available, the longer-term you tend to plan. And when you run dry, you feel "restless" and want to do things "on a whim."

---

In the face of this, it's really quite fascinating how meaningless a term like "willpower" becomes. Doing what you want-to-want to do basically comes down to:

A. having enough dopamine in the first place (children should really get checked for ADD/ADHD at about the same time they get checked for nearsightedness),

and B. making pre-commitments with strong consequences for reneging (e.g., losing money you've bet; or making you look low-status to people you care about.)

There's nothing else to it.


Evolution meets nihilism!

There is a reason we appear to be physically unable to experience the kind of happiness this thread is talking about. The only question is what we'll choose once we are able to change our own brains to make it possible, but then, that's a completely unknown context, so we'll probably choose some option that we can't even imagine today.


But the answer we give now to the question of what we will do tells us something about us, regardless of whether we're actually right or not. Which, in addition, is what good science fiction does; paraphrasing Ursula Le Guin, it invents lies to tell the truth about who we are, right now.


That'd be for the better of 99% of the other living species on Earth.


There is work that must be done to ensure a person's survival and well-being. If someone is so influenced by this drug that they wouldn't abstain from using it long enough to take care of themselves, then they would either need someone else to do this work for them or get sick and perish. This seems like a glaring downside, which violates the premise that our hypothetical drug has no downsides/deterrents from its use.


Practicality aside, this would be the end of "progress". No new discovery, no new exploration, no deeper understanding of how we got here, where we are or why we are here. The problem with 'bliss on tap' is assumption that there would be no problems left to solve... the mortgaging of our future potential for our present satisfaction.


Right, but what is the value of "progress" beyond the satisfaction it gives us?


Or, we could automate production and distribution so everyone's happy.


I think Down syndrome may come close to the perfect genetic engineering of happiness IF its negative aspects, like learning disability, can be taken care of. (disclaimer: I have a family member with it)


Robots


We'd need to find a way to continually power the robots.

Mass solar power, maybe?

Or perhaps, if everyone were perpetually swept away into the wonderland of the perfect happiness drug, the human body itself could be appropriated as a battery, leading to a deliberately created Matrix. </humor>


:)

The thing is, if "being happy" is the only goal, then the rest of it is just an efficiency question.

I think people view robots as either empowering us or somehow getting into lethal combat with us as the dominant life form on the planet. I seriously doubt it will be either. They'll just give us what we want. The race will naturally die out within a few generations.


Not everyone would choose to take the drug in the first place.


Tail risk decisions are never easy. Because we lack sufficient data by definition.

Should we focus on preventing terrorism? Well, if 9/11 was the worst case scenario then no. If on the other hand a terror attack could bring down the entire country it's certainly worth being paranoid about. Suppose terrorists poison our food and water supplies to the extent that we get country-wide food riots. A civilization is only 9 meals away from anarchy after all.

So the essential question is this:

- Is our civilization essentially fragile or fundamentally robust?

If our civilization is fundamentally robust we can simply focus on growth and deal with setbacks (global warming, terrorism, imperialism, wars) as they come. In the long term prosperity will go up and up. Not always as fast as we'd like and not always in ways we deem fair but if we keep making progress we'll get there eventually. This is the whiggish view.

The opposite view is that civilization is fragile. Kingdoms come and go, and foolish decisions can and have led to centuries of regression. The upward trend we've seen in the past couple of centuries does not mean our species has grown up in the slightest. We play with every new weapon of doom we discover, and we're no better than our imperialist and bloodthirsty forefathers. Our civilization is determined to self-destruct by either nuclear war, environmental disaster, political insanity or runaway capitalism. A civilization that is not capable of planning ahead will eventually walk like a lemming off a cliff. The best thing we can do is put tons of safeguards and regulations in place to improve our odds of surviving at all.

Those are the two main views. And the kicker is we don't have enough data to know for certain which view is correct.


Civilization is not fragile, authoritarian governments are.

'We' (our respective nation states) are our imperialist bloodthirsty fathers. What country do you live in?


This is exactly why people like Nassim Taleb [1] (Fooled by Randomness, Black Swan, Antifragile) are against things like GMO. We can't predict which tail risks will hit us and how hard - the only thing we can do is make ourselves robust against the negative ones.

This is also why, in the face of globalism, we should work to make life multi-planetary [2].

And for people who think this is just silly, it might be a good idea to have a look at some recent history [3] and consider how close we were to being in a very, very different place. This is not science fiction.

Good piece.

1: https://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb

2: https://en.wikipedia.org/wiki/Elon_Musk

3: https://en.wikipedia.org/wiki/Cuban_Missile_Crisis


Yes. I just searched this comment thread for "Taleb" and am distressed to find the first reference this far down. The Black Swan especially is amazing; I just read it (http://jseliger.wordpress.com/2013/11/26/life-the-readers-ed...) and now can't stop recommending it. Though it seems like the sort of book whose central idea can be understood through reviews, it is full of subtle and unexpected comments. Altman wrote this:

But maybe there are some tail risks we should really worry about.

and indeed that's what much of Taleb's body of work is about. I am partway through Antifragile and do not find it as compelling as The Black Swan, however.


> I just searched this comment thread for "Taleb"

Ha, I did the same thing.

> I am partway through Antifragile and do not find it as compelling as The Black Swan, however.

Personally I think Antifragile is much better, in that it's a more complete work that contains his previous books and irons out what the consequences are. It's possible you need to have a similar view on history, culture and modernity in place before you appreciate it - I was into stoicism, empiricism and classics in general before I read Taleb - as he's largely using that as "the other side of the barbell" in his explanations, with the first side being mathematical (see his technical online textbook). I've read The Black Swan once or maybe twice, whereas I am already on my third reading of Antifragile (I can't help myself, it's too relevant).


I came across an interesting tidbit of information the other day.

It turns out that "mud daubers" (wasps that make houses from mud) are responsible for at least 2 major airline crashes in the last 33 years killing at least 223 people. http://en.wikipedia.org/wiki/Mud_dauber#Involvement_in_Flori...

One in 1980, and one in 1996... that we know of.

Apparently these mud daubers love living in long cylinders. If they find one, say on a plane's uncovered instruments, they'll set up shop.

If you look at it under the right light, mud daubers are approximately 1/20th as powerful and threatening as the world's terrorists of the last 33 years.

This URL has some numbers of terrorist caused deaths over a similar timeframe: http://en.wikipedia.org/wiki/Patterns_of_Global_Terrorism


Regarding mud daubers, that article you linked to said:

> This species also brought down another plane in Washington during 1982.

Which, in addition to the crashes of the Florida Commuter Airlines flight (1980) and Birgenair flight (1996) makes it 3 planes mud daubers have brought down.


The question I always think of when people raise fears like this about bio-weapons is: what motivations are there for "bad guys" to release indeterminate killers like a bio-engineered virus? It seems like the principles of MAD still apply here. Why launch an initial attack that has the potential to "destroy the world" that either you or your leaders would still hope to inhabit? It would require someone illogical and/or desperate, yet who still had the technological prowess to create the weapon in the first place. It would basically need to be a Bond villain.


> It would basically need to be a Bond villain.

Exactly. We're not talking about rational agents anymore, it's about lowering the barrier of entry both financially and technologically so ordinary (arguably insane) people can become Bond villains on a shoestring budget. There are a lot of possible motivations why a person might do this, for example being a religious nutjob, or because date night took a turn for the worse.


They could be rational. A perfect virus that infects every animal on earth and then instantly kills them would decrease the suffering to zero. It's not hard to imagine utility functions where eliminating every human (or animal) is the best choice of action.

Unfortunately, in reality, such attacks are likely to cause huge amounts of suffering which muddies the water.

But if there was a magic button to press that'd instantly eliminate all life on earth, I think it'd only be fair to push it.


Or a well funded terrorist group that believes death will bring with it an eternity of pleasure. Plenty of people strap on suicide vests, and it wouldn't take that many more true believers to develop a suicide virus.


Within our lifetimes, this may well be a "crazy guy in a garage" project.

And the memeset I'd personally put my money on being the proximal cause is "humans are a blight upon Mother Gaia and should be removed at all costs". We don't have to imagine the existence of people who would push the Kill All Humans button if given a chance... we need merely read the right bits of the Internet. Sure, most of them would at least think twice... but there's enough that wouldn't.


I don't know if this is either a cynical or optimistic idea, but I don't know if those groups truly believe that throughout the ranks. That is why you often hear about payments to the family of suicide bombers. They aren't simply doing it for their religious beliefs, but also to help out their family. The same family that would have a 50% chance of death if the particular example virus from the article was released.

Or another way to put it, why is the pope-mobile bulletproof?


Almost all of those committing these atrocities do so for religious reasons. The Chinese government has treated Tibetan Buddhists horribly for decades, but you don't see any Tibetan Buddhist suicide bombers. There have been some sects of Buddhism in the past that have done horrible things, but in general it's harder to become a Buddhist suicide bomber than it is to become a Muslim suicide bomber. Some religions condemn violence of any kind. For example, I doubt anyone could be a Jain suicide bomber.

If you don't believe me, hear it from the horse's mouth. Here is a video from a Muslim peace conference in Norway: http://www.youtube.com/watch?v=bV710c1dgpU#t=45s It will take five minutes of your time, but I think it shows best how sincere these people are.


> "The question I always think of when people raise fears like this about bio-weapons is what what motivations are there for 'bad guys' to release indeterminate killers like a bio-engineered virus?"

I'd imagine the threat of release would make for a pretty compelling bargaining chip. Potentially, you could gain a lot of leverage without having the backing of a sophisticated military-industrial complex (unlike nukes).

It's kind of like a DIY WMD for the millennial, "maker"-terrorist generation.


Is it really that DIY and cheap compared to other WMD? Is it easier to bio-engineer a virus than it is to steal a truck full of radioactive material [1] and attach it to some big explosives?

[1] http://www.cnn.com/2013/12/04/world/americas/mexico-radioact...


I was being somewhat facetious with that last line.

However, I don't think that it is outside the realm of possibility that relatively small entities, on the order of a small-cap corporation or private laboratory, could eventually create a weapon that has far more global destructive power than a radioactive dirty bomb.

This would afford the entity more leverage at a global bargaining table, as opposed to a negotiation with a single government. Combined with the relative anonymity one may be able to maintain when releasing the virus, these entities might see a higher expected value in engineering a virus.

It would be "DIY" in the sense that you wouldn't need the backing of a large nation state to produce an ICBM, or opportunistically steal some radioactive material to create a dirty bomb.

Also, a virus is implicitly cheaper to distribute: with a longer incubation period and an exponentially increasing rate of propagation, a virus could reach critical mass before there's time to react.
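
To make the "critical mass before there's time to react" point concrete, here is a toy back-of-envelope sketch; every number in it is an illustrative assumption, not an estimate from the article or this thread.

    # Illustrative only: a toy exponential-growth calculation with made-up parameters.
    # It shows why a long incubation period lets an outbreak grow before detection,
    # not what any real pathogen would do.
    r0 = 2.5               # assumed new infections caused by each case
    generation_days = 14   # assumed serial interval, roughly the incubation period
    detection_day = 42     # assume the outbreak is only noticed after six weeks

    generations = detection_day // generation_days
    cumulative_cases = sum(r0 ** g for g in range(generations + 1))
    print(f"Generations before detection: {generations}")
    print(f"Cumulative cases at detection: ~{cumulative_cases:.0f}")
    # With these made-up numbers: 1 + 2.5 + 6.25 + 15.6 ~ 25 cases exist before anyone
    # reacts; a shorter incubation period or faster detection changes the picture dramatically.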

In a sense, this makes them more effective than other types of WMD, which may have lower individual chances of successful detonation, have high per-unit capital costs, tend to have local area effects only, and are generally difficult to produce at scale far removed from military industry.

Personally, I think the world is more likely to end with a bug than a bang. I honestly don't know which is more terrifying to me.


The line of thought is that there's always someone willing to contemplate an unpleasant act for shaky reasons, in decreasing numbers all the way down to a minuscule number of people who would commit the worst atrocity for no reason at all.

The risk comes from widening access to a given technology, and the amount of harm it could do. It's hard to calculate the risks, but bio weapons potentially have wide access and great harm.


There are a lot of power hungry people out there. In fact, our genome is essentially made up of the most successful conquerors and rapists. If you have a bioengineered virus in hand, you have tremendous power (through blackmail), even against powerful governments. You can also hunker down in some faraway place (obviously Madagascar) while the virus carries out your destruction.


There are plenty of people with mental illness who are illogical, but they are still capable of causing harm.

And "bad guys" are not necessarily required for a global catastrophe. Human error can do the job just as well.


Accidental release should be a major consideration. Also, there are a lot of crazy people out there who do a lot of crazy things for crazy reasons - their logic would either not make sense or be incomplete, but it wouldn't matter to them.

That said, I think biotech is an amazing force for good in the world, we just need to be responsible with how we deal with it. Which means defining and justly implementing 'responsible' is a core tech challenge.


There's a lot of crazy out there. Maybe the "bad guys" have a plan to stay isolated and conquer the ruins or maybe they are just actual bad guys. Look at most public mass shootings for examples of indiscriminate killing. All it might take is one researcher in the right time/right place to decide that they want to release this thing into the wild (for whatever irrational reason).


MAD with nukes works because it's easy to know who is firing at who. Even if a bomb is smuggled into a country and detonated, the original manufacturer can be determined from forensic evidence such as isotope ratios.

With bioweapons, the perpetrator is much harder to track down. And if you don't know who your attacker is, you can't retaliate.


Motivations? They are not rational. Crazy. Mentally ill.

Take this researcher, Bruce Ivins:

http://en.wikipedia.org/wiki/Bruce_Edwards_Ivins


Some men just want to watch the world burn.


Terrorists attacks cost more than lives. The direct costs of 9/11 were between $40 to $100 billion (http://en.wikipedia.org/wiki/Economic_effects_arising_from_t...). The direct costs of another terrorist attack targeting something like nuclear power could cost $700 billion or more (http://money.cnn.com/2011/03/25/news/economy/nuclear_acciden...). None of these estimates take into account indirect costs, which are potentially even larger. Preventing terrorist attacks is about saving lives, but it's also about stopping events that could wipe out a quarter of our revenue for the year. Massive economic damage can cause a lot of pain and suffering.

When analyzing risk, it's important to estimate costs as accurately as you can. Unfortunately Sam missed the boat on this one.


But a certain (large?) amount of the economic cost is exactly because of the overreaction to terrorism, as opposed to worrying about other threats.

Sam's saying "let's spend X amount of that $100 billion on preventing biological disasters, rather than just terrorism".


The overreaction to terrorism isn't a direct cost and it's not part of the numbers I cited. Direct costs are insurance losses, medical care for victims, relief to widowers/widows, lost wages, etc. Those costs make up the numbers in my previous comment, which alone justify spending money on preventing terrorism. Indirect costs would be what you're talking about and include things like unnecessary wars, wasteful spending on defense, currency/stock market devaluation, etc. Indirect costs are in the trillions but are harder to prove and reason about, which is why I omitted them.

I think Sam's getting at something important but it would be best to first start with estimating the costs as accurately as possible. As it stands Sam's estimates to the cost of terrorism are off by several orders of magnitude, so you can understand why I pointed that out.


Others have already (rightly) made the point that we're not yet able to synthesize new viruses from scratch. And the "amateur bio-hacking" thing is completely overblown -- the most advanced amateur work I've seen is stuff like putting GFP into bacteria, which is just trivial. It requires little more than some commercially available kits and a warm water bath. Synthesizing new organisms is many orders of magnitude harder.

That's not to say that things won't change, but if I had to pick a serious biological threat that exists right now, it would be antibiotic resistance. Thanks to air travel and long incubation times, we're not that far away from a global pandemic of multi-drug-resistant TB, yet almost nobody is talking about it.


We have been able to make viruses from scratch for about a dozen years: synthetic poliovirus, the 1918 flu, and SARS-like coronaviruses, amongst a dozen or so other demonstrations.


We can re-create existing ones from known (small) genomes and make some modifications to ones that we understand well. Even that work is well beyond what a hobbyist can achieve. Creating a novel virus would be a major research project.


There is a lot more work on synthetic viruses that you might not be aware of, going beyond simple instantiation (like viral attenuation for vaccines or testing hypotheses on the origins of viral outbreaks). Creating a novel virus these days is not that difficult (a grad student project). Anyway, your first post made it seem like it hadn't been done.


I'm not aware of every paper in the field, but I know the high points. We're still a long way off from the day when niche groups can generate novel viruses with specific infective properties. We're still basically just tinkering with the existing viruses in labs to figure out what the parts do.

Depending on how you define "novel", it could indeed be a "grad student project", or it could be a paper that deserves a Nobel. But it still falls firmly in the "improbable as a weapon" category of threats. I'm still far more scared of XDR TB than hypothetical synthetic viruses.


That's a pretty terrible title. What about H5N1? Why should I click? Are you just trying to be scary?


It's the article title and a key topic in the article.

"Also in 2011, some researchers figured out how to reengineer H5N1—avian influenza virus—to make it much scarier by causing five mutations at the same time that all together made the virus both easy to spread and quite lethal"


I'm not blaming the HN submitter, I'm blaming the author of the blog post. It's not just a key topic, it's the subject. You can't title a programming article "Programming", it has to be about something. Unless you're going for clickbait, which seems to work.


>Unlike an atomic bomb, which has grave local consequences

This is a little dismissive. Nuclear war under any plausible scenario wouldn't be an isolated 1945-type event. It would be a global event that drew in other players and more than likely would conclude in a mass launch by one of the world powers. We're not nuking Paris, Moscow, or DC and walking away. There will be retaliation.

Unlike biotech, these things are here, ready, and primed to hit targets. If there's a tail risk to worry about, it's human extinction via nuclear arms.


Although biotechnology is definitely a significant risk, there are some other things such as nanotechnology, flawed super-intelligences, and transhumanism-related issues that should rank very highly as well.

See http://www.nickbostrom.com/existential/risks.html for a nice summary of potential existential risks to humanity.


The main issue is the breakdown of _trust_. Twenty years ago, when I took planes, I could bring anything reasonable: water, a shaver, shampoo, etc., and the security check was minimal. Now look at what is happening: we are treated like criminals. We are patted down, everything is stripped off to go through bomb detectors and metal detectors, and much of the time we need to take off our shoes and belts to go through the x-ray machines. Why?

Before we start accusing others of being 'terrorists', first step back and think: why would someone sacrifice their life to attack us? They are crazy people, some may say. But why do we suddenly see so many 'crazy' people in recent years? What made them so desperate and so angry that they would waste their lives doing damage?

There are countless ways to cause mass damage in modern societies, and unless we understand the _root cause_ of the attackers' motivation, trying to seal off every potential attack vector is no more effective than trying to remove weeds without pulling out the roots.


"Trying to keep things secret is not the answer. "

Disagree. It's certainly part of the answer.

If not, then why not publish all the details of when you are home and how your house is protected for anyone to see, and to exploit if the appropriate "nut" decides to? Some walls are helpful as a barrier.

Security (by obscurity?) does provide some protection. Locks do keep some people out. Going in the other direction (making everything easy to find and widely available) is not a solution for making things safer.


Bacteria are constantly mutating and looking for new ways to kill us. And since antibiotics are over-prescribed and being added to livestock feed as a preventive measure rather than to treat animals that are actually sick, currently available antibiotics are losing their effectiveness.

Drug companies are no longer doing research on new antibiotics, because antibiotics actually cure things and hence are not profitable. Today's drug companies are only interested in treatments that last a lifetime. Pfizer was the last drug company doing research into new antibiotics, and they closed that division because it wasn't (and wasn't going to be) profitable.

I think the fact that drug companies and medicine in general are not focused on important problems, but only on profitable problems, is a major issue we should be addressing. See the recent Frontline documentary "Hunting the Nightmare Bacteria"

http://www.pbs.org/wgbh/pages/frontline/hunting-the-nightmar...


Very stimulating article.

Minor quibble: I've never thought (most) people actually "fear" terrorism, rather they have (justified) anger and perhaps an excess feeling of "something should be done" since terrorists are actual human beings who can be brought to account.

Perhaps a better comparison would be fearing airplanes over cars, but there I think much of the fear is in the novelty of the flying experience.

The comparison with nuclear weapons is really interesting, particularly as the response to fear of nuclear annihilation on the part of most people was overblown (digging bomb shelters under houses etc). On the other hand, the actions of the relevant governments (US/USSR) were mostly rational in a game-theoretic sense (acknowledging the prisoner's dilemma at hand).

Biotech may be different, as Sam mentioned, since only nation-states have the means to build nukes. On the other hand, computers/the internet still work despite Y2K and rtm's worm :)


Are there reasonable precautions that can be taken at an individual level to prepare for a pandemic type event?


Reasonable? Probably not. You should have about a week of food and supplies in the house, in case of any kind of disaster (stocking up more is most likely useless). Depending on how the disease spreads, you could get some surgical masks and latex gloves to mitigate your chance of infection. In the end though, none of it is going to be really useful. If/when a pandemic sweeps the planet, by definition a large number of people are going to get ill, so chances are for all the preparation we're all still going to be sick at some point.


I'm not sure if this is the first science fiction story to tell of a lone biologist creating a plague, but Frank Herbert's "The White Plague"[1] was written in 1982 and presages a time (in 1996 no less) that a single actor, motivated by personal tragedy, could create and release such a thing.

In a sense though, this notion that technology can give individuals unimaginable power has also been a theme with Robert Heinlein (I think), who wrote about an easy-to-manufacture superweapon which basically gave anyone the ability to destroy the world. It was also a theme in Kurt Vonnegut's "Cat's Cradle", although that's a little bit more of a stretch.

It's true that our destructive capability scales with the amount of energy that we can harness. And it's also true that biological threats are under-perceived. But it seems to me that biology is inherently messy enough to avoid being too susceptible to annihilation. That is, yes, it may be possible to create a plague that wipes out 50% of humans - but in the scheme of things, that's not the end of the world. Certainly not the end of humans (not even close!). And it seems that the likelihood of creating such a pathogen is exceedingly small. Indeed, I'd estimate that you couldn't kill more than 10% of people with a single plague.

But since we're talking about catastrophe, it's interesting to wonder about whether a single human could end the world, and if so how. The most likely way (and rather dramatic way) would be to maneuver a large asteroid to impact the Earth. Energetically, it's entirely possible to do. As for nuclear war, I'm not entirely convinced that would really be the end for humans - although it's certainly possible. Another way to end the world might be to release mega-tons of CFCs into the upper atmosphere, intentionally destroying the ozone layer. I can't think of any other scenarios!

[1] https://en.wikipedia.org/wiki/The_White_Plague
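
To put rough numbers on the asteroid scenario (the parameters below are generic textbook-style assumptions, not figures from the comment above), the impact energy of a large rocky asteroid dwarfs anything in the nuclear arsenals:

    import math

    # Rough, illustrative impact-energy estimate; asteroid size, density and speed are assumptions.
    diameter_m = 10_000      # assumed 10 km rocky asteroid (dinosaur-killer class)
    density = 3_000          # kg/m^3, typical for rocky bodies
    velocity = 20_000        # m/s, a typical impact velocity

    mass = density * (4 / 3) * math.pi * (diameter_m / 2) ** 3
    energy_joules = 0.5 * mass * velocity ** 2
    energy_megatons = energy_joules / 4.184e15   # 1 megaton of TNT = 4.184e15 J

    print(f"Mass: {mass:.2e} kg")
    print(f"Impact energy: {energy_megatons:.1e} megatons of TNT")
    # ~1.6e15 kg and ~7.5e7 megatons with these assumptions, on the order of a million
    # times the largest nuclear weapon ever tested (~50 megatons).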


He doesn't offer much support for his prescriptions.

Why is it bad to "try to keep things secret" but good to "spend a lot on proactive defense"?

How would that apply to hydrogen bombs? Should we open source the specific details on how to engineer a maximally efficient hydrogen bomb from the most accessible materials, and just spend a lot on hydrogen bomb defense? (Which is what exactly?)

When you omit support for conclusions, it implies you think the reasons are obvious. But it is not obvious that we should do away with efforts at secrecy around hydrogen bomb tech. Nor is it obvious we could defend ourselves from widely available thermonuclear bombs by being "proactive". It's a hand-wavey answer that appeals to the "information wants to be free" sentiment, but not actually well supported.


In the case of nukes, the key technology, gas centrifuges, was essentially invented by civilians and then made very public by being sold all over the world. The US tried to keep it a secret on its end but that achieved nothing but the stagnation of that particular technology in the US. "Proactive" international control is literally the only thing keeping every nation from getting nuclear arms.


> So everyone smart says that we worry about terrorism way too much, and so far, they’ve been right.

And the people who are even smarter realize that people will worry about what they will worry about, and respond to threats proportionally to how much people worry about them rather than chiding them about how much they should worry about things.

Human beings aren't rational when it comes to fear, but the products of that fear are very real. We live in a world where people freak out if an adult talks to a child, but happily drive their kids around in the death traps that are motor vehicles. Not only that, but we've gone to great lengths to structure our society to treat the former as abnormal and the latter as totally normal. Telling people to be rational isn't going to make them that way.


On the virus he's actually writing about and the "gain of function studies" that caused so much controversy, part of the risk is not terrorists, or bad guys, or any ill intent whatsoever.

This kind of research is conducted in BSL-3 labs, and there's a not insignificant number of laboratory accidents, accidental exposures, etc. in those labs, by well-intentioned, well trained people.

I saw a presentation recently that estimated, using fairly conservative numbers, that 10 labs working on those viruses for 10 years had ~1600 deaths in expectation. Now that distribution isn't normal - lots of zeros and then some rare but catastrophic outcomes, but like many things, it doesn't require anyone to do anything actively malign. Just screw up.
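
For anyone curious how an "expected deaths" figure like that gets assembled, here is a purely illustrative expected-value sketch. The placeholder probability and fatality numbers below were chosen only so the arithmetic lands near ~1,600; they are not the cited presentation's actual inputs.

    # Purely illustrative expected-value arithmetic, NOT the cited presentation's model.
    # Every parameter below is a placeholder assumption chosen for round numbers.
    labs = 10
    years = 10
    lab_years = labs * years                  # 100 lab-years of gain-of-function work

    p_escape_per_lab_year = 0.002             # assumed chance of a community escape per lab-year
    deaths_per_escape = 8_000                 # assumed fatalities if an escape seeds an outbreak

    expected_deaths = lab_years * p_escape_per_lab_year * deaths_per_escape
    print(f"Expected deaths over {lab_years} lab-years: {expected_deaths:.0f}")
    # 100 * 0.002 * 8000 = 1600. As noted above, the real distribution is nothing like
    # this average: mostly zeros, with rare but catastrophic outcomes.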


I'm confused.

We shouldn't worry about terrorism because the likelihood of dying from it is very low, but we should worry about a genetically engineered virus because the likelihood of dying from it is very high?

Last time I checked NO ONE ever died from a genetically engineered virus.


This is preaching to the choir, especially on this site of self-proclaimed technologists. Yeah, we know more oversight/regulation needs to happen in certain nascent industries. We also know how awful things could get; you just told us. Blog posts like these are just wasting breath. Getting out, doing something, and getting involved in the political process now is what will be helpful...so when the last baby boomer in a position of political power finally shuffles off the mortal realm (and we can have a weeklong celebration) we will have well-educated people on the issues that matter ready to ascend to power.

Then again, who am I kidding. We're talking about politics.


The main point of the article is not a point which is widely discussed, known, or accepted. It has nothing to do with oversight/regulation. The point is that offensive biotechnology has progressed way ahead of defensive biotechnology. In other words, we know how to engineer viruses but not how to engineer the immune system. The human immune system is vastly more complicated than viruses. Therefore, the author appears to be calling for vastly more funding for defensive biotechnology. In other words, though research into DNA modification technologies (such as viral engineering) is already heavily funded, research into therapeutic methods and immune response should be much greater than most are even considering.

My own personal opinion is that that funding should be increased orders of magnitude; a few trillion over the next decade seems judicious.


The real effort required to create such a virus is not clear, but I always wonder why governments have never made an effort to make humanity more resilient to attacks of this kind (natural or artificial) through education. The incredible thing about viruses is that if you have a disciplined population that stays home as much as possible and avoids unnecessary contact during an epidemic, you can do wonders at containing the event. But for some reason we are not prepared at all to respond rationally to such an event.
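
A toy SIR-style sketch (assumed parameters, not a real forecast) of why that kind of contact discipline matters: cutting contacts enough to push the effective reproduction number to 1 or below turns exponential growth into a stalled outbreak.

    # Toy discrete-time SIR model with made-up parameters, only to illustrate that
    # reducing contacts (lowering beta) can push an epidemic from growth to decline.
    def simulate(beta, gamma=0.1, days=120, n=1_000_000, i0=100):
        s, i = n - i0, float(i0)
        peak = i
        for _ in range(days):
            new_infections = beta * s * i / n
            recoveries = gamma * i
            s -= new_infections
            i += new_infections - recoveries
            peak = max(peak, i)
        return peak

    # Assumed baseline: R0 = beta/gamma = 2.5. Halving contacts gives R_eff ~ 1.25;
    # cutting them by 60% gives R_eff ~ 1.0 and the outbreak stops growing.
    for label, beta in [("normal contact", 0.25), ("half contact", 0.125), ("40% contact", 0.10)]:
        print(f"{label:>15}: peak simultaneously infected within 120 days ~ {simulate(beta):,.0f}")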


"But another possibility is that we engineer the perfect happiness drug, with no bad side effects, and no one wants to do anything but lay in bed and take this drug all day, sapping all ambition from the human race."

That would be a pretty bad side effect in itself. There are some pretty nice drugs that approximate what you say, yet as a population we mostly get on with our lives.

It's not in our nature to be satisfied with any persistent state - positive, negative or neutral. So as a tail risk worth worrying about I'd put this in your terrorism category.


Is this satire? Why would you start a post with a description of why the remainder of said post is pointless fearmongering?

There's plenty of dangerous technology out there, right now. We need not conjure superviruses. The only thing saving us is, as usual, the incompetence and scarcity of those who actually want to cause harm on a big scale. Don't think for a second that it's the security theatre that keeps the numbers down, or the lack of weapons of mass destruction.


I'm personally kind of amazed it hasn't already happened, either through intentional (mis-)tinkering or natural mutation. The reason I'm surprised is air travel. People fly everywhere, and anything that appears that is easily transmissible ought to spread like wildfire.

It means one of several things:

(1) Humans are more resilient against plagues than we think.

(2) It's harder to produce a super-disease than we think, so it's a very rare event.

(3) We've just been lucky as hell.


Would it be possible to defend against these kinds of attacks by creating (and spreading) less lethal forms of the killer virus (assuming a general capability to design viruses but not the immune system), bringing overall lethality down (similarly to how MRSA spreads less when natural competitors are present)? This would solve the offensive biotech / defensive biotech dilemma to a degree.


> Based on current data, you are about 35,000 times more likely to die from heart disease than from a terrorist attack.

With that kind of logic we can get anywhere. For example, you are way more likely to die after breathing air than after 'breathing' water; it just takes, say, 75 years on average.

Comparing this with other non-natural death causes, such as murder, would be much more fair.


If we're talking about decisions of the type "should I do something to protect myself against X" or "should my government invest $$$ to protect us all against X" or "should we sacrifice Y to gain protection from X" - then in all these scenarios it makes perfect sense to compare heart attacks with terrorist attacks.

You can protect and save orders of magnitude more lives by focusing on the real threats and ignoring terrorists.
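
As a sketch of the shape of that comparison (both figures below are placeholders, not the article's data), the multiple is just one annual death toll divided by the other across the same population:

    # Placeholder numbers only, to show the shape of the comparison; neither figure
    # is taken from the article.
    population = 320_000_000
    heart_disease_deaths_per_year = 600_000    # placeholder, roughly the right order of magnitude for the US
    terrorism_deaths_per_year = 20             # placeholder for a typical recent year

    heart_risk = heart_disease_deaths_per_year / population
    terror_risk = terrorism_deaths_per_year / population
    print(f"Relative risk: {heart_risk / terror_risk:,.0f}x")
    # 600,000 / 20 = 30,000x with these placeholders; the exact multiple depends entirely
    # on which years and populations you pick, which is why such figures vary so much.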


Unlike, say, the Homebrew Club, similar hackers exploring viruses could accidentally release something unintentionally. All it takes is one oops. Looking at the early computer days, security was never a primary or even a secondary issue, if it was considered at all. Yet amazing things were created. The difference is that one is local while the other can spread with no control...


Speaking of the irrationality of fears: people are so obsessively frightened by the idea of 'mad scientists' doing weird genetic experiments that go wrong, and at the same time completely ignore the far more realistic horror stories that are almost imminent, like bacteria becoming widely immune to all known antibiotics...


    > But another possibility is that we engineer the perfect
    > happiness drug, with no bad side effects, and no one wants
    > to do anything but lay in bed and take this drug all day,
    > sapping all ambition from the human race.
How does this compare to properly administered medical-grade morphine?


Nathan Myhrvold makes nearly the same point with lots more detail in his Strategic Terrorism paper. http://www.lawfareblog.com/wp-content/uploads/2013/07/Strate...


If you're interested in more, start with the first time people got concerned about this problem, nearly 40 years ago: http://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombin...


This area is just one that Bill Joy covers in http://www.wired.com/wired/archive/8.04/joy_pr.html Why the Future Doesn't Need Us, which is my favorite essay on lots of these ideas.


Technical impossibility aside, I take issue with the whole assumption that there exists a Hollywood-movie "bad guy" who would take it upon himself to create a killer virus.

Yes, there are terrorists who do not shirk from mass murder to achieve their goals, e.g. flying planes into the heart of the enemy's military and financial centers. But creating a virus that respects no religion or national boundary and will kill everybody it touches serves no rational goal.

The only reason for somebody to do this would be if the extermination of the human race was the actual goal, and that's just so far-out that even certified nutcases like Japanese subway sarin attackers Aum Shinrikyo would blanch. The ideology of these groups is invariably that, while the rest of the human race may be doomed, they are the Chosen Ones that will survive, but viruses don't play dat. It would thus only make a smidgen of sense to do this if you had an absolutely solid antidote/vaccine that would ensure that your group can survive the onslaught... and if that exists, the rest of humanity can develop one as well.


I absolutely agree. And I believe governments nowadays are doing too little to prepare us for a potential epidemic of a lethal virus. In my opinion a reasonable measure would be an emergency plan which (in a matter of a few days) can provide all households with enough food for a month-long curfew.


  hacking our bodies will likely be more powerful than hacking bits
On some level we've been doing this for thousands of years. It is a matter of time before we take it to the next level, and there is a lot of interest there.


This is unrelated to the topic at hand, but Sam Altman's blog has, at some point in the past week, been blocked by the corporate firewall I'm behind (large international bank). How long till they block HN, I wonder.


If you're interested in building risk-based applications, come talk to me about Pulse OS and Riskpulse - http://riskpulse.com/offerings/


"Based on current data, you are about 35,000 times more likely to die from heart disease than from a terrorist attack."

Could we please get a link / source to this current data?


Given the history of weaponization, it would be illogical to worry about 'bad guys'; instead we should be worried about the US Government.


Nuclear annihilation didn't happen because a few countries showed some restraint. In the coming decades, the capability of manufacturing deadly agents will shift to individual people. In fact, one could argue this is already happening. So our future will most likely be shaped by the behavior of billions of people, most of whom could take a serious stab at causing mass casualties if they actually wanted to. To top it off, this shift of capabilities to the individual carries a small but non-negligible chance of ending our civilization outright. This is absolutely something to worry about.


It's probably time to give individuals sovereignty similar to that of nations. What will likely occur, as with nuclear proliferation, is that individuals will realize that the only way to achieve that kind of sovereignty is by developing these sorts of technologies.

Congrats to the repressive nations who are incentivizing the development of these technologies by being oppressive.



