
I read this paper (it's only a few pages, completely non-technical).

The entire paper assumes we get a certain type of communication and, in the course of decoding it, destroy humanity. It would make an interesting screenplay, but the paper presents no proof whatsoever that "message decontamination is impossible."

At best, the paper presents one case and outlines one possible ending, which is the end of humanity. Kinda interesting, but not nearly as universal as the title claims.




> The entire paper assumes we get a certain type of communication and in the course of decoding it destroy humanity.

"Assume" is a bit of an odd way of putting it. The paper is a discussion of the risks of receiving a certain type of message.

> but the paper presents no proof whatsoever that "message decontamination is impossible."

That's a bit harsh. The paper provides several arguments which I find quite solid. Which of them did you find unconvincing? The existence of messages that can't be parsed by hand? The risk of manipulative AI running on an air-gapped computer? The difficulty of containing a message given that we can't turn off the source? The difficulty of containing knowledge/information over longer time scales?

I didn't think there was much particularly new in that article. It's mostly a re-hashing of other people's arguments. The most interesting point the article made is this conclusion:

"As we realize that some message types are potentially dangerous, we can adapt our own peaceful transmissions accordingly. We should certainly not transmit any code. Instead, a plain text encyclopedia (Heidmann 1993), images, music etc. in a simple format are adequate. No advanced computer should be required to decrypt our message."


The way I read the title was: \forall messages M, \forall decontaminating functions F, F does not decontaminate M.

I think what the paper actually shows is \exists message M, s.t. \forall decontaminating functions F, F does not decontaminate M.

In fact, we know the title (as I interpreted it) is false - the paper gives an example of a message that can be read as plain text which can be truly decontaminated.

Basically, when we get a message, we'll be able to immediately tell whether we can decontaminate it or not. If not, the paper argues we're doomed because eventually it'll leak. I just don't find that to be very interesting, and definitely not as strong a claim as the title (as I interpreted it).


> when we get a message, we'll be able to immediately tell whether we can decontaminate it or not

I don't think this is true (at least not "immediately").

The paper is arguing that we will need computers to help decode any non-trivial message, and any computer program capable of decoding a non-trivial message will be Turing complete, and hence capable of running virus code that could have damaging or destructive effects. (The paper's example was a message containing equations in LaTeX, which is Turing complete, hence an ETI could embed a virus in LaTeX code, and humans might not be able to spot it.)
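To make the Turing-completeness point concrete, here is a toy sketch of my own (not from the paper): TeX/LaTeX macros can branch and recurse, so "typesetting" a document is really running a program supplied by whoever wrote it.

    \documentclass{article}
    \begin{document}
    % Toy sketch (not from the paper): the macro layer can loop and branch,
    % i.e. LaTeX is a programming language, not just markup.
    \newcounter{n}
    \newcommand{\countdown}{%
      \ifnum\value{n}>0
        \arabic{n}~%           typeset the current value
        \addtocounter{n}{-1}%  decrement the counter
        \countdown%            recurse: unbounded iteration
      \fi}
    \setcounter{n}{5}
    \countdown % typesets: 5 4 3 2 1
    \end{document}

The worry, as I read the paper, is that any decoder rich enough to handle a real alien message gives the sender this kind of computational foothold, and a sufficiently clever sender could hide much more than a countdown in it.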

The strategy the paper is advocating, as far as I can see, is that, unless the message is simple enough that humans can decode it without computer help, and thereby check directly whether it contains any dangerous information, the message should be destroyed. I suppose this could count as "we can't tell whether we can decontaminate it", but I'm not sure that's what you meant.


I suppose one option is to stick a computer, not networked, on a small spacecraft along with a single astronaut and pop them into LEO. Run the code, and have the astronaut tell us what it says or does. If something goes wrong, the whole thing deorbits. If we’re really paranoid we could send it well past LEO.

Edit: If we were incredibly paranoid and had some self-sacrificing astronauts it could be a one-way trip to Mercury.


It sounds like you've never heard of the AI Box Experiment[0]. I personally don't believe that there is any way to safely do isolation while still maintaining the potential to transmit the message.

[0] http://yudkowsky.net/singularity/aibox/


This is actually the first time I've heard of this. Interesting, but I find it hard to believe the gatekeeper wouldn't just decide to turn into an obstinate 3-year-old and thus win the challenge :)


What if we let the potentially malicious AI of extraterrestrial origin be managed by a trusted AI of our own manufacture, one that is provably unable to be swayed by the kinds of arguments commonly said to be able to corrupt a human?

Just speculating, without knowing how we could get to such a predicament.


1. How would you go about creating an AI?

2. How would you identify that what you've created is an AI?

3. Under what circumstances would you decide to trust the AI that you've created?

4. Under what circumstances would you be confident enough to feed a potentially malicious, potential AI to your trusted AI?


We could restrict the astronaut to using Morse code, and have literally no other data link back to Earth. The astronaut can be immobilized except for one finger used to key the Morse code. The ship can be little more than a ballistic missile set to run a set number of orbits before burning in the atmosphere. There is no way out of that box, even if we want to let it out.


There's no way for you to get the astronaut out of the box. But consider a hostile, superhuman AI. Who knows what it can do?

But more to the point, who knows what it can convince the astronaut to do? Can it convince the astronaut to transmit a hostile message back to Earth, suborning the receiving computer, and thereby letting the AI out of the box?

In the AI alignment field, this is called the "value drift" problem.


There's no way for you to get the astronaut out of the box.

I may have been unclear: the astronaut is never intended to leave the box; he burns with the box. The astronaut is paralyzed save for his tapping finger, and on a doomed one-way trip.


Yes, I understood. The problem is that you can't depend on the astronaut to stay on your side. Either the AI can convince them that the AI can save their life only if they let it out of the box, or it can convince them that, since they will die anyway, they might as well let the AI out of the box.


Things like artificial intelligences can be transferred via Morse code.


Yes, but not through whatever a human can tap out in a relatively short period of time. Besides, you don't need a computer to receive the signal, just a speaker and a guy with a pen and paper.


Isn’t that literally the plot of Life? That didn’t end up well....


Personally, I find the idea that "humans could be bargained into a poor position" to be fairly weak.

It's certainly possible, but negotiations with hidden agendas have been part and parcel of humanity for all time, and are a major basis for all foreign (and most internal) affairs.

The paper presents no predicament that China couldn't put the US in tomorrow with a suitably (apparently) one-sided deal, and yet countries don't as a rule isolate themselves to avoid hearing potentially deadly trade offers.


> I find the idea that "humans could be bargained into a poor position" to be fairly weak

That's because you're assuming that whatever ETI or AI we are bargaining with has roughly our intelligence. That is what has made it possible for humans to negotiate with each other instead of isolating themselves to avoid hearing potentially deadly offers.

But any ETI or AI capable of sending us a complex message will probably be much more intelligent than we are. In that case, we would basically be in the same position as, say, a dog trying to negotiate with humans; we simply would not even be able to comprehend what the other side was doing or thinking.


No, I'm not, I'm saying that magic intelligence doesn't change the rules of the game.

If I were negotiating with an alien intelligence, I would assume that it could think circles around me, so I wouldn't be about to accept any bargain that I couldn't - fully - think through the ramifications of. I'm not going to accept complicated bargains, only extremely simple ones. This would most probably result in making no bargains at all. So be it.

I reject the notion that a being capable of reason, holding all the cards, is unable to create an unwinnable state for its opponent, when inaction is a possibility. The most intelligent computer in the universe can't beat you at chess if you don't play.


How would you know you’re playing chess, or even that you’re negotiating?

Any sufficiently advanced alien invasion could be indistinguishable from local politics.

I think the paper fails because it assumes a magical hypersmart AI would arrive as a message at a radio telescope.

But if you’re going to magically assume a magical hypersmart AI, you may as well assume it can get here magically without being noticed.


I wonder, from an anthropological view, how these discussions relate to those of the past about demons.


Agreed, there are a million more convincing ways an alien intelligence could take over, most likely - as you say - without us ever noticing.

I'm just jumping off from the particular assumption that the paper makes.


> I reject the notion that a being capable of reason, holding all the cards, is unable to create an unwinnable state for its opponent, when inaction is a possibility.

If this is true for you, it's also true for the AI. Which would be a contradiction, similar to the old classic irresistible force meeting immovable object. So the notion you're rejecting is the only consistent possibility.


No, on two counts.

Firstly, we begin with a power imbalance, we're not on a level playing field.

Secondly, there's nothing inconsistent about both parties being able to create an unwinnable state for the other. As I stated earlier in my comment, the most likely outcome is no dealing whatsoever. The only rational alternative is an agreement whose effects we can both perfectly foresee.



Monty Python solved the Fermi paradox!


That was my immediate reaction as well: "decontamination is impossible"? Um, why?

I deal with 'unusual' payloads all the time and I've never seen anything that has this potential. I don't see how anything could be built to be so without obviously being so.

E.g., the progression would go from something being some form of plain text to it being encoded in some way, which is very, very likely.

On the extreme end, say something is sent as machine code. Firstly, this is very odd and suspicious for a lot of reasons, but even still you can run untrusted machine code in a very safe way. Assume it's resistant to existing types of disassembly for whatever reason. Let's say you want to take an absurd level of paranoia (which, really, is justified if an alien civilization sends us machine code): run it on an entirely physically disconnected (battery-powered) machine inside a Faraday cage. If the code still won't work (ERR: must be connected to the internet to proceed) you can basically assume it's malicious.
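One software-level complement to the physical air gap (my own sketch, not something the paper proposes) is to execute the untrusted machine code inside a CPU emulator such as Unicorn, where no operating system, devices, or network exist for it to touch:

    # Minimal sketch using the Unicorn CPU emulator (pip install unicorn).
    # The untrusted bytes run on an emulated CPU with nothing but raw memory:
    # no OS, no syscalls, no devices, no network to reach out to.
    from unicorn import Uc, UC_ARCH_X86, UC_MODE_64
    from unicorn.x86_const import UC_X86_REG_RAX

    ADDRESS = 0x1000                            # where we map the payload
    code = b"\x48\xc7\xc0\x2a\x00\x00\x00"      # stand-in payload: mov rax, 42

    mu = Uc(UC_ARCH_X86, UC_MODE_64)
    mu.mem_map(ADDRESS, 0x1000)                 # 4 KiB of emulated memory
    mu.mem_write(ADDRESS, code)
    mu.emu_start(ADDRESS, ADDRESS + len(code))  # run to the end of the payload
    print(mu.reg_read(UC_X86_REG_RAX))          # inspect the result: prints 42

Of course, this only contains what the code does, not what its output might persuade us to do, which is the paper's real worry.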

And, after all that, there are far easier ways to kill us all. Send the plans for a massive superweapon, but only send them to the USA (or just to Russia, or just to China), and send the other side a message telling them what you did. The snubbed nation may feel the need to make a preemptive attack before the other nation can develop the technology.

Send plans for an ultimate bioweapon to the right/wrong people.


I think you both might be confusing the medium with the message. Even if decoding of the message can be done safely, the content of the message might be the danger itself.

The author is suggesting that the message contains the code for a true AI, and makes the point that it might be able to bargain by offering humans information they want or need in exchange. The paper goes on to reason that eventually knowledge of this AI will become public and, due to human nature, it will eventually be let free.

Disconnection from infrastructure is not enough to make the message safe.

Given that time is not a factor for the AI, this does not seem far-fetched. At least in the context of this paper.


Reminds me of His Master's Voice by Lem. https://en.wikipedia.org/wiki/His_Master%27s_Voice_(novel)

It goes beyond the medium.

Humanity could receive a schematic for an extinction device and wouldn't know it until it's turned on. How long would we last before we caved? We could project any potential use for some mysterious alien tech; debate and rationalize at every moment of weakness. Could it cure a plague or teach us even more? Someone could build it in secret.


Greasemonkey gerbil

Looking at the mousetrap

Interesting device

It sees the spring and smells the bait

Understands everything - except the connection


> run it on an entirely physically disconnected (battery-powered) machine inside a Faraday cage

What if decompression produced an alien message hundreds of gigabytes in length? Is that file completely safe to transmit to un-airgapped computers?

Would you bet all of human civilization that it's safe? No way any part of it would produce any negative effect on a human reading it?

> Send the plans for a massive superweapon but only send them to the USA

Laser beams sent over interstellar distances will be hundreds of thousands of kilometres wide at the destination.

The threat model for a hostile probe inside our solar system would be rather different, of course.


"This is a national security issue that must be treated with the utmost care. Can you analyze and decode this message we received from Alpha Centauri 4?"

[Time passes]

"By Jove, I think we've cracked it, Sir. It's essentially Huffman encoded and contains a mechanism for repeating redundant strings. Let me see if I can decode the entire message."

[Time passes]

"Shit, my computer crashed."


I came up with an idea as a fantasy world-building exercise which included a confusing book of spells: if someone were clever enough to decode the central theme of the book, they would have performed the mental gymnastics necessary to cast a spell that would erase every possible future in which they did not become a supreme evil.

But here the basic premise is that merely thinking the right thoughts can physically change reality. Which is 'sort-of' true for real brains, but there's an important layer of indirection that makes it not work the way I spelled out above.


It's important to keep in mind that the barrier to entry to publishing on arXiv is that you can produce a well-formed LaTeX document with coherent paragraphs. There's no reason to believe a claim _just_ because it's on arXiv.


Note how they wrote of a ‘proof’ of the famously unproven Riemann Zeta hypothesis and proceeded to merely state it (in a form that might even be compatible with the proven Euler Zeta hypothesis, the ungeneralised case where s is a real number and not a complex number).



In the paper, they would nuke the hypothetical moon base, although sadly not from orbit.


Wasn't Arrival about this?


Nope.

<spoilers>It's about two things. First, an alien race that gives humanity a "weapon", which is their language. Anyone who learns the language starts to perceive time in a non-linear (probably deterministic) fashion, i.e. flashbacks and flashforwards. They did this so that humanity would save them sometime in the future from something unknown to modern-day humans. And second, it's about how a translator lives through the life and death of her child while understanding said language. She makes the choice to conceive her child even though she knows the child will die in the future, because it is worth it to her, given how much she loves her daughter.</spoilers>


Yes, that is what happens, but the whole language was like a virus that turned people into p-zombies.


That is a very good summary of the movie. I've watched it 3 times and love it more each time.


Did you read the short story? IMO, it's better.



Not exactly; Arrival is more about how learning a new language can change the way you think.

If you want a fiction novel that IS very similar to this, I recommend reading The Three-Body Problem.


That is good. Earlier, as EvanAnderson notes, there was the Blight in Vernor Vinge's A Fire Upon the Deep. Also the malware attack on the Ceres colony in The Killing Star, by Charles Pellegrino and George Zebrowski.


Snow Crash was closer, if anything.


I read both the short story and saw the film, and I didn't get that from it.



