Hacker News
Interstellar communication. IX. Message decontamination is impossible (arxiv.org)
107 points by mkhalil on March 4, 2018 | 123 comments



I read this paper (it's only a few pages, completely non-technical).

The entire paper assumes we get a certain type of communication and in the course of decoding it, destroy humanity. It would make an interesting screenplay, but the paper presents no proof whatsoever that "message decontamination is impossible."

At best, the paper presents one case and outlines one possible ending, which is the end of humanity. Kinda interesting, but not nearly as universal as the title claims.


> The entire paper assumes we get a certain type of communication and in the course of decoding it destroy humanity.

"Assume" is a bit of an odd way of putting it. The paper is a discussion of the risks of receiving a certain type of message.

> but the paper presents no proof whatsoever that "message decontamination is impossible."

That's a bit harsh. The paper provides several arguments which I find quite solid. Which of them did you find unconvincing? The existence of messages that can't be parsed by hand? The risk of manipulative AI running on an air-gapped computer? The difficulty of containing a message given that we can't turn off the source? The difficulty of containing knowledge/information over longer time scales?

I didn't think there was much particularly new in that article. It's mostly a re-hashing of other people's arguments. The most interesting point the article made is this conclusion:

"As we realize that some message types are potentially dangerous, we can adapt our own peaceful transmissions accordingly. We should certainly not transmit any code. Instead, a plain text encyclopedia (Heidmann 1993), images, music etc. in a simple format are adequate. No advanced computer should be required to decrypt our message."


The way I read the title was: \forall messages M, \forall decontaminating functions F, F does not decontaminate M.

I think what the paper actually shows is \exists message M, s.t. \forall decontaminating functions F, F does not decontaminate M.

In fact, we know the title (as I interpreted it) is false - the paper gives an example of a message that can be read as plain text which can be truly decontaminated.

Basically, when we get a message, we'll be able to immediately tell whether we can decontaminate it or not. If not, the paper argues we're doomed because eventually it'll leak. I just don't find that to be very interesting, and definitely not as strong a claim as the title (as I interpreted it).


> when we get a message, we'll be able to immediately tell whether we can decontaminate it or not

I don't think this is true (at least not "immediately").

The paper is arguing that we will need computers to help decode any non-trivial message, and any computer program capable of decoding a non-trivial message will be Turing complete, and hence capable of running virus code that could have damaging or destructive effects. (The paper's example was a message containing equations in LaTeX, which is Turing complete, hence an ETI could embed a virus in LaTeX code, and humans might not be able to spot it.)

The strategy the paper is advocating, as far as I can see, is that, unless the message is simple enough that humans can decode it without computer help, and thereby check directly whether it contains any dangerous information, the message should be destroyed. I suppose this could count as "we can't tell whether we can decontaminate it", but I'm not sure that's what you meant.


I suppose one option is to stick a computer, not networked, on a small spacecraft along with a single astronaut and pop them into LEO. Run the code, and have the astronaut tell us what it says or does. If something goes wrong, the whole thing deorbits. If we’re really paranoid we could send it well past LEO.

Edit: If we were incredibly paranoid and had some self-sacrificing astronauts it could be a one-way trip to Mercury.


It sounds like you've never heard of the AI Box Experiment[0]. I personally don't believe that there is any way to safely do isolation while still maintaining the potential to transmit the message.

[0] http://yudkowsky.net/singularity/aibox/


This is actually the first time I've heard of this. Interesting, but I find it hard to believe the gatekeeper wouldn't just decide to turn into an obstinate 3-year-old and thus win the challenge :)


What if we let the potentially malicious AI of extraterrestrial origin be managed by a trusted AI of our own manufacture, one proven to be immune to the kinds of arguments commonly thought capable of corrupting a human?

Just speculating, without knowing how we could get to such a predicament.


1. How would you go about creating an AI?

2. How would you identify that what you've created is an AI?

3. Under what circumstances would you decide to trust the AI that you've created?

4. Under what circumstances would you be confident enough to feed a potentially malicious AI to your trusted AI?


We could restrict the astronaut to using Morse code, and have literally no other data link back to Earth. The astronaut can be immobilized except for one finger used to key the Morse code. The ship can be little more than a ballistic missile set to run a set number of orbits before burning in the atmosphere. There is no way out of that box, even if we want to let it out.


There's no way for you to get the astronaut out of the box. But consider a hostile, superhuman AI. Who knows what it can do?

But more to the point, who knows what it can convince the astronaut to do? Can it convince the astronaut to transmit a hostile message back to Earth, suborning the receiving computer, and thereby letting the AI out of the box?

In the AI alignment field, this is called the "value drift" problem.


> There's no way for you to get the astronaut out of the box.

I may have been unclear: the astronaut is never intended to leave the box, he burns with the box. The astronaut is paralyzed save for his tapping finger, and on a doomed one way trip.


Yes, I understood. The problem is that you can't depend on the astronaut to stay on your side. Either the AI can convince them that the AI can save their life only if they let it out of the box, or it can convince them that, since they will die anyway, they might as well let the AI out of the box.


Things like artificial intelligences can be transferred via Morse code.


Yes, but not through whatever a human can tap out in a relatively short period of time. Besides, you don’t need a computer to receive the signal, just a speaker and a guy with a pen and paper.


Isn’t that literally the plot of Life? That didn’t end up well....


Personally, I find the idea that "humans could be bargained into a poor position" to be fairly weak.

It's certainly possible, but negotiations with hidden agendas have been part and parcel of humanity for all time, and are a major basis for all foreign (and most internal) affairs.

The paper presents no predicament that China couldn't put the US in tomorrow with a suitably (apparently) one-sided deal, and yet countries don't as a rule isolate themselves to avoid hearing potentially deadly trade offers.


> I find the idea that "humans could be bargained into a poor position" to be fairly weak

That's because you're assuming that whatever ETI or AI we are bargaining with has roughly our intelligence. That is what has made it possible for humans to negotiate with each other instead of isolating themselves to avoid hearing potentially deadly offers.

But any ETI or AI capable of sending us a complex message will probably be much more intelligent than we are. In that case, we would basically be in the same position as, say, a dog trying to negotiate with humans; we simply would not even be able to comprehend what the other side was doing or thinking.


No, I'm not, I'm saying that magic intelligence doesn't change the rules of the game.

If I were negotiating with an alien intelligence, I would assume that it could think circles around me, so I wouldn't be about to accept any bargain that I couldn't - fully - think through the ramifications of. I'm not going to accept complicated bargains, only extremely simple ones. This would most probably result in making no bargains at all. So be it.

I reject the notion that a being capable of reason, holding all the cards, is unable to create an unwinnable state for its opponent, when inaction is a possibility. The most intelligent computer in the universe can't beat you at chess if you don't play.


How would you know you’re playing chess, or even that you’re negotiating?

Any sufficiently advanced alien invasion could be indistinguishable from local politics.

I think the paper fails because it assumes a magical hypersmart AI would arrive as a message at a radio telescope.

But if you’re going to magically assume a magical hypersmart AI, you may as well assume it can get here magically without being noticed.


I wonder, from an anthropological view, how these discussions relate to those of the past about demons.


Agreed, there are a million more convincing ways an alien intelligence could take over, most likely - as you say - without us ever noticing.

I'm just jumping off from the particular assumption that the paper makes.


> I reject the notion that a being capable of reason, holding all the cards, is unable to create an unwinnable state for its opponent, when inaction is a possibility.

If this is true for you, it's also true for the AI. Which would be a contradiction, similar to the old classic irresistible force meeting immovable object. So the notion you're rejecting is the only consistent possibility.


No, on two counts.

Firstly, we begin with a power imbalance, we're not on a level playing field.

Secondly, there's nothing inconsistent about both parties being able to create an unwinnable state for the other. As I stated earlier in my comment, the most likely outcome is no dealing whatsoever. The only rational alternative is an agreement whose effects we can both perfectly foresee.



Monty Python solved the Fermi paradox!


That was my immediate reaction as well: "decontamination is impossible"? Um, why?

I deal with 'unusual' payloads all the time, and I've never seen anything that has this potential. I don't see how anything could be built to be so dangerous without obviously being so.

E.g., the progression would go from something being some form of plain text to it being encoded in some way, which is very, very likely.

On the extreme end, say something is sent as machine code. Firstly, this is very odd and suspicious for a lot of reasons, but even still you can run untrusted machine code in a very safe way. Assume it's resistant to existing types of disassembly for whatever reason. Let's say you want to take an absurd level of paranoia (which, really, is justified if an alien civilization sends us machine code): run it on an entirely physically disconnected (battery-powered) machine inside a Faraday cage. If the code still won't work (ERR: must be connected to the internet to proceed) you can basically assume it's malicious.

And, after all that, there are far easier ways to kill us all. Send the plans for a massive superweapon to only the USA (or just to Russia, or just to China), then send the other side a message telling them what you did. The snubbed nation may feel the need to make a preemptive attack before the other nation can develop the technology.

Send plans for an ultimate bioweapon to the right/wrong people.


I think you both might be confusing the medium with the message. Even if decoding of the message can be done safely, the content of the message might be the danger itself.

The author is suggesting that the message contains the code for a true AI and makes the point that it might be able to bargain by offering humans information they want or need. The paper goes on to reason that knowledge of this AI will eventually become public and, due to human nature, it will eventually be let free.

Disconnection from infrastructure is not enough to make the message safe.

Given that time is not a factor for the AI, this does not seem far-fetched. At least in the context of this paper.


Reminds me of His Master's Voice by Lem. https://en.wikipedia.org/wiki/His_Master%27s_Voice_(novel)

It goes beyond the medium.

Humanity could receive a schematic for an extinction device and wouldn't know it until it's turned on. How long would we last before we caved? We could project any potential use for some mysterious alien tech; debate and rationalize at every moment of weakness. Could it cure a plague or teach us even more? Someone could build it in secret.


Greasemonkey gerbil

Looking at the mousetrap

Interesting device

It sees the spring and smells the bait

Understands everything -except the connection


> run it on an entirely physically disconnected (battery-powered) machine inside a Faraday cage

What if decompression produced an alien message hundreds of gigabytes in length? Is that file completely safe to transmit to un-airgapped computers?

Would you bet all of human civilization that it's safe? No way any part of it would produce any negative effect on a human reading it?

> Send the plans for a massive superweapon to only the USA

Laser beams sent over interstellar distances will be hundreds of thousands of kilometres wide at the destination.

The threat model for a hostile probe inside our solar system would be rather different, of course.


"This is a national security issue that must be treated with the utmost care. Can you analyze and decode this message we received from Alpha Centauri 4?"

[Time passes]

"By Jove, I think we've cracked it, Sir. It's essentially Huffman encoded and contains a mechanism for repeating redundant strings. Let me see if I can decode the entire message."

[Time passes]

"Shit, my computer crashed."


I came up with an idea as a fantasy world-building exercise which included a confusing book of spells: if someone were clever enough to decode the central theme of the book, they would have performed the mental gymnastics necessary to cast a spell which would erase every possible future in which they did not become a supreme evil.

But here the basic premise is that merely thinking the right thoughts can physically change reality. Which is 'sort-of' true for real brains, but there's an important layer of indirection that makes it not work the way I spelled out above.


It's important to keep in mind that the barrier to entry to publishing on arXiv is that you can produce a well-formed LaTeX document with coherent paragraphs. There's no reason to believe a claim _just_ because it's on arXiv.


Note how they wrote of a ‘proof’ of the famously unproven Riemann Zeta hypothesis and proceeded to merely state it (in a form that might even be compatible with the proven Euler Zeta hypothesis, the ungeneralised case where s is a real number and not a complex number).



In the paper, they would nuke the hypothetical moon base, although sadly not from orbit.


Wasn't Arrival about this?


Nope.

<spoilers>It's about two things. First, an alien race that gives humanity a "weapon" which is their language. Anyone who learns the language starts to perceive time in a non-linear (probably deterministic) fashion, i.e. flashbacks and flashforwards. They did this so that humanity would save them sometime in the future from something unknown to modern-day humans. And second, it's about how a translator lives through the life and death of her child while understanding said language. She makes the choice of conceiving her child even though she knew the child was going to die in the future, because it was worth it to her because of how much she loved her daughter.</spoilers>


Yes, that is what happens, but the whole language was like a virus that turned people into p-zombies.


That is a very good summary of the movie. I've watched it 3 times and love it more each time.


Did you read the short story? IMO, it's better.



Not exactly, Arrival is more about how learning a new language can change the way you think.

If you want a fiction novel that IS very similar to this, I recommend reading The Three-Body Problem.


That is good. Earlier, as EvanAnderson notes, there was the Blight in Vernor Vinge's A Fire Upon the Deep. Also the malware attack on the Ceres colony in The Killing Star, by Charles Pellegrino and George Zebrowski.


Snow Crash was closer, if anything.


I read both the short story and saw the film, and I didn't get that from it.


The paper says its focus is on all ET communications, but it instead focuses on malicious AI code.

Secure isolation review of code is dismissed because AIs can trick people into freeing them.

Finding states: "[M]essage cannot be decontaminated with certainty, and technical risks remain which can pose an existential threat. Complex messages would need to be destroyed in the risk averse case."

Couldn't we simply agree not to execute un-audited code from unknown third parties? Seems like the same threat vector as unknown executable code delivered over email to me.


I immediately thought of Vernor Vinge's "A Fire Upon The Deep" (https://en.wikipedia.org/wiki/A_Fire_Upon_the_Deep). Half of the plot is an AI tricking people into freeing it and the subsequent havoc it wreaks.


It is indeed a devious AI.


> Couldn't we simply agree not to execute un-audited code from unknown third parties? Seems like the same threat vector as unknown executable code delivered over email to me.

There are a couple of problems with this.

First, limiting the spread and ensuring that nobody executes the message is non-trivial (especially given that the current protocols dictate "These recordings should be made available to the international institutions listed above and to members of the scientific community for further objective analysis and interpretation"). Also, unless all instances of the message were destroyed AND transmission ceased permanently, humanity is left with a literal Pandora's Box. It's hard to imagine that this box wouldn't be opened eventually by someone with more curiosity than caution.

Second, there are messages that cannot be safely audited without running them. A self-bootstrapping decompressor/compiler that modifies its own code as it parses itself into existence could be impossible to evaluate. Committing ourselves to never executing un-audited code means accepting there are certain types of messages we would never understand.

So while your solution is a possibility, it is not an easy one and it is not one without costs. Maybe I'm not risk-averse enough, but I (like the authors) think the potential gains of executing ET code outweigh the associated risks.
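To make the second point concrete, here is a minimal toy sketch (my own, not from the paper; all names and data are invented) of a two-stage message: the payload is opaque until you actually execute the decoder that ships with it, so auditing it without running it amounts to hand-simulating an arbitrary program.

    # Toy two-stage message: a decoder program plus an opaque payload.
    # The payload only becomes meaningful after the decoder has been run.
    decoder_src = '''
    def decode(blob):
        # Nothing stops this stage from doing arbitrary computation first.
        key = sum(range(1000)) % 256   # stand-in for "real work"; equals 44
        return bytes(b ^ key for b in blob).decode("ascii")
    '''

    # The rest of the "message": meaningless bytes until decode() runs over them.
    opaque_payload = bytes(c ^ 44 for c in b"GREETINGS")

    namespace = {}
    exec(decoder_src, namespace)                # the decoder only exists at runtime
    print(namespace["decode"](opaque_payload))  # -> GREETINGS

Scale the decoder up to something genuinely complex and self-modifying, and the only honest answer to "what does the payload say?" is "run it and see".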


I think the epidemics attributed to European explorers encountering the Americas are literally an example of unleashing unaudited alien codes on people.

I used to read old Analog magazines in high school, and I distinctly recall a short story about people picking up a transmission with a radio telescope, which told them how to build a computer - basically a form of alien invasion based on the fact that radio transmissions are the only practical way of interstellar travel.


I have something urgent I need to tell you. Your life could be in danger. You need to contact me immediately.


"Hello mister sentient being! I am the informational treasurer for the dying civilization of Nigerium, with access to great technological and cultural wealth that is about to be lost forever. I wish your help in establishing a vast database for our future descendents to recover our advancements, and would like to send you the entirety of our knowledge in a format that will be accessible to you and our great-great-grandspawn. All I need is an example of the informational structures used by your civilization, with a complete and robust list of all known vulnerabilities that may have an impact on the safety and integrity of our heritage."

The simplest of tricks have been proven to work again and again.


This civilization colonized half the galaxy thanks to this one weird trick. Execute the rest of the message to find out more.


"They're following this signal at 0.98c. You might be able to defeat them, but you must act immediately or face utter annihilation."


If they’re following at that velocity, then we’re already dead, and “they” are a relativistic bombardment we won’t even see coming.


Depends on how far away the signal came from. 10^9 light years away, and your .98c attackers won't be here for 20 million years. Even 10000 light years away, you have 200 years to play with.

Though it's likely pretty hard to get a signal 10000 light years, especially if you don't know the direction to send it.


Then the promise of safety is all the more tempting, isn't it?


Maybe, but if they’re sending us EM signals then we know we’re doomed. Physics as we know it, with c as a limit, strongly implies that it is simply impossible to defend against a relativistic strike.


I mean, that's an example. There are many ways they could try to trick us. Presumably we are talking about hyper-intelligent beings.


Or, more effective and sinister, appeal to one person in a position to run the code. “Want to be rich/immortal/powerful beyond human limits?”


Furthermore, that strategy doesn't have to be a one shot deal, where the message says "believe me" before it is proven trustworthy. You can have little pieces of information that demonstrate neat tricks, leading to progressively increasing trust until the full payload is unveiled, for better or worse.


>Couldn't we simply agree not to execute un-audited code from unknown third parties?

How exactly would we achieve that? How can we prevent every single human being on the planet from executing some malicious code coming from open space?


All ten of them that have the requisite equipment to pick up the message, yes.


> Couldn't we simply agree not to execute un-audited code from unknown third parties?

The ability of humans to abide by such a restriction is highly doubtful. Destroying the message would be much safer.


I've been fascinated by these AI containment/usage scenarios ever since I heard of them. Here's another perspective on the problem that I find very fascinating:

Assume that, beyond a shadow of a doubt, we have an impenetrable system for interfacing with the AI. I say system, because the people that are part of the protocol are all moral equivalents of Jesus. They can't be corrupted, they can't be blackmailed. The AI is kept in a box that cannot be accessed by anyone else. In short: we've solved the containment problem (of the AI).

Of course we _do_ want to use the AI. We've installed it in the box for a reason. If we didn't intend to use it we might as well not have built it. So actually using the tech that it gives us, after checking for corrupting technology, is also a given in this scenario.

We can ask the AI questions and interact with it, extract knowledge and have it solve our problems for us.

If we were to use the AI, it would still destroy us, even without corrupting any of the people working with it or feeding us corrupted technology. Simply _using_ the AI will be enough.

Using the AI effectively cheapens the creation of another AI. Each new processor we have the AI design will be faster than its predecessors. Every mathematical problem that it solves is checked, confirmed and shared in our universities. Eventually the technological over-saturation of our society will ensure that the contemporary equivalent of a mobile phone can run a strong AI. The ubiquitousness of the new insights will ensure that all the theory needed to bootstrap another AI is there for the taking. In short, truly _using_ the AI (asking the questions, sharing the knowledge) will ensure another AI existing outside of the containment system, malicious or otherwise. Taken to the extreme: if strong recursive AI is possible, it is a given.

I don't think strong self improving AI can actually exist. But who knows?


> Using the AI effectively cheapens the creation of another AI. Each new processor we have the AI design will be faster than its predecessors. [...] Eventually the technological over-saturation of our society will ensure that the contemporary equivalent of a mobile phone can run a strong AI.

These are extremely strong statements, which you cannot prove.


Well, we do have strong AI in a form factor the size of an average human's head. And that AI, while not superhuman in most cases, is not specially tuned to be intelligent for the sake of intelligence. It's only as intelligent as evolution 'needed' it to be to survive. I can't prove it, but I'm convinced that it is possible. You don't have to be convinced, but I hope you see why I state my case so confidently.


I also don't think strong self-improving AI can exist. Algorithms don't suddenly get less complex even if you're super smart (of course, if P=NP proves true, that's another story).

I would also like to point out that the morally pure, incorruptible interfaces to the AI would be incapable of validating whether any tech that came out of the AI was non-malicious. If you can't identify whether an arbitrary piece of software halts, you also can't identify whether it does something nasty after billions of instructions.

Also, an AI in a box would necessarily be dumb. It wouldn't have senses and external knowledge. I think the most effective intelligence will be intimately tied to a body (whether physical or virtual). But that's an opinion.


> Algorithms don't suddenly get less complex even if you're super smart

The question isn't complexity per se but speed. Suppose there were an AI that had the same general capacity for handling cognitive complexity as humans, but that ran at computer speeds--i.e., the time it takes the AI to have a single conscious thought or perception, which is roughly 100 milliseconds for humans, is, say, 1 nanosecond. That means the AI can think a hundred million thoughts in the time it takes us to think one. So it could potentially think through problems a hundred million times as fast as we do. It could make the same intellectual progress in one day that a human could make in a hundred million days, which is roughly 270,000 years.
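For reference, the arithmetic behind those figures: 100 ms / 1 ns = 10^-1 s / 10^-9 s = 10^8, and 10^8 days is roughly 2.7 x 10^5 years.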


What technology visible on the 50 year horizon can compute anything of significance in 1 nanosecond? Right now it's more like the opposite: 1 second of brain activity takes 10 million seconds to simulate, and we're probably doing it completely wrong.


> What technology visible on the 50 year horizon

Our 50 year horizon for technology might still be thousands or millions (or billions) of years behind the technology of an ETI.


Decoding the messages will probably mine ET-bitcoin for some galactic-scale speculators.


The report cites Bostrom~

But seriously, confined evaluation was figured out in the mid-90s by the E folks, building on years of research into capability security, and the worst thing that can happen is Turing-completeness.

I'd be more worried that the message contains a memetic hazard. Our computers wouldn't be infected; our minds would be infected.


...if such a thing actually exists.


I'm reading Infinity Born by Douglas E. Richards - it touches on many of the AGI (artificial general intelligence) concepts talked about here. The guy in the book creates an AGI and thinks he has it contained, but it evolves to the point where it gets out of its box. Good stuff, highly recommend.


Makes Stanislaw Lem seem even more predictive than usual (https://en.wikipedia.org/wiki/His_Master%27s_Voice_(novel))


I MUST refer to "Белая трость калибра 7,65" ("White cane of 7.65 caliber"): http://lib.ru/SOCFANT/NEFF/trost.txt (a Russian translation of a Czech sci-fi piece)

I read it in the USSR magazine "Technology for Youth" (Техника Молодёжи) a long time ago, around 1980. It is short and profound; for example, I later came across the human echolocation phenomenon and instantly remembered that piece. Our current human echolocators are about as good as the main hero of the piece above.

Given that, I think we are safe. ;)


The paper is just a wrapper for the AI box problem, nothing new. It doesn't even provide a very strong argument for the impossibility of decontamination, just a "proof" by example (not really a proof).


Here's a link to the AI box problem on Wikipedia for anyone like me who hadn't heard of it: https://en.wikipedia.org/wiki/AI_box

Also, this reminds me of the premise of the movie Ex Machina (no spoilers!) which is sweet! http://www.imdb.com/title/tt0470752/


Here's an interesting thought experiment. If you were going to transmit gigabytes of data to another star system with the intention that a technologically sophisticated society on the other end could decode and reassemble it, what parity and checksumming scheme would you use? For example, treat it like a very poor NNTP service and include 60% PAR2 files?

If you were going to use some form of lossless compression, what compressor would you choose that could be simply and quickly explained in a preamble series of data?
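On the parity question, here's a toy sketch of the general flavour (my own stand-in, much cruder than PAR2): split the payload into fixed-size blocks and append one XOR parity block per group, so the receiver can rebuild any single block that arrives garbled or not at all.

    # Crude forward-error-correction sketch: one XOR parity block per group of blocks.
    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for b in blocks:
            for i, byte in enumerate(b):
                out[i] ^= byte
        return bytes(out)

    def encode(payload, block_size=4, group=3):
        blocks = [payload[i:i + block_size].ljust(block_size, b"\x00")
                  for i in range(0, len(payload), block_size)]
        framed = []
        for g in range(0, len(blocks), group):
            chunk = blocks[g:g + group]
            framed.extend(chunk)
            framed.append(xor_blocks(chunk))   # parity block for this group
        return framed

    def recover(surviving_blocks, parity):
        # XORing the surviving blocks with the parity reproduces the lost block.
        return xor_blocks(surviving_blocks + [parity])

    framed = encode(b"GREETINGS FROM SOL")
    lost = framed[1]                           # pretend this block never arrived
    assert recover([framed[0], framed[2]], framed[3]) == lost

The harder part, as you say, is the preamble: explaining the block size, the grouping, and the parity rule itself using nothing but the structure of the data.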


Reminds me of the episode of ST:TNG where they come upon an injured Borg and debate whether to show him a puzzle virus that would cause the entire Borg collective to grind to a halt trying to decode it:

http://memory-alpha.wikia.com/wiki/I_Borg_(episode)


For a glimpse at a future humanity driven to (understandable) paranoia by this scenario, see Ken MacLeod's _The Cassini Division_.


I just finished reading The Three-Body Problem, which covers some of the same ground... though the aliens in that book end up communicating in some more exotic ways.

https://en.wikipedia.org/wiki/The_Three-Body_Problem_(novel)


Maybe it's been too long since I read this, but I feel like it wasn't adequately explained how the aliens understood the initial message. All of a sudden, communication was taking place...


Richard Rhodes' book on the physics and history of science behind the bomb goes to some length to explain just how much information about fission and fusion is a matter of public record. Yet small details remain redacted: mostly, if I recall correctly, about 'levitated cores' in fission bombs and the exact composition of the 'lens' in a fissile igniter for a fusion bomb.

Another source I believe comments that with one infinitesimal change to fundamental universal constants, fission and fusion would be impossible except at stellar scale. Odd, that we live in a universe where the boundary between existence and annihilation is so thin, yet we continue to exist.

Personally, I think the proof here is a divide-by-zero instance. Conceptually, it is a design of information that can be communicated such that it cannot be comprehended or enacted, yet its consequence ensues. Isn't that a kind of meme?


> Odd, that we live in a universe where the boundary between existence and annihilation is so thin, yet we continue to exist.

If the universal constants were different, there could well be a different form of life making the comment instead.

For example, see Greg Egan's scifi book The Clockwork Rocket, which features a universe that runs on fundamentally different principles but that has broadly recognizable forms of life.


Let's go one step further.

Even receiving and recording the message might be very dangerous. There have been cases in the past where contaminated MP3 or video files took over the media players (because of decoding bugs in them).

Imagine a very weak signal which requires complicated signal processing and correlation between multiple receivers to reconstruct.


> There have been cases in the past where contaminated MP3 or video files took over the media players (because of decoding bugs in them).

The paper does talk about message compression being a vulnerability since we would have a hard time manually running their decompression algorithm.

However, I find it hard to believe that any kind of message will be dangerous due to signal processing without the sender having knowledge about our IT system structure and implementation details, or some sort of feedback cycle. Given the time delays of interstellar messaging, I feel the only risk here can come from some sort of AI running locally, which I assume would only be possible if we evaluate the message in a Turing-complete tool (such as their provided decompression algorithm).


I get your point; I also thought of this.

But at the same time, with long enough messages you could sample a lot of possible IT system structures.

The unknown danger here is that there might be some underlying generic exploitable structure in our systems or processes that we are not aware of yet.


> The unknown danger here is that there might be some underlying generic exploitable structure in our systems or processes that we are not aware of yet.

You would need more than an exploit; you would need an exploit that allows the message to be parsed in a Turing-complete fashion to bootstrap an AI.

Otherwise, I don't see how merely identifying an exploit translates into a significant risk to humanity.


They can guide our behavior somehow.

For example, they can safely assume that the moment we catch a glimpse of a message we will point all our receivers towards the source; thus they get us to amplify their signal and look for it all over the spectrum.

Then, they can guide what sort of analyses we run.

For example, they might embed a lot of prime number structures into their signal, thus we will apply a lot of our prime number expertise and methods on their numbers. What if you can bootstrap a turing complete machine by doing various statistics on prime numbers?

They could alternate: one week the signal is prime-number heavy, the next week it has very weird spectral distributions, so we will pull out all kinds of Fourier transforms.

We don't yet know all the possible places where a turing machine might hide.

In a way, you can consider the whole network of research facilities and researchers applying statistical methods to a signal as some sort of predictable, controllable machine. Maybe not Turing complete, but maybe you can get by with a weaker kind. Think how you can do good computations with non-deterministic machines. Couple that with psychology (social engineering) and game theory.

It seems to me highly dangerous to let our signal analysis be guided by the signal itself, and in general to do anything with a signal from an intelligent source.

Even if it's just clear-text ASCII English, who knows what kind of social havoc you can wreak with just a few carefully constructed sentences from a very advanced intelligence.


That isn't how computers or mathematics work. If you calculate an average or perform other simple statistical methods, it is impossible for the results to be Turing complete. If they used some hypothetical recursive, Turing-complete method of analysing the numbers, then the mathematicians would be aware of it being Turing complete. Even if such an analysis were performed, the interpreted alien program would only be able to affect the results of the analysis and not the computer as a whole, because a mathematical analysis doesn't require network access, disk access, or the ability to execute machine code.


Ok how about this scenario:

It encodes itself explicitly as Turing machine instructions that are necessary to decode the later chunks of data. We would have to implement the program and run these instructions to get any meaningful interpretation of the data. It could be encoded in a way such that the entirety of the message needs to be processed through it before any of it is usable.

In this scenario, in order to get any meaningful data out of the message, you'd have to execute arbitrary software.

But I agree that simply storing the data in a computer wouldn't be enough to be a danger (I think?).
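A toy sketch of that scenario (my own framing, nothing from the paper): chunk one of the message is a program for a tiny virtual machine, and chunk two is only recoverable by running that program over it.

    # Chunk 1 of the hypothetical message: a program for a tiny byte-wise VM.
    # Chunk 2: data that stays opaque until the program has been run over it.
    def run_vm(program, data):
        """Toy machine: applies each (op, arg) instruction to every byte."""
        out = []
        for byte in data:
            acc = byte
            for op, arg in program:
                if op == "xor":
                    acc ^= arg
                elif op == "add":
                    acc = (acc + arg) % 256
            out.append(acc)
        return bytes(out)

    program_chunk = [("xor", 0x2A), ("add", 0)]     # shipped inside the message itself
    data_chunk = bytes(c ^ 0x2A for c in b"HELLO")  # meaningless on its own

    print(run_vm(program_chunk, data_chunk))        # -> b'HELLO'

Auditing data_chunk by itself tells you nothing; the only route to its content is executing program_chunk (or hand-simulating it), and a real message could make that program arbitrarily complicated.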


E.T. says I just have to enable macros in this interstellar doc to get some Reese's Pieces!


It is certainly quite uncommon to see "Clarke and Kubrick" cited in a paper.


If it's not possible, it is probably the reason for the Great Filter [1] :-)

[1] https://en.wikipedia.org/wiki/Great_Filter


Looks like the plot of "A for Andromeda" https://en.wikipedia.org/wiki/A_for_Andromeda


Thank you for pointing this out. I heard about this series but never watched it, and didn't know the plot. Amazing how the scifi writers leapfrog their times in understanding of science and its implications.


Yes, and look at the year of publication. I liked it when I first read it all those decades ago. I still read it occasionally, I especially like the references to the company Intel in the book.


What a load of speculative, thin twaddle. This is just an extension of the never-ending “hard takeoff” malicious-AI argument (as exemplified by “Our Final Invention” by James Barrat and others). “It might cause a mob” or “it might contain a compressed AI that ingratiates itself and betrays us” is really not a terribly convincing argument. It's just... paranoid and meek.


It's worth reading Stanislaw Lem's His Master's Voice. It deals exactly with this topic but it is much more nuanced.


They talk as if they have some sort of scientific or mathematical proof of this.

But all they have is "it seems to me that".

You can publish that, sure, think tanks do that kind of thing all the time. But that's not a scientific paper, not as I understand the concept.


Pretty sure the first message to be received will be something along the lines of "hot dates in your local cloud" or perhaps a plea to send some credits to unlock the Andromedan King's hidden bank account.


Fantastic thought experiment. Not sure if it was already covered in a scifi story before, my scifi literature is pretty weak. If not, somebody should turn this into one.


I cannot decide if this paper is meant to be a joke.

Also, there is no proof that true AI can be achieved with Turing completeness alone, so this is a gaping hole in the logic right there.


Do you believe the human brain is not a computational system?


The human brain does not follow a computer architecture that we can replicate in a lab.


A computer architecture that we can ever replicate in a lab? The human brain is entirely beyond the ability of any computer to simulate it, ever?


Should we master replicating that, we must also have gained a much better understanding of AI and human intelligence along the way.

Our current computers are certainly not able to simulate a human brain. No computer in the world currently has the computing power to run decent, biochemistry-based models for each neuron in a human brain. This basically calls for specialized hardware, which may or may not extend the limits of computable problems beyond Turing completeness.

If we manage to get to that point, we will have to face more important ethical challenges anyway, like how do we deal with tools that allow us to assess what a person is thinking and planning with near certainty? At that point, the key argument of the paper might actually fall apart because there is a chance that we end up understanding the inner workings of the alien AI.

But at that point we have heaped up so much speculation that decent science fiction authors are about to get jealous.


>This basically calls for specialized hardware, which may or may not extend the limits of computable problems beyond Turing completeness.

In... what way do you think human brains are hypercomputers? Keep in mind that quantum computers are "merely" Turing universal; they can't solve undecidable problems. (If you think that human brains can solve the halting problem, then I would like to ask you to produce Busy Beaver 100.)

To the contrary, I believe human brains are what they look like: a glob of regular, boring old proteins, interacting with each other using normal atoms. No exotic physics, closed timelike curves, integrated information theory phi factor, Penrose quantum spookiness, Sam Hughes infolectricity, nothing. Humans aren't special, we're just atoms, capable of being simulated by a sufficiently gigantic Turing machine.


When you are equating brains with quantum computers, you are again jumping to conclusions.

If I had to guess right now, I'd say that it is more likely that the human brain has clever ways to exploit signal timings and randomness (e.g. Brownian motion). There is no room for large scale and/or long duration quantum processes in biochemistry.


>If I had to guess right now, I'd say that it is more likely that the human brain has clever ways to exploit signal timings and randomness

Then why do you think it's a hypercomputer? These are all classical physical effects. What math problems do you think the human brain can solve that a Turing machine with sufficient time can't?


A machine that is capable of truly random behavior is no longer a Turing machine. The human brain does not only have access to that kind of randomness, it is fundamentally subject to it. Biological organisms need to actively protect themselves against that to succeed.


> The human brain does not only have access to that kind of randomness, it is fundamentally subject to it.

[citation needed]. How can you confidently make an assertion like that? How do you design an experiment that distinguishes randomness in the human brain from sufficiently-advanced pseudorandomness?

On the contrary, at the surface level, we seem to be terrible at doing anything that resembles true randomness [1].

[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.211...


The paper only discusses the fact that human-generated random sequences are not perfectly random, although they are actually random. Reasoning about a complex system at that level is extremely hard, and the paper fails to find a good theory as to why that is, although it lists a few. No surprise there.

Besides, random processes within cells are a fact of nature. Research has discovered various ways in which organisms protect themselves against them. I will not point you to references on this because this goes down a rabbit hole of different aspects.


Would you mind giving me a brief overview on what those aspects are?

To put it less obliquely, how do you distinguish "true randomness" from perfectly-deterministic physical phenomena that are just determined by a "rabbit hole of different aspects"? Is a die roll truly random, or is it just a theoretically-predictable product of factors like air circulation patterns and friction?


The keyword you are looking for is Laplace's demon[1]. Also, note that there are physical phenomena which are known to have no deterministic outcome. Consider e.g. chaotic systems or certain quantum mechanical systems where the observed end state is one of a set of states following a probability distribution and no way to predict the end state in each individual iteration of the experiment.

To the first part of your question: I thought long about how to put it succinctly and I cannot. The best thing that I can come up with is to point you to the safeguards that are in place for gene expression within cells. A bit of Google-fu brought me to an entire volume dedicated to explaining how that particular process can be so reliable[2]. (I already knew that this stuff was complex, but this takes the cake!) Now, these complex mechanisms take tons of resources to maintain. Natural selection generally gives processes with lower resource usage an edge. This has in some cases led to amazingly efficient solutions, e.g. for some single-cell organisms. But gene expression stayed this complex. It stands to reason that every bit of this mechanism is required to keep cells reasonably alive.

[1] https://en.wikipedia.org/wiki/Laplace's_demon

[2] https://books.google.de/books?id=Czv25w1HNcEC&lpg=PA252&ots=...


Thanks! This is super interesting -- I'll definitely take a look.


Couldn't a simulation of all the atoms be done with a computer that was theoretically powerful enough?


Well, there are two catches involved. First, we don't have that kind of model for a human brain, which would be required to set up proper initial conditions for a working brain. Second, depending on the simulation method required (classical vs. QM), the resulting model could easily end up being so ridiculously huge that we couldn't build a computer capable of running it based on current tech, for lack of natural resources.


Sure, but it would be physically larger than the brain and require significantly more energy - and because of those constraints, might be slower.



