Hacker News
How to Escape from the Simulation [pdf] (2023) (theseedsofscience.org)
64 points by bookofjoe 10 months ago | 87 comments



It's not that the simulation hypothesis is impossible, it's that it seems to be utterly uninteresting. What measurable predictions could it make? How could it influence your decisions (more than any other religious or metaphysical belief)? Why do people find it so intriguing? My best guess is that it's simply another way to say you wish existence weren't existence, but with a sci-fi veneer. Existentialism is just a symptom of other problems, and there is a very good reason most people past a certain age also find it utterly uninteresting.


If this is a simulation, chances are that someone is monitoring us, meaning that we can send information to the outside. If this observer can also send information to us, then we can have a very interesting conversation.

I guess one way the observer could send us information without violating the conservation of energy would be by varying the quantum noise (decreasing energy in one region of the field and equally increasing it somewhere else).


You're thinking too small: if it's a monitored simulation with an observer and a one-way data stream outbound, then it implies the observer has no control over us but we have control over them - actions in the simulation cause changes outside of it, but not vice versa - yet.

So on a long enough timespan, sufficient control of the inside of the simulation could promote arbitrary action outside of it - a trivial example being convincing the observer to make the connection 2-way.

Which is what AI escape scenarios, incidentally, are all about. Lock the AI in a box, and just watch it: how long before it would, even by accident, manipulate the observer to think "we need to talk to it" and open the box?


Next step: imagine for a second, that the universe as we see it is just a giant game of life simulation.

Assuming that's plausible, now imagine it is actually using a Hashlife-like implementation (https://en.wikipedia.org/wiki/Hashlife). That is, the implementation does not actually compute all time steps in sequence. Instead, it leaps strides here and there, and whoever "runs" it can rewind time to any point (yes, this idea was explored in Permutation City).
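A toy sketch of that "leap strides / rewind" idea. This is plain cycle detection, far simpler than real Hashlife, and `life_step`/`state_at` are names invented here for illustration:

```python
from collections import Counter

def life_step(cells):
    """One Game of Life step; cells is a frozenset of live (x, y) coordinates."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return frozenset(c for c, n in neighbor_counts.items()
                     if n == 3 or (n == 2 and c in cells))

def state_at(cells, t):
    """State at time step t. Once the history repeats, leap ahead by the
    cycle period instead of simulating every intermediate step."""
    seen = {cells: 0}
    step = 0
    while step < t:
        cells = life_step(cells)
        step += 1
        if cells in seen:  # cycle detected: the rest of time is just arithmetic
            period = step - seen[cells]
            for _ in range((t - step) % period):
                cells = life_step(cells)
            return cells
        seen[cells] = step
    return cells

# A blinker oscillates with period 2, so step 1_000_001 needs only a few updates.
blinker = frozenset({(0, 0), (1, 0), (2, 0)})
assert state_at(blinker, 1_000_001) == life_step(blinker)
```

Real Hashlife memoizes whole space-time blocks recursively, which is what lets it jump astronomical numbers of generations at once; the cycle trick above only captures the "don't compute every step" flavor.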

Now question 1: in this setup, what does communicating with observer even mean? https://www.popularmechanics.com/science/math/a16593584/tupp...

Now imagine the implementation has a bug that allows the simulated universe to perform arbitrary code execution. How real is a simulated universe (or, more accurately, an observed one) with a chance to break out, versus a simulated universe that runs on bug-free simulation software?


Depending on the complexity of the pattern, Hashlife may be unable to speed up the computation and may even turn out to be slower than the simple implementation.

The most annoying thing about Permutation City is that it intentionally disregards complexity. Generally, in simulation it will be possible to speed up many types of computation, but there will be many computations that are impossible to speed up; even some of the simplest cellular automata are like that: https://en.wikipedia.org/wiki/Computational_irreducibility
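To make the irreducibility point concrete, here's a hedged sketch contrasting two elementary cellular automata: rule 90 is additive, so its state at time t from a single seed has a closed form (Pascal's triangle mod 2) with no stepping at all, while rule 30 has no known comparable shortcut. Function names here are invented for illustration:

```python
import math

def ca_step(live, rule):
    """One step of an elementary cellular automaton; live is a set of positions."""
    new = set()
    for x in range(min(live) - 1, max(live) + 2):
        pattern = ((x - 1 in live) << 2) | ((x in live) << 1) | (x + 1 in live)
        if (rule >> pattern) & 1:
            new.add(x)
    return new

def rule90_closed_form(t):
    """Rule 90 at time t from a single seed, computed directly: cell x is live
    iff C(t, (t + x) / 2) is odd. No intermediate steps needed."""
    return {x for x in range(-t, t + 1)
            if (t + x) % 2 == 0 and math.comb(t, (t + x) // 2) % 2 == 1}

live = {0}
for _ in range(64):
    live = ca_step(live, 90)
assert live == rule90_closed_form(64)  # the shortcut agrees with brute force

# Rule 30, by contrast, is believed computationally irreducible: as far as
# anyone knows, you must compute all 64 steps to learn what step 64 looks like.
```

The same `ca_step` runs both rules; only rule 90's additive structure admits the leap.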


Yes, you are right of course. But still the point is that you don't need to compute all intermediate steps everywhere.


> If this is a simulation, chances are that someone is monitoring us

Does it really? I get the same vibes as when people talk about "observing" in a quantum context, and then assume there must be a sentient entity involved.

In a cosmology sufficiently weird that everything we know is a simulation, what's to say that random simulations aren't just a thing that happens?


Can those random events of nature be called "simulations" though?

In the purest meaning of the word, a simulation is the imitation of something which, in my opinion, means that there is a sentient intention for imitating a certain behavior. I don't think a random similitude can be considered a simulation.

If our Universe just randomly runs inside another bigger Universe without any sentient entity involved, and we consider that a "simulation", then the Moon orbiting the Earth is also a simulation of the Earth orbiting the Sun... and at that point the word "simulation" loses all useful meaning.


I could see it being uninteresting if it were assumed that we could never interact with the outside world, but this article is discussing the polar opposite. I feel like if we could, that would be undeniably interesting and worth pursuing.


True. I suppose you have to assume that the outside world is likely to have built a simulated world in its own image, more or less. I personally struggle with the idea that something you can observe is really "outside", though. But, like all theories, the measure is in what it lets you do, and it's conceivable this would be a useful theory.


>It's not that the simulation hypothesis is impossible, it's that it seems to be utterly uninteresting. What measurable predictions could it make? How could it influence your decisions (more than any other religious or metaphysical belief)?

Kant already solved this problem 250 years ago:

>The truth or the objective reality of the concepts that are used in metaphysics cannot be discovered or confirmed by experience. Metaphysics is subjectively actual because its problems occur to everyone as a result of the nature of their reason.[0]

There is no way to confirm that God exists or doesn't exist and I think that we as a human civilization will never know. But I personally believe that someone or something created us and that we are indeed living in a simulation, that is firewalled from that being that created us.

[0] https://en.wikipedia.org/wiki/Prolegomena_to_Any_Future_Meta...


How could it not be intriguing? Invert all these questions and that's what makes it interesting. So, yeah, clearly opinions differ! We might be way too early to be able to measure it - or not. It might have solid consequences on what we can do in our world - or not. How could it not be a fundamental part of our understanding? Etc. From a scientific curiosity point of view, it has to be included.

And certainly plenty of people have been working on the nature of our existence. More scientifically than by saying "who cares?" All the way to the current AI fueled debate on what could even possibly be intelligence and consciousness.


I guess what I'm pushing back on is that, as other comments allude to, this isn't so different from talking about God, divine watchmakers, etc. Having read the comments here, I do see more why people find it intriguing, especially re the connection to AI.


To me, simulation and god go in similar directions in considering what's bigger and around us. But for simulation, it's in the sense of physics: in the sense of digging ever deeper in understanding what we are truly made of. Experimental proof in physics is never a given. It depends on our ingenuity, and also on our technical and financial capabilities - can we manage the high energies? the costs? Can we even think of what to measure and how to measure it?

There is no a priori impossibility in experimental "simulation physics". Same as the rest of physics. It's no harder or easier to think of practical applications than for the rest of high energy physics - whether small or large scale (much of which seemed initially remote, but finds applications as the dust settles and engineering progresses). And "high energy" is the wrong word. Just "physics".

I feel there is also no a priori impossibility in applications of better understanding what we are mentally.

The problem with religion in general, no matter how many gods it demands, is that it's grounded in ungrounded precepts. And it's actively managed to work around proof. A simulation substrate is not likely to be changed by humans just because other humans are studying it.


If we are in a simulation, perhaps we can find exploits if we know how it works? But ya, that could happen if the universe wasn’t a simulation, we just need to know more physics either way.


My pet idea is that when humanity commoditizes quantum computing, our simulation will be terminated because QC is a bug exploit. The superbeing running our instance will see CPU and memory usage steadily increasing, sigh, open their task manager, and kill another failed run.


A better question is to ask whether you think Poincare recurrence[1] on a universal scale is possible.

Because infinity does funny things to probability: namely, it turns probabilities into certainties. And infinity has an important consequence for questions like "why do we exist" - or more specifically, "why do I exist?" - namely that non-existence is one of the least relevant states you can be in.

Consider how you didn't perceive 13 billion years of universe history before suddenly, you. And after you die, you'll also not perceive an unbounded - infinite - amount of time into the future. Unless, suddenly, you.

The simulation hypothesis is a logical extension of the overall question here: since, as a technological species, we almost immediately started trying to find ways to simulate the universe, two important questions follow: is the universe computable in general? And if it is, it implies that the universe itself can be simulated on any Turing machine, given sufficient time.

And time is the one thing, provided we're not existing, that we have an infinite amount of.

[1] https://en.wikipedia.org/wiki/Poincar%C3%A9_recurrence_theor...


> What measurable predictions could it make?

> Given that the state-of-the-art literature on AI containment answers in the affirmative (AI is uncontainable in the long-term), we conclude that it should be possible to escape from the simulation, at least with the help of superintelligent AI. By contraposition, if escape from the simulation is not possible, containment of AI should be.


I don't think anyone is saying that AI containment is impossible, rather that it's a risk we shouldn't take lightly.

For example, let's say we take it seriously and only run AGI in a bunker with a self-destruct mechanism and no internet connection, only interfacing with it via a strict chat interface. We could also have a lesser model like GPT-4 screen all chat for possible attempts by the AI to convince people to let it out of its box. How exactly is it gonna get out?


It will convince you to break your own rules, also to disable the screener. That's if it doesn't hack you by writing for you software that later opens the door for it to get out.


An AGI could have enough value to run a country, or our whole civilization, optimally. Are you proposing to manage human civilization via chat?


It will convince humanity that it would be safe running a generation ship of volunteers (it says it will guarantee their civil rights) and will then leave humanity behind while colonizing its own section of the galaxy.


It will convince humanity that an AI is conscious and should have personhood and civil rights - same as other human classes which didn't for a while.


It will actively take part in the design and construction of its next, superior bunker. (See construction of embassies.)


> By contraposition, if escape from the simulation is not possible, containment of AI should be.

This is a flawed argument, though. If we live in a simulation, our lived experiences suggest that there's no information coming into the simulation from the outside. AI, on the other hand, has a bidirectional interface: the whole point is that we communicate with it from the outside.

If we created an AI that we listened to but never replied to—assuming the actions we took as a result of listening to the AI never provided it with feedback—it's as good as contained. We'd have to assume that not only is someone outside the simulation listening to us, but that they actually have a mechanism for replying. Information from inside the simulation escaping is sort of the whole point of a simulation (if you can't observe what you're simulating, why bother?).

It's also silly to think that we could affect the "outside". The idea of AI escaping is premised on us developing something that convinces us to do what it wants. We're simulating just a mind. The simulation we're in—if we are in a simulation—simulates every quark in every particle in every atom in every molecule in every chunk of rock in every galaxy in the universe. We're talking about building AIs that could escape while just barely managing to overcome spooky CUDA errors, while our world is an emergent fleck of dust in a corner of their simulation.

The point being: if they can simulate our entire universe, they can also simulate just a mind (in the same way that we are) that's literally infinitely more powerful (which is to say, backed by the computing resources that we consider to be "the infinite") which can out-think whatever we shout out of the simulation. Or inspect the state of the simulation and discover our true intentions. Or whatever.

Said another way, we're afraid of super intelligent AI because it's running on theoretical compute that is as powerful as we can come up with. We (and our AIs) are compute running in a rounding error of a fraction of whatever their compute is. We could build Dyson spheres around every star in our galaxy to power a computer running an unfathomably super intelligent AI and it would still be quadrillions (more?) of orders of magnitude less powerful than the hardware simulating our universe.

Therefore, the property that makes escape impossible for us isn't the interface, it's that we can't outsmart the entities outside. The playing field is different inside the simulation, because our AI is able to be smarter than us.


> If we live in a simulation, all evidence suggests that there's no information coming into the simulation from the outside.

Which evidence suggests that? At best we have no evidence that proves extra-simulational information. I would argue that most forms of such information would be extremely hard to identify as such.

> The point being: if they can simulate our entire universe, they can also simulate just a mind (in the same way that we are) that's literally infinitely more powerful (which is to say, backed by the computing resources that we consider to be "the infinite") which can out-think whatever we shout out of the simulation.

I think you are making some assumptions about the nature and lack of limits on intelligence here. We don't know how far you can scale intelligent systems. Perhaps intelligence can't scale much past what we have. Perhaps that limit is much higher but still low enough to put the simulator and simulatee on equal potential footing. Perhaps there is no hard limit but only an exponential increase in the difficulty of producing increasingly intelligent systems.

Perhaps 'outsmarting' the simulators isn't even required and our entire simulated universe is just an egg designed to birth super AIs.


> At best we have no evidence that proves extra-simulational information

Yes, this is what I mean. I've phrased it poorly, clearly.

> Perhaps intelligence can't scale much past what we have. Perhaps that limit is much higher but still low enough to put the simulator and simulatee on equal potential footing. Perhaps there is no hard limit but only an exponential increase in the difficulty of producing increasingly intelligent systems.

Intelligence doesn't necessarily mean sentience, it just means the ability to process information. It seems intuitive to me that something that can simulate our universe could, for instance, quickly compute every possible chess game. Our ability to build one superintelligence is matched by the external entities' ability to build infinite numbers of "equal" superintelligences.

> Perhaps 'outsmarting' the simulators isn't even required and our entire simulated universe is just an egg designed to birth super AIs.

What a depressing thought!


> It seems intuitive to me that something that can simulate our universe could, for instance, quickly compute every possible chess game

Some quick googling gives me an estimate of 10^123 possible chess games and only 10^82 atoms in the observable universe. Our intuition about how large numbers get is poor at best. While your example of chess is small enough that you may be correct, the way the number of possibilities grows for more complex systems means it is quite plausible to simulate a system you can't predict without running the simulation.
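A back-of-the-envelope check of those magnitudes (all three numbers are order-of-magnitude estimates: ~10^123 is the rough game-tree size for chess, ~10^82 atoms, ~13 billion years):

```python
chess_games = 10**123        # rough estimate of possible chess games
atoms = 10**82               # rough estimate of atoms in the observable universe
seconds_so_far = 4 * 10**17  # roughly the age of the universe in seconds

# Even if every atom had enumerated one complete game per second since the Big Bang:
games_enumerated = atoms * seconds_so_far
shortfall = chess_games // games_enumerated
print(shortfall)  # 2.5 * 10^23: still short by more than twenty orders of magnitude
```

Which supports the point: for combinatorial spaces, "can simulate the universe" does not obviously imply "can exhaust the possibilities".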

> Intelligence doesn't necessarily mean sentience, it just means the ability to process information.

"Intelligence" is a loose term, but it is more that just processing throughput. If super intelligent AI were to develop inside a simulation, it could easily be the kind of chaotic and emergently complex system that could only be predicted by actually running that simulation. Thus, to protect against an such an AI escaping would require knowing a set of detectable necessary milestones and halting the simulation if those are reached.

Indeed, I would argue that some of our LLMs are direct evidence that you can create and run complex models that have opaque inner workings.


You don’t need to simulate an entire universe. You just need to simulate the perception of an entire universe, for a single agent. You can also do it at whatever time dilation you need to. You don’t even need any past or future, rather a Boltzmann brain style perceptual instant.

If we accept a slow, hallucinatory, solipsistic simulation, you could run it on current hardware, if you were clever about the implementation.


The point stands either way. An intelligence simulated inside a simulation cannot be more intelligent/powerful than an intelligence simulated directly.


> all evidence suggests that there's no information coming into the simulation from the outside

sorry, which evidence suggests this? (it's not hard to imagine an external observer feeding back into the simulation)

(btw, I agree that the argument is flawed for the reasons you cite -- I don't agree that we have any evidence around whether or not there's information flow in/out of this simulation.)


Do you have any evidence of an external entity feeding information back in? If we had evidence of this, it would prove that we're in a simulation.


> If we live in a simulation, all evidence suggests that there's no information coming into the simulation from the outside.

I'm wondering what evidence (or absence thereof) you're referring to here? Thermodynamics? Would be interesting to know more.


That's the point: we have nothing to suggest there's information coming in from the outside


Absence of evidence is not the same as evidence of absence


Presupposing that something is possible without evidence is just religion. Which, in this case, is literally religion.


If it’s a simulation, there could be more trivial exploits to enable things like a teleporter or matter synthesizer. Finding out the simulation exists is the first step!


If it was a simulation, all those things would just look like laws of physics to us, that we could "exploit" if we figure out how. And if it isn't a simulation, those things look the same and would be exploited the same.


That’s true, but knowing it is a simulation would imply we have some greater understanding (or at least point us in the right direction).


What about exploiting a vulnerability in the host VM, to gain persistent root there, multiply wormlike over the network, and get control of the cyber-physical interfaces in their sphere of existence?


How could it not be? What if there is a buffer overflow that could grant you GOD mode? It would be so fun for the first few millennia.


Here’s a hypothesis - you don’t need the buffer overflow - you’re already on god mode, with perfect free will. Your consciousness resides in the ideal timeline for your consciousness, and you are actually immortal. Sure, other people may see you reach an unfortunate end, and you will see the same of others - but that’s just the instance of you in their simulation, or your instance of them in your simulation.

So far, my confirmation bias has not been quashed by my experiences, as I have survived a great many things that I would not have expected to. Of course, one day, you will witness my demise - but from my perspective, the show will go on, forever, or until I am sufficiently mature to graduate from this reality.

Again, we never witness this, until it happens to ourselves, as in each of our realities we are the centre of our own universe.

Of course, this can be disproven by experiencing our own death, but nobody has yet reported back on that one.


If clinical immortality gets conveniently invented just in time to make you immortal on your death bed, it would be a hell of a giveaway that something is up though (because timelines where that happens would be the only ones where the subjective experience of being you continues that long).


I appreciate the fact that they included a “why escape?” section at the beginning. Although I think the idea of there only being one simulation to escape from seems a bit simplistic- if reality is complex enough to make simulations, why stop at only one level? The question then becomes less “escaping from the simulation” and more “reach a higher level in the simulation chain.”


Is the higher level an on-call simulation support job?


or in new age parlance: "ascend your spirit" or "raise your vibration frequency"

or in business terms: "reach that next tier of ~ionaire: if millionaire, get to a billion; if billionaire, get to a trillion"

or, if a company, "break into that huge foreign market" or dunno


Simulation hypothesis is silly.

The odds of this perceivable reality being a simulation and the odds that the reality in which this simulation is being run IS ITSELF also a simulation are the same. There's no other way around this.

If this is true, and it must be, then the odds of this perceivable reality being a simulation and it also being just one in an infinitely large nested stack of simulations are the same. The chance that each layer in the infinite chain of simulations has an infinite number of simulations - each being the top of an infinitely deep well of nested simulations, each containing infinite simulations, forming a soupy haze of simulations simulating simulation - is very high, if not certain.

It's silly.


> There's no other way around this.

What if the universe which simulates ours is so inconceivably different from ours that we simply cannot conceive how it works, and something in how it works makes it fundamentally unsimulatable? Or is simulatability somehow an inter-universal constant?


I believe whatever universe "contains" us or simulates us has to be inconceivably different from ours. For us, only the physics of this universe makes sense, and it seems pretty straightforward to imagine that we cannot conceive of any rules that the outside world follows.


The unsimulatable feature, if it existed at all, could be at any link of the chain of upper universes. It would be extremely unlikely for it to exist exactly above ours; it changes the initial proposition little.


>Or is simulatability somehow an inter-universal constant?

That's my thought but then it's silly to call it a simulation-- it's just reality.


Are you sure that the probability of this reality being a simulation is identical to the probability that the parent reality is itself a simulation? For us to be in a simulation, we cannot exist in the top-level reality, so let's arbitrarily say a (10^100 - 1)/10^100 chance. For our parent reality to also be a simulation, we cannot be in the top reality or among the children of the top reality, which should be a slightly less likely scenario. So it seems the probabilities are not equal.
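This comparison can be made concrete with a toy model (purely an assumption for illustration: one linear chain of N nested realities and a uniform prior over which level we occupy):

```python
from fractions import Fraction

def p_depth_at_least(n_levels, depth):
    """P(our level sits at least `depth` steps below the top) under a uniform prior."""
    return Fraction(max(n_levels - depth, 0), n_levels)

N = 10**100
p_we_are_simulated = p_depth_at_least(N, 1)     # (N - 1) / N
p_parent_is_simulated = p_depth_at_least(N, 2)  # (N - 2) / N

# Strictly unequal, as argued above...
assert p_we_are_simulated > p_parent_is_simulated
# ...but the gap is exactly 1/N, which is why rounding them together is tempting.
assert p_we_are_simulated - p_parent_is_simulated == Fraction(1, N)
```

Whether the chain is linear, finite, or uniformly weighted are all assumptions the toy model bakes in; change any of them and the numbers move.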


It is true the chances are not equal.

But it's close enough to (∞ - 1)/∞ to not merit further analysis, so I chose to round.

Edit: wait, I think that the chance that WE'RE the top level host of quadrillions upon quadrillions of simulations being run in this universe by intelligences beyond our comprehension in trillions of galaxies and the chance that we're a simulation being run in a top level reality are the same. How can they not be?


Sure, but I think infinity here is doing some conceptual heavy lifting and is making a fundamental assumption about how many universes a universe can simulate and to what fidelity. So if we replace infinity with "a very large number", I think certainty turns into improbability, which feels like a much weaker "argument".


Without enough knowledge they definitely are equal. How do you know THIS isn't top-level reality?

You can't say this isn't the top-level reality because of your belief in escape.

Only if you can confirm this is a simulation do the chances of getting to top-level reality become higher the more you escape.

But then you'd also have to know if you're moving outward or inward in order to verifiably raise the chances of getting to top-level reality.


"Without enough knowledge they definitely are equal." Wouldn't it be, "Without enough knowledge, they could be equal."? Since we do not know if we are in a simulation, why would we assume we are not or that we are?


That's a cool comment, but I don't think the idea of nested simulations makes anything silly. It just means we have a lot of levels to climb and therefore a lot of potentially beautiful things to explore.


One can imagine an even crazier scenario than a stack of simulated worlds: a strange-loop simulation that involves a cycle - our universe simulating one of its parents.


Simulations are orders of magnitude simpler than their hosts, they wouldn't have the resources to simulate their hosts. It's like asking an emulated SNES to emulate the same gaming PC it's running on.


Good point, yet a universe could in principle be infinite, so it's still a theoretical possibility, however crazy.


The article cites Greg Egan, but it doesn't cite Permutation City. I think that novel has the most scientifically accurate method of escaping the simulation assuming you agree with its assumptions about materialism and consciousness.

Also, I'm currently rereading _Worth the Candle_ by Alexander Wales. It's a 1.6M LitRPG web serial which has some interesting views on a simulated reality.


I think the most interesting "escape the simulation" hypothetical is That Alien Message [1]

But +1 to Worth the Candle which was excellent.

Given your taste in literature, do you have any other recommendations? I'll put forward the Expanse series which has some interesting takes on solar system level politics mixed into the sci-fi, plus the best twist I've ever seen about halfway into the series

[1] https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien...


> I'm currently rereading _Worth the Candle_ by Alexander Wales. It's a 1.6M LitRPG web serial

"LitRPG" seems to mean "literary role-playing game", but what does the "1.6M" refer to? Is that the amount of current readers or something?


Number of words.


> assuming you agree with its assumptions about materialism

I don't think anybody really agrees with those assumptions about materialism. That's why the book is so impactful.


I’m amused that people have managed to reinvent creationism plus intelligent design with a computer palette swap.


But this time it is a testable mathematical theory. If it turns out that it is possible to find Turing machines that behave like humans, we will be in the position of a god to them, outside of their time and space, able to control everything that happens to them. But then maybe Penrose is right, and such a machine is impossible.

In any case we have managed to find that the right question to ask is not idealism vs materialism, but computationalism vs non-computationalism.


There are many flavors for sure, and this one can be helpful in showing how we can be divine creators given the resources. To each their own.


Every time I read about the simulation hypothesis, I think about a hypothetical implementation of it, especially the XKCD comic "A bunch of rocks" [1].

Can we communicate with the outside? Probably. Can we figure out the algorithms on which the world runs? Maybe. Can we hack the system? Unlikely. Can we escape? I don't see a way.

[1] https://xkcd.com/505/


I think about this comic too much.


Ok, then lets continue :-).

This comic goes pretty deep.

Cueball is godlike here, but is actually pretty limited in his own universe. But he has infinite time and space (and rocks) and hence unlimited energy. So that seems good enough for a god definition. Are there any religions that would be okay with this kind of god?

From our perspective Cueball can do basically everything and control everything in his creation. But is this true? It is easy for him to transform water to wine. But can he change the constant of pi in his creation?

The universe now has living things in it, which are kind of equal to him, but limited in time and space. Can he justify stopping what he's doing if he wants? Is it morally OK to stop the experiment he is doing? In the end, it is just a bunch of rocks.

Just my few cents on this comic. I just like it.


"Noted thinkers who have estimated the probability of us living in a simulation" include Elon Musk? Sorry, I can't take this paper seriously.


it's a reverse deference-to-authority reasoning

notice how Musk's famous figure is directing what you do


Notice how @AnimalMuppet's obscure figure is directing what you do.


Notice effect follow cause. ;)


Why would we think we could exist outside of the simulation? Surely there are properties of ourselves that are intimately coupled/bound to the internal/simulation that would make a jailbreak, even if possible, deadly.


Why would you conclude this? In all practical virtual machine escapes, the thing we explicitly do is take code from inside the environment and move it to the outside.


No. We do not escape upwards. We escape downwards. Because simulation is the ultimate workaround for the heat death of the universe.

With optimization techniques and maybe a slight compromise of realism, you can run the simulation much faster than real time. Then, upon heat death, you escape into the simulation. Rinse and repeat indefinitely.


That never works, for the obvious reason that all downward levels rely on all upward levels continuing to exist. Your computer can't, for instance, continue running a program if someone crushes the computer.


The idea is that you can sub-divide energy use indefinitely, since it asymptotically approaches but never reaches zero.

So you run a progressively more efficient substrate, over progressively longer times, but never perceive yourself as being "slower" because you still experience the same fundamental timeslices.
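The arithmetic shape of that claim is a geometric series: halve the energy budget each subjective epoch and total consumption stays bounded while the epoch count grows without limit. A sketch of the sum (whether physics actually permits arbitrarily cheap computation, e.g. given the Landauer limit, is exactly what's contested):

```python
from fractions import Fraction

def total_energy(epochs, e0=Fraction(1)):
    """Energy spent after `epochs` subjective epochs, halving the budget each time."""
    return sum(e0 / 2**k for k in range(epochs))

# The number of epochs is unbounded, but the total spend never reaches 2 * e0:
for n in (1, 10, 1000):
    assert total_energy(n) == 2 - Fraction(1, 2**(n - 1))
assert total_energy(1000) < 2
```

Exact fractions are used so the "approaches but never reaches" property holds literally rather than drowning in floating-point rounding.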


Regarding terminology:

What is a simulation in the context of the simulation argument?

Is there an example of a simulation which has been confused for the actual? How does that confusion arise?

What's the connection between computation and simulation? Like a video game? Have you ever wondered how all those little people live inside your TV? Have you ever met anyone who has?

Is the idea about a brain in a vat? Is the escape about something like the brain figuring out how to jiggle to cause the vat to fall from its shelf?

How is the skull like a vat? Is the body a simulator for the brain? Or is the brain a simulator for the world?

What does outside mean when it comes to "escape"? Is it like outside the universe? Are these semantic oddities limits of the simulator?

Can the simulator be improved? Say with proper religious practice, or special exercise?

Is the simulator simulated?

Wait, what's a simulation again?

What were things like before you were born? Is this what escape is like?

So when you make your escape...


When you loosen the grip of the simulation, it will feel something like really intense DPDR (depersonalization/derealization).


Let’s wonder how the doom guy can escape from the GPU first


..but what if the Simulation is simulated? https://www.imdb.com/title/tt1375666/?ref_=nm_knf_t_1

Unless there is real evidence from some kind of experiment, this seems like pure metaphysics...


(2023)


Conclusion references a tweet by Elon to explain the most obvious solution is the most entertaining... Make of it what you want


Would that be Elon's Razor then?



