Pratchett would then have said that each of these frames is an entirely new universe, the previous one having been destroyed.
"And they are told: 'Wen considered the nature of time and understood that the universe is, instant by instant, recreated anew. Therefore, he understood, there is in truth no past, only a memory of the past. Blink your eyes, and the world you see next did not exist when you closed them. Therefore, he said, the only appropriate state of the mind is surprise. The only appropriate state of the heart is joy. The sky you see now, you have never seen before. The perfect moment is now. Be glad of it.'"
I've often wondered if the speed of light were somehow related to the computation cycles per second of our universe.
For example, imagine if the universe were made of a three-dimensional fabric of tiny pixels, and the only way information could move to an adjacent pixel was one universe calculation at a time.
So when something without mass is moving across a field of pixels, its speed is capped by the number of universe calculations happening per second.
There are such theories, some even having mass require more calculations. So when there is mass nearby, more calculations are needed, which slows down the speed of propagation in that place. A reduced speed in a medium typically produces refraction (bending of light), so maybe the bending of the path of light/mass (a.k.a. gravity) is due to computational complexity.
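A toy way to see the refraction point (the 2x slowdown near mass is a made-up number, not from any actual theory): Snell's law in velocity form says a ray bends toward the normal when it enters a region where it propagates more slowly.

```python
import math

def refracted_angle(theta1_deg, v1, v2):
    """Snell's law in velocity form: sin(theta2) = sin(theta1) * v2 / v1.
    A ray bends toward the normal when it enters a slower region."""
    s = math.sin(math.radians(theta1_deg)) * v2 / v1
    if abs(s) > 1:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# Hypothetical: near a mass, each "pixel" needs twice the calculations,
# halving the local propagation speed, so the path bends toward the mass.
print(refracted_angle(30, v1=1.0, v2=0.5))  # ~14.5 degrees
```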
How would you propagate information in a 3D pixel array so that it would move at the same speed (and generally, everything would behave identically) in any direction?
Like Conway's Game of Life. Time and distance are the same thing. On a chessboard, it takes a King 7 moves to reach the other end of the board, regardless of how long it takes someone to make each move, or how big the chessboard is - from the King's POV it's all the same. I also heard somewhere the phrase "speed of causality". There must be some good argument for why it couldn't have been any other way.
I meant: how do you make the calculation so that the speed is the same in the horizontal and diagonal directions?
Imagine a chessboard where every square is 1 meter wide. When the king moves horizontally, his speed is 1 meter per turn. When he moves diagonally, his speed is about 1.4 meters (√2) per turn. How would you define the rules so that the speed is the same at all angles? (Not just 0 and 45 degrees, but also e.g. 13 degrees.)
Can you make in Conway's game of life an expanding circle that remains circular at large scale (and doesn't become e.g. some kind of octagon)?
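For a concrete picture of the problem, here's a toy flood-fill (plain Python, not actual Game of Life rules; the grid size is arbitrary). If information spreads one cell per tick to all 8 neighbours (king moves), the region reached after t ticks is everything within Chebyshev distance t of the source: a square, not a circle.

```python
# Toy sketch: information spreads one cell per tick to all 8 neighbours
# (king moves). The wavefront after t ticks is a square, not a circle,
# because the diagonal "speed" is sqrt(2) times the horizontal one.
def frontier(ticks, size=17):
    c = size // 2
    rows = []
    for y in range(size):
        # cells within Chebyshev distance `ticks` have been reached
        rows.append(''.join(
            '#' if max(abs(x - c), abs(y - c)) <= ticks else '.'
            for x in range(size)))
    return '\n'.join(rows)

print(frontier(5))  # an 11x11 filled square centred on the source
```

(With 4 neighbours instead you get a diamond: a different shape, but still not a circle.)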
In our universe all directions are equivalent. If you fly in a rocket, you can't distinguish between moving horizontally or diagonally according to some hypothetical absolute coordinates of the universe. (The theory of relativity is based on the assumption that there is no such thing as absolute coordinates, not even a coordinate of time!) That makes me suspicious of models that assume the existence of absolute coordinates (such as a 3D grid of Planck-sized cubes).
What if space is discrete and time is also discrete? Then there is no paradox. It disappears.
I find the possibility that the Universe computes in `ticks` and space is discrete to be very soothing and I'm sure it has nothing to do with me liking computers :)
Some years ago, when I was learning about quantum mechanics and how you could find a particle inside a forbidden region, I connected that with physics engines in games (and how stepping them too slowly can cause things to intersect in odd ways), and I had this idea:
What if time flows in discrete steps, but the `ticks` aren't uniformly separated? For example, they could arrive randomly as a Poisson process, with exponentially distributed gaps between them. In that case, a particle hitting a potential barrier could pass it briefly, until the tick happens and it moves out of it.
There might be a way to compute an optimal probability distribution for that case, but I'm not sure if it could be consistent with Schrödinger equation.
But I never had the time or the knowledge to search for whether someone had already thought of it, or whether it could be even remotely plausible.
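As a toy illustration of the idea (a sketch only, with made-up parameters, and no claim of consistency with the Schrödinger equation): draw the tick gaps from an exponential distribution so the ticks form a Poisson process, let a classical particle fly freely between ticks, and enforce the barrier only when a tick fires. Between ticks the particle can briefly sit past the "forbidden" wall.

```python
import random

# Toy model: a classical particle moving toward a hard wall at x = 1.
# Ticks arrive as a Poisson process (exponentially distributed gaps); the
# wall is only enforced when a tick fires, so between ticks the particle
# can briefly exist past it -- loosely like being found in a forbidden region.
def simulate(rate=10.0, v=1.0, t_end=3.0, seed=42):
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    overshoots = []
    while t < t_end:
        gap = rng.expovariate(rate)      # waiting time until the next tick
        t += gap
        x += v * gap                     # free flight between ticks
        if x > 1.0:                      # tick fires: enforce the barrier
            overshoots.append(x - 1.0)
            x, v = 1.0, -v               # push back out and reflect
        elif x < 0.0:
            x, v = 0.0, -v
    return max(overshoots), sum(overshoots) / len(overshoots)

deepest, mean = simulate()
print(f"deepest excursion past the wall: {deepest:.4f}, mean: {mean:.4f}")
```

The mean excursion shrinks as `rate` grows, so a very high tick rate would make the effect unobservably small.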
A few years ago, I saw a proposal for multiplayer games which suggested that the time passage between ticks vary depending on how many users are playing within a close space, to help deal with the increased network load. This is something that Eve Online does, but it segments its game-space into discrete sectors and therefore it slows down time uniformly in these sectors. This proposal was to do the same, but variably within one contiguous gamespace, meaning players would observe each other “slowing down.”
This seemed interesting to me because, well, what on earth would it do to your game physics to have varying time passage between ticks depending on your position in space? So I rigged up a simple simulation of 2 points connected by a spring force (a “body”), gave them an initial velocity, and set the tick to slow down linearly according to distance from a “center.”
The end result was that the body orbited around the center. The varying time passage meant that the spring force’s effect on the point masses would be reduced for the points in the slower tick speed. So it seems like you can create a kind of “gravity” by bending time but not space.
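A minimal reconstruction of that experiment (the parameters, unit masses, and the exact slowdown law are all guesses on my part; I read "slow down according to distance from a center" as time running slower nearer the center, like gravitational time dilation): two unit masses joined by a Hooke spring, where each point advances by its own locally scaled timestep.

```python
import math

def tick_rate(p, floor=0.1, scale=5.0):
    """Local time rate: slower (smaller) closer to the center at (0, 0)."""
    r = math.hypot(p[0], p[1])
    return max(floor, min(1.0, r / scale))

def step(p1, v1, p2, v2, k=4.0, rest=1.0, dt0=0.01):
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    f = k * (d - rest)               # Hooke spring: attracts when stretched
    fx, fy = f * dx / d, f * dy / d  # force on point 1 (point 2 gets -f)
    dt1, dt2 = dt0 * tick_rate(p1), dt0 * tick_rate(p2)
    v1 = (v1[0] + fx * dt1, v1[1] + fy * dt1)
    v2 = (v2[0] - fx * dt2, v2[1] - fy * dt2)
    return ((p1[0] + v1[0] * dt1, p1[1] + v1[1] * dt1), v1,
            (p2[0] + v2[0] * dt2, p2[1] + v2[1] * dt2), v2)

# A two-point "body" off to the side of the center, moving tangentially.
p1, v1, p2, v2 = (3.0, 0.0), (0.0, 1.0), (4.0, 0.0), (0.0, 1.0)
for i in range(20000):
    p1, v1, p2, v2 = step(p1, v1, p2, v2)
    if i % 4000 == 0:
        print(f"body center: ({(p1[0]+p2[0])/2:+.2f}, {(p1[1]+p2[1])/2:+.2f})")
```

The asymmetry is the whole trick: the inner point integrates less time per global step, so the spring's effect on it is reduced, and the pair curves around the center instead of flying straight.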
There's a theory of how the universe is structured in which, instead of space-time being a uniform grid (a Cartesian space), space-time is made up of nodes that are linked together. I guess kind of like a Voronoi diagram in four dimensions. I wish I could remember the name of those theories.
If space-time is packed together in most places, then it might resemble a grid to us. This model works in a lot of interesting ways too: wormholes are just links between two otherwise unconnected regions, the curvature of ST can vary depending on the region, and time can flow at varying rates depending on how "dense with time" an area is.
One thing I like about it is that Planck units kind of just represent a minimal distance between nodes (indeed, the connections between nodes defines "distance" and "duration") and everything else just falls out of the geometry of this ST graph.
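A toy version of "distance falls out of the links" (a sketch; the nearest-neighbour wiring is just my stand-in for whatever the real proposals use): scatter nodes, link each to its nearest neighbours, and define distance as the minimum hop count.

```python
import random
from collections import deque

rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(200)]

# Link each node to its k nearest neighbours; the links ARE the geometry.
def nearest(i, k=6):
    d2 = lambda j: (pts[i][0] - pts[j][0])**2 + (pts[i][1] - pts[j][1])**2
    return sorted((j for j in range(len(pts)) if j != i), key=d2)[:k]

adj = {i: nearest(i) for i in range(len(pts))}

def hops(a, b):
    """Distance as the minimal number of links between two nodes (BFS)."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        n, d = queue.popleft()
        if n == b:
            return d
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                queue.append((m, d + 1))
    return None  # unreachable: the graph analogue of a disconnected region

print(hops(0, 1), hops(0, 2))
```

At scales much larger than the typical link length, hop count grows roughly in proportion to straight-line separation, which is the sense in which a dense graph can look like smooth space.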
I took a creative writing class at university (mostly on a whim). One of the stories I proofread was built on this idea: that the universe didn't tick at a uniform rate, and there were people who could "tick" faster or slower. Of course, you run into all sorts of problems, like people breathing and moving through what feels like very hard air. The core tenet of the book was that the clock domains of the universe were drifting apart at an accelerated rate and the protagonists were trying to stop it. It was an interesting concept and a fun short story.
>core tenet of the book was that the clock domains of the universe were drifting apart at an accelerated rate and the protagonists were trying to stop it.
Another idea would be to have the clocks running backwards.
To me a neat consequence of quantized motion and dimensional extent is that there's a hard limit on angular orientation and hence accuracy. That is to say, it's possible to be far enough away and in a proper position such that a particle in another certain position could not be propelled directly at you, only at positions around you since you could be standing in the middle of the maximum angular resolution of the propelling force.
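A back-of-envelope version (assuming, purely for illustration, that positions are quantized at one Planck length; see the reply below disputing that premise): if the propelling apparatus sets its direction over a lever arm L, the smallest aim adjustment is about l_P / L radians, so at distance d the possible impact points are spaced roughly d * l_P / L apart.

```python
# Purely illustrative: assumes positions come in steps of one Planck length,
# which (as noted below) is NOT an established fact about space.
PLANCK_LENGTH = 1.616e-35  # metres

def impact_spacing(distance_m, lever_arm_m=1.0):
    dtheta = PLANCK_LENGTH / lever_arm_m   # smallest aimable angle change
    return distance_m * dtheta             # gap between possible impact points

for d in (1.0, 9.46e15, 8.8e26):  # 1 m, one light-year, observable universe
    print(f"d = {d:.3g} m: impact points ~ {impact_spacing(d):.3g} m apart")
```

Even across the observable universe the gaps come out around 1e-8 m, so you'd need to be a very small bystander to hide between them.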
Wow, I had never considered this! Although, as "UnFleshedOne" points out, that would imply a fixed geometry (not necessarily a grid) wouldn't it? We don't know that angles are limited within a "Planck volume," so they could be continuous (not that anything else seems to be!)
If this were true though, I'd think it would give us a fascinating tool!
There is no evidence that the Planck length is the smallest possible length in the universe; that's a pop-sci misconception. It is the length at which quantum effects dominate the structure of space-time, i.e. "quantum foam". It is perhaps the smallest measurable distance, according to some calculations. Most experimental evidence indicates space is continuous.
I think it's possible to detect if we're in a simulation or not. I think of it in a similar way to detecting whether or not a system is running in a virtual machine or on the hardware machine itself. Even if the VM is really, really good, there may be sidechannel tests that can be done, like measuring the amount of heat generated for a given computation, or weird timing attacks, or edge conditions that affect the underlying hardware.
For every action taken inside the simulation, there will be things that happen on the "hardware" outside of the simulation and since it's simulated, not real, the "hardware" won't respond in precisely the same way as it would if it were actually real in every possible circumstance.
The only simulation that would respond in exactly the same way as a "real" universe under all circumstances has to be a perfect 1:1 mapping in terms of "universe hardware", which is exactly the same as running on "real" bare metal.
You can only tell it's a VM via side channels because you already know what not-a-VM would do. In the case of the universe we have nothing to compare to, no way to say "that's an effect of running on the underlying hardware" vs "that's a special case of how the hardware runs". E.g. one could take speculative-execution rollback after certain workloads as timing jitter from being in a VM, if they didn't know that's just how things work on a CPU and that the instructions that recently ran have an impact on how the next ones will be predicted.
The difference is that there's an underlying assumption that base reality is analog and perhaps continuous rather than discrete, and definitely not CPU or memory constrained.
In the scenario you mention, reality would be a program either way, because it is still run on a computer in both cases. If reality isn't a simulation, then we shouldn't be able to DDoS it, overflow it, crash it, read past a memory buffer, get root access, or execute timing attacks, as those things are artifacts of computation.
Let's assume mankind could create a machine that could crash the universe. Should we ever press the button to test it? And if the button is successful (the universe terminates), how would we get to know the result of the experiment?
How, without information on the "real" world, could you make a determination as to the fidelity of the simulation? Heck, a simulation doesn't even have to attempt to have the same natural laws as its host reality.
If you're on an emulated x86, and you know a lot about x86 architecture, you may detect that you're on an emulator. But if all you get is information on the one single instance you're running on right now, no references, no documentation, no comparison with bare metal... at that point what's the difference?
The problem with the VM example is that in that case you can actually compare the VM's behavior with your knowledge of real bare-metal behavior and try to detect differences. But what if you have nothing to compare to? If you are inside a VM, have never heard of bare metal, and have no chance to get at it for comparison?
AFAIK whether we are in a simulation is undecidable as long as the simulation rules are self-consistent...
Philosophy has been dealing with similar problems for a long time: https://en.wikipedia.org/wiki/Solipsism . The best they've come up with is: "I think, therefore I am."
What's the actual "problem" that solipsism deals with? Solipsism is simply a poor explanation of all phenomena, since when one says "nothing exists except my own mind," one could just as easily say "nothing exists except my own mind, that table over there, and the piece of bubblegum stuck under that table." Both explanations account for precisely the same set of phenomena (all observed phenomena), and both can have a multitude of variations applied to them without losing any explaining power. That makes them poor explanations for phenomena, just like "God did it" or "a magician did it" are poor explanations for phenomena.
I welcome a better explanation for the phenomena. AFAIK we've been trying to come up with one for a long time. A brain floating in a nutrient tube and experiencing extremely high-fidelity VR would also claim that the table and bubble gum exist. But from the perspective of an outside observer, the only thing that is real is the brain in a vat believing that the table and bubble gum are real. You are right that solipsism is not the best example; I should have used Descartes' demon: https://en.wikipedia.org/wiki/Evil_demon
The overarching point is that I think it does not matter if we are in a simulation as long as it is self-consistent. For the sake of argument, let's assume the world is deterministic. This will allow us to construct two scenarios. One is a real person living and dying in a 'real world'... the other is a brain in a vat living the same simulated life. Their subjective experiences are identical; even if you randomly swapped them, they would not notice.
Based on the above thought experiment, I think it does not really matter if we are in a simulation.
Some counter arguments:
* But the world is not deterministic! What if things are better in the 'real' world? Sure, but what if they are worse? There is already a wide disparity in this world: some people live in a post-scarcity society while others don't know if they will eat tomorrow. Even if we are in a simulation, I would argue the average external-world experience would fall somewhere on the spectrum we already have.
* But what if whoever runs the simulation decides to shut it down? This is honestly one of the most benign rapid endings of the world I can imagine. Other events that could terminate the human species would cause far more suffering: a global pandemic wiping out everyone, all-out nuclear war, a massive meteor or other white-sky event... And just as easily as shutting down the simulation, the beings that run it could choose to prevent one of those other world-ending events from happening. So I think this one is also a wash.
> I welcome a better explanation for the phenomena.
For which ones specifically? Generally any traditional scientific or layperson's explanation for a phenomenon is much better, namely the part that explains my observation of some entity by accepting that the observed entity exists. (Note that this doesn't imply that all observed entities exist. There are many cases where we have good explanations for why one would observe an entity that does not exist.)
> The overarching point, is that I think it does not matter if we are in simulation as long as it is self-consistent.
I suppose it might be okay to say that you believe that, as long as you're actually still generating new knowledge in order to solve problems. That's roughly the same as an astronomer saying "it doesn't matter whether Mars exists, as long as the entire Universe behaves consistently as if Mars exists." That astronomer can probably still help solve problems like figuring out how to land a rover on Mars, although I suspect that odd bit of epistemological gymnastics would introduce at least a small amount of difficulty compared to the astronomer whose epistemological stance is simply that Mars indeed exists.
Perhaps that's the difference in epistemology here: in your quest for knowledge, is your goal to solve problems, or to experience some feeling like "certainty" or "justification"?
I think that's what he's trying to say here - most of us don't care if the universe is a simulation, just whether we can continue to solve problems in it (live).
I think you might struggle with the justification or certainty portion.
Otherwise, adverse feelings toward simulation tend to stem from elsewhere, namely the inability to ascertain what is real (the root of madness), an aversion to that greatest of promised simulations, heaven, or an innate desire not to be trapped in substandard eventualities - e.g. all man-made simulations thus far suck hard donkey balls in a myriad of ways.
In light of those three problems, solipsism, at least as I understand you to mean it in this context, at least gives us a framework for tackling them; otherwise, the rest of us just don't care.
I don't much care for the OP's article either; it's as much nonsense as calculating the size or speed of the universe from Genesis's 7 days of creation - that is to say, it is hypothetical rather than theoretical, and ultimately conjecture.
Are we running in a simulation that has a specific FPS target cap, or is that the "hardware limit"? Maybe the universe will discretely lower the resolution of rocks on Mars to increase FPS for localized users on Earth that it knows are testing such theories.
What if the reason we have not received alien signals is that the universe outside of the solar system runs at a much lower resolution?
If we are living in a simulation and the goal of the experiment is to study a living civilization, it may be unnecessary for the experiment to simulate other civilizations, provided travel between the two is impossible.
The simplest answer I could come up with is to try to record video of light moving at above that framerate. If there's still smooth movement across frames, we're probably real. If it looks like a crappy GIF, we're in a simulation.
Imagine a robot trying to argue that it doesn't matter whether it lives in Westworld; I mean, the robot bartender still has to tend bar. Imagine Truman trying to convince himself that it doesn't matter whether he's in a TV show or not.
Their arguments, no matter how clever, would be absurd when looking in from the outside.
If there's a high probability that the simulation hypothesis is true, it seems clinically insane to believe that it doesn't matter.
The thing is, Truman and the characters in Westworld (I think; I haven't seen it) are being manipulated to create a narrative.
If we were in a simulation where the creator(s) were hands-off, just letting it run its course, then I don't think it really matters. If, on the other hand, they're pulling strings and manipulating outcomes for whatever reason, then it seems like it matters a lot more. At least that's my intuition.
Absolutely. If we are in a simulation, then I want our species to eventually learn how to grab control of the developer console, use that to get information on the physics of the developer's world, use that to learn how to perform the equivalent of Drexlerian nanotech, wet-print ourselves out into the developer's world, and bootstrap into becoming "real", or finding out how far the turtles go from the developer's world. If it is a simulation, then I want to alter the damn rules to eliminate the scarcity we currently labor under.
> Absolutely. If we are in a simulation, then I want our species to [...] bootstrap into becoming "real"
Why? Undoubtedly the world outside our own has its own problems, all of which are possibly utterly alien to us.
> or finding out how far the turtles go from the developer's world.
To what end? Why is the top-most reality more valuable? If it is merely a matter of having more to explore, well, there's still plenty of this universe left.
> If it is a simulation, then I want to alter the damn rules to eliminate the scarcity we currently labor under.
Fair enough, but if there's a way to do that it hardly matters if we're in a simulation or not. Any exploit we pull off may as well just be some quirk of physics from our perspective.
Selfishly, I don't want our reality to be turned off.
All other motivations follow from that, as intermediate steps to figure out how to break out and become more permanent. Even if I'm a consciousness inside a Boltzmann Brain near the heat death of the reality-actual, a hundred Sol years passing for every "second" of simulation "tick", I want to help in whatever way I can on a solution to beat entropy. Not going gentle and all that...
For those interested, the "3D projection of a 2D surface" that the author mentions is more formally the AdS/CFT correspondence[0]. AdS stands for "anti-de Sitter space" and CFT for "conformal field theory". The finding is that the boundary "surface" of the former looks like an example of the latter.
Man, the author has stumbled onto such a cool bit of physics/math. Wish I could ramble on about it more, but I'm about to head to work. Hopefully, someone else can elaborate!
I recall a blog post or article posted several years ago, somewhere, that discussed the networking consequences of lightspeed being an upper bound for bandwidth. Unfortunately "lightspeed networking" is not a unique search term! Does this ring a bell with anyone and does anyone have a link to that article?
I can't recall any specific article, but I recall reading about how it could complicate interplanetary communication if we ever colonized another planet.
Tangential to my original question, but as I was trying to track down the article I remembered, I re-discovered Paul Krugman's Theory of Interstellar Trade[1], written in 1978 which discusses the economics of shipping goods at near-light speed.
> It's still fascinating to think of the universe as a computer simulation. Modern physics make it seem more like a video game than ever.
The double slit experiment is all I needed to see to become fairly convinced that the universe is a simulation. I view it as a race condition in our physics.
Funny, I never linked the two; to me the duality was mostly a hint that "atomism" is an anthropocentric projection of our brains onto matter... we need to find limits and boundaries instead of infinite fields waving around.
"And they are told: 'Wen considered the nature of time and understood that the universe is, instant by instant, recreated anew. Therefore, he understood, there is in truth no past, only a memory of the past. Blink your eyes, and the world you see next did not exist when you closed them. Therefore, he said, the only appropriate state of the mind is surprise. The only appropriate state of the heart is joy. The sky you see now, you have never seen before. The perfect moment is now. Be glad of it.'"
https://www.goodreads.com/work/quotes/46982-thief-of-time