
I don't think this follows. What you are saying is basically "All Bs must be a subset of A. Therefore dogs like the sun."



Simulations are more computationally bound than the root universe. Simulating a full being takes a lot more computation than simply simulating a mind.

Therefore, from an optimization standpoint, simulations are unlikely to simulate worlds nearly as complex as the real one. Either they are going to simulate minds, or they will simulate tiny subsets of the real world, or vastly less complex worlds. We are not operating as minds without bodies, which rules out the pure mind simulation, leaving far less efficient options which must therefore simulate smaller spaces, less complex universes, and/or shorter periods of time.

Edit: And by complexity I mean in terms of building blocks useful for simulating a universe or hosting life. Complexity that's not useful for either purpose is useless.


Sure, the physics of Grand Theft Auto are an incredibly limited approximation of real-world physics. If the denizens of GTA somehow gained consciousness and decided to build a computer out of their available physics, they would probably be able to build large-scale difference-engine-like mechanical computers, but integrated circuits would be forever out of their reach. They'd be lucky to simulate a universe much more complicated than Pong.

So, granted, there's undoubtedly a massive and permanent loss of resolution at each level of simulation depth. But this is not an argument that our own universe, with all its complexity, cannot be a simulation. We have no idea what the complexity of a "base" universe could be. Perhaps the gap between our own physics and the physics of the universe we're being simulated within is as great as the gap between GTA bytecode and quark-gluon interactions.

For that matter, we have no idea what base-universe consciousness might look like. Perhaps running a universe-simulation which (if implemented in our own universe) would require roughly a galaxy-mass worth of computronium is the sort of thing that fifth graders do for a science project.

"But that would be ridiculous" isn't an acceptable objection. Geocentricism was able to sustain itself in large part because the alternative -- heliocentricism -- seemed absurdly disconnected from the human experience, implying a sun that was ridiculously huge, planets absurdly far far, and other suns that were so far away that they didn't even appear to move. This, quite obviously, had to be wrong. Nothing is allowed to be so vast.

Now, of course, we know that those too-far-away stars are just one tiny corner of one galaxy among hundreds of billions of galaxies. Turns out that the scale of the universe doesn't have any regard for what we consider to be reasonable. So let's not put any artificial constraints on the capabilities of our parent universe.


The bounds on computing power aren't even worth considering, because we have no knowledge of the resources, let alone the nature of the reality, in either a parent universe or parent simulation. Sure, fidelity must drop in each nested simulation, but we have no idea if we are 2 levels deep, 1000 levels deep, or in the actual real universe.


That's irrelevant, as life and computation both take resources.

In a more complex universe the bacteria equivalent could be more intelligent than humans. But it does not go the other way: simulations of a life form will always be less efficient than the least complex lifeforms in a parent universe.


>But it does not go the other way: simulations of a life form will always be less efficient than the least complex lifeforms in a parent universe.

That doesn't follow logically.

It's perfectly fine to be able to simulate life forms MORE efficient than the ones in your universe.


Simulation involves a lot of overhead. Computers and life can be built from the same stuff, QED: the simplest simulation of a lifeform must be more complex than the direct equivalent of that lifeform.

That's not to say the simplest possible equivalent organism exists in that universe, but anything that could make a simulation should also be able to make that life form.


>Simulation involves a lot of overhead

True for some values of "a lot", false for others.

What's undeniably true is that it incurs some overhead over NOT running a simulation.

But that doesn't prove that a simulated life form incurs overhead larger than its real life counterpart.

For one, there might not be any real life counterpart.

We say "simulation" here, but what we actually mean is "virtual world", which might simulate an actual world, or it might be its totally own thing (the same way I can chose to write a simulation of actual things, like e.g. "the Sims", or a simulation of a domain I only imagined). If, for example, as per TFA, our universe is a simulation, is doesn't mean that it actually simulated something else. Just that it's a simulation in itself.

So, "simulated" in this discussion means "not an organically created world made of some physical substrate, but consciously created/programmed by some advanced civilization".

So, the thing simulated could be totally unlike (in properties, physical laws, etc) what exists in the universe of those doing the simulation.

Second, a simulation (as we know it and practice it ourselves) usually has much less overhead than the real-life thing it simulates (when it does simulate some real-life thing). That's, like, its whole point. E.g. a weather model running on some supercomputer has some overhead, but nothing like that of the actual weather. Similarly, The Sims has some overhead, but nothing like the equivalent real-life place and humans would have.

Where you seem to be confused is that you assume that (a) a simulation must be of something that exists, and (b) a simulation must be perfect, i.e. 1:1 with the thing it simulates. Only then would your argument make sense.

But neither of those things is necessary -- even our Earth and universe, if they are simulations, could be very crude models, running with very few resources, in a vastly more complex and powerful real universe.


(a) is not an objection because 'life form' is flexible. Any computing substrate could directly compute, say, a neural network vs. a simulation of a neural network. The simulation will always be slower, but the 'real' version running on an "FPGA" or its higher-dimensional equivalent imposes a direct mapping between what happens in the 'real' world and what is being computed. Ex: if we use an FPGA that's hit by a cosmic ray, that bit is flipped; on the other hand, if you have an array of FPGAs and compare them, that decouples the 1:1 mapping, creating a simulation, but adds overhead.
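A rough sketch of that overhead point in Python (the function names and the toy update rule are mine, purely illustrative): the direct computation maps substrate faults straight into the computed state, while the redundant-and-vote version masks them at roughly triple the cost.

    import random

    def direct_step(state, fault_prob=0.0):
        # One update computed directly on the substrate; a cosmic-ray-style
        # fault maps 1:1 onto the computed result.
        result = (state * 31 + 7) % 256          # stand-in for the real update rule
        if random.random() < fault_prob:
            result ^= 1 << random.randrange(8)   # the fault hits the result directly
        return result

    def redundant_step(state, fault_prob=0.0):
        # Same update run on three independent copies with a majority vote.
        # The fault is masked, but at ~3x the work: that extra work is the
        # overhead that turns a direct mapping into a simulation.
        copies = [direct_step(state, fault_prob) for _ in range(3)]
        return max(set(copies), key=copies.count)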

>(b) a simulation must be perfect.

No, if you can get away with a less accurate simulation, you can also get intelligence from less computational power using the same approach in the 'real' world.


Adding on to this with an analogy from simulating dynamics in cellular automata: most of the interesting models of physics we see act with high redundancy in both spatial and temporal locality.

A simple rule like Conway's Game of Life, which isn't too physically realistic but is instructive because of how intimately it's been analyzed while still exhibiting relatively high complexity, shows remarkable compressibility using techniques such as the memoization in HashLife[0].

Even more striking is the potential for superspeed caching, where different nodes are evolved at different speeds, often allowing _exponential_ speedups: patterns can be run for more generations than the speculated timeframe of our real universe's physics.

[0] https://en.wikipedia.org/wiki/Hashlife
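To make the memoization idea concrete, here's a toy sketch (nowhere near the full quadtree/superspeed HashLife algorithm; the names are mine): cache the one-step future of every 4x4 block's 2x2 centre, so a pattern that recurs anywhere on the grid is only ever computed once.

    from functools import lru_cache

    def live_neighbors(cells, x, y):
        # Count live neighbours of (x, y) within a set of live-cell coordinates.
        return sum((x + dx, y + dy) in cells
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0))

    @lru_cache(maxsize=None)
    def evolve_block(block):
        # block: a 4x4 tuple-of-tuples of 0/1. Returns its 2x2 centre one
        # generation later. Every centre cell's eight neighbours lie inside
        # the block, so the result is exact, and lru_cache means each 4x4
        # pattern is computed at most once no matter how often it recurs.
        cells = {(x, y) for y, row in enumerate(block)
                        for x, v in enumerate(row) if v}
        centre = []
        for y in (1, 2):
            row = []
            for x in (1, 2):
                n = live_neighbors(cells, x, y)
                alive = (x, y) in cells
                row.append(1 if (n == 3 or (alive and n == 2)) else 0)
            centre.append(tuple(row))
        return tuple(centre)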


No free lunch. HashLife takes more memory and only works well in low-entropy environments.

But consider: if you want to run a simulation 100 times using the same data, you can speed that up by just copying the output of the first simulation 100 times. But that's not simulating the same mind 100 times; it's simulating the mind only once. HashLife and similar approaches don't increase your ability to compute unique mind states.


I'm not sure if that's entirely true. My analogy here would be how we observe classical mechanics or even just regular objects. We can predict the path a ball will take through the air in painstaking and incredibly accurate detail (more detail than we have tools to properly measure), but that doesn't require that we simulate all of the component pieces that make up the ball. We compress that by just calculating what the whole lot of atoms will do on average and substitute that in when we have millions of them and the error is negligible. That's essentially the basis of statistical mechanics. Why could we not make a simpler simulation of a life form (than its 'direct' real implementation) with similar processes of compressing information on component pieces and making approximations where the magnitude of the error is smaller than the accuracy of the measurement?
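As a toy version of that compression (the numbers and function names are my own, not anything from the thread): track a thousand "atoms" of a thrown ball individually, or replace them all with one centre-of-mass calculation, and the two answers agree to within a few millimetres over a one-second throw while the shortcut does a tiny fraction of the work.

    import random

    G, DT = 9.81, 0.001

    def per_particle_flight(n=1000, v0=(12.0, 9.0), seconds=1.0):
        # Track every 'atom' separately (internal forces ignored, so each one
        # just falls under gravity): lots of work for essentially the same answer.
        parts = [[random.gauss(0, 1e-3), random.gauss(0, 1e-3), v0[0], v0[1]]
                 for _ in range(n)]
        for _ in range(int(seconds / DT)):
            for p in parts:
                p[3] -= G * DT
                p[0] += p[2] * DT
                p[1] += p[3] * DT
        # the ball's position is just the average of its parts
        return (sum(p[0] for p in parts) / n, sum(p[1] for p in parts) / n)

    def centre_of_mass_flight(v0=(12.0, 9.0), seconds=1.0):
        # The statistical-mechanics shortcut: one averaged body, closed-form kinematics.
        return (v0[0] * seconds, v0[1] * seconds - 0.5 * G * seconds ** 2)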

It seems to me that there is also a similar analogy to be made in the methods that we can use to compress information. Is there a functional difference between a compression algorithm that is lossless and one that would maybe corrupt one RGB value by a tiny nudge in an entire image with millions of pixels, if our only tool for examining it for corruption were our eyes? What if we then used that compression algorithm only in situations where we know the tools used to examine the results wouldn't be able to identify the losses, and used lossless compression only when such tools were available?
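A small sketch of that distinction (my own toy comparison functions; images assumed to be flat lists of (r, g, b) tuples): the "lossless" check demands bit-exact equality, while the "good enough for eyes" check waves through a single one-unit nudge that no viewer could spot.

    def exactly_equal(img_a, img_b):
        # Lossless standard: every channel of every pixel matches bit-for-bit.
        return img_a == img_b

    def perceptually_equal(img_a, img_b, tolerance=1, max_changed=1):
        # 'Invisible to the eye' standard: allow at most one pixel to drift
        # by at most `tolerance` per channel.
        changed = 0
        for a, b in zip(img_a, img_b):
            diff = max(abs(x - y) for x, y in zip(a, b))
            if diff > tolerance:
                return False
            if diff > 0:
                changed += 1
        return changed <= max_changed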

All this is to say that I believe you could feasibly create a perfect simulation of something more complex than the thing performing the simulation, given the proper optimizations, but it would require certain stipulations, like knowing when to use various optimizations to avoid contaminating the fidelity of the simulation. A simple example: simulating human history before the microscope was invented would allow us to constrain all approximations to have less error than would be visible to the senses of the most precise observer.


At that point you are not simulating physics, you are simulating minds. From a practical standpoint it's viable, but it introduces the possibility of noticing the simulation. To counter that you could add overhead to detect when something would notice the simulation, but that's not going to be cheap computationally.

Further, simulating worlds vs. keeping real people in pods to see those simulated worlds seems to favor people in pods. Especially if you alter the biology of those pod people to have real physical brains operating on some hardware and little else. Philosophically you can argue about simulations vs. "FPGA" boards running minds, but direct minds on "FPGA" boards still introduce direct impacts from the real world vs. pure simulation.


I do not believe it is accurate to say that what I am positing would only simulate a mind and not the world. It is simply to say that detail below a certain level is insignificant to the simulation of the world as a whole. The entire purpose of minimizing error isn't specifically to avoid detection; it's so that the error cannot propagate between interactions and end up simulating something that significantly deviates from the thing you're trying to simulate.

If I were to use the kinematic equations to simulate throwing a ball through the air, but my simulation only used 1 significant digit, rounding errors would quickly add up to produce a path for the ball that would significantly deviate from the path we would get by making the same calculation while treating the ball as made of quarks/atoms/molecules and painstakingly analyzing the forces on every single atom until the collection of them that is the ball reached the end of the throw. It is that deviation that we need to avoid for our simulation to retain enough fidelity to be said to be simulating throwing a ball; otherwise we're just simulating some other interaction that doesn't really match what we would observe.
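That rounding effect is easy to show; a quick sketch (my own numbers, with one-significant-figure truncation applied at every step as in the example above):

    from math import floor, log10

    G, DT = 9.81, 0.1

    def round_sig(x, sig=1):
        # Round x to `sig` significant figures (0 stays 0).
        return 0.0 if x == 0 else round(x, -int(floor(log10(abs(x)))) + (sig - 1))

    def simulate_throw(v0x, v0y, seconds, sig=None):
        # Euler-integrate a thrown ball; optionally crush every intermediate
        # value to `sig` significant figures.
        x, y, vx, vy = 0.0, 0.0, v0x, v0y
        for _ in range(int(seconds / DT)):
            vy -= G * DT
            x, y = x + vx * DT, y + vy * DT
            if sig is not None:
                x, y, vx, vy = (round_sig(v, sig) for v in (x, y, vx, vy))
        return x, y

    # full precision vs. one significant figure: the two paths quickly diverge
    print(simulate_throw(12.0, 9.0, 1.5), simulate_throw(12.0, 9.0, 1.5, sig=1))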

Your post also makes the assumption that the simulator even cares if we notice that we're in a simulation. I don't think that this premise as a whole includes any assumption as to the motivation for our simulation (if we indeed were to be in one). For all we know, it could be a simulation to determine how long it takes to develop sentient life to a point that it can observe inconsistencies in its environment and deduce that it is in a simulation.

We just don't know anything about the 'real world' in this instance, and I think guesses there venture into the realm of being impossible to verify. It can still be fun to think about, but it can't really be based on any experimental evidence unless it were deliberately placed there by a hypothetical simulator.





