We do know that the real world is richer than the simulated world, since it holds a computer that runs the simulated world. Therefore if you exist, then it's more likely that you're the result of evolution in the real world than the result of evolution in the simulated world.
Imagine the warehouse-size computer that would be needed to simulate a bacterium here on Earth. Computers are dusty, and dust contains bacteria, so if you're a bacterium, it's more likely that you're one of the billions of bacteria in the dust on the computer than that you're the bacterium being simulated by the computer. The same reasoning should hold for other worlds.
This is faulty reasoning. The game "The Sims" has sold over 200 million copies. If the average number of characters created per copy is over 40, then there have been more Sims characters than people in the world. Add in a few more games and there have been more game characters than people who have ever lived. And that's with computing still in its infancy, not even 80 years old yet. Give it 1,000 years and it's not even close.
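A quick back-of-the-envelope check of those figures (the population numbers are rough, commonly cited estimates on my part, not from the comment):

    # Rough check of the character count vs. population figures above.
    copies_sold = 200_000_000        # "The Sims" copies sold (from the comment)
    characters_per_copy = 40         # assumed average characters created per copy
    sim_characters = copies_sold * characters_per_copy

    people_alive_now = 8_000_000_000      # ~8 billion, approximate
    people_ever_lived = 110_000_000_000   # on the order of 100 billion, approximate

    print(sim_characters)                      # 8,000,000,000
    print(sim_characters >= people_alive_now)  # True: more Sims than people alive today
    print(sim_characters / people_ever_lived)  # ~0.07 of everyone who has ever lived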
Additionally, your fidelity argument is backwards. The fact that the simulation is simpler than the real world means we can fit many more people/entities into it, because the computer doesn't have to simulate at full fidelity.
This is true if you assume that the simulating world is what’s being simulated in the simulated world. That is, if bacteria exist only in the simulated world, then if you’re a bacterium there’s 100% probability that you’re simulated.
That's a powerful argument against the usual anthropic argument for the world being a simulation. I haven't encountered it before but it makes total sense.
It's logically necessary, not just an assumption. The simulated world with all its richness is by definition a strict subset of the simulating world. So the latter must be richer than the former.
Only if you talk about the simulated features of the simulated world, rather than compare the "simulated world as seen by its inhabitants" with the simulating world.
We don't have dragons on earth, but I can simulate dragons.
In the sense that this simulation exists in our world, you are right that the simulating world will then always be "richer", because it contains the simulation.
But if I could enter the simulated world, I could ride dragons. I can't ride dragons in "our" world, so in that sense it is clear that we can simulate things that have no concrete existence in our own world, and to me at least that makes the simulated world "richer" in that respect: it makes things possible in the simulation that require you to be in the simulation for them to be possible.
Similarly, we can clearly simulate something with more detail - e.g. we could simulate a world where our elementary particles can be subdivided endlessly, if we choose to. In the simulating world this would "just" be a simulation, but in the simulated world it would be that world's reality.
There is not even any reason why, with sufficient resources and time dilation, it would not be possible for the simulating world to simulate a world equivalent to itself, so it could well be turtles all the way down.
I believe that is not correct. You can put features that do not exist in the physical world into a simulation. For example, you can double the number of quarks in a proton, as long as you define a mathematically consistent interaction that allows it.
Could you possibly explain/reason why this must be, without using "by definition"? Many people in this thread agree with you on this, but I don't understand it (see my other comment using a video game analogy).
Is the richness you describe in your comment implicitly constrained to that which exists physically perhaps?
Unless I misunderstand what your conception of a simulation is, I don't see why a virtual world is limited by the constraints of the parent world, any more than video games are limited by the constraints of our world?
I would think this would apply to the individual molecule tracking requirement above as well.
I'll answer the energetic question. Splitting water produces oxygen and hydrogen, so with our oxygen we have the choice between burning hydrogen or burning Titan's hydrocarbons. Burning the hydrogen would bring us back to square one, so the question is whether burning hydrocarbons yields more energy than burning hydrogen. It appears not to be the case. Some numbers I found online (with a quick sanity check after the list):
burning 1 mole of O2 with hydrogen yields 572 kJ
burning 1 mole of O2 with methane yields 444 kJ
burning 1 mole of O2 with butane yields 443 kJ
burning 1 mole of O2 with octane yields 437 kJ
burning 1 mole of O2 with glucose yields 467 kJ
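The per-mole-of-O2 figures above can be roughly reproduced from standard (higher) heats of combustion and the reaction stoichiometry. The enthalpy values below are rounded textbook numbers I'm assuming, so the output is approximate, but it lands close to the list:

    # Approximate heat released per mole of O2, from heats of combustion.
    fuels = {
        # fuel: (heat of combustion, kJ per mole of fuel; moles of O2 consumed)
        "hydrogen": (286, 0.5),    # H2 + 1/2 O2 -> H2O
        "methane":  (890, 2),      # CH4 + 2 O2 -> CO2 + 2 H2O
        "butane":   (2877, 6.5),   # C4H10 + 13/2 O2 -> 4 CO2 + 5 H2O
        "octane":   (5470, 12.5),  # C8H18 + 25/2 O2 -> 8 CO2 + 9 H2O
        "glucose":  (2805, 6),     # C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
    }
    for fuel, (dH, o2) in fuels.items():
        print(f"{fuel}: ~{dH / o2:.0f} kJ per mole of O2")
    # hydrogen ~572, methane ~445, butane ~443, octane ~438, glucose ~468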
Consciousness is a computation, and neurons are certainly capable of elementary computation.
So the building block has been pointed at (neurons), and its property given (computation). Is the problem that you don't believe that consciousness can emerge from elementary computation, or you believe that it is possible but we have no proof of it?
I have no problem with agreeing that computation can emerge from neurons. For example, one can show how different neural configurations correspond to logic gates, persistent memory (this requires recurrence) and so on. This is precisely what I mean by valid emergentist models. No magic steps, just complexity.
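As a toy illustration of the logic-gate point (a McCulloch-Pitts-style threshold unit, not a claim about biological neurons):

    # A threshold "neuron": fires (1) if the weighted input sum reaches a threshold.
    def neuron(weights, threshold, inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    AND = lambda a, b: neuron([1, 1], 2, [a, b])
    OR  = lambda a, b: neuron([1, 1], 1, [a, b])
    NOT = lambda a:    neuron([-1], 0, [a])

    print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
    print([OR(a, b)  for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 1]
    print([NOT(a)    for a in (0, 1)])                               # [1, 0]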
The problem is that you start by stating that "consciousness is a computation", but I don't know if this is true, and neither do you.
> Is the problem that you don't believe that consciousness can emerge from elementary computation, or you believe that it is possible but we have no proof of it?
My problem is that your hypothesis that "consciousness is a computation" is not testable, and so it does not count as a scientific theory (according to the standard Popperian falsifiability criterion).
Unless/until we have a scientific instrument that measures consciousness, we are just assuming things. I assume that other humans are conscious (by analogy), but I don't know it to be true in a scientific sense.
So it's not a matter of what I believe or not, it's a matter of what science can investigate or not. So far, it looks like the phenomenon of consciousness is beyond its grasp.
> With consciousness, the emergentists are not capable of pointing at the first principle, or building block.
...it sounded like there were no plausible candidates. If computation is a candidate, then it's certainly something they can point at (with the caveat that it's only a candidate and not currently testable). I think if instead you had written something along those lines and avoided the words "not capable of", then hoseja and I wouldn't have reacted.
This is begging the question. Consciousness is the sheer seeming-ness of my experience. Perhaps it is reducible to computation, perhaps not, but this is precisely what is under contention.
The paper Mind the Gap: Analyzing the Performance of WebAssembly vs. Native Code reports that, on average, WebAssembly is running at 67% of native speed in Firefox and 53% of native speed in Chrome (called 50% slower and 89% slower in the paper). Whether 67% can be called "near" 100% or not is subjective.
For context, these figures are roughly in line with what Java, C#, and other "fast" managed runtimes manage on the Benchmarks Game. [0] Since WASM's MVP is designed around manual memory management (GC is proposed but not yet specified), there is a good likelihood of the results improving beyond these runtimes.
That paper does not report "WebAssembly is running at 67% of native speed in Firefox and 53% of native speed in Chrome".
That paper reports — "… applications compiled to WebAssembly run slower by an average of 50% (Firefox) to 89% (Chrome), with peak slowdowns of 2.6× (Firefox) and 3.14× (Chrome)."
When you write "53% of native speed" that's really confusing!
Sorry, I reformulated the findings for easy consumption. Why is it confusing and not an improvement? Ask 10 people what speed is 89% slower than 100 mph. See how many give the article's intended answer (53 mph).
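For anyone checking the conversion both comments are using (assuming "X% slower" means the task takes (1 + X/100) times as long as native):

    # "X% slower" (extra running time) expressed as a fraction of native speed.
    def speed_fraction(percent_slower):
        return 1 / (1 + percent_slower / 100)

    print(f"{speed_fraction(50):.0%}")            # 67% of native speed (Firefox figure)
    print(f"{speed_fraction(89):.0%}")            # 53% of native speed (Chrome figure)
    print(f"{100 * speed_fraction(89):.0f} mph")  # the "89% slower than 100 mph" example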
I'm not sure what you computed. The interesting question is to pick a recessive trait from one of the parents and ask for the chance of it being passed on. The probability that it's being passed on from the parent who has it is 1/2. The probability that it's being passed on from the other parent is (1/2)*(1/128). The combined probability is 1/512, not 1/65,536.
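The same combined probability as exact fractions, for anyone following along:

    from fractions import Fraction

    from_carrier_parent = Fraction(1, 2)                     # parent who has the trait passes it on
    from_other_parent   = Fraction(1, 2) * Fraction(1, 128)  # other parent carries and passes it on
    print(from_carrier_parent * from_other_parent)           # 1/512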
Indeed. Subway cars move forward without any horse pulling them; how is this miracle possible? What turned out to work was the steam engine, and now electromagnetism, explained by science. What didn't turn out to work was an elite team of people praying for the train to move.
The qualifier "dwarf" doesn't mean that the celestial body is too small to be considered a planet, it means that it hasn't cleared the neighborhood around its orbit. (Blame the IAU if you find this counterintuitive). This means that your conclusion is wrong: the asteroid belt actually has enough mass to be a planet.
Other than not having enough mass, what would cause a celestial body to not clear its orbit? The only other factor that occurs to me is time, is there anything else? If not, does that mean Ceres may eventually become a planet?
Faster WebAssembly isn't listed on the roadmap, so you are disappointed, right?
Wasm is currently considered "around half as fast as native"[0]. The Wasm design, the Emscripten compiler, and the browser compilers all bear part of the responsibility for this, but I suspect that the browser compilers have the largest share. Mozilla created The WebAssembly Explorer[1], which shows good-looking ".wat" but bloated "Firefox x86 Assembly" compared to "LLVM x86 Assembly". I hope they intend to use this tool to improve the Firefox x86 assembly.
(-,-): antimatter would fall down, but we could break conservation laws with a mechanism.
(+,-): antimatter would fall up, but we could break conservation laws with a mechanism using electrically charged particles.
(-,+): antimatter would fall up, but ruled out by the experiment.
So what remains is (+,+)?