
Their argument hinges on one particular claim:

> As unlikely as it is for a BB to form at all, it’s drastically more unlikely for additional things to simultaneously form around it. And even if additional things did form around the BB, it is not especially likely they will be the kind of stable and sensible objects a brain could even perceive. Therefore, the evidence we have is more likely supposing we are OOs than supposing we are BBs.

which doesn't ring true to me. Assuming the universe is indeed dominated by BBs, it's not at all clear to me that any observation we could possibly make "is more likely supposing we are OOs". While the number of BBs "with decorations" would be dwarfed by the number of BBs without, it is still entirely feasible that there are many more such decorated BBs than there are OOs.
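To make that counting argument concrete, here is a toy Bayesian sketch (all numbers invented for illustration, not taken from the article): even if "decorations" impose an enormous per-brain penalty, the posterior only cares about how decorated BBs compare to OOs.

```python
from fractions import Fraction

# Toy observer counts for a hypothetical BB-dominated universe.
# Every number here is made up purely for illustration.
n_oo = Fraction(10**10)                    # ordinary observers
n_bb_bare = Fraction(10**80)               # BBs with no coherent surroundings
decoration_penalty = Fraction(1, 10**40)   # chance a BB also gets "decorations"

n_bb_decorated = n_bb_bare * decoration_penalty  # 10^40: still >> n_oo

# Condition on our evidence (coherent, "decorated" observations):
# bare BBs are ruled out, but decorated BBs and OOs both fit the evidence.
posterior_bb = n_bb_decorated / (n_bb_decorated + n_oo)
print(float(posterior_bb))  # ~1.0: the evidence doesn't rescue the OO hypothesis
```

On these assumed magnitudes the update toward OO never happens; everything turns on whether decorated BBs actually are rarer than OOs, which is what the quoted claim asserts rather than shows.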

I also found incomprehensible their probabilistic argument for why all observations (including observations "over time", "of history", etc.) shouldn't be considered hallucinations.




It's philosophically gauche, but I often like to criticize arguments by asking "what if they were right?"...

So, for example, the ontological argument putatively establishes the existence of a Perfect Being, but it would seem to work just as well if you restricted the domain to something smaller than "all beings", and so presumably it also establishes the existence of a perfect Toaster.

Similarly here, the claim is that in a BB universe, even though vastly more brains see exactly the same stuff as you, there is something about the Bayesian update factor you all share such that you should all still conclude you are not Boltzmann Brains, and the evidence is never enough to say otherwise.

How do you look at that description and not conclude that, according to that argument, Bayesian reasoning is just strictly wrong? Everyone (more or less) is "it", everyone (more or less) says "it's not me!", everyone (more or less) is wrong, and here is our philosopher dusting their hands, saying 'Yep! Sounds good, solved the problem!'
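One way to put numbers on that complaint (mine, not the article's): tally how many reasoners following the recommended rule end up being wrong.

```python
# Toy tally: suppose n_bb Boltzmann Brains and n_oo ordinary observers all
# share your exact evidence, and all follow the rule "conclude you are an OO".
# These counts are invented purely for illustration.
n_bb = 10**40
n_oo = 10**10

wrong_fraction = n_bb / (n_bb + n_oo)
print(wrong_fraction)  # ~1.0: nearly everyone applying the rule is mistaken
```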


> How do you look at that description, and not conclude that according to that argument, Bayesian reasoning is just strictly wrong?

I believe you're conflating epistemics with decision theory. Sure, the measure of all minds experiencing your current mind-state may be dominated by Boltzmann Brains, with observations that do not correspond to any local state of the world, and which will dissipate momentarily.

But since your decisions as one of those BBs have no effect, you should make decisions based on the fraction of minds-like-you which are living in a persistent world where those decisions have effects which can, in principle, be predicted.
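A rough sketch of that separation between belief and action (my framing, with made-up utilities): the BB possibility contributes the same constant to every action's expected utility, so it never changes which action you should pick.

```python
# Toy decision-theory sketch: with probability p_bb "I" am a BB whose choices
# have no downstream effect; with probability 1 - p_bb I'm an OO in a
# persistent, predictable world. All numbers are invented for illustration.
def expected_utility(value_if_oo, value_if_bb, p_bb):
    # The BB term is identical for every action, so it cancels out of any
    # comparison between actions; only the OO branch discriminates.
    return p_bb * value_if_bb + (1 - p_bb) * value_if_oo

p_bb = 0.999999  # even an overwhelming BB measure...
plan_ahead = expected_utility(value_if_oo=100.0, value_if_bb=0.0, p_bb=p_bb)
do_nothing = expected_utility(value_if_oo=10.0, value_if_bb=0.0, p_bb=p_bb)

print(plan_ahead > do_nothing)  # True: ...doesn't change which action wins
```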



