Counterintuitive facts in mathematics, CS, and physics (axisofordinary.substack.com)
936 points by raviparikh on Oct 5, 2021 | 323 comments



> 16. If you let a 100g strawberry that is 99% water by mass dehydrate such that the water now accounts for 98% of the total mass then its new mass is 50g: https://en.wikipedia.org/wiki/Potato_paradox

I really like this one. It's a perfect combo of intuitive from one perspective and mind bending from another.

> 18. A one-in-billion event will happen 8 times a month: https://gwern.net/Littlewood

This one, on the other hand, I don't like. Depending on a whole bunch of subjective definitions, a one-in-billion event can happen a million times a second or practically never or whatever else you choose.


#16 is something video games taught me, particularly Path of Exile. In PoE, resistances are a flat multiplier to incoming damage. E.g. if a monster does 100 damage per attack and you have 60% resistance, then you take 40 damage.

The interesting thing is that the higher your resistances the more effective each additional percentage point of resistance is.

Let's say a monster does 100 damage per attack.

If you have 0% resistance and increase it to 5%, then your incoming damage went from 100 to 95. You take 5% less damage than before.

If you have 75% resistance and increase it to 80%, then your incoming damage went from 25 to 20. You take 20% less damage than before.

It is pretty unintuitive until you realize that you need to focus on the remainder (the damage you still take) rather than on the part you avoid.
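
A quick sketch of that arithmetic, assuming the flat-multiplier model described above (the helper is made up for illustration, not PoE's actual damage code):

    def damage_taken(hit, resistance):
        # flat multiplier: resistance removes that fraction of the incoming hit
        return hit * (1 - resistance)

    hit = 100
    for old, new in [(0.00, 0.05), (0.75, 0.80)]:
        before, after = damage_taken(hit, old), damage_taken(hit, new)
        print(before, after, 1 - after / before)
    # 100 -> 95: you take 5% less damage than before
    #  25 -> 20: you take 20% less damage than before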


A similar but not quite the same mechanic is fuel economy.

Let's say you have two vehicles, both doing 10,000 miles per year. One gets 10 mpg and the other 50.

Would you rather upgrade the 10 mpg vehicle to 13 mpg, or the 50 mpg to 100 mpg?

Not only should you pick the former - you should pick the former even if you could upgrade the 50 mpg vehicle to run on nothing.
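
A rough back-of-the-envelope check of that claim, assuming both vehicles really do drive 10,000 miles a year regardless of cost:

    miles = 10_000

    def gallons(mpg):
        return miles / mpg

    print(gallons(10) - gallons(13))   # ~230.8 gallons saved upgrading the guzzler
    print(gallons(50) - gallons(100))  #  100.0 gallons saved doubling the efficient car
    print(gallons(50) - 0)             #  200.0 gallons even if the efficient car ran on nothing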


Yes, the fuel savings in the former case are larger than the initial fuel consumption in the latter case, but is it really unintuitive in practice? What I mean is that we generally pay for fuel per volume, not per mileage. Now, I am not sure about others, but I would always base my decision on the money I'd save over a period of time, which in this case would require considering each vehicle's actual fuel consumption over that period of time.


It is the assumption that the two cars always drive the same number of miles independently of the cost that is unusual and unintuitive.


If you think of it as a family that keeps obstinately driving both cars the same amount despite massive cost differences, it's weird. But you could think of it as a mixed fleet of delivery or work trucks that are all needed regardless; the question is how you manage or prioritize upgrading them.


A similar example with league of legends:

One point of resistance gives 1% effective extra health no matter how much you already have.

100 armor gives 50% reduction (damage taken is 100/(100+100) = 50%) while 200 armor gives 66% (damage taken is 100/(100+200) = 33%).

The jump from 50% to 66% is shown in-game, and players often think the value per point of armor drops.

What actually happens is that your effective bonus health goes from +100% to +200%, and every additional point is worth the same.
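
A small sketch of the effective-health view, assuming the usual 100/(100+armor) damage multiplier (function names are just for illustration):

    def damage_multiplier(armor):
        return 100 / (100 + armor)

    def effective_health(hp, armor):
        # hp / multiplier = hp * (100 + armor) / 100
        return hp / damage_multiplier(armor)

    print(effective_health(1000, 100))   # 2000.0  (+100%)
    print(effective_health(1000, 200))   # 3000.0  (+200%)
    # each extra point of armor is worth the same 10 effective HP on a 1000 HP pool:
    print(effective_health(1000, 101) - effective_health(1000, 100))   # 10.0
    print(effective_health(1000, 201) - effective_health(1000, 200))   # 10.0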


Interestingly, the same logic applies to vaccination rates. Going from 0% to 5% vaccination has no impact on the course of the pandemic (except for those few vaccinated people, of course). Going from 75% to 80% has a much larger impact, and could stop the pandemic in its tracks (depending on R_0 and many other real-world complications, of course).

(And the reason is just the same: what matters is the remainder.)


OK, let's assume you have 75% elemental resistance. You also take 15% reduced damage and 10% less damage. You have 5% chance to avoid elemental damage and 10% to dodge it and are under the effect of Elemental Equilibrium and Gluttony of Elements. How much better is it to just kill everything before it can kill you?


You joke, but PoE's use of stuff like 'increased/decreased' and 'more/less' to distinguish between additive and multiplicative calculations is one of the smartest game design decisions I've seen.

The game has a lot of seemingly arbitrary distinctions and concepts that you just have to learn over time, but once you actually learn them, the consistency of it all makes it very easy to handle the large amounts of complexity in the game.

It's completely unbearable going back to other games that just say crap like "+30% to x" without actually distinguishing between the different ways calculations can be done, forcing you to either experiment endlessly, look up every tiny thing on a community wiki, or just wing it.

It's a nice contrast to something like WoW where every time you get a new item you just chuck the item code into some ten million LOC simulator and fuck around with limiting permutations until it doesn't take 15 minutes to run, just to find out through some totally opaque process that you have 189 more dps. And then 2 months later you find out there was a bug in the simulation and the item you deleted 1.9 months ago was actually better.


Yes, it doesn't explicitly state the rate or distribution of events. But it is a good reminder of what happens when your whatevers/second are pretty high - see the famous "One in a million is next Tuesday" [1]. "Rare" is soon if you roll the dice fast enough.

Any time your service has a high TPS, your API gets a lot of calls, a button in your app gets pressed a lot, ... this applies.

Critically, "a lot" is defined relative to your failure tolerance. It may actually be very fast or a lot, or not particularly fast but it really really needs to work.

It highlights the fallacy of equating "low probability" and "won't happen".

[1] https://docs.microsoft.com/en-us/archive/blogs/larryosterman...


> "Rare" is soon if you roll the dice fast enough.

Indeed. One fun example is LHC[1], where the probability of a proton in a single bunch hitting a proton in the bunch going the opposite direction is on the order of 10^-21, yet due to huge number of protons per bunch and large number of bunches per second, it still results in ~10^9 collisions per second.

[1]: https://www.lhc-closer.es/taking_a_closer_look_at_lhc/0.lhc_...
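
Order-of-magnitude sketch of how those numbers combine (the ~1.15e11 protons per bunch and ~30 million crossings per second are the commonly quoted LHC figures, so treat this as a rough estimate rather than the linked page's exact calculation):

    p_pair = 1e-21                 # chance a given proton hits a given opposing proton
    protons_per_bunch = 1.15e11
    crossings_per_second = 3e7     # ~2800 bunches circulating at ~11 kHz

    per_crossing = p_pair * protons_per_bunch ** 2    # ~13 collisions per bunch crossing
    print(per_crossing * crossings_per_second)        # ~4e8, same ballpark as the ~10^9 above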


> And I’ve seen some absolute doozies in my time – race conditions on MP machines where a non interlocked increment occurred (one variant of Michael Grier’s “i = i + 1” bug)

I could not find any info about that bug, anyone got a link or a source?


I assume that bug is referring to the fact that while i = i + 1 may look atomic to you as a human, in the computer it turns into

    Read i into register.
    Add one to that register.
    Write i back to the memory location.
And there's a window during that "add one to the register" where you can obviously have something jump in and write something else to that memory location.

What happens on your real processor is more complicated since this is going to relate to cache coherency between the processors, not directly writing RAM at that point, and that's a deep rabbit hole. I couldn't describe it all in detail anyhow. But I can observe it doesn't take much at all to turn that one cycle vulnerability into something with a larger target.
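
A toy illustration of the lost update, spelling out the read/modify/write steps of two "threads" by hand (no real threading here, just the interleaving that causes the bug):

    i = 0

    a = i        # thread A reads i (0)
    b = i        # thread B reads i (0), before A has written back

    a = a + 1    # A computes 1
    b = b + 1    # B computes 1

    i = a        # A writes 1
    i = b        # B writes 1, clobbering A's increment

    print(i)     # 1, even though two increments were performed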


Or everyone's newest favourite, the virus randomly mutating is "rare".


"Given the scale that Twitter is at, a one-in-a-million chance happens 500 times a day."

https://www.ted.com/talks/del_harvey_protecting_twitter_user...


16 is like the money hall problem. I understand the answer; the answer makes sense to me. And yet, when I think of it the way I think of it initially ... it still makes no sense.


The way to make sense of this is not to think about the water weight but the solid matter. By changing the proportion of water from 99% to 98%, you're also doubling the proportion of solid mass from 1% to 2%.


> you're also doubling the proportion of solid mass from 1% to 2%.

And the final step is then, that the solid mass didn't change and therefore the liquid must be halved, instead of the solid doubled.


The limit case can be helpful here, a strawberry made of 100% water can be dehydrated to practically nothing and is still made of 100% water.


What you said makes no sense. If I'm only judged by how much money I have, then even if I have $0 I'm still a billionaire?


Easiest for me if thinking only in fractions and percentages, and realizing that the dry mass is a constant.

The obvious example is 99g water, 1g dry mass. Knowing that 1g cannot change, what do we need water to be to equal 98%? 49g.
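
In code form, using the 100 g / 99% numbers from the original statement:

    total = 100.0                 # grams
    dry = total * (1 - 0.99)      # 1 g of solids, and this never changes
    new_total = dry / (1 - 0.98)  # the solids must now be 2% of the new total
    print(new_total)              # ~50.0 g, of which 49 g is water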


For me the mistake was assuming a 1:1 relation between percentage and weight, which didn't remain true after the weight loss, but I thought it did.


*Monty Hall


The strawberries remind me of the pricing of long-term bonds.

Let’s say newly issued 100yr Treasuries pay a 1% coupon today, but tomorrow the coupon will be 2%. How much does the price of the older bond change?

The answer is a near 50% loss, simply because this is required to bump the yield of the older bond to 2%.

(If we include the discounted principal payment the exact answer becomes a 41.3% loss)
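
A rough sketch of that pricing, assuming annual coupons discounted at a flat 2% (the exact figure depends on coupon frequency and day-count conventions, so this won't reproduce the 41.3% exactly):

    coupon, face, years, new_yield = 1.0, 100.0, 100, 0.02

    price = sum(coupon / (1 + new_yield) ** t for t in range(1, years + 1))
    price += face / (1 + new_yield) ** years
    print(price)               # ~56.9, i.e. roughly a 43% loss on a bond issued near 100

    # ignoring the principal gives the perpetuity approximation behind the "near 50% loss":
    print(coupon / new_yield)  # 50.0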


> > 16. If you let a 100g strawberry that is 99% water by mass dehydrate such that the water now accounts for 98% of the total mass then its new mass is 50g: https://en.wikipedia.org/wiki/Potato_paradox

>I really like this one. It's a perfect combo of intuitive from one perspective and mind bending from another.

Comes up a lot lately because of vaccine effectiveness e.g. 95% is twice as effective as 90%.


It's probably more intuitive if you say that it's "half as ineffective"


I first saw this in a discussion of power supply efficiency. A 95% efficient supply generates ~half the heat that a 90% one does.
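
The arithmetic, assuming a constant 100 W delivered to the load:

    output = 100.0  # watts of useful output

    def waste_heat(efficiency):
        return output / efficiency - output

    print(waste_heat(0.90))  # ~11.1 W of heat
    print(waste_heat(0.95))  # ~5.3 W of heat, roughly half for the same output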


Wait, how is 95% effective twice as effective as 90%?


just as going from 98% effectiveness to 99% effectiveness halves your risk (you go from 2% chance of falling ill to 1%)

it's a common concept in games in which armor follows a linear formula (each point of armor is more effective than the last when calculating effective health)


Then there's the polynomial wizard.


90% is 1 in 10, 95% is 1 in 20.


It halves your risk.


and sunscreen (SPF) calculations...


Here's my attempt to understand what's going on:

100 g strawberry total weight, where the 100% is made up of 99% water and 1% solid matter; i.e. the 100 g is made up of 99 g water and 1 g solid matter. 99/100 = 0.99

50 g strawberry total weight, where the 100% is made up of 98% water and 2% solid matter; i.e. the 50 g is made up of 49 g water and 1 g solid matter. 49/50 = 0.98


>> 18. A one-in-billion event will happen 8 times a month: https://gwern.net/Littlewood

> This one, on the other hand, I don't like. Depending on a whole bunch of subjective definitions, a one-in-billion event can happen a million times a second or practically never or whatever else you choose.

I think this is about events happening to people, the number of people alive (and assuming they all communicate "miracle" occurrences), and how many things they experience.

That is, if I understand it correctly, it's not that you can choose a random number between one and a billion and run a CPU to randomly check numbers in that range as fast as possible and get lots of results in seconds. It's that, with roughly 8 billion people all communicating events, things we consider "one in a billion" occurrences will be experienced about 8 times a month across the population, and we'll all pretty much hear about it, which may not match our expectations of how often we should see a "one in a billion" event reported.

Edit: Here's some relevant info from the paper "Methods for Studying Coincidences"[1]:

The Law of Truly Large Numbers. Succinctly put, the law of truly large numbers states: With a large enough sample, any outrageous thing is likely to happen. The point is that truly rare events, say events that occur only once in a million [as the mathematician Littlewood (1953) required for an event to be surprising] are bound to be plentiful in a population of 250 million people. If a coincidence occurs to one person in a million each day, then we expect 250 occurrences a day and close to 100,000 such occurrences a year.

Going from a year to a lifetime and from the population of the United States to that of the world (5 billion at this writing), we can be absolutely sure that we will see incredibly remarkable events. When such events occur, they are often noted and recorded. If they happen to us or someone we know, it is hard to escape that spooky feeling.

A Double Lottery Winner. To illustrate the point, we review a front-page story in the New York Times on a "1 in 17 trillion" long shot, speaking of a woman who won the New Jersey lottery twice. The 1 in 17 trillion number is the correct answer to a not-very-relevant question. If you buy one ticket for exactly two New Jersey state lotteries, this is the chance both would be winners. (The woman actually purchased multiple tickets repeatedly.)

We have already explored one facet of this problem in discussing the birthday problem. The important question is What is the chance that some person, out of all of the millions and millions of people who buy lottery tickets in the United States, hits a lottery twice in a lifetime? We must remember that many people buy multiple tickets on each of many lotteries.

Stephen Samuels and George McCabe of the Department of Statistics at Purdue University arrived at some relevant calculations. They called the event "practically a sure thing," calculating that it is better than even odds to have a double winner in seven years someplace in the United States. It is better than 1 in 30 that there is a double winner in a four-month period - the time between winnings of the New Jersey woman.

1: https://www.gwern.net/docs/statistics/bias/1989-diaconis.pdf


I agree with the parent that the cited "fact" is sort of questionable (along with some other things on the site even though I really enjoy it overall) because of ambiguity in definitions, assumptions, and so forth.

However, the law of truly large numbers, as you frame it, is something you experience firsthand working in high-severity hospital settings in large metro areas. There's a large enough hospital catchment area that you start to see, on a fairly regular basis, the medical outcomes of all the bizarre and unbelievable things that happen rarely to any given person. It gets to the point that it's difficult to know how to explain, because the details of each case would be potentially identifying given how strange they are. And yet something happens all the time. Maybe not that one thing, but something of similar impact. It gives you a distorted sense of risk.


>https://en.wikipedia.org/wiki/Potato_paradox

The Wikipedia link above says:

> Fred brings home 100 kg of potatoes, which (being purely mathematical potatoes) consist of 99% water. He then leaves them outside overnight so that they consist of 98% water. What is their new weight? The surprising answer is 50 kg.

It annoys me when mass is used interchangeably with force (weight), so I went to the Wikipedia source, and the source is accurate in using units of force throughout.

https://web.archive.org/web/20140202214723/http://www.davidd...

Wonder why the person that wrote the Wikipedia article changed it up when it is supposed to be a direct quote.


You're being overly pedantic (and I would argue actually incorrect). Kilograms and pounds are both referred to as "weight" in general conversation and nobody is going to be confused by this.

Go to any supermarket in a country that uses the metric system, potatoes will be sold by the kilogram - it's the natural way to phrase this outside of America.

In a physics context the definition of kilogram might be specifically mass, with newtons referring to weight/force. But words can have different meanings outside of technical contexts.

If you go to a metric country, and ask someone how much they "weigh", approximately zero people will say "x newtons", they will say "x kilograms" (or "x pounds" still in a lot of commonwealth countries if we're being pedantic).


Although the more I think about this the more I think the difference between technical and colloquial is actually that "weight" in colloquial use refers to mass, because force is not commonly relevant.


"Weight" historically referred to mass, in common speech dating back forever. It’s the Germanic word which has been used throughout the history of English, whereas "mass" comes from Latin via French, like 5–6 centuries ago. The two words are almost exact synonyms, in historical/colloquial use.

Both historically and today, a "pound" (Roman libra) is a unit of mass. People use a pound-force as a unit of force only in somewhat specialized contexts.

At some point in the relatively recent past, someone (not sure who) decided that we needed to have 2 separate words for mass vs. force, and we should keep the Latin word for mass and use the Germanic word to mean force.

Now pedantic people are constantly insisting that using the standard English word weight to mean mass is "wrong".


Actually in the past "weight" or the Latin "pondus" (=> pound) always referred correctly to what is now named "mass".

When someone mentioned "weight" just in a qualitative way, as a burden, they might have thought of the force that presses someone down, but whenever they referred to weight in a quantitative way, they referred to the weight as measured with a weighing scale, which gives the ratio between the mass of the weighed object and the mass of a standard weight, independently of the local acceleration of gravity.

Methods that measure the force of gravity and then compute the mass from the measured force, i.e. with the force measured either mechanically with springs or electrically, have appeared only very recently.

The distinction between force of weight and mass became important only since Newton, who used "quantity of matter" for what was renamed later to the more convenient shorter word "mass".

Perhaps it would have been better to retain the traditional words like weight (and its correspondents in all other languages) with the meaning of "mass", because this meaning has been used for more than 5 millennia, and to use a new word, e.g. gravitational force, for the force of weight, because we need to speak about this force much more seldom than about the mass of something.


> Actually in the past "weight" [...] always referred correctly to what is now named "mass".

That’s the same thing I just said. Why add “actually” in front? Yes, weight was historically measured with balance scales.

I guess I should have been clearer that the term “mass” as used in physics only dates from 3 centuries ago (from Newton), and did not historically mean weight in Latin. (Mass comes from Latin via French for lump of dough.)


You are right, I misunderstood what you said, because it indeed looked as if "mass" had been some traditional word connected, in some language, to what are now called "weight" and "mass", rather than a recent post-Newton word choice for naming one of the two quantities while keeping the old name for the other.

I still think that the choice of which of the two should get a new name was bad, because the traditional quantitative meaning almost always referred to what is now called "mass" (with extremely few exceptions, such as when somebody would be described as so strong as to be able to lift a certain weight).


Yes, in colloquial use weight refers to mass _most_ of the time. But it can also refer to inertia or to force. Or be used metaphorically.


In this situation, the mass and weight are proportional and irrelevant to the problem. Other than proper respect of units, why would it really matter? I would agree that using mass+kg would remain correct and be less unusual, but it doesn't matter a lot.


It does not matter, it is just a pet peeve of mine. Might be due to psychological trauma from when I was a kid and arguing with an older cousin about how pounds and kilograms are not units of the same thing, and the older cousin "winning" the argument in the eyes of the elders because the cousin was quite a few years older than me and considered to be smart in school.


From what I remember from intro physics, we distinguished between pounds and pounds force, the latter having the 32ft/s^2 multiplied in.

And wikipedia seems to agree with me, see pound (mass) vs pound (force).


Oh wow, learning a lot today. I was taught in the US that pounds are a measure of force, and that is how it was always used in physics problems.


As a European I learned in metric. When I first learned pounds, it was as the imperial system's equivalent of grams and a conversion factor was given. Force in physics class was taught in Newtons (kg*m/s^2).


Pounds as a mass unit are perverse enough. Things like pound force and psi (pounds per square inch) were used only to make fun of old mechanics papers and textbooks. Also, btu. It is quite amazing actually that someone would see the SI and think “no, too simple; I’ll keep my pounds, ounces, inches, and feet”.

Anyway, yes, the proper unit of force is the Newton.


In engineering school (in the US), we used pounds mass (lbm) as the unit of mass, and pounds force (lbf) as the unit of force.


I think there is some weird dual usage that makes them either mass or weight depending on the context. For example, torque is in ft•lbs or N•m so the pounds there are lbs force.

Though checking wikipedia again, it actually specifies that torque is measured in as lbf•ft. I take that to mean that 1 ft•lb is the torque of 1/2 oz (1/32 of an lb) at 1 foot. I expect that's a test question almost everyone would get wrong, myself included.

https://en.m.wikipedia.org/wiki/Pound-foot_(torque)


Think like an ordinary person. You know, an ordinary person who would say that they weigh about 80 kilograms. Only science nerds would say that they mass about 80 kilograms, or that they weigh about 785 Newtons. Similarly, anyone who's used to living with US customary (or Imperial) units understands that a pound of force is what a pound of mass weighs and would see no reason, under any circumstances, why anyone would want to divide the gravitational acceleration out of a pound-foot to arrive at a "real" torque value. When the pound value is expressing a weight-equivalent force, that force is the force of a pound under normal gravitational acceleration at or near the surface of the Earth.


To me (Netherlands) a pound is simply 0.5kg. Force is expressed in Newton.


When I buy potatoes I am interested in buying 10 kg (mass) of potatoes, not the amount of potatoes on which gravity exerts a force of 98.1 N. On Mars you need to eat 10 kg of potatoes a week, not the amount experiencing 98.1 N of gravitational force.

The scales in the supermarket on Earth automatically convert the weight into mass for my convenience by applying a constant factor of 1/9.81.

I have hardly been far enough away from Earth for that constant to change, so I have not needed to distinguish between the two measures so far. When carrying the potatoes home I just use the mass of 10 kg as a proxy for the force I need.

And to determine the increased braking distance of my car, I need to know the mass again.


> Wonder why the person that wrote the Wikipedia article changed it up when it is supposed to be a direct quote.

They likely changed it from lb to kg because that would be more friendly to an international audience, without realizing that lb is a measure of weight and kg is a measure of mass. Therefore they didn't know to change "weight" to "mass".


I have been to the US quite regularly, and been living in the UK for a number of years, and I have never seen someone using pounds as a unit of force instead of a unit of mass meaning roughly 500g, give or take. Second meaning was about 1.20€.

FWIW, the pound is a proper unit of mass: https://en.m.wikipedia.org/wiki/Pound_(mass) .


I'm a civil engineer. In statute terms pound is the unit of force and slug is the unit of mass. That might have colored my thinking.


Oh yes, that would make sense.


The difference between weight and mass is domain specific to physics.

I actually get annoyed at people who are pedantic about these things. Precision is important in some conversations, but just elitist in others.

Anyway, the term “weight” to refer to mass predates its use as a force - it's only since Newton that we distinguish the two, after all.


I really don’t like that example because it makes no sense. In no logical circumstance could the potatoes dehydrate so quickly when left out over a single night.


And they also don't consist of 99% water. That is why they called them "purely mathematical potatoes" and they could've chosen any type of fruit or vegetable. Heck, I'm just waiting for a car analogy now!

Brake fluid anyone?


But that 99% water simplification is needed for the purpose of the exercise.

The "left out overnight" part is basically an absurd statement that is intended to confuse and is not really related to the actual question.

The statement could be “left in a dry environment until” or simply “left to dry a few weeks”


You definitely could write that but it wouldn't change anything and you could make the same "argument" you are making now. "The left to dry a few weeks is basically an absurd statement that is intended to confuse" and it still wouldn't be true. It's not intended to confuse at all. It's intended to get the point across that you let this imaginary thing dry from 99% to 98%. They could've said "sponge" and let it out to dry any number of minutes. The point isn't to make a 100% accurate example of the drying properties of any actual 'thing'. They just needed something that people intuitively know "has water content" and that "can dry".


Wow by the downvotes I learned people react extremely negatively to any critique of math word problems. Spherical cows and 99% water potatoes are fine, those over simplifications are required for the analysis.

Saying “leaving the potatoes out overnight” to imply they halved in mass sounds as reasonable as “the potatoes were in the ground for 12 hours and then doubled in amount, what percent of water are they now?” It’s so gratuitous and requires ignoring all other laws of physics, while the goal of the spherical-cow type of simplification is to only ignore a few key challenging ones.


You probably won’t be thrilled by the spherical cows in the nearby pasture then.


Do spherical cows dream of mathematical potatoes?


Spherical cows, naturally, dream of spherical potatoes in a vacuum whilst grazing next to spherical chickens.


Honest question to know how others think.

It doesn't really matter, does it? The rate of evaporation is irrelevant to the problem. Mr. Potato could have waited a year, or dried them on the Uyuni salt plains.

Why do you care? Would this distraction affect your ability to solve the problem?


The list is missing one of the most astonishing discoveries of all time: if you reflect the universe in a mirror, you can tell whether you are in our universe or in the mirror because the laws of physics are different in the mirror. See https://en.wikipedia.org/wiki/Wu_experiment


It is a great discovery, I agree; it might not quite fit in the list though, because it's not so counterintuitive for someone not into particle physics.

There are actually many examples of things working different for the mirror image, e.g. many medical/chemical compounds work differently based on chirality.

The topic of chirality is fascinating, one of the big puzzles in nature is for example why it so strongly prefers right-handedness.


> There are actually many examples of things working different for the mirror image, e.g. many medical/chemical compounds work differently based on chirality.

It "works different" because you consider only change in one part of system. You change compound, but rest of system stays the same. If you mirror entire system, it works almost the same way (differences due to parity violation by weak force are millions time smaller than required to even detect differences).


Maybe nature prefers right-handedness just because it started in the Northern hemisphere.


Really like that one, especially because it's just so ... random. All the other forces just don't work that way, but the Weak force does, wtf.


How can someone rigorously prove that? E.g. perhaps there is something inside quarks that has chirality which we haven't discovered yet.


One can't. It's science. One can't rigorously prove anything via induction.


Surprising fact about the sun: it actually produces less heat per unit volume than a compost pile - this makes sense when you consider that fusion events are very rare.

The reason the sun is so hot is that it has an enormous ratio of volume to surface area. Heat emission due to radiative cooling is proportional to the surface area, but heat creation due to fusion is proportional to the volume.
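
Rough numbers, using the textbook solar luminosity and radius (the compost figure is only a commonly quoted order of magnitude, not a measurement):

    import math

    luminosity = 3.8e26                       # watts, total solar output
    radius = 7.0e8                            # metres
    volume = 4 / 3 * math.pi * radius ** 3    # ~1.4e27 m^3

    print(luminosity / volume)                # ~0.3 W/m^3 averaged over the whole Sun
    # an actively composting pile is usually quoted at a few W/m^3 or more,
    # so per unit volume the (average) Sun really is the weaker heater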


The density of the sun is very low, like much lower than the atmosphere (throughout most of it, anyway). But yes, it has an enormous volume.


The density of the sun is 1.4 g/cm^3, which is 1.4 times the density of water.

The atmosphere is 0.0012 g/cm^3 or three orders of magnitude less dense than the sun.


The density of the Sun isn't uniform of course, but according to this[0] StackExchange answer, the density of matter in the Sun already reaches that of air at about 95% of its radius.

https://astronomy.stackexchange.com/questions/32729/at-what-...


Obviously by "density" we mean weight divided by volume. Here is a collection of four textbooks that all agree on the 1.4 figure.

https://hypertextbook.com/facts/1999/MayKo.shtml


Given the vastness of the sun, its age, and the existence of extremophiles, I would be surprised if there is no life there.


Life requires a stable environment in which persistent structures can be maintained over long periods of time. High temperatures are inimical to that. This is why we live in a part of the universe where temperatures rarely rise much above a few hundred degrees Kelvin. Above that most complex chemical structures break down.


Even more fundamentally, a life form needs to be able to pump entropy out more quickly than it comes in, no matter what its substrate is. With all that heat and light and magnetic fields running around and pressing in on any conceivable life form living in the sun, there's no way it could possibly pump it out fast enough.

With that analysis you don't have to get into the weeds of what exactly plasma and magnetic fields might theoretically be able to cohere into and whether it may be able to be life someday... it doesn't matter. There's no way sun life can pump out the entropy fast enough no matter what.

(On the flip side, one can imagine some form of nebula, gas-cloud life, but they would have to be so slow that there's no chance any of it could evolve into anything terribly complicated in the life time of the universe. If we ever did find some it would double as proof that there must have been some other life form that created it.)


Perhaps a Boltzmann-brain every now and then.


This is true only on its surface. Most fusion energy production happens in the Sun's core, and that stuff is both more dense and produces much more energy per unit volume than a compost pile.


Conclusion: to solve all our energy problems, we just need a sufficiently large compost pile.


I mean, yeah. But at some point the pile will become large enough to collapse in on itself from gravity and start fusion, I would imagine.

So, according to Cornell[don't know how to do a citation], "1,000 BTU per hour per ton" is a good heat capture rate from manure compost. Then, according to something called 'alternative energy geek' "the earth receives 82 million quads of Btu energy from the sun each year. A "quad" is one quadrillion British Thermal Units (BTUs) of energy."

So, to recreate that, we would need. Um. One massive shitload of compost.

Citations: https://smallfarms.cornell.edu/2012/10/compost-power/ http://www.alternative-energy-geek.com/solar-energy-per-squa...
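
Running the numbers from those two sources (both treated as very rough figures):

    btu_per_ton_per_year = 1_000 * 24 * 365        # ~8.76 million BTU per ton per year
    solar_btu_per_year = 82e6 * 1e15               # 82 million quads

    print(solar_btu_per_year / btu_per_ton_per_year)   # ~9.4e15 tons of compost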


I learned about this many years ago, and even now, I am occasionally amused by this thought.


It's so humbling to think about how huge the Sun is, and then, how small it is compared to the really big giants in the universe.


Here is one of my favorites. Take a light source and shine it at a filter that is polarized up/down that is in front of a filter that is polarized left/right. None of the light will get through: the first filter removes all of the L/R components of the light, and the second filter removes all the remaining light.

Now add a third filter, between the two, which is polarized at 45 degrees. Now some of the light goes through!

If this doesn't surprise you, imagine there was a man firing a machine gun at a pair of walls. The two walls are thick enough that they absorb all the bullets. But when you add a third wall in between them, some of the bullets go through.
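
For the light (unlike the bullets), the numbers come straight from Malus's law: an ideal polariser passes cos^2(theta) of the intensity, where theta is the angle between the light's polarisation and the filter axis. A quick sketch:

    import math

    def through(intensity, angle_deg):
        # Malus's law for an ideal polariser
        return intensity * math.cos(math.radians(angle_deg)) ** 2

    after_first = 1.0 / 2                          # first filter passes half of unpolarised light

    print(through(after_first, 90))                # 0.0   -- crossed filters block everything
    print(through(through(after_first, 45), 45))   # 0.125 -- add the 45-degree filter and light returns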


It makes more sense if you think of the polariser sheets as fences with vertical bars, and light like a string going through between two bars and being wiggled.

If the direction of the waves in the string are at right-angles to the slit, the wave can't propagate through the bars.

If the wave is at an angle, then some of the wave gets through, with a maximum of 100% when the wave wiggles up and down in the same direction as the slit.

In this model you can visualise how three 45-degree slits will allow some light to go through even though two 90 degree slits don't -- the light is transformed by its passage through the polarisers.

Where QM people get themselves confused is that they assume that the light is not transformed, even though it clearly is...

I.e.: If the polarisers didn't rotate the angle of the light waves, then successive aligned polarisers would result in very close to 0 light making it through, but this is not what happens. (Excluding losses due to non-polariser-related effects)

All of this goes back to a fundamental confusion between two possibilities:

1) Is it the EM field in free space that is quantised?

OR

2) Is it just the interactions of the EM field with matter that are quantised?

If you actually run the experiments to check which one, it turns out that (2) is true, but most QM people think (1) is true because most of the time they're indistinguishable.

See this video for such an experiment: https://www.youtube.com/watch?v=SDtAh9IwG-I


IIRC, you can grok this with linear transforms:

given X = [[1,0],[0,0]] and Y = [[0,0],[0,1]], aXY = 0 for all a (because XY == [[0,0],[0,0]]).

But, given V = [[1,1],[1,1]] (or, in fact, a lot of other matrices/linear transforms), XVY = [[0,1],[0,0]], which blocks all vertical components and rotates the horizontal component 90 degrees. So imagine light as a wave with a horizontal and vertical component, and the polarizing filter applying a linear transform to the light.
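
Spelling that out with numpy (same matrices as above; a real 45-degree polariser's Jones matrix would be half of V, but the point about the middle transform un-blocking the light is the same):

    import numpy as np

    X = np.array([[1, 0], [0, 0]])   # keep only the horizontal component
    Y = np.array([[0, 0], [0, 1]])   # keep only the vertical component
    V = np.array([[1, 1], [1, 1]])   # the middle transform from above

    print(X @ Y)       # [[0 0], [0 0]] -- the two crossed filters compose to zero
    print(X @ V @ Y)   # [[0 1], [0 0]] -- with V in between, something gets through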


Can you explain the double slit experiment with this approach?


Not really, the waves in the double slit experiment travel through vacuum (or air), not through a solid like with polariser sheets.


But they interact with the matter at the boundary of the slit.


You can do that 'continuously' by inserting many layers of slightly rotated polarizers to rotate the polarization plane by an arbitrary angle.

An applied example of this are liquid crystal screens. Molecules take the role of the polarizers there. The rotation angle depends on an applied electric field and when you sandwich a layer of crystals between polarizers, you almost have a screen:

https://www.britannica.com/technology/twisted-nematic-cell


For me this is as surprising as the double slit experiment, but much easier to reproduce.


My hunch is that light does get thru the first two filters, but it gets "squeezed" so it's hard to see it. The third filter "unpacks" the light.


Not surprisingly, this is related to the observer effect of Quantum Mechanics.


Great list!

> 33. "...if you flip fair coins to generate n-dimensional vectors (heads => 1, tails => -1) then the probability they're linearly independent is at least 1-(1/2 + o(n))^n. I.e., they're very very likely independent!

Counterintuitive facts about high dimensional geometry could get their own list. A side-1 cube in n dimensions has volume 1 of course, but a diameter-1 sphere inside it has volume approaching zero! The sphere is tangent to every one of the 2n faces, yet takes up almost none of the space inside the cube.

Note that the distance from the middle of any face of the cube to the opposite face is 1, yet the length of a diameter of the cube (corner to opposite corner) is sqrt(n).
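
You can watch the sphere's share of the cube vanish numerically; the volume of an n-ball of radius r is pi^(n/2) * r^n / Gamma(n/2 + 1):

    import math

    def ball_volume(n, r=0.5):
        # n-ball of diameter 1, inscribed in the unit n-cube (which has volume 1)
        return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

    for n in (2, 3, 10, 20, 50):
        print(n, ball_volume(n))
    # 2 0.785..., 3 0.524..., 10 ~0.0025, 20 ~2.5e-08, 50 ~1.5e-28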


Only sort of true. It doesn't make sense to compare n dimensional volume to n+1 dimensional volumes, so the limit of the volume of an n-sphere isn't meaningful. The limit that does make sense is the ratio of volumes of n-sphere to an n-cube. That that goes to zero is maybe not so surprising.

In particular, it's equally valid and frankly nicer to define the unit n-sphere to be volume 1 rather than the unit cube. Do that and we see that this statement is just saying that the n-cube grows in volume to infinity, which makes sense given the fact you point out that it contains points increasingly far from the origin.

I have a hobby of turning surprising facts about the n-sphere into less surprising facts about the n-cube. So far I haven't met one that can't be 'fixed' by this strategy.


> The limit that does make sense is the ratio of volumes of n-sphere to an n-cube. That that goes to zero is maybe not so surprising.

This is why I start by recalling that the volume of the n-cube is always one, as the frame of reference. But I think people still find it surprising. Hard to tell, because...

> I have a hobby of turning surprising facts about the n-sphere into less surprising facts about the n-cube. So far I haven't met one that can't be 'fixed' by this strategy.

Hard to tell, because I don't find any of these facts surprising anymore -- would guess you're in the same boat!

Another good one is how you can fit exp(n) "almost-orthogonal vectors" on the n-sphere.


as an aside: what the hell is the "o(n)" here? Some term that is proportional to n?


>>> 18. A one-in-billion event will happen 8 times a month: https://gwern.net/Littlewood

This is certainly counterintuitive, given that one-in-a-billion and 8 per month have different units of measure.


The implicit information is that there is one-in-billion chance that this event happens to someone in a given month.


I didn’t see this one listed, and thought it was pretty cool when I studied it in a course a few years ago:

https://en.m.wikipedia.org/wiki/Skolem's_paradox

“ Skolem's paradox is that every countable axiomatisation of set theory in first-order logic, if it is consistent, has a model that is countable. This appears contradictory because it is possible to prove, from those same axioms, a sentence that intuitively says (or that precisely says in the standard model of the theory) that there exist sets that are not countable.”

“ Skolem went on to explain why there was no contradiction. In the context of a specific model of set theory, the term "set" does not refer to an arbitrary set, but only to a set that is actually included in the model. The definition of countability requires that a certain one-to-one correspondence, which is itself a set, must exist. Thus it is possible to recognise that a particular set u is countable, but not countable in a particular model of set theory, because there is no set in the model that gives a one-to-one correspondence between u and the natural numbers in that model.”


I like to think of this as a game, with one player choosing the axioms and the other choosing a model. If the first player picks a (countable) set of axioms, the second player can always respond with a countable model. Likewise, if the second player picks a countable model, the first player can always extend the axioms in a consistent way, to rule out that model. This can alternate back-and-forth forever.

Uncountability is hence a 'leaky abstraction': something we want to investigate and study in general terms, even though particular occurrences might have some loophole/edge-case.

I think about infinity and infinitesimals in a similar way, like iterative processes (e.g. the natural numbers arise from a process that increments; calculus arises from iteratively shrinking 'dx', e.g. by halving; etc.). Combining/interleaving such processes is tricky, so it's often more convenient to take their limits individually and manipulate those as objects; that's justified if those manipulations could potentially be implemented by some interleaving, but can otherwise result in paradoxes (e.g. Thomson's lamp)


Just to make this more concrete:

1. There is a countable model of real numbers.

2. There even is a countable model of the entire set theory.


> There is a countable model of real numbers.

What exactly happens when you try to apply Cantor's diagonal argument to this model?

I guess that at some step, you get an answer like "outside of the model, yes this exists, but inside the model the answer is no", but I would like to see it precisely, how exactly the in-model reasoning diverges from the outside-model reasoning.


Inside the model, “the reals are uncountable” means you have two sets R and N, and there is no surjective function from N onto R. That function would be a set as well; a certain subset F of NxR, say. But even if we can externally enumerate R, there is no reason to expect that our external enumeration corresponds to a set F that exists in the model.


Consider that there are "real numbers" as in some kind of construct that we as humans are interested in understanding, and then there's ZFC-real numbers, which is an attempt to formalize "real numbers" rigorously. What we know is that we can never rigorously take "real numbers" and uniquely formalize them and so any formal investigation of "real numbers" will be prone to multiple interpretations.

Given this, consider all models M that contain some set R(M) that satisfies ZFC's definition of real numbers and where R(M) is actually countable. Furthermore M also contains some set N(M) that satisfies ZFC's definition of natural numbers.

Within this model M, since N(M) satisfies ZFC's definition of the naturals, it is ZFC-countable (that is it satisfies ZFC's definition of a countable set). Furthermore applying Cantor's diagonal argument to M, one can show that M does not contain a set that represents a surjection from N(M) to R(M), hence R(M) is ZFC-uncountable (it satisfies ZFC's definition of being uncountable).

That said, all this means is that ZFC-countable, and ZFC-uncountable do not fully capture what it actually means to be countable or uncountable. ZFC-countable means a set has the same cardinality as whatever set satisfies ZFC's definition of natural numbers, which is not the same as what we as humans consider to be actual natural numbers.

Similarly being ZFC-uncountable just means a set has a greater cardinality than the set that satisfies ZFC's definition of natural numbers, but that does not mean that such a set is actually uncountable.

There is no way to extend ZFC so that what we consider to be actually countable or uncountable has one single unique interpretation. If there were then we could claim that said unique interpretation captured precisely our notion of countable and uncountable.

What we can do is jump up a level to second order logic, and in that logic it actually is possible to have one unique interpretation of countable and uncountable sets so that there is a unique and countable set of naturals and a unique and uncountable set of reals, but second order logic comes with its own set of ambiguities and issues that for the most part mathematicians reject outright.


Here is one of my favorites: The specific heat of a star is negative. (This also applies to a galaxy or any other gravitationally bound object.)

As a star loses energy, it heats up. If you inject it with energy, it cools down. It's a trivial corollary from the virial theorem, but it leads to counterintuitive behavior (like the gravothermal catastrophe).
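
The derivation is short enough to sketch (the standard virial-theorem argument for a bound, self-gravitating system in equilibrium):

    2K + U = 0 \quad\text{(virial theorem: kinetic vs. gravitational potential energy)}
    E = K + U = -K \quad\text{(total energy is minus the kinetic energy)}
    T \propto K = -E \quad\Rightarrow\quad \frac{dT}{dE} < 0 \quad\text{(lose energy, heat up)}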


So what happens to a star inside a "perfect" Dyson sphere?


The same thing that happens to a star not in a Dyson sphere. It slowly radiates energy away and in the process contracts and heats up. In fact when the Earth was initially forming the Sun was only about 70% as bright as it is today.

(How liquid water could form under those conditions is still something of a mystery: https://en.wikipedia.org/wiki/Faint_young_Sun_paradox)


I really wish we would have a class K or M star as they would have a far higher life expectancy. The theorized instability in form of solar flares would probably be an acceptable compromise.

Perhaps we could just use a straw to give our sun a liposuction.


29 is not correct as stated and falls prey to logical errors. Hamkins presents a formal take on it here: https://mathoverflow.net/questions/44102/is-the-analysis-as-...

His conclusion (which I agree with) is

> The claims made in both in your question and the Wikipedia page on the existence of non-definable numbers and objects, are simply unwarranted. For all you know, our set-theoretic universe is pointwise definable, and every object is uniquely specified by a property.

Despite arguments about countability, which ignore how difficult it is to pin down "what is definable," it is possible (although not necessary) for all real numbers to be describable/definable in ZFC.


It’s also terribly phrased.

> The vast majority of real numbers can't be described. But it is impossible to give a single example.

If we accept the first sentence and assume the second refers to indescribable numbers, then isn’t it obvious that we have no examples of things we cannot describe?

If the second sentence refers to real numbers in general I can give one example or two.


How could you give an example? The example you give would be a description of a number.


He just means the phrasing is kind of poor. A general description of an undefinable number could be something like: given a sequence of Turing machines, T_1, T_2, ..., T_n, where T_1 is the smallest representation of a Turing machine over a grammar G, and T_2 is the second smallest representation of a Turing machine over G, and T_3 is the third smallest, etc., take the limit of the ratio of said Turing machines that halt to Turing machines that don't halt as n goes to infinity.

I mean, that's a description of some number; you could even write it out mathematically or write an algorithm to express that number. Of course, neither the algorithm nor the formula will ever converge, and yet it will also always be bounded between 0 and 1 (hence it doesn't diverge to infinity).

So is that a description of a number? Well sure in one sense I just described it, there is only one single real number that can satisfy that description, and as I said I could in principle write it out formally and rigorously... and yet in another sense it also doesn't describe anything since no matter how hard you try, there will always be at least two real numbers that could potentially satisfy the definition and no way to eliminate one of them.


It’s only the vast majority that can’t be described.

So either it is claimed that it is counterintuitive that you can’t give an example of something you can’t describe. That is not counterintuitive - that is basically the definition of indescribable.

The other way the sentence can be read is that you can’t give an example of a real number. Of course you can. It’s only the vast majority of real numbers that can’t be described. There’s still infinitely many we can describe. 1 is a real number.


This is very interesting but I think it relies heavily on interpretation.

For example, there exist models of ZFC where all "real numbers" are definable, but such a model does not include all the actual real numbers; it excludes any number that is not definable in ZFC. The issue is that the term "real number" is overloaded. In the formal sense it may refer only to numbers that are members of a model in which undefinable numbers are excluded. In another sense the term "real number" refers to actual real numbers as we humans intend for them to exist, but which do not have a precise formal definition.

This actual set of real numbers does indeed contain members that are not definable in ZFC or any formal system, the issue is that there is no way to formalize this actual definition.

This is similar to what another poster mentioned about Skolem's paradox:

https://news.ycombinator.com/item?id=28767108


> In another sense the term "real number" refers to actual real numbers as we humans intend for them to exist but do not have a precise formal definition.

Ah a Platonist in the flesh. Don't see many of you on HN. I don't think real numbers truly, objectively exist and think of them more as artifacts of human thought, but that's a deep deep rabbit hole.

I'm kind of curious then, what do you believe the cardinality of the "real" real numbers is?


I think I'm with you on that. Real numbers don't exist in an objective sense, I mean they exist in the same sense that an Escher painting of a hand drawing a hand exists, but they don't exist in the sense that a hand drawing a hand actually exists.

When I was in high school I remember thinking that computers use the discrete to approximate the continuous and that it is the continuum that is real and the discrete that is an imperfect representation of the continuous. Then a high school teacher blew my mind when he told me to consider the opposite, that in fact it's the continuous that is used to approximate the discrete. The discrete is what's real and we humans invented the continuous to approximate the discrete.

That simple twist in thinking had a profound effect on me that influences me to this day 30 years later.

If anything I may have some extreme opinions that frankly no one takes seriously and I'm okay with that. For example I think the finitists had it right and infinity does not exist. There really is such a thing as a largest finite number, a number so large that it's impossible even in principle to add 1 to it. I can't fathom how large that number is, but there's physical justification to believe in it based on something like the Bekenstein bound:

https://en.wikipedia.org/wiki/Bekenstein_bound

At any rate, I like thinking about this stuff, I do appreciate it, but I don't take it literally. It's poetic, it can inspire new ways of thinking, but I also remind myself to compartmentalize it to some degree and not take these ideas too literally.


If you're sympathetic to the finitist cause, the idea that all mathematical objects are in fact definable is right up that alley. It's nice that this happens to line up acceptance of infinity, but finitism is basically entirely predicated on definability.


Number 19 is a clinker. Banach-Tarski applies only to objects in real-number space, but there are no such objects. For that to work, objects have to be infinitely divisible, but all of our objects are made out of atoms.

Real-numbered space is a good enough approximation to our experience that we hardly ever encounter a model failure like this one.


Feynman talks about exactly this in his book "Surely you're joking, Mr. Feynman":

"Then I got an idea. I challenged them: "I bet there isn't a single theorem that you can tell me - what the assumptions are and what the theorem is in terms I can understand - where I can't tell you right away whether it's true or false."

It often went like this: They would explain to me, "You've got an orange, OK? Now you cut the orange into a finite number of pieces, put it back together, and it's as big as the sun. True or false?"

"No holes."

"Impossible!

"Ha! Everybody gather around! It's So-and-so's theorem of immeasurable measure!"

Just when they think they've got me, I remind them, "But you said an orange! You can't cut the orange peel any thinner than the atoms."

"But we have the condition of continuity: We can keep on cutting!"

"No, you said an orange, so I assumed that you meant a real orange."

So I always won. If I guessed it right, great. If I guessed it wrong, there was always something I could find in their simplification that they left out.


I agree. Also, until you get super technical, it isn't really any different to "if you take the natural numbers, and split them into odd and even, you get two copies of the natural numbers".


I disagree ... the "two copies of the natural numbers" is sorta fine, except that they're more "spread out" so it's not at all surprising.

The surprising thing about BT is that the "pieces" are "moved around" ... there's no expansion or contraction.

Yes, the natural number thing helps to understand that simply counting things doesn't help, but the "rigid motion" aspect of BT takes it further.


True, that is the subtle bit -- but I think most people misunderstand (I'm not saying you are!), and don't realise you can always split an infinity into two -- this is just about splitting a sphere into some 'point clouds' such that you can cleverly stitch them back together into the same original space. In particular, the 'cutting' really makes no real-world sense (at least, as far as I can understand).


I agree that the "cutting" makes no real world sense. In a way, that's one of the points of the exercise.

And I agree that most people don't (initially) understand that an infinite set can be divided into two infinite sets that kinda "look the same", such as dividing Z (or N) into the evens and odds.

But BT is more than that. What follows isn't really for you, but is for anyone following the conversation.

Let's take a set A. It's a subset of the unit sphere, and it's a carefully chosen, special set, not just any random set. It's complicated to define, and requires the Axiom of Choice to do so, but that's what the BT theorem does ... it shows us how to define the set A.

One of the properties of A is that we can rotate it into a new position, r(A), where none of the points of r(A) are in the same position as any points of the original position, A. So the sets r(A) and A have a zero intersection. For the set A there are lots of possible choices of r ... we pick a specific one that has some special properties. Again, the BT theorem is all about showing us how to do this.

Now we take the union: B = A u r(A)

The bizarre thing is this. If we've chosen A and r (and therefore by implication, B) carefully enough, it ends up that there's another rotation, call it s, such that s(B)=A, the set we started with.

So whatever the volume of A, the volume of A u r(A) must be twice that, but that's B, and B can be rotated to give A back to us. So B must have the same volume as A. So 2 times V(A) must equal V(A), so A must have zero volume.

Well, we can kinda cope with that.

But if we've chosen A carefully enough, we find that a small, finite number of them, carefully chosen and rotated appropriately, together make up effectively the entire sphere (we miss out countably many points, but they have zero total volume, and we can fix that up later). So if finitely many copies of A make up a solid sphere, they can't have zero volume.

And that's the "paradox".

The conclusion is that we can't assign a concept of "volume" to the set A, and this is explained a little more in a blog post I've submitted here before:

https://www.solipsys.co.uk/new/ThePointOfTheBanachTarskiTheo...

There's a lot more going on than just the "I can split infinite sets into multiple pieces that kinda look the same as the original", although that is certainly part of it, and lots of people already find that hard to take.

To any who has got this far, I hope that's useful.


What do you mean "spread out"? Aren't there the same amount of even numbers as natural numbers?, because they both are countable sets. https://en.wikipedia.org/wiki/Countable_set


>>> ... if you take the natural numbers, and split them into odd and even, you get two copies of the natural numbers ...

>> ... the "two copies of the natural numbers" is sorta fine, except that they're more "spread out" ...

> What do you mean "spread out"? Aren't there the same amount of even numbers as natural numbers?

Yes, there are the same number, but when you look at just the even numbers, they are each distance 2 from their neighbours, whereas the natural numbers are all distance 1 from their neighbours. So people are less surprised, because the even numbers are "spread out", they are less dense in any given area. To map the even numbers back onto the natural numbers you have to "compress" them.

But this is not the case with the Banach-Tarski Theorem. There is a set, A, and another set B, which is just A rotated around, and they are disjoint. So they have a union, C=AuB. But when you rotate C, you can get an exact copy of A. There's no squashing or spreading needed.

So we have A and B, with B=r(A), and A intersect B is empty. Then we have C=AuB. No problem here.

The challenge comes that there is a rotation, s, such that s(C)=A.

So even though C is made up of two copies of A, it's actually identical to A. So start with C, divide it into A and B, then rotate B back to become a copy of A, and then rotate each of those to become copies of C. So you start with C, do some "cutting" and rotations, and you get two copies of C.

Finally, when you take a few of these and put them together, you get a full sphere, so you can't say they have zero volume.

Does that make sense? Does that answer your question?

Does that help?


One of the big things though, is that you can't know exactly what this division looks like. Like you said, at a high, non-technical level that is kind of fundamentally what is going on, with the caveat that the actual "cut" you are making isn't clean like splitting numbers down the middle.


Also this only works when the highly contested Axiom of Choice is used.


But isn't Banach-Tarski one of the main reasons why AoC is highly contested in the first place? (I don't know much about higher math, so this only my amateur impression.)


It's also contested because, while it is intuitive for a countable collection of countable sets, generalizing to higher cardinalities is not intuitive. Although the same could be said of the Power Set axiom as well...


The paradoxical decomposition of the free group doesn’t rely on choice though, and that’s arguably the “heart” of BT.


I'd deny that our objects do not live in a real space. Spacetime is real-valued.

The issue isn't the reals, but that "solid object" isn't defined properly, ie., the sets under question don't have well-defined volumes.

As soon as you fix that problem, via measure theory, the paradox resolves. You don't need to ditch real numbers.


> Spacetime is real-valued

There's really no evidence for this, as far as we know the real numbers are a pure mathematical invention and don't have any physicality.

Even if you want to say that spacetime is dense (i.e. infinitely divisible), there's an infinite number of fields like that, the real numbers are just a convenient superset.

There's no evidence that spacetime is dense either, and there are many practical ways in which it is not: as an obvious limit, if you took all the energy in the observable universe to make one photon, it would still have a finite, non-zero wavelength.


That is correct and should be highlighted more.

In a way the real numbers are a model (or maybe a 'language') to describe physical phenomena. They work exceptionally well at that, but they are not backed by evidence and do come with (theoretical) limitations.

This bachelor's thesis is a good starting point [1], search for 'finite precision physics' or 'intuitionistic math/physics'.

[1] https://www.math.ru.nl/~landsman/Tein.pdf


Well this is a popular idea imported from computer science, but there's absolutely no evidence for it -- and plenty against.

Eg., QM is only linear in infinite-dimensional real spaces, etc.

Essentially all of physics uses real spaces indispensably. There is no evidence whatsoever that this is dispensable, other than the fever dreams of discrete mathematicians.


Remember, though: QM is wrong. Relativity also depends on continuous spaces, but it is also wrong. All the theories in physics that depend on continuous space are also wrong.

By "wrong", I mean, we know they can't predict everything correctly. QM itself can't derive relativity. Relativity doesn't have QM in it, and break down at extremes like black holes. They're both very, very, very accurate in their domains, but physics knows that neither theory has the domain of "the entire universe". This is not a wild claim by an HN commenter, this is consensus in the physics world, just perhaps not phrased in the way you're used to.

It's possible the eventual Grand Unified Theory will still have continuous space at its bottom, but it's also entirely possible it won't. Loop quantum gravity doesn't. And personally I expect some sort of new hybrid between continuous and discrete based on physics history; whenever in the past we've had a similar situation where it couldn't be X for this reason, but it couldn't be the obvious Not-X for some other reason, it has turned out to be something that had a bit of both in them, but wasn't either of them.


They're not "wrong" in tests of their real-valuedness though.

I'm somewhat confident there is an empirical test of real-valuedness in areas of physics which require infinite-valued spaces.

However, either way -- the position of the other commenters was that *geometry* is somehow a dispensable approximation in physics!

This is an extremely radical claim with no evidence whatsoever. Rather some discrete mathematicians simply wish it were the case.

It is true that *maybe* (!) spacetime will turn out discrete, and likewise, Hilbert spaces, etc. -- and all continuous and infinite dimensional things will be discretised.

This however is a project without a single textbook. There is no such physics. There are no empirical predictions. There are no theories. This is a project within discrete mathematics.


"They're not "wrong" in tests of their real-valuedness though."

Yes, they are, or more accurately, they're not right enough for you to confidently assert the structure of spacetime at scales below the Planck scale. You are doing so on the basis of theories known to be broken at that scale. You are not entitled to use the theories that way.

Even the Planck scale being the limit is just a mathematically derived number; I'm not sure we have concrete evidence of that size being the limit. I've seen a few proposed experiments that would measure at that resolution (such as certain predictions made by LQG about light traveling very long distances and different wavelengths traveling at very slightly different speeds) but I'm not aware of any that have panned out enough to have a solid result of any kind.


The real numbers are a man-made axiomatic system. They were developed to make analysis mathematically rigorous to the high standards of pure mathematicians.

The real numbers are popular outside of mathematical analysis because they provide a "kitchen sink" of every number you could possibly need.

The downside is that the reals include many numbers that you don't need. The number 0.12345678910111213... is a transcendental real number, but it is not very useful for anything. It is notoriously difficult to prove that a given number is transcendental, i.e. part of the uncountable part of the reals and not the countable algebraic subset. Which is ironic because the uncountable part is infinitely larger!

I'm not suggesting that physicists should drop their Hilbert spaces. Rather that a distinction should be drawn between mathematical model and physical reality.

-

As for whether spacetime is countably infinitely divisible:

Infinity is big. Infinite divisibility implies that even if you used all the atoms in the universe to write out 10^-999... in scientific notation, space would be more finely divisible than that. In fact, for whatever absurdly tiny number you could think of, perhaps 1/(TREE iterated TREE(3) times), spacetime would be finer than that.

I'll admit it's possible, but I have trouble believing it.


Well functions have properties in virtue of being defined over the reals, eg., sin(x) --

I don't see that these properties are incidental.

Yes they obtain in virtue of /any possible "dividing" discrete sequential process/ never terminating, eg., space being "infinitely divisible".

However I don't think this is as bizarre as it appears. The issue is that cognition is discrete, but the world is continuous.

So we are always trying to project discrete sequential processes out onto the world in order to reason about it. Iterated zooming-in will, indeed, never terminate.

I don't see that as saying anything more than that continuity produces infinities when approached discretely. So, don't approach it that way, if that bothers you.


I have very little understanding of physics and math and might be wrong.

But if we accept that the Planck length is the smallest possible length and the Planck time is the smallest possible time, then it seems logical that the universe is an integer lattice of these. ("Spacetime is not real-valued")

https://en.wikipedia.org/wiki/Planck_length https://en.wikipedia.org/wiki/Planck_units#Planck_time


It’s not known if those are fundamental limits.

However the idea that space time is discrete is a reasonable hypothesis to test, we don’t currently have any ways to probe at those resolutions, though.


Sqrt(2) is a real number, yet I challenge you to measure the diagonal of a square exactly, down to the last digit.


>It is possible to compute over encrypted data without access to the secret key

I don't think this is counterintuitive for most people. The most basic encryption scheme that everyone knows is the Caesar cipher. It's easy to see that shifts of the cipher text will cause shifts in the plain text.
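
For anyone who wants to see it concretely, here's a minimal Python sketch (my own toy illustration, not anyone's real scheme): adding a constant to the ciphertext adds the same constant to the recovered plaintext, and the key is never touched.

  def caesar(text, shift):
      # shift each letter A-Z by 'shift' positions, wrapping around
      return ''.join(chr((ord(c) - 65 + shift) % 26 + 65) for c in text)

  key = 3
  ct = caesar("HELLO", key)        # encrypt -> "KHOOR"
  ct2 = caesar(ct, 1)              # "compute" on the ciphertext: +1 to every letter
  print(caesar(ct2, -key))         # decrypt -> "IFMMP", the plaintext shifted by 1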


I agree, I really don’t like this one either. There are many things in math that are counterintuitive, but the idea of a homomorphism is not one of them in my opinion.

Once someone explains the idea, and provides a few examples it is very natural.

I also don’t like the text explaining zero knowledge proof. It needs the phrase “practically speaking” somewhere or “for practical purposes” since it’s not true in a strict sense

But overall there were some fun ideas on the list!


What do you mean by the line about zkps? We have perfectly-hiding proofs that reveal no information about the secret information, no matter how powerful the adversary is.


Yes, but they're not proofs in the mathematical sense, since there's always an (exponentially-shrinking) chance that the answers were only correct due to coincidence.


Exactly, practically it makes no difference that the method could be fooled with a very tiny probability, but when making these counterintuitive statements I think it is important to be precise.

Ideally the reader should fully understand the statement and still feel amazed, rather than doubting the statement for a valid reason: perfect zero-knowledge proof systems (which never fail) are impossible, and a reader would be right to think so.


>It is possible to compute over encrypted data without access to the secret key

This is counter intuitive to me. For one, I don't consider the Caesar Cipher to be an encryption scheme that I would actually use for data.

In addition, when I want to "compute" over data, I want to do things like sentiment analysis of free-form text or identifying key themes in a paragraph - and I'm not sure this actually IS possible with data that is encrypted.


Maybe a simpler example would be doing a not operation on a ciphertext for a one time pad? When you decrypt it you will get the plain text with all of the bits flipped.

Computation on ciphertext is very limited, which is why it's not very efficient to do complicated operations.
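
A quick sketch of that, assuming a plain XOR one-time pad (toy code, just to illustrate): NOT-ing the ciphertext gives you the NOT-ed plaintext after decryption.

  import os

  msg = b"secret"
  key = os.urandom(len(msg))
  ct  = bytes(m ^ k for m, k in zip(msg, key))          # encrypt: XOR with the pad

  ct_not = bytes(c ^ 0xFF for c in ct)                  # flip every ciphertext bit
  pt_not = bytes(c ^ k for c, k in zip(ct_not, key))    # decrypt as usual
  print(pt_not == bytes(m ^ 0xFF for m in msg))         # True: all plaintext bits flipped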


But in a Caesar cipher only some types of computation are possible (to wit, addition and subtraction).

Now, "multiplication" of letters is, well, dodgy as a concept. But, the thing is that you can build encryption systems where D(E(k) op V) is equal to k op V, and op contains both addition and multiplication.


The interesting part is performing arbitrary computations over encrypted data.


Someone asked about #4 then deleted after I typed the response, so here it is.. :)

It's queuing theory in general, related to "utilization". The utilization curve is always shaped the same; 50% utilization already doubles waiting time, and the curve is practically vertical when you get to 99% utilization. That's what explains the difference - 5.8 customers per hour arriving, against a service capacity of 6 per hour, shoves the utilization to the almost-vertical part of the curve, which blows up the waiting time.
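
For the curious, here's roughly how the numbers work out, assuming the simplest M/M/1 model (Poisson arrivals, exponential service, one server); this is a sketch of the standard formula, not the exact model behind #4:

  # Expected time a customer spends in an M/M/1 system: W = 1 / (mu - lambda)
  def time_in_system(arrival_rate, service_rate):
      return 1.0 / (service_rate - arrival_rate)

  mu = 6.0                                   # server handles 6 customers/hour
  for lam in (3.0, 5.0, 5.8, 5.99):
      print(lam, time_in_system(lam, mu))    # ~0.33, 1.0, 5.0, ~100 hours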


What really throws people for a loop is the wait time after queuing starts. People expect the wait time to drop once arrivals return to normal, but there just isn't capacity to catch up.

In the real world, except at the DMV, people give up and shorten the queue (or don't join it to begin with), causing the arrival rate to go below nominal allowing the workers to catch up. In must-have or automated situations they see the full consequences of under-provisioning.


This also explains the supply chain issues we are experiencing now because of the move to just-in-time manufacturing across the board.


Cool. Ships waiting to dock on the California coast. This explains it!


a couple more:

- at any time while stirring a cup of coffee, there will be a point on the top that is right where it started. (if we pretend coffee stirring is 2-dimensional, Brouwer's Fixed Point theorem)

- a drunk man will eventually make it home, unless he can fly. in which case he only has a 34% chance. (if we assume the man is walking/flying on a grid, Pólya's recurrence theorem)
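
If anyone wants to poke at the second one numerically, here's a rough Monte Carlo sketch (my own toy estimate, capped at a finite number of steps, so the 2D figure undershoots the true value of 1 because returns can take extremely long):

  import random

  def return_probability(dim, trials=5000, max_steps=1000):
      returns = 0
      for _ in range(trials):
          pos = [0] * dim
          for _ in range(max_steps):
              axis = random.randrange(dim)
              pos[axis] += random.choice((-1, 1))
              if not any(pos):              # back at the origin
                  returns += 1
                  break
      return returns / trials

  print(return_probability(2))   # creeps toward 1 as max_steps grows
  print(return_probability(3))   # hovers around Pólya's ~0.34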


- a chair with four even legs placed on any undulating continuous surface can always be rotated such that all four legs touch the ground at once.


Do the legs have to be even?


Yes. Imagine a chair with one pair of diagonally opposite legs very long and the other pair of legs very short. On even a flat surface it is impossible to touch all four legs to the floor simultaneously.


Not even, but the feet have to be coplanar.


And since the spacing between the feet is obviously irrelevant, that further implies that it's always possible to find 4 coplanar points on the surface arbitrarily close together. Or arbitrarily far apart.


Huh, I wonder if there's any relationship to the Hairy Ball theorem. They appear to describe similar situations, but on different dimensions.


There's at least the following relationship: both the Brouwer fixed-point theorem and the hairy ball theorem are easy consequences of a more-highbrow thing called the Lefschetz fixed-point theorem.

Unfortunately even the statement of the Lefschetz fixed-point theorem is a bit complicated, but let's see what I can do. I'll have to miss out most of the details. Depending on how much mathematics you know, it may not make much sense. But here goes.

If you have a topological space X, there are a bunch of things called its "homology groups": H_0(X), H_1(X), H_2(X), and so on. I will not try to define them here. If you have a continuous map f from the space X to the space Y, then it gives rise to corresponding maps from H_k(X) to H_k(Y).

The machinery that manufactures homology groups can be parameterized in a certain way so that you can get, instead of the ordinary homology groups, "the homology groups over the rational numbers", "... over the real numbers", and so on. (These can actually be obtained fairly straightforwardly from the ordinary homology groups "over the integers".) If you do it "over the rational numbers" or "over the real numbers" then the resulting things are actually _vector spaces_, and if your space is reasonably nice they're _finite-dimensional vector spaces_.

(What's a vector space? Well, there's a formal definition which is great if you're a mathematician. If not: let n be a positive integer; consider lists of n numbers; for any given n, all these lists collectively form a "vector space of dimension n". You can do things like adding two lists (element by element) or scaling the values in a list by any number (just multiply them all by the number). A finite-dimensional vector space is a thing where you can do those operations, that behaves exactly like the lists of n numbers, for some choice of n.)

And then the maps between these vector spaces, that arise (magically; I haven't told you how) out of continuous functions between topological spaces, are linear maps. You can represent them by matrices, with composition of maps (do this, then do that) turning into multiplication of matrices.

OK. Now I can kinda-sorta state the Lefschetz fixed-point theorem.

Suppose X is a compact topological space, and f is a continuous mapping from X to itself. Then you get corresponding maps from H_k(X) to itself, for each k. For each of these maps, look at the corresponding matrix, and compute its trace: the sum of its diagonal elements. Call this t_k. And now compute t_0 - t_1 + t_2 - t_3 + ... . (It turns out that only finitely many of these terms can be nonzero, so the sum does make sense.) Then: If this is not zero, then f must have a fixed point.

So, whatever does this have to do with the Brouwer fixed-point theorem or the hairy ball theorem?

The Brouwer fixed-point theorem is about maps from the n-dimensional ball to itself. It turns out that all the homology groups of the n-dimensional ball are trivial (have only one element) apart from H_0, and that, whatever f is, the map from H_0 to itself that arises from f is the identity. And this turns out to mean that the alternating sum above is 1 - 0 + 0 - 0 + ... = 1. Which is not zero. So the map has a fixed point.

The hairy ball theorem says that a continuous vector field on the 2-dimensional sphere has to be zero somewhere. Suppose you have a counterexample to this. Then you can make a whole family of maps from the 2-dimensional sphere to itself, each of which looks like "start at x and move a distance epsilon in the direction of the vector at x". If epsilon=0 then this is the identity map. If epsilon is positive and sufficiently small, then the fact that the vector field is never 0 guarantees that the map does actually move every point; in other words, that it has no fixed points.

But all the terms in that infinite sum that appears in the Lefschetz fixed-point theorem are (so to speak) continuous functions of f. And it's not hard to show that the value of the sum for f = identity is exactly 2. So for very small epsilon, the value of the sum must be close to 2, and in particular must be nonzero. So, for small enough epsilon, we have a map with no fixed points and a nonzero value of the sum, which is exactly what Lefschetz says can't happen.


I work in customer service and the queuing theory is something I noticed while working. A single cashier is usually unsustainable for very long but two will get you through quite a rush. Very cool to see it formally expressed.


My favorite paradox in physics: what happens when you spin a disk at relativistic speed? The circumference should contract, since it's parallel to the direction of motion, but the radius is perpendicular, and thus should not contract.

https://en.wikipedia.org/wiki/Ehrenfest_paradox


I believe Veritasium covered this: spinning at relativistic speeds generates centripetal forces that overcome the nuclear force. There is no material you can use to test this paradox, and there can be no such material because your device would atomize - or worse - in the attempt.

Effectively there are g-forces so high that you end up with subatomic particles.


It's meant to be a thought experiment but some people (like Veritasium) can't get around this simple realization.

Like, "I wonder what could be happening inside of a black hole?"

"Oh we would never know, because if we send a camera it would break. Also suppose we MaaaAaAKKeee aa cCAaaMeerra with ThE STROooonngGEEstt Material In the UUunniveersssee, so it wouldn't break, you would find that there's no wifi in there so how would you transmit your findings ;)"

I hate this kind of "smart" people.


(Disclaimer: I am not a physicist.)

I think it's actually a pretty reasonable description? Sounds like it's a paradox because we assume a rigid body, but nothing in real life is a perfect rigid body. So the situation simply becomes a bunch of particles following circular orbits. If you measure distance along one direction (while constantly accelerating at relativistic speed) you get one number; if you measure distance in another direction you get a different number. But that's what relativity does.

In other words, it's similar to the simpler question: "If I have a perfectly rigid rod that can reach the moon, and I push it, then the other end pushes the moon immediately. But speed of information cannot exceed speed of light. How come?" Answer: There's no perfectly rigid rod.


The example you give and its answer follows a line of reasoning that leads you to an interesting conclusion, that's the point of thought experiments.

What I was saying is more akin to answering your example with:

"Oh no you can't! There's not enough steel on Earth to build such thing and even if you had it, it would require an EEeEeeenOOOOrrrMMMoooUUUusss amount of energy to put in place ;)."

That would be quite a moronic interpretation of the problem that completely misses the goal of said thought experiment, which is, well, to make you think.


The difference there is, I think, that the pragmatic argument of "not enough steel" doesn't prove the non-existence; while the argument about "there are no truly rigid bodies" does.

I haven't seen that Veritasium video, but it sounds like it makes the exact same point. It's not that we haven't found a material that's rigid enough; it's that a rigid disk is counter-factual to begin with, even in non-relativistic conditions.


It's one of the reasons I dislike physical terms that have words like "ideal" or "perfect" as part of them. They subliminally suggest that their properties and behaviours are the "true" ones, while the real material things are their imperfect counterparts whose imperfections you can sometimes disregard.

Of course, it's exactly the opposite: those "ideal" concepts are the imperfect approximations of the real things, omitting lots of details which sometimes are not that important but sometimes are absolutely crucial.

My personal favourite example is attaching a perfect source of voltage (zero internal resistance) to a perfect wire (zero resistance). You can't arrive at this scenario starting with real-world entities: both the battery and the wire will have non-zero resistances, and depending on their proportions you end up approximating either one of them as zero, or neither, but never both.


Hence the “perfectly spherical cow” joke.


Relativistic speed rotations means that you are close to trapping light, meaning your force is similar to being very close to the event horizon of a black hole.

And what is one of the properties of gravity wells like black holes? Well, their circumference isn't equal to 2 pi times radius, since the space has enough curvature to not be flat, similar to how circumference isn't 2 pi times radius if your space is a 2d sphere. So the paradox is correct, you would get that effect, at least if you used a black hole to cause the fast rotation. Although I'm not sure if the maths adds up to be the same.


Yes, this is similar to transmitting a message faster than the speed of light by moving a very long perfectly rigid rod; a perfectly rigid rod is not realisable and neither is a perfectly rigid disk.


Relativistic paradoxes are the most insane. Twin paradox, ladder paradox, grandfather paradox, things get so mindbendingly weird at very large, very fast, and very small scales.


I think if you rotate the disk at maximum speed, ignoring all material properties, all points on the disk will approach the speed of light, and you will end up with "spin through" because the layers won't be able to maintain the same angular speed (needed to consider the thing a disk) with a fixed linear speed...


Spin the observer at the centre of the disc, and keep the disc stationary... now what happens?


not a physicist, but remember that accelerated frames break the symmetry.


Nice list, like it. Let me add my favorite: Wet air is actually lighter than dry air.

Consider an ideal gas. H₂O is lighter than both N₂ and O₂. Now water replaces some of the oxygen and nitrogen ...
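
A back-of-the-envelope check with the ideal gas law (my numbers: 25 °C, 1 atm, and an assumed 3% water vapour by mole):

  R, T, P = 8.314, 298.15, 101325.0
  M_dry, M_h2o = 0.02897, 0.01802           # kg/mol

  def density(molar_mass):
      return P * molar_mass / (R * T)       # ideal gas: rho = P*M / (R*T)

  x = 0.03                                  # mole fraction of water vapour
  M_wet = (1 - x) * M_dry + x * M_h2o
  print(density(M_dry), density(M_wet))     # ~1.18 vs ~1.17 kg/m^3: moist air is lighter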


For some reason, the one I have the most trouble intuitively grasping is "Two 12 Inch Pizzas have less Pizza than one 18 inch pizza."


The area enclosed by a circle is πr^2, while 12 and 18 are the diameters, right? The radius of a disc is half the diameter, so 6 and 9, respectively. 2×π×6^2 < π×9^2, i.e. 226.2 < 254.5. In other words the area is proportional to the square of the radius, not linearly proportional to the diameter in any way.


> In other words the area is proportional to the square of the radius, not linearly proportional to the diameter in any way.

Awesome, thanks for that geometry refresher, that makes sense now of course :).


Similarly, if the toppings are distributed evenly across the pizza, the average distance of a piece of topping from the edge is one third of the radius: most of the pizza is near the edge.
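
A quick Monte Carlo sanity check of that (just a sketch, sampling points uniformly over an 18-inch pizza, i.e. radius 9):

  import math, random

  R, n = 9.0, 200000
  total = 0.0
  for _ in range(n):
      r = R * math.sqrt(random.random())   # sqrt gives a uniform point in the disc
      total += R - r                       # distance from the edge
  print(total / n)                         # close to 3.0, i.e. R / 3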


To make it simpler, let's assume that the shape of pizza is a square, i.e. we are talking about a 12×12 inch pizza and a 18×18 inch pizza. This increases the size of both pizzas by the same ratio, so it shouldn't change the answer to "are two small pizzas smaller than one large pizza?".

Now let's measure the size in a new unit which is 6 inch long. So the small pizza is 2×2 units, and the large pizza is 3×3 units.

twice 2×2 = twice 4 = 8

3×3 = 9


18/12 = 1.5 > sqrt(2) = 1.4..


> 0% selected the right answer on this SAT question: Circle A has 1/3 the radius of circle B, and circle A rolls one trip around circle B. How many times will circle A revolve in total?

That's fun. I of course immediately selected 3 which means I could have a bright career in test preparation ahead of me.


That one got me good, so my future at College Board is as bright as yours. However, I don't think the argument by demonstration in that video is particularly convincing.

Instead, I think it's easier to note that the _center_ of a circle of radius r travels a distance of 2 * pi * r over one rotation. In the problem, the center of the smaller circle has to travel further than the circumference of the bigger circle - it traces a circle whose radius is the sum of the two radii.

So, if 3 * r_small = r_big, the center of the small circle has to travel 2 * pi * (r_s + r_b) = 4 * 2 * pi * r_s, then divide by 2 * pi * r_s per rotation to get 4 rotations.
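
A one-liner version of that arithmetic, just to check it (my framing, for rolling around the outside):

  import math

  def revolutions_outside(R, r):
      # distance travelled by the small circle's centre, divided by its own circumference
      return (2 * math.pi * (R + r)) / (2 * math.pi * r)   # = (R + r) / r

  print(revolutions_outside(3, 1))   # 4.0 for the SAT problem's 1/3-radius circle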


You could also argue that it is a matter of perspective. From the perspective of either circle, A will only revolve 3 times.

Only by introducing a larger frame of reference, a grid or in the video a table, do you gain an outside perspective. From this outside perspective you redefine a revolution according to some new orientation and end up with n+1 revolutions.

Or maybe the argument is backwards and I just try to justify answering 3.


The demonstration would be clearer if the "point of contact" at the start of the rotation would be marked on the smaller circle and the larger circle would be divided into 3 segments of different color. That would make it obvious that 1 rotation of the small circle doesn't trace a whole circumference of the small circle on the large one.


Without the math:

Radius is always proportional to circumference, so a circle twice the size is twice as big around.

Take the case of two identical circles. To move a point on the first circle from 12 o'clock back to 12 o'clock, it only goes halfway around the other circle, which you can prove to yourself by imagining you've wrapped a string around the circle and marked it at 12 and 6 o'clock. If you unwrap half of the string and wrap it around the other circle, then the end of the rope is at 6 o'clock. To roll the string back up by moving the circle, the top of the circle will be pointing upward again when it reaches the bottom. 1 full revolution. Now wrap the other half of the string around the other side, 6 has to go back to the bottom again to roll the string back up. 2 revolutions.


It’s interesting to read others reasoning on this. I find it hard to follow and much harder to generalize without some notation (and a diagram which this comment box struggles to reproduce)


I like to decompose it.

Hold center of small circle still, rotate big circle once counter clock wise, small one rotates 3x clockwise.

Glue them together, rotate big circle once clockwise, small circle also rotates once clockwise.

Sum them together, 0 rotations for big circle, 4 for small. I'm not at all sure how to rigorously generalize.


I don't think that argument holds water. Imagine rolling it on the inside of the circle – the center of the circle traces a path with only twice its radius, yet it still rolls 4 times around.


I don’t think that’s right. If the radius of the inner circle is one third of the outer, it only rotates twice rolling around the inside, which makes sense as its center traces a circle whose radius is the difference of the radii.

Imagine the limiting case, as the inner circle approaches the size of the outer circle - the inner circle completes much less than one rotation per lap around the inside edge of the outer circle, and ‘seizes’ (if we’re imagining these as gears), completing zero rotations per lap when the circles are the same size. However, rolling around the outside, a circle of the same size completes two rotations.

In general the problem is like the old Spirograph toy (which I had to break out to convince myself)


I think you're right!

I made a diagram that helped me think through it visually:

https://i.imgur.com/dospt2w.png

If circle A is rolling around the edge of circle B from within, its center is actually travelling around a new, smaller circle C whose radius is circle B's radius minus circle A's.


Is this correct? wouldn't the radius of the circle it's revolving around be significantly smaller if you had it follow the edge of the larger circle but from the inside? I don't know much about math or physics, so I could be wrong, but I think it would be significantly less, closer to two, right?

The reason that this problem is tricky, and has a counterintuitive solution, is that Circle A is rolling around Circle B, and so the 'radius' of the circular path it is following is the radius of Circle B + the radius of Circle A.

I'm no expert, but some quick math:

Circle B has a radius of 9, so its circumference is 56.55. Circle A (1/3 the size) has a radius of 3, so its circumference is 18.84.

56.55 / 18.84 = 3. This suggests that you could "unwind" circle A (say it was made of pipecleaner), and you would need 3 circle As to fully ensconce Circle B

BUT that wasn't the question. The question states that Circle A is rolling AROUND circle B.

So the radius of the circle we are now trying to 'ensconce' is Circle A's radius + Circle B's radius = 12, and the new circumference is 75.39; divided by Circle A's circumference, we end up with 4, which makes sense and matches the demonstration from the video.

HOWEVER, if we are 'rolling' around the inside of circle B, then I think you're right: the radius of the circular path Circle A will take is 9 - 3 = 6, and the circumference of said circle is about 38, so it will only take two rolls. I think this is correct; I do not think it will be 4 rolls.

Think of it this way: when Circle A is rolling around the inside of Circle B, the way you're picturing it is circle A following the outside of Circle B, which is why I think intuitively it feels like the answer will still be 4 revolutions. BUT a better way to think of it is that Circle A is actually revolving around a new Circle, Circle C, which has a circumference of 38. Circle A is not revolving around Circle B, it's revolving around this new, smaller circle within circle b. Does that make sense?

I made a quick diagram to illustrate my point. I did it in a wire-framing tool that has snap to grid, so it's definitely not perfect:

https://i.imgur.com/dospt2w.png

You can play around with what I made here:

https://whimsical.com/r-DoXHwe2kUdgvSiAu2yuGpE


>A rolls one trip around circle B

Without the diagram (which I didn't see until watching the video) this is ambiguous. In my visualization, the plane of the small circle (A) was perpendicular to the plane of the larger circle (B). (Think of circle B drawn on paper, while circle A is a coin on its edge)

With that interpretation of "rolls one trip around", 3 is indeed is the correct answer.


I just thought of a simple argument: unroll the bigger circle into a line. Then as the smaller circle rolls from one end of the line to the other, it makes 3 revolutions. After that, roll up the line back into a circle (with the smaller circle still attached to the end). That adds one more revolution.


Imagine Circle B is reduced to infinitesimal size, like rolling a quarter around a needle. It still makes one full revolution, even though the ratio of the circumferences is effectively infinite.


I'm confused, I also immediately came to 3 when I read this question. Is that wrong? What's the correct answer?


The answer is 4. The reason it's 4 is that the distance traveled is relative to the center of the circle. A circle will travel its circumference in one rotation, but the distance traveled isn't the circumference of the inner circle, because that isn't where the rolling circle's center is. Its center actually traces a circle whose radius is the sum of the two circles' radii.


It was implied that the correct answer is 4.


Similarly, if you set out in a boat and circumnavigate the world in an easterly direction, ticking off a day on your calendar every sunset, when you arrive back at your setting-off point your calendar will be one day ahead of everyone else.


> 11. Knowing just slightly more about the value of your car than a potential buyer can make it impossible to sell it: https://en.wikipedia.org/wiki/The_Market_for_Lemons

This is new and interesting to me, although I think the phrasing of 11 is untrue as it's more about a cumulative effect in a market than an individual sale. Still I think this explains a lot of things in a way I've never really thought about it before. For example, dating apps.


I remember I struggled to wrap my head around this in my microeconomics class when we first explored information asymmetry.


Picard's Great Theorem would be one of my favorite counterintuitive facts:

As a holomorphic function approaches an essential singularity, it takes on every possible value (except at most one) infinitely often. An astonishing result.

https://en.wikipedia.org/wiki/Picard_theorem

And the nice thing is you can prove this with fairly elementary techniques (e.g. first year complex analysis is all you need).


So, are there any popular or commercial games that make use of intransitive/non-transitive dice? That seems like it would be fun to break out with friends over.


Remedial question about zero-knowledge proof. Isn't "proof" a misnomer since the concept is about just making it incredibly likely to be true?


In the literature [*], the "proof" is an actual proof, but that's not what the prover sends to the verifier. Rather, the verifier queries random pieces of the proof, eventually convincing themselves that with high probability, the prover knows a valid proof.

[*] At least in most of the older literature descending from the PCP literature. Modern papers sometimes abandon that distinction and just use "proof" and "argument" interchangeably.


That's why it's sometimes called an "argument" or specifically a "cryptographic proof". You can construct the statement such that it can be "proven" in a traditional sense by adding qualifiers such as "with probability more than 1-1/(2^256)". You'll generally need an assumption like knowledge-of-exponent or at least hash soundness.


In cryptography you prove security through showing that an adversary has a negligible [1] chance of winning a game of guess the secret.

[1] https://crypto.stackexchange.com/questions/5832/what-exactly...


No — the word 'proof' is accurate, it establishes with certainty that you know the value of x.


I thought you were right until I decided to look into it myself, turns out we were both wrong. Zero-knowledge proofs are probabilistic and contain a soundness error which is the probability of guessing the correct answer. This can, in some but not all cases, be brought down arbitrarily close to 0, but it can never be 0 exactly.

https://en.wikipedia.org/wiki/Zero-knowledge_proof#Definitio...


They list homomorphic encryption as the first fact, but has anyone created a truly complete HE implementation yet? My impression is that there are a lot of great theories and experiments in the field, but nobody has really created a practical standard for it. I'd love to be proven wrong though, it's a fascinating field.


> There are as many whole positive numbers as all fractions

According to a specific non-colloquial definition of "as many".


My sister and I used to figure out who had more candy at halloween by lining up the pieces next to each other. The concept of bijection might be more intuitive than counting itself.


For all we know it's significantly older than counting. Pebbles representing bijections to wares like sheep (called calculi like in calculus) occur earlier than counting marks and much earlier than anything resembling numbers.

There are still today human tribes that don't count at all.


> Pebbles representing bijections to wares like sheep (called calculi like in calculus) occur earlier than counting marks and much earlier than anything resembling numbers.

How was this determined? I wouldn't expect that using pebbles this way would leave any distinctive marks or damage or residue on the pebbles that would allow an archaeologist several tens of thousands of years later to tell that was what the pebbles were used for.


Hm, I'm not so sure that just because the definition of a bijection is technical that it is not intuitive.

I'll start an enumeration of the rationals

1 1/3

2 1/4

3 1/5

...

If you can prove that you can do this, is that really so non-colloquial? It is certainly what we mean by "as many" for all finite sets, so what is wrong with doing this for infinite sets?


If every whole positive number is a fraction, but not every fraction is a whole positive number, then colloquially, I wouldn't define them as having "as many" elements as each other. Now, if you want to say they have the same cardinality (and you define cardinality as existing a bijection), then I would agree fully.


Wouldn't there be exactly twice (or twice + 1, if you allow negative fractions) as many fractions, since fractions are represented as two positive numbers (plus a bit if you consider the sign)?

(The encoding could be "represent both numbers in binary, put the denominator in the odd bits (LSB = first bit), and the numerator in the even bits" so 2/3 => 10/11 => 1110 => 14)
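
For what it's worth, one way to read that encoding (my interpretation: numerator bits in the even positions, denominator bits in the odd positions, counting from bit 0) does reproduce the 14 in the example:

  def interleave(num, den):
      result, pos = 0, 0
      while num or den:
          result |= (num & 1) << (2 * pos)       # even bit positions: 0, 2, 4, ...
          result |= (den & 1) << (2 * pos + 1)   # odd bit positions: 1, 3, 5, ...
          num, den, pos = num >> 1, den >> 1, pos + 1
      return result

  print(interleave(2, 3))   # 14, matching the 2/3 example above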


The thing is that you could also use this kind of logic to show that there are more natural numbers than there are natural numbers. For example, you could associate 1 with all of the numbers from 1 to one million, and still have enough numbers 'left' to associate each of 2, 3, ... with distinct natural numbers above one million.


What's worse, there are an infinite number of fractions that equal each whole number.


There’s an infinite number of each, so we’re already stretching colloquial definitions by comparing them


> 17. At any given moment on the earth's surface, there exist 2 antipodal points (on exactly opposite sides of the earth) with the same temperature and barometric pressure: youtube.com/watch?v=cchIr1OXc8E

This is not necessarily true. They say a picture is worth 1000 words: https://media.deseretdigital.com/file/5894488349

Regarding Gabriel's Horn and Banach-Tarski, the paradox is described as a trumpet, or a ball, made out of molecules, atoms, electrons, protons, neutrons, quarks-- but the mathematical proof is... not that. It's pretty common that intuition about objects made out of a finite number of parts breaks down when describing a construct with infinitely many parts.


I don't get what thought that picture is supposed to provoke.


The earth has "holes" and as such the proof of that statement for a sphere does not apply to the earth.


Thank you, something about this didn't make total sense to me.


Borsuk-Ulam applies to surfaces homeomorphic to a sphere, but the picture shows that the earth's surface is a sphere with at least one handle, a donut with at least one hole.


Closing roads to improve commute times makes obvious sense if you think of the TCP back pressure mechanism.


> 21. In two dimensions, there are infinitely many regular polygons. In three dimensions, there are five Platonic solids. In four dimensions, there are six platonic polychora. In all higher dimensions than four, there are only ever three regular polytopes. (Maths 1001, Richard Elwes)

This implies that Platonic Solids are the 3D analogue of the 2D regular polygon. This is not the case.

The Platonic Solids are merely all of the convex solid regular polyhedra. When the caveats are removed there are (at least) 48 regular polyhedra: https://www.youtube.com/watch?v=_hjRvZYkAgA


How is 27 counterintuitive?

> Let alpha = 0.110001000000000000000001000..., where the 1's occur in the n! place, for each n. Then alpha is transcendental. (Calculus, 4th edition by Michael Spivak)

Nearly all infinite sums involving factorial are transcendental.


In fact, almost all numbers are transcendental (algebraic numbers have measure zero).


I agree. Probably most people who know what "transcendental" means would guess that the number described is transcendental.

However, only a small proportion of people who know what "transcendental" means are capable of proving that any number is transcendental.


> 6. Causation does not imply correlation: https://arxiv.org/abs/1505.03118

And in the paper, from the abstract,

> We demonstrate that the Faithfulness property that is assumed in much causal analysis is robustly violated for a large class of systems...

In preliminaries,

> The Faithfulness assumption is that no conditional correlation among the variables is zero unless it is necessarily so given the Markov property.

So probably a more accurate layman description would be "not every causation is observable as a statistical correlation".


> 15. Two 12-inch pizzas have less pizza than one 18-inch pizza.

That is surprising, because it's not true. Any calculator will confirm that 2 (2 π×6) > 2 π×9. Or am I missing something?


That's the formula for circumference, not area.


Ahhhh, of course. facepalms


It's still a good proof that the smaller pizzas have more crust.


Area is calculated using pi*r^2:

2 * pi * 6^2 < pi * 9^2


Thank you. I feel kind of stupid now.


it's true but really what they're missing is the number and variety of toppings on these pizzas.


Variety maybe, but surely number is also proportional to area (and presumably it's worse because "crust width" doesn't scale with diameter?)


well not if the big one is a margherita pizza.


> 19. Given a solid ball in 3‑dimensional space, there exists a decomposition of the ball into a finite number of disjoint subsets, which can then be put back together in a different way to yield two identical copies of the original ball.

While you're at it, you can completely turn that sphere inside out without creating any holes or creases[1].

[1] https://www.youtube.com/watch?v=wO61D9x6lNY&t=92s


> The Earth makes 366.25 rotations around its axis per year.

This one isn't quite correct, unless you're talking about artificial Julian years. It is closer to 366.24, and even closer to 366.24219 = 1+365.24219 (https://pumas.nasa.gov/sites/default/files/examples/04_21_97...).
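
To put numbers on it (a quick sketch using the 366.24219 figure):

  solar_days_per_year = 365.24219
  rotations_per_year  = solar_days_per_year + 1        # one extra turn from the orbit itself

  sidereal_day_hours = 24 * solar_days_per_year / rotations_per_year
  print(sidereal_day_hours)                  # ~23.934 hours
  print((24 - sidereal_day_hours) * 60)      # ~3.93 minutes, i.e. about 3 m 56 s shorter than 24 h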


The SAT question one is brutal.. they did not include the correct answer on a multiple-choice question!


Missing one that still makes no sense to me: The Ramanujan Summation: 1 + 2 + 3 + ⋯ + ∞ = -1/12

https://en.wikipedia.org/wiki/1_+_2_+_3_+_4_+_%E2%8B%AF


If you don't use the regular definitions of things, you can get surprising results. If I define + as -, it might be surprising if I said 1 + 1 = 0.


Ok, here's my attempt at an elementary explanation. Consider the function

  f(N) = floor(N) 
       = 1 + 1 + 1 + ... + 1, whenever N's an integer
       = 1^0 + 2^0 + 3^0 + ... + N^0.
Then f(N) ~ N - 1/2 is f's simplest unbiased polynomial estimator. It's a succinct description of f and its asymptotic behavior.

Next, let's do something like integration, and increase those zeroth powers to one, ie. consider the function

  g(N) = 1^1 + 2^1 + 3^1 + ... + (N - 1)^1
       = 1 + 2 + 3 + ... + (N - 1).
In this case, the estimator ends up being

  g(N) ~ N^2/2 - N/2 - 1/12
       ~ integral(N - 1/2) - 1/12,
where I've written this estimate as the difference of a divergent term and a constant term.

Here's the point: the divergent term is really just left over (integrated up or 'induced') from our asymptotic description of the degree 0 sum. In a sense, it's degenerate and doesn't provide any sort of 'new' information. So in the same sense, the interesting semantic content of 1 + 2 + 3 + ... is just the constant term: -1/12.

Ps. This way of thinking is related to trace formula truncation.
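
If anyone wants a quick numerical anchor for the same values (assuming mpmath is available): the analytic continuation of the Riemann zeta function, evaluated at s = -1, is the standard rigorous sense in which '1 + 2 + 3 + ...' gets assigned -1/12.

  from mpmath import zeta

  print(zeta(-1))    # -0.0833333... = -1/12
  print(zeta(0))     # -0.5, the constant from the "N - 1/2" estimate of the degree-0 sum above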


I love that one, what this taught me is that infinite sums are fundamentally different from finite sums, and that when operations performed on the finite are generalized to work with the infinite, there are often ambiguities and subtleties about how to perform that generalization.

We wish to use the same familiar notation with the infinite that we do with the finite, but we must keep in mind that while they do share similarities, they are not the same operation. The way certain ambiguities are resolved in order to extend the finite to the infinite can lead to counterintuitive results and absurdities that may not even be apparent at first.

For me, seeing how one approach to generalizing infinite sums yields the -1/12 result, and how that result actually has some relevance in physics is quite profound and insightful and I am happy that you reminded me of it.


This is a good video explaining it: https://www.youtube.com/watch?v=w-I6XTVZXww

Although it relies on a particular interpretation of the left side of the equation, it's used in some areas of physics.


I find a response video to it more enlightening. It's often found as the number one related video.

https://youtube.com/watch?v=YuIIjLr6vUA


> where the left-hand side has to be interpreted as being the value obtained by using one of the aforementioned summation methods and not as the sum of an infinite series in its usual meaning


what? that's incredible ...


Literally incredible, because it's not. It's just really awful notation. It's really more like F(1+2+3+...) = -1/12 for a certain F (pedantically it's not a function I don't think, but whatever).


My favorite is a variation of #7: Earth rotation period is actually 23h 56m, not 24 hours https://en.wikipedia.org/wiki/Earth%27s_rotation


Fitch's paradox has an incorrect assumption about conjunction: if I know that all digits of pi are between 0 and 9, that assumption then suggests that I know all digits of pi, because they are a part of known truth. Or it's used incorrectly.


Monty Hall is there, albeit inside the answer to a stack overflow question linked in Misc #33


Do lemon markets actually occur in real life? It seems a combination of factors (people may need the money, inventory is not free, and most goods depreciate in value) would inhibit the forming of a lemon market.


I needed a half cup of something for a recipe and only found the 1/3 cup measure. Then it occurred to me that a third and a half (of a third) is equal to a half.

So simple but somehow doesn't feel right.


3 measures = 1 cup

Now divide both sides of the equation by 2.


Number 7: “The Earth makes 366.25 rotations around its axis per year.”

Errr, the Earth doesn’t ‘roll around the sun’ - hence the author’s (+1) is wrong, I think (hope!). It’s 365.256 according to Wikipedia.


> Both the stellar day and the sidereal day are shorter than the mean solar day by about 3 minutes 56 seconds. This is a result of the Earth turning 1 additional rotation, relative to the celestial reference frame, as it orbits the Sun (so 366.25 rotations/y).

https://en.wikipedia.org/wiki/Earth's_rotation


So we would need 367 unique date identifiers … but we’ve only got 366 (Feb 29th being the non-annual one).

I get I may be being unintelligent, but isn’t the author confusing the rolling coin paradox with an obscure astronomical reference system and coming up with a ‘mistakenly technically correct’ result that doesn’t match experienced reality?


Imagine you're standing on a set point on the surface of the outer coin, e.g. the point touching the inner coin. As the outer coin rotates around the inner coin, your experienced reality will be that you see n-1 rotations.

In the example of the outer coin having 1/3 the radius of the inner coin, the outer coin makes 4 revolutions as it rolls once around the inner coin, but your point would actually only touch the inner coin 3 times.


I made a visualization for

> Circle A has 1/3 the radius of circle B, and circle A rolls one trip around circle B. How many times will circle A revolve in total?

https://www.shadertoy.com/view/7ddSR2


It matches experienced reality if you look at the stars rather than the sun, or if you use something like a Foucault pendulum to measure the rotation speed.


Why 367 dates?

Dates are not used to count Earth rotations. They are counting days/nights, and there are only a bit more than 365 of those in one year.


I was mistaken. Here's the video! https://www.youtube.com/watch?v=yCsgoLc_fzI


The professor lost in the video -- he did not win. The one who _didn't believe_ that one could go faster downwind than the wind gave the $10,000 to charity. Or did I miss something?


Thanks!


17 reminds me of this one: If you have a four-legged stool on an uneven (but continuous) surface you can always find a stable position for the stool just by rotating it.


Is there a joke or double entendre to the misspelling of 'counterintuitive' in the HN title (the article title is correct) that I'm missing?


I'm pretty sure it's just a typo. The actual title is "The most counterintuitive facts in all of mathematics, computer science, and physics", which is slightly too long for HN and a bit click-baity. No need to retype or abbreviate CS though.


> Adding 3 feet to a tightly tied rope around the earth would allow you to raise it uniformly by almost 6 inches.

There is no way my brain can accept that this is true.


> > Adding 3 feet to a tightly tied rope around the earth would allow you to raise it uniformly by almost 6 inches.

> There is no way my brain can accept that this is true.

Yeah, my brain wants the size of the Earth to matter even though I know c = πd => c1 - c0 = π(d1-d0) means only the delta matters. The Earth, a soccer ball, the Milky Way galaxy...the amount of additional diameter you get from the same added circumference is constant.
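
Here's the arithmetic spelled out (just a sketch; 3 feet of extra rope, any sphere):

  import math

  extra_circumference_ft = 3.0
  rise_ft = extra_circumference_ft / (2 * math.pi)   # delta_r = delta_C / (2*pi)
  print(rise_ft * 12)                                # ~5.73 inches, regardless of the sphere's size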


77+33 is not 100 and that's quite saddening.


>Two 12 Inch Pizzas have less Pizza than one 18 inch pizza

Love this one. Always go for the biggest pizza you can


No. While two 12" pizzas have less pizza than one 18" pizza, it's possible for the two 12" pizzas to be a better value.

If the total price of the two 12" pizzas is $7.20, for example, they are a better value than the 18" pizza once the price of the 18" pizza is greater than $8.10. More generally, the two 12" pizzas are a better value if the 18" pizza's price is higher than 112.5% of the two 12" pizzas' total price.
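
The break-even ratio comes straight from the areas; a quick sketch with the prices above:

  import math

  def area(diameter):
      return math.pi * (diameter / 2) ** 2

  two_small, one_large = 2 * area(12), area(18)
  print(one_large / two_small)                 # ~1.125, the 112.5% break-even ratio
  print(two_small / 7.20, one_large / 8.10)    # ~31.4 square inches per dollar either way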


But that's almost never the case.


How do you know, unless you do the calculation?


Don't know about you, but I have never seen any place that charges more for one large pizza than for two medium pizzas.


For example the place by me charges 11 for a 12 inch pizza and 25 for a 24 inch. That's 4x the pizza for a little over twice the price!


Considering spacetime, matter, and energy are all quantized, why is something like Gabriel's Horn significant? I don't see how it has any more relation to reality than phrases like "negative surface area" would.

Also, it's patently absurd someone would include Fitch's Paradox, a piece of philosophy, on a list of "counterintuitive facts."


> Considering spacetime, matter, and energy are all quantized

First, we don't know that spacetime is quantized; that's a plausible speculation but we have no theory of quantum gravity.

Second, "quantized" is not the same as "discrete". A free particle in quantum theory is "quantized" but the spectrum of all of its observables is continuous.


It's not even a plausible speculation; all of the best models we have, quantum mechanics, special relativity, quantum field theory, general relativity, and string theory have a fully continuous space-time. The one notable exception is loop quantum gravity.


Gabriel's Horn was cool till someone pointed out to me that you can have a line of infinite length within a square (trivially).

When comparing something of a certain dimension with something of a higher dimension, it's not at all surprising that the lower one can be infinite and the higher one finite.

Usually it's phrased as "a finite amount of paint can paint an infinite area." But why do I need the Horn to realize this? It works in the Horn only if there is no lower limit to the thickness of paint. If you accept that, then I can take a drop and paint an infinite plane with it. Why do I need the Horn to demonstrate this?


The way I'd heard the paint comment was along the lines that "Gabriel's Horn can hold only a finite quantity of paint, but requires an infinite quantity of paint to cover the surface".

So if you think of it as a bucket that can't hold enough paint to cover itself, that is at least a little surprising.


But that's exactly my point. If you allow for infinitely thin paint, then a finite volume of paint can always cover an infinite surface - you don't need Gabriel's Horn to show that.

If you don't allow for infinitely thin paint, then no - Gabriel's Horn surface cannot be painted even with an infinite amount of paint.


It's not supposed to be especially tricky. Yes, as soon as you realise that the horn is a bucket with a very deep section that is narrower than the assumed thickness of a coat of paint, the surprise dissipates.

I believe the point of the exercise is an introduction to curves with infinite length but finite area under them, in order to expand one's intuition about such objects, which is then transferable to other examples like space-filling curves.


Of course Gabriel’s horn doesn’t exist in physical reality, but it’s still interesting that such a thing exists in a mathematical theory that is normally a pretty good model of physical reality.


I think you're taking things too seriously (and in one case not seriously enough): These are all conclusions that are true if their premises are true. Some of their premises can obviously be satisfied, others obviously can't and many others are in between (unknown or debatable.) They're all counterintuitive results though.


I don't think we know that spacetime is quantized.


> The Earth makes 366.25 rotations around its axis per year

Isn't it 365.25?


Imagine that the Earth was showing always the same face towards the Sun (like the Moon/Earth situation).

At the end of the year the Earth would have rotated once (not zero times).

Edit: Imagine now that you knew only that the Earth was rotating around its axis (not parallel to the orbital plane) once per year. Then either we are in the previous case (no day/night cycle) or there are two day/night cycles.


Here's a nice image that explains the difference between a solar day (meaning the time it takes until the sun is at the same azimuth again) and the sidereal day (meaning the time it takes for earth to rotate around its axis once): https://qph.fs.quoracdn.net/main-qimg-ab0d69361311b4f15b0064...


John Conway's free will theorem could go here


very cool collection - truly something to pick from at a party. should have many more upvotes imho.


I have actually told this one at a party (well, not exactly a rager) "Two 12 Inch Pizzas have less Pizza than one 18 inch pizza."

and we did the math to prove it to ourselves. Blew our minds... an actual valid use for middle school math.


Agreed, some real mindbenders there!


There is a typo in the title.


The planet closest to Earth (on average) is Mercury.


There is a typo in the headline: Conterintuitive -> Counterintuitive


Great list.

The statement “causation does not imply correlation”, while true in some contrived settings, is obstructive as a heuristic in my opinion. It is an anti-productive and unimportant diversion in any setting I can imagine it coming up in.

Causation does indeed imply correlation, by tautology, using a very reasonable definition of causation in the context of linear correlations.


I love “causation does not imply correlation.”

We still reason so much from the implicit premise that correlations are meaningful. It’s so intuitive, but wrong. The thing that finally got me to wrap my mind around it was this:

https://tylervigen.com/spurious-correlations
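
For what it's worth, here is a minimal sketch of the edge case this subthread is circling: x completely determines y, yet the Pearson correlation is essentially zero because the relationship is symmetric about zero.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 100_000)
    y = x ** 2                          # y is entirely caused by x

    print(np.corrcoef(x, y)[0, 1])      # ~0: no linear correlation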


The most counterintuitive fact, er... fib is this nugget:

"0% selected the right answer on this SAT question: Circle A has 1/3 the radius of circle B, and circle A rolls one trip around circle B. How many times will circle A revolve in total?"

You know how hard it is to get 100% of people to do something? Don't insult our intelligence like that, or that of SAT test takers in general.


The correct answer was not provided as a multiple choice option, which means by definition that 0% of test takers selected it.
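
For the curious, the intended answer is 4, not the "obvious" 3: the centre of circle A travels along a circle of radius R + r, so A makes (R + r) / r revolutions in the fixed frame (the coin rotation paradox). A one-line check:

    # A has radius r, B has radius R = 3r; A rolls once around the outside of B.
    R, r = 3.0, 1.0
    print((R + r) / r)   # 4.0 - and 4 was not among the answer choices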


All right, that's certainly counterintuitive...


> Knowing just slightly more about the value of your car than a potential buyer can make it impossible to sell it: https://en.wikipedia.org/wiki/The_Market_for_Lemons

From that Wikipedia page:

> This means that the owner of a carefully maintained, never-abused, good used car will be unable to get a high enough price to make selling that car worthwhile.

This is bullshit: the word "enough" was sneaked in there without any rationale provided. The most we can say is that a seller, if they decide to sell, won't get as high a price as they would if the information asymmetry didn't exist. But that's just a truism.

I'd be willing to agree for the sake of argument that we are representing humans here as some commonly known set of JSON values. But before we go anywhere else from that argument, I'd need to know that the speaker will at some point halt the simulation and come back to Earth with insights into the real world.

Does that happen in this paper? If not, then how does the paper have relevance for the economic transactions among the set of bona fide human beings?


It can make it impossible, but I believe there is likely a point where the seller's and the buyer's utility values cross - that is, where the seller considers the price of the car to be the same as the buyer does. These two values are not necessarily tied together; the needs of the two parties may be different.


> Knowing just slightly more about the value of your car than a potential buyer can make it impossible to sell it

> This means that the owner of a carefully maintained, never-abused, good used car will be unable to get a high enough price to make selling that car worthwhile.

Both statements reflect a wrong understanding of the phenomenon. The market collapses because the transactions happen outside of the market. Something similar has, in my opinion, happened in the job market: most good jobs never appear on the regular job boards, and most good applicants can't be bothered to apply for an opening - they are approached by recruiters or friends instead.


Under this model, if transactions for quality goods happened anywhere, some buyer must have reached information symmetry. But among the assumptions in the article is that sellers have no alternate market (they just "leave", whatever that means) and that there is no credible way to provide information on the quality of goods. But then how do buyers know to lower their prices for what's left in the market?

There are a lot of unreasonable assumptions, and maybe that's why the paper was rejected three times. You can also run the thought experiment backwards: lemons should be removed from the market first since, as stated, those are the ones that sell, but then you should be left with a market full of peaches.


I really don't like how the OP phrased this one, because it's about more than knowing the value of your car.

This paper was published in 1970, but it demonstrated how markets can fail, as well as a method of correcting that failure. Imagine the following scenario:

There's a used car market of private sellers that is comprised of a mixture of peaches (good) and lemons (bad). To keep it simple, let's assume we're just talking about one model of car.

Additionally, there's no way for a buyer to identify which car is a lemon, but it's known that lemons are worth much less than peaches because of the much higher cost of maintenance.

If you have a market where the above conditions exist (only seller knows if car is lemon/peach, and a mixture of lemons/peaches), you'd potentially end up with a market failure.

This is because sellers of peaches can't get the price they want for their car, whereas sellers of lemons can profit over the expected value of theirs.

The problem with assuming that lemons would be removed from the market is that any buyer of a lemon would want to sell it once they've realized what they purchased, putting it back on the market. This effect compounds to where a greater percentage of cars being sold on the market are lemons, further depressing the price and removing peaches from the market.

Akerlof's solution to fix the market was to introduce warranties. Owners of peaches would be willing to offer warranties, because they trusted the quality of the cars they were selling. Eventually, buyers would see the lack of a warranty as the indication of a car being a lemon, forcing the sellers of lemons to either offer a warranty or lower their asking price below the market price of the vehicle.

His work applied to the functioning of other markets, like insurance (older people are the costliest for health insurance companies) and employment (certain classes of people have difficulty finding a job despite similar skills), as well as to the institutions that have formed (Medicare, professional licensing) to improve the functioning of these markets.
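
A toy simulation of the unravelling described above - note this is a simplification (it drops the buyers' premium over seller value that appears in Akerlof's actual model), just to show the mechanism:

    import random

    random.seed(0)
    # Each seller's car has some true value; buyers can't observe it, so they
    # will only pay the average value of whatever is still offered for sale.
    values = [random.uniform(0, 10_000) for _ in range(10_000)]
    offered = list(values)

    rounds = 0
    while True:
        price = sum(offered) / len(offered)
        remaining = [v for v in offered if v <= price]   # above-average cars withdraw
        if len(remaining) == len(offered):
            break
        offered = remaining
        rounds += 1

    print(rounds, len(offered), round(price))
    # The pool keeps shrinking toward the worst cars until almost nothing trades.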



