
This is basically an article describing why you can’t just look at an event after it occurs, see that it has some extremely rare characteristics, and then determine it was unlikely to happen by chance.

It is like asking someone to pick a random number between 1 and 1 million and then saying, “oh my god, it must not actually be random… the chances of choosing the exact number 729,619 is 1 in a million! That is too rare to be random!”





“You know, the most amazing thing happened to me tonight. I was coming here, on the way to the lecture, and I came in through the parking lot. And you won’t believe what happened. I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!”

-Feynman, from Six Easy Pieces


Feynman is one of the best ever at explaining complicated concepts in ways almost everyone can understand. That is a very rare skill for the super intelligent to have.

Agreed on Feynman, but not necessarily on the generalization that simplifying things is a rare skill. When you understand a thing well enough, you can simplify it.

I think it also takes a certain humility of character (which can coexist with tremendous self-esteem and even ego; see Feynman, Richard for an example).

I know plenty of smart people who know a topic well and are still not great at simplifying it in a way that laymen can understand. Separate skill imo

Yeah, I always remember this when people put on their tinfoil hats about some rare event.

Your comment made me think of an interesting story and a fun fact.

During WW2, the Allies tried to estimate the number of German tanks by observing the serial numbers on captured tanks.

https://en.m.wikipedia.org/wiki/German_tank_problem If, say, the serial numbers are unique and sequential, and the first five numbers you see are all less than 100, chances are that nowhere near 200 tanks were produced. (Provided some assumptions hold, of course.)

The fun fact is that you get different results depending on whether you follow the frequentist or the Bayesian approach.
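For anyone curious, both estimators are easy to sketch in Python. The serial numbers below are made up for illustration; the assumptions are the classic ones (serials run 1..N, unique, and the observed sample is uniform):

```python
# Sketch of the two German tank problem estimators.
# Assumptions: serials are 1..N, unique, and the k observed
# serials are a uniform random sample without replacement.

def frequentist_estimate(serials):
    """Minimum-variance unbiased estimator: m + m/k - 1."""
    m, k = max(serials), len(serials)
    return m + m / k - 1

def bayesian_mean(serials):
    """Posterior mean under an (improper) uniform prior over N,
    valid for k >= 3: (m - 1)(k - 1) / (k - 2)."""
    m, k = max(serials), len(serials)
    return (m - 1) * (k - 1) / (k - 2)

observed = [19, 40, 42, 60]            # four hypothetical captured serials
print(frequentist_estimate(observed))  # 74.0
print(bayesian_mean(observed))         # 88.5 -- the two approaches disagree
```

With only four observations the two answers differ noticeably, which is the fun fact in action.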


The Bayesian results will depend on the prior. They use a uniform distribution over the number of tanks produced, in the limit as the distribution's maximum goes to infinity. Is that reasonable? Something more constrained might be better; maybe a gamma-Poisson prior with the gamma mean based on some plausible estimate of the production rate.

(The frequentist/Bayesian estimates should converge as you collect more observations.)


Yeah tbh it doesn't really go into chess-specific stats either

You could look at a bunch of other metrics to identify cheating: how many errors/perfect moves^ and whether that's within the usual range. How well were the opponents playing? Etc

If you consider that Nakamura might have been having a good day/week, was already stronger than his opponents, and some of them may have had bad games/days, you can change something from "extremely unlikely" to "about a dice roll"

^ according to stockfish
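As a back-of-envelope sketch of the "good day against weaker opponents" point: the standard Elo expected-score formula shows how quickly a big rating gap inflates streak probabilities. The ratings and streak length below are made up for illustration, and treating expected score as a per-game win probability ignores draws:

```python
# Rough sketch: probability of a long win streak under the Elo model,
# treating expected score as win probability (ignores draws) and games
# as independent. The numbers are illustrative, not real ratings.

def elo_win_prob(r_a, r_b):
    """Standard Elo expected score for player A against player B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

p = elo_win_prob(3200, 2800)   # a 400-point gap -> ~0.91 per game
streak = p ** 45               # chance of 45 straight wins
print(round(p, 3), round(streak, 3))   # 0.909 0.014
```

A ~1.4% chance isn't quite "a dice roll", but across thousands of sessions by thousands of strong players it stops looking extraordinary.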


Not really. This may be true for the average player, but as Magnus has explained multiple times, all he or another top GM would need to be near-unbeatable is to check an engine in 1 or 2 critical positions per game. This is essentially impossible to detect statistically. Even if a cheater were to use an engine on every move, it would be trivial to just vary the engine used for each turn, vary the number of moves picked, sometimes play a slightly worse move to evade detection, etc etc

What I don’t understand is that Hikaru can visualise in his head 30+ moves ahead for both players, and yet he’s not better than Magnus?

??

I'm all for having a Hikaru/Magnus discussion--one of my favourite topics--but this just doesn't make sense


What I was trying to say was that Hikaru can essentially predict the future given he has stockfish running in his head, while I don’t think Magnus has that ability, yet Hikaru is ranked lower

Magnus is widely regarded, including by Hikaru, as having the best chess intuition (i.e. subconscious understanding) of any player alive, by some distance. The times he's beaten are almost always when he's out-calculated on a very deep line that he had intuitively disregarded. At the same time, though, Hikaru himself is also far better known for his intuition than for his conscious calculation, which explains his much stronger performances in faster time controls. If you want a player who is more calculative and, ergo, more like Stockfish or another traditional engine, perhaps look at Gukesh, who almost exclusively plays classical for that reason.

Is that bit in The Queen’s Gambit about chess players coaching each other between matches complete bullshit? Or should one expect a player to occasionally play uncharacteristically when the stakes are high because they would seek out advice which skews their play?

Also psychological games fall neatly into the scenario you describe. I play better and you play worse because I got into your head, or sent the noisy people to be across the hall from you instead of from me, so I slept like a baby and you didn’t.


The adjournments in The Queen's Gambit were rendered obsolete after chess engines became strong enough to be useful in analysis. The last year that they were permitted was 1996.

Match play at the World Championship (where the two players play each other repeatedly for many games) involves a ton of inter-game coaching and work as each player's team goes over what went well, what went wrong, and how the next game should be approached.

Round robin play in small fields also has a significant amount of preparation because the schedule is known in advance, so players will know whom they have to play the following morning and will prepare accordingly.

I'm not comfortable saying that Hikaru does exactly 0 preparation for 3-minute Chess.com blitz games, but it's probably pretty close to 0.


There are actually some freaky patterns in nature (including how people think) that can help identify fake data...

https://en.wikipedia.org/wiki/Benford%27s_law
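A quick sketch of what the law predicts, using powers of 2 as a stand-in for data spanning many orders of magnitude (an assumption chosen for illustration, not a claim about any real dataset):

```python
# Benford's law sketch: leading digits of 2^1 .. 2^1000 versus the
# predicted frequencies log10(1 + 1/d). Powers of 2 are a classic
# sequence whose leading digits follow Benford's law.
import math
from collections import Counter

counts = Counter(str(2 ** n)[0] for n in range(1, 1001))
total = sum(counts.values())
for d in "123456789":
    benford = math.log10(1 + 1 / int(d))   # predicted P(leading digit = d)
    observed = counts[d] / total
    print(d, round(observed, 3), round(benford, 3))
```

Fabricated numbers rarely match this skew (people intuitively spread leading digits evenly), which is why Benford's law turns up in fraud detection.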


The article itself states that this is not really a pattern of nature, but just a feature of log-normal distributions, which sometimes occur naturally.

This article feels like an illustration of how easy it is to fool top chess players. For example, if the accusation was against Hans Niemann, top chess players and their fans would be eating it up.

Not the same though, because we aren't talking about random events. If a player with a significantly lower Elo rating than Hikaru got the same winning streak against the same tier of players, then you could absolutely conclude that it was cheating.

I had a math teacher in college that told a funny joke:

"For safety, I always bring a bomb with me when I fly"

"Why?"

"Because the odds of two people bringing a bomb on the same plane are so low"


You can and you should.

If you flipped a coin 100 times and all you got was heads, you really should assume it didn't happen by chance.


Yeah, but say 1,000 people each flipped a coin 10,000 times and one of them once got a streak of 29 heads out of 30 flips. Can we assume anything then?
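A rough expected-count sketch (assuming fair coins and counting overlapping 30-flip windows with at least 29 heads) suggests it's not even that surprising:

```python
# Expected number of 30-flip windows containing at least 29 heads,
# across 1,000 people flipping a fair coin 10,000 times each.
# By linearity of expectation this is exact even though windows overlap.
from math import comb

p_window = (comb(30, 29) + comb(30, 30)) / 2 ** 30   # >= 29 heads in 30 flips
windows_per_person = 10_000 - 30 + 1
expected = 1_000 * windows_per_person * p_window
print(round(expected, 2))   # 0.29 -- so seeing it once isn't shocking
```

With an expected count around 0.29, someone somewhere hitting that streak is roughly a coin flip away from being expected, not evidence of a rigged coin.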

Nope. A previous flip has no bearing on the next flip.

You can calculate the probability of having a fair coin and as N(Heads) increases that probability goes down. Each flip is indeed independent but the distribution of flips tells you something about the coin.

We aren't predicting flips based on "a" previous flip. We're predicting them based on the set of ALL KNOWN previous flips, which allows a statistical model.
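As a sketch of that idea: comparing a "fair coin" hypothesis against a "coin of unknown bias" (here with a uniform Beta(1,1) prior over the bias, which is an assumption of this sketch) shows how decisively 100 straight heads points away from fairness:

```python
# Bayes-factor sketch: how much 100 straight heads favors "biased coin"
# over "fair coin". Under a uniform Beta(1,1) prior on the bias, the
# marginal likelihood of n heads in n flips is 1 / (n + 1).
fair = 0.5 ** 100          # P(data | fair coin)
biased = 1 / 101           # P(data | coin with uniform-random bias)
bayes_factor = biased / fair
print(f"{bayes_factor:.2e}")   # ~1.3e+28 in favor of a biased coin
```

Each flip is independent, but the whole record of flips still carries overwhelming evidence about the coin itself.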

The odds of that happening by chance are so astronomically small that they involve a number you've probably never even heard of (about 1 in 2^100, roughly 1 in 10^30).

'If you flipped a coin 100 times and all you got are heads you'

By starting the sentence with "if", you are selecting the occurrence to look at.

If you said, "I am about to look at the results of this coin toss that happened yesterday; if it is all heads, then I am going to assume it was not random," then you are making the claim before you have seen the results. You can still be wrong, but the chance of being wrong equals the rarity of the event.


My favorite way to describe this is in the context of predictions. It's the difference between throwing a dart to hit a target and throwing a dart, then painting a target around where it lands.

>This is basically an article describing why you can’t just look at an event after it occurs, see that it has some extremely rare characteristics, and then determine it was unlikely to happen by chance.

No. That's not it. In this case, if you properly control for all the factors, it turns out that the odds of Nakamura having that kind of a win-streak (against low-rated opponents) was in fact high.


> This is basically an article describing <snip hot take>

This is entirely wrong and missing basic high school mathematics for non-theater kids.

The original claim is not archived; if you can be bothered, you can track it down and do the correct 'hot take'. You can't just grab the first statistical principle you think of, even if everyone else on Hacker News does.

The article says "it violates the likelihood principle"; this seems wrong and Nakamura seems right, but you'd have to look at the original claim.

They were finding patterns in a long biased list of numbers, probably.





