I don't want to knock the article, since this kind of exploration is fun and well-written blogs are always a joy to read.
But knowing some statistics, the entirety of Snakes and Ladders is an absorbing Markov chain [1] and can be analyzed very quickly as such, without having to resort to sampling.
Random sampling is easy, but take a step back: the entire state space is just an integer in [1, 100]. (Actually there are fewer than 100 states, because the bottom of a ladder or the top of a snake isn't a state.)
The state transitions are very easy to model: probability 1/6 to each of the 6 next states (near the end there are sometimes fewer than 6 distinct destinations, in which case the probabilities just add).
Having constructed our Markov chain, we can get back the exact expected time-to-victory from each square, instantly.
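A minimal sketch of that calculation in Python: build the transition matrix, then solve for the expected number of moves to absorption using the standard (I - Q)t = 1 identity for absorbing chains. The snake/ladder positions below are from the classic Milton Bradley "Chutes and Ladders" board, and I'm assuming the "overshoot means stay put" rule; treat both as assumptions if your board or house rules differ.

```python
import numpy as np

# Ladder bottoms -> tops and chute tops -> bottoms (classic board; assumption).
jumps = {1: 38, 4: 14, 9: 31, 16: 6, 21: 42, 28: 84, 36: 44,
         47: 26, 49: 11, 51: 67, 56: 53, 62: 19, 64: 60,
         71: 91, 80: 100, 87: 24, 93: 73, 95: 75, 98: 78}

n = 101                      # states 0..100; 0 is "off the board", 100 absorbs
P = np.zeros((n, n))
for s in range(100):         # transient states
    for roll in range(1, 7):
        t = s + roll
        if t > 100:
            t = s            # overshoot: stay put (rule assumption)
        t = jumps.get(t, t)  # slide along any chute/ladder landed on
        P[s, t] += 1 / 6
P[100, 100] = 1.0            # absorbing state

# Expected moves to finish from each square: solve (I - Q) t = 1,
# where Q is the transient-to-transient block of P.
Q = P[:100, :100]
t = np.linalg.solve(np.eye(100) - Q, np.ones(100))
print(f"Expected moves from the start: {t[0]:.2f}")
```

The same `(I - Q)` fundamental matrix also gives full absorption-time distributions and per-square visit counts, which is what the linked analyses compute.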
Interesting that these two articles use different rule-sets. The first reckons rolling anything that takes you to 100 or above is a win, whereas the second requires an exact landing!
(Neither plays the "bounce-back" rule always demanded by my friend's little sister!)
I also wrote a little blog post about that very thing back in 2011. In addition to using a Markov chain approach, I also took a look at it from an information entropy perspective. And the code is in R, to boot! http://bayesianbiologist.com/2011/12/31/uncertainty-in-marko...
Chutes and Ladders, like Candyland, isn't so much a "game" as an exercise in following a procedure involving random chance. That's useful, and can be fun; it also has the useful property that an adult and a kid can play together with an equal chance of either "winning", and it can teach good sportsmanship.
But yes, there are many kid-friendly games to replace it with for kids who understand basic game concepts.
Also check out the aMAZEing labyrinth. There's a "junior" version, but in my experience most younger kids can figure out the regular "8+" game just fine.
The first of these is more sophisticated than the current post, because it calculates the distributions analytically using the fact that the move sequence is Markov and the transition matrix is known. So, exact probabilities can be found.
That is just the analysis I think about doing every time I play Chutes and Ladders with my daughter. I think "I should just simulate this game, it would be much more fun."
Well, just take notes while you play then. ;) You're already simulating it by hand. How do you think people ran simulations before there were electronic computers?
Thanks for this awesome analysis. I've been wanting to do a simulation of how much of Settlers of Catan is luck vs skill. Has anyone seen an analysis like this completed?
I haven't seen a formal analysis, but I can tell you for sure that it is highly skill-based until everyone at the table plays optimally. Then it becomes luck-based.
Best example was a tournament for Settlers. In the early stages one player dominated each table. But in one of the last stages it was only the top players. The game ended with one person getting their 10th VP and everyone else at the table had their 10th VP in their hands.
Yeah, completely agree with you. What do you think would be a good way to model skill vs. luck in a game? I guess you'd have to create a skill attribute that affects particular parts of gameplay in a simulation, then run simulations and compare outcomes with skill vs. outcomes with chance alone. Would you then put a cap on skill? Or can skill be capped by innate features of the game itself? I'm wondering if it's possible to model this without building in the conclusion a priori. I guess that means you have to make sure your model is actually representative of the game and its outcomes, and doesn't include attributes that are only present in the model and not the real game. Just sort of thinking out loud here.
[1] http://en.wikipedia.org/wiki/Absorbing_Markov_chain