The crystal theme worked the best (it's basically some scales on the piano that go up and down).
With a first-order Markov chain, the output was a random walk of notes wandering higher and lower. With a second-order chain, the notes went up and down fairly steadily, but with scales peaking at odd points. With a third-order chain, you get something close to the original song back out.
(It's hard to explain in text, without knowing the correct music vocabulary, but I hope you get the idea).
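To make the order effect concrete, here's a toy sketch (mine, not the poster's actual code): train an n-th order chain on a scale that goes up and down, then random-walk through it. The melody and note names below are made up stand-ins for the crystal theme.

```python
import random
from collections import defaultdict

def build_chain(notes, order):
    """Count which note follows each length-`order` context."""
    chain = defaultdict(list)
    for i in range(len(notes) - order):
        context = tuple(notes[i:i + order])
        chain[context].append(notes[i + order])
    return chain

def generate(chain, seed, length):
    """Random-walk through the chain, starting from a seed context."""
    out = list(seed)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(seed):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return out

# A toy "scales up and down" melody, standing in for the crystal theme.
melody = ["C", "D", "E", "F", "G", "F", "E", "D"] * 4
chain = build_chain(melody, order=2)
print(generate(chain, seed=("C", "D"), length=16))
```

With order=1 the walk wanders (each note has two possible followers); with order=2 this particular melody becomes fully deterministic, which is exactly the "you get the original song back" effect.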
I'd guess each section is unique, so there's no frequency analysis; the next section is picked uniformly at random rather than, say, "whoop whoop" being followed by "Gangnam style" 20% of the time, by "hey sexy lady" 50% of the time, and by another "whoop whoop" 30% of the time.
Yeah, thinking about it some more, that seems to be correct. It isn't what people usually think about when they think about Markov models, but it qualifies.
Actually, I think this kind of Markov model might be a better introduction than some of the others I've seen, since it's extremely simple to understand.
My team built something similar at a recent hackathon. It attempts to do this automatically to an arbitrary WAV file (breaking it up into waveforms) and generate a new piece of music:
https://github.com/osnr/markov-music
I didn't take the time to figure out why. Either it's because Echo Nest's analyzed mp3 isn't the same as mine, or the function that computes the distance between beats isn't generic enough.
Analyze your own; it'll work much better when you're doing synchronization-sensitive stuff like this. Our canonical audio is likely very different in bitrate or start time.
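For what it's worth, a generic beat-distance function usually boils down to comparing feature vectors per beat. This is just a sketch of that idea, not markov-music's actual implementation, and the feature vectors here are placeholders:

```python
import math

def beat_distance(a, b):
    """Euclidean distance between two beats' feature vectors
    (e.g. averaged timbre/pitch coefficients from an analyzer)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two hypothetical beats described by small feature vectors.
print(beat_distance([0.0, 0.0], [3.0, 4.0]))
```

If two analyses disagree on bitrate or start offset, the vectors shift and every pairwise distance blows up, which would explain the behavior above.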