It does for me in Firefox, but I have the opposite opinion. It shouldn't start running compute-intensive operations and downloading datasets until I say go.
Maybe try something even simpler than MNIST, so we get the immediate-feedback effect of http://playground.tensorflow.org, which I consider the key aspect for learnability?
There's just no point getting fancy for MNIST. Using nothing but fully-connected layers and rmsprop, I got good results in less than one minute. https://imgur.com/a/7xzmQ
Hidden units for discriminator layers - 100, 40, 2.
Generator layers - 40, 100, 784, then "reshape" into 28x28x1.
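If it helps to see the shape of that setup, here's a minimal Keras sketch of a fully-connected MNIST GAN along those lines. The layer sizes come from the comment above; the noise dimension, activations, optimizer settings, and training loop are my own assumptions, not the original author's code.

```python
# Sketch of a fully-connected MNIST GAN with the layer sizes quoted above.
# latent_dim, activations, and RMSprop settings are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100  # assumed; the comment doesn't specify the noise size

# Generator: 40 -> 100 -> 784, reshaped into a 28x28x1 image.
generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(40, activation="relu"),
    layers.Dense(100, activation="relu"),
    layers.Dense(784, activation="tanh"),
    layers.Reshape((28, 28, 1)),
])

# Discriminator: 100 -> 40 -> 2 (real vs. fake as a 2-way softmax).
discriminator = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(40, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
discriminator.compile(optimizer=keras.optimizers.RMSprop(),
                      loss="sparse_categorical_crossentropy")

# Combined model for the generator update: freeze the discriminator and
# ask it to classify generated images as "real" (class 1).
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer=keras.optimizers.RMSprop(),
            loss="sparse_categorical_crossentropy")

# Training loop sketch: alternate discriminator and generator updates.
(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = (x_train.astype("float32") / 127.5 - 1.0).reshape(-1, 28, 28, 1)

batch = 64
for step in range(1000):
    real = x_train[np.random.randint(0, len(x_train), batch)]
    noise = np.random.normal(size=(batch, latent_dim))
    fake = generator.predict(noise, verbose=0)
    # Discriminator step: real images -> class 1, generated -> class 0.
    discriminator.train_on_batch(np.concatenate([real, fake]),
                                 np.concatenate([np.ones(batch), np.zeros(batch)]))
    # Generator step via the combined model: want fakes labeled as real.
    gan.train_on_batch(np.random.normal(size=(batch, latent_dim)), np.ones(batch))
```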
Unlike regular classification, a GAN's performance is hard to quantify from the loss curves alone. You really have to evaluate it subjectively by looking at the generated images.
When I played around with it, the results I got from fully-connected layers were nowhere near as good as the results I got from convolutional ones.
Yeah, OK, it's pretty misleading. And if you let it run longer, the generator starts getting a little better. It took me a long time to figure out the UI.
In Israel we have a classic song that goes "if you're cooking spaghetti and the water doesn't boil, how about turning the stove on, because that's what everybody does."
(at the time of writing, the HN post points to the demo)