It's showing a visualization of all the intermediate activations of the style transfer network. The intermediate activations are 4D tensors, so they're visualized as a sequence of 2D tiles.
A sequence of 9x9 and 3x3 convolutions transforms the one big input image into a stack of smaller feature maps, which are then processed by a series of residual convolutions. Finally, a few deconvolution operations merge those tiny tiles back into a stylized image the same size as the original input.
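If it helps, here is a minimal PyTorch sketch of that kind of transform network: downsampling convolutions, residual blocks, then deconvolutions back up. The layer counts and channel widths are my guesses, not necessarily what this demo actually ships.

    # Minimal sketch of the transform network described above.
    # Layer counts and channel widths are assumptions, not the demo's exact model.
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            )

        def forward(self, x):
            return x + self.block(x)  # residual connection

    class TransformNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.model = nn.Sequential(
                # 9x9 convolution over the full-resolution input
                nn.Conv2d(3, 32, kernel_size=9, padding=4),
                nn.ReLU(inplace=True),
                # strided 3x3 convolutions shrink the image into many smaller feature maps
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                # residual convolutions work on the small feature maps
                *[ResidualBlock(128) for _ in range(5)],
                # deconvolutions (transposed convs) merge them back up to full size
                nn.ConvTranspose2d(128, 64, kernel_size=3, stride=2, padding=1, output_padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, 32, kernel_size=3, stride=2, padding=1, output_padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 3, kernel_size=9, padding=4),
            )

        def forward(self, x):
            return self.model(x)

    # e.g. a 256x256 RGB image goes in, a 256x256 stylized image comes out
    # stylized = TransformNet()(torch.rand(1, 3, 256, 256))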
It's just a demo of an upcoming open source API that allows running deep neural network models in the browser.
Steps (disclaimer: I'm not affiliated with the creators, so this is just my understanding of what it does):
1.- You upload your image
2.- Select an image to be the source of the style
3.- Downloading Model: downloads a deep neural net trained for style transfer
4.- Colorful artifacts: the model is applied to your image. The artifacts are probably a visualization of the network weights being converted into WebGL shaders, or just a visualization of the internal hidden steps of the transformation (roughly the tiling sketched after this list)
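As for how those internal hidden steps get shown as tiles, here is a rough numpy sketch of laying a 4D activation tensor out as a grid of 2D tiles for display. The demo presumably does this on the GPU via WebGL; the channel counts and tile sizes below are just illustrative.

    # Tiling a 4D activation tensor of shape (1, channels, height, width)
    # into one 2D grid image, with each channel drawn as a small tile.
    # Sizes here are made-up examples, not the demo's actual dimensions.
    import numpy as np

    def tile_activations(acts, cols=8):
        """acts: (channels, height, width) -> a single 2D grid of tiles."""
        c, h, w = acts.shape
        rows = int(np.ceil(c / cols))
        grid = np.zeros((rows * h, cols * w), dtype=acts.dtype)
        for i in range(c):
            r, col = divmod(i, cols)
            grid[r * h:(r + 1) * h, col * w:(col + 1) * w] = acts[i]
        return grid

    # e.g. 128 small feature maps from one layer -> a 16x8 grid of tiles
    batch = np.random.rand(1, 128, 64, 64)   # stand-in for one layer's output
    grid = tile_activations(batch[0])
    print(grid.shape)                        # (1024, 512)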