Neural Network Visualisation in Clojure (clojurefun.wordpress.com)
63 points by c-oreills on April 10, 2013 | 14 comments



Cool, with caveats. Although this is interesting for people who already know how neural network structures are built and generally how backpropagation and its successor training algorithms work, it isn't particularly _informative_ as a visualization. It does show how easy it is to encode information visually, compared with how difficult it can be for the viewer to _decode_ that same information. This is a common problem with "information" visualizations, as opposed to "scientific data" visualizations (such as volumetric scan data or vector maps): there's no obvious physical correlate we can use to help us decode the information as viewers.


This is a distinction worth making (informative visualizations vs... well, other ones), but there is a corner case for whom this is a very helpful viz: the newbie playing around with NNs who could use a visual aid beyond Clojure's pprint. As a member of said corner case, this would be very helpful to me. All the same, thank you for reminding me of the filter all new viz projects must pass: "does this communicate meaning?"


I have a question for the author, but please do not interpret this as the typical HackerNews-esque pessimistic attack, as it is a sincere question.

Do you really feel like visualizing Neural Networks helps to understand them better? I have yet to find one that has helped me understand them any better than a textual explanation or pseudo-code of the algorithm.


I'm the author. Yes, I find it useful in various ways.

If the learning rate is too high, you can visibly see the weights flicker between different colours. In simple nets you can use it to identify the "meaning" of feature detectors by observing the positive and negative links (green and red). You can debug learning algorithms by immediately seeing if something unusual is happening to the weights or activations.

As always caveats apply, but it is a useful technique (when used alongside a variety of other tools).


Not the author, but I don't think visualizing neural networks is useful for understanding the general concept. I think it's useful for understanding a specific network. For example, if you see the weights change extremely quickly, that means your learning rate is probably too high. If you run a network 3 times and see completely different sets of weights, that tells you your model is highly unstable (not necessarily a bad thing). It can also give you a sense of whether you have a lot of features that matter or just a few.


Likewise, I do not find much value in the visualization of the network structure in itself. However, I would really like to see something that visualizes the geometric representation of the outputs of such a Neural Network.

When I was in school, I learned that a Perceptron is an inequality represented by a hyperplane in R^n (for n inputs of the cell). The numeric output of the cell can be interpreted as the length of a vector perpendicular to that hyperplane.
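
A minimal sketch of that interpretation in Clojure (the function names here are mine, not from the project):

    ;; The perceptron's pre-activation output (dot w x) + b is proportional
    ;; to the signed distance from input point x to the hyperplane w.x + b = 0.
    (defn dot [xs ys] (reduce + (map * xs ys)))

    (defn norm [xs] (Math/sqrt (dot xs xs)))

    (defn signed-distance
      "Signed distance from point x to the hyperplane with normal w and bias b."
      [w b x]
      (/ (+ (dot w x) b) (norm w)))

    ;; e.g. in R^2: (signed-distance [3.0 4.0] -5.0 [1.0 1.0]) => 0.4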

At the time, I tried to take the next step for a Back Propagation network: assuming the output of each cell in a given layer is a dimension in the input space of the next layer, I could project each hyperplane onto the other to find the non-linear geometric objects that resulted. Of course, in order to do that, the network structure was constrained to be 2-2-2 (so every hyperplane would map to a line in 2D).

I lacked the math background to make it work properly at the time, but if someone is interested in picking up the ball, I think it'd be an interesting idea.


Andrew Ng has a fantastic example of visualizing deep belief nets:

https://www.youtube.com/watch?v=ZmNOAtZIgIk

The example starts at about 18:20. You really need to have watched the previous material to understand it, but the basic idea is that he's plotting the hierarchy of features learned by successive layers of a deep neural net (a sparse autoencoder, IIRC; it's been a while since I watched it).


It probably depends on the observer. I found it helpful at a conceptual level, and I think it could be a good introductory teaching aid.

I always run into trouble when visuals encode too much information in colors... I'm red-green color deficient (commonly called color blind), so discriminating colors that have reds and greens as components is challenging.


I think it's a useful visualization, but I prefer matrix plots to observe the weights. You can see the weights start differentiating themselves as training proceeds, and you'll notice that some layers tend to learn a lot faster than others. The unit activations (neuron outputs) are similarly useful to visualize.

Example of weights on matrix plot: http://imgur.com/T48Wal1


I wrote a commercial NN simulator in the late 1980s and I used a different approach to visualizing weights (one that many others also use): if two connected layers are each viewed as a 1-dimensional vector, then the connection weights between them form a 2-dimensional grid. Each grid cell is colour-coded by its weight. This is a much more information-rich display.
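
A rough sketch of that grid encoding in Clojure (helper names and the red/green mapping here are illustrative, not from my original simulator):

    (import 'java.awt.Color)

    (defn clamp01 [^double x] (min 1.0 (max 0.0 x)))

    ;; Positive weights shade towards green, negative towards red.
    (defn weight->colour [^double w]
      (Color. (float (clamp01 (Math/tanh (- w))))
              (float (clamp01 (Math/tanh w)))
              (float 0.0)))

    ;; An m x n weight matrix (m units in one layer, n in the next) becomes
    ;; an m x n grid of colour-coded cells, one cell per connection.
    (defn weight-grid [weights]
      (mapv #(mapv weight->colour %) weights))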


What is the color scale for the connection weight strength?


From GitHub:

    (defn weight-colour
      ([^double weight]
       (Color. (clamp-colour-value (Math/tanh (- weight)))  ; red for negative weights
               (clamp-colour-value (Math/tanh weight))      ; green for positive weights
               0.0)))

So when the weight is 0, the colour is black. As the weight gets more positive, the colour gets greener, and as it gets more negative, the colour gets redder. clamp-colour-value makes sure each colour component stays within the interval [0, 1].
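
A few spot values (approximate):

    ;; (weight-colour  0.0) => black (r 0.0, g 0.0)
    ;; (weight-colour  1.0) => green, g = tanh(1.0)  ~ 0.76
    ;; (weight-colour -1.0) => red,   r = tanh(1.0)  ~ 0.76
    ;; (weight-colour  3.0) => near-saturated green, tanh(3.0) ~ 0.995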

tanh curve shown here: http://upload.wikimedia.org/wikipedia/commons/7/76/Sinh_cosh...


I'm getting a 404 on that URL, I think you forgot the g in .svg :P




