Feature Visualization: How neural nets build up their understanding of images (distill.pub)
461 points by rrherr on Nov 7, 2017 | 63 comments



Looking at the finger instead of the moon: I like the HTML layout (responsive, inline images with captions, lateral notes).

Any insights on how it's generated? Markdown, Rst, Latex -> HTML? I would love to produce my documentation in this way.

Edit: I was too hurried. Everything is explained in https://distill.pub/guide/, the template is at https://github.com/distillpub/template


I logged in to comment on the superb design as well. The design doesn’t only make it look good, every aspect is in full support of the content. And technically it’s executed perfectly as well. Very impressive.


I'm incredibly lucky to be working with a number of people who have an amazing intersection of design skills and scientific knowledge.

Ludwig is fantastic and put an incredible amount of work into polishing this article. And my co-editor Shan (who used to do data vis at the New York Times) seems like he has super powers half the time. We also get lots of outstanding advice from Ian Johnson and Arvind Satyanarayan.


Minor note, you should add the field DOI={10.23915/distill.00007} to the BibTeX citation. This is also missing from Google Scholar, and is a particular pet peeve of mine now that DOIs are practically mandatory (copying the helpfully-formatted citation but then having to look around the page to find the DOI).


Thanks for the heads-up! We are not sure there is a standard for this, but we now include the DOI in our BibTeX citation.


Thanks for the suggestion, added it to our issues... https://github.com/distillpub/template/issues/62


Looking at the finger instead of the moon

I like this metaphor.


(In case "I like this metaphor" isn't a comment about how you have liked and continue to like it, it's a reference to an old Buddhist saying. I could have sworn it also appeared on the first page of the Tao Te Ching, but I guess I had just remembered the gist.)


The page is broken on Safari 10.0 but works on Chrome 59.


Hey! I'm one of the authors, along with Alex and Ludwig. We're happy to answer any questions! :)


As always, THANK YOU. Your and your coauthors' attention to even the smallest details everywhere in this piece is evident. I've added it to my reading list, to read through it and look at the visualizations slowly :-)

That said, after skimming the piece and thinking about the amount of work that went into it, one question popped up in my mind: do you think it would be possible to train a DNN to learn to visualize the "most important" neuron activations / interactions of another DNN?


We're glad you enjoyed it! :D

> do you think it would be possible to train a DNN to learn to visualize the "most important" neuron activations / interactions of another DNN?

That sounds like a really hard problem. I'm not even entirely sure what it would mean, but it would not surprise me at all if there were some refinement of it that could turn into an interesting research direction! :)


Thanks. I asked the question in such an open-ended way just to see if you had any crazy ideas. It does sound like a hard problem.

In terms of what it could mean, one idea I just had is to take a trained model, randomly remove (e.g., zero out) neurons, and then train a second model to predict how well the trained model continues to work without those removed neurons. The goal would not be to 'thin out' the first model to reduce computation, but to train the second one to learn to identify important neurons/interactions. Perhaps the second model could learn to predict which neurons/interactions are most important to the first model, as a stepping stone for further analysis?
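Something like this, maybe (a toy PyTorch-style sketch just to make the idea concrete; it assumes a conv layer with (N, C, H, W) outputs, and `random_channel_masks` is a made-up helper):

    import torch

    def ablation_accuracy(model, layer, mask, loader, device="cpu"):
        # Zero out the channels where mask == 0 in `layer`'s output, then
        # measure the trained model's top-1 accuracy on the validation loader.
        def hook(_module, _inputs, output):
            return output * mask.view(1, -1, 1, 1).to(output.device)
        handle = layer.register_forward_hook(hook)
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in loader:
                preds = model(images.to(device)).argmax(dim=1)
                correct += (preds == labels.to(device)).sum().item()
                total += labels.numel()
        handle.remove()
        return correct / total

    # Training data for the second ("importance predictor") model would be
    # pairs of (random binary channel mask, accuracy of the ablated model):
    # data = [(m, ablation_accuracy(model, layer, m, val_loader))
    #         for m in random_channel_masks(num_channels=512, num_samples=1000)]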




Many of the visualizations seem interpretable even though they are (essentially, I presume) projections of a very high dimensional space onto 2d position + 3d color space. Is there any research on projecting these visualizations into six dimensions (3d position + 3d color in VR) or even higher dimensions by substituting other sensory modalities for extra spatial or color dimensions?


I suspect the extra fidelity won't have much payoff for high dimensions, considering the large cognitive overhead required for a person to make sense of it. Even a six-dimensional approximation of a 1,000-dimensional space is quite crude, and we're pretty confused by spaces larger than three dimensions.


I wanted to thank you not only for this new page but also for your blog and distill.pub, especially the research debt page!


Our pleasure -- very glad our work is helpful. ;)


Hi. Thanks for the great article.

Can you elaborate on what you mean by "directions in activation space"? If I understand it right:

You take a few neurons in a layer, and you follow some linear combination of their weights; you are then walking along a random direction. You take a single neuron and walk along its weights; you walk along this neuron's direction. Is this correct?
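For concreteness, here is roughly how I'm picturing it in numpy (quite possibly wrong, which is why I'm asking):

    import numpy as np

    rng = np.random.default_rng(0)
    acts = rng.normal(size=(14, 14, 512))     # stand-in for one layer's activations (H, W, C)

    def direction_objective(acts, v):
        # Mean over spatial positions of the activation vector's dot product with v.
        return (acts @ v).mean()

    z = 42
    neuron_obj = acts[7, 7, z]                # a single neuron
    channel_obj = acts[:, :, z].mean()        # a whole channel
    v = rng.normal(size=acts.shape[-1])
    v /= np.linalg.norm(v)
    random_direction_obj = direction_objective(acts, v)   # a random direction in activation space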

Also another comment: Szegedy et al [9] (https://arxiv.org/pdf/1312.6199.pdf) has the following abstract:

> First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.

This is one of the predictions of the following paper, even though I think it came later: Opening the Black Box of Deep Neural Networks via Information (https://arxiv.org/abs/1703.00810). Here: https://youtu.be/FSfN2K3tnJU?t=1h19m23s, Tishby says: "One of the consequences of this theory is that single neurons in a layer don't tell us much."

Also, if you extend this idea to training examples, you get Mixup (https://arxiv.org/pdf/1710.09412.pdf)


No questions. Just a big thank you !

distill.pub is really an amazing place to learn.


Do you often have beautiful pictures like this on your screen?

(a reason why I'm moving into scientific visualization and GIS instead of backend work...)


Yep, neural network interpretability research involves quite a bit of staring at pretty pictures. It's a difficult job, but someone has to look at all those images. :P


How useful are visualizations like this for improving the performance of the model?


Great presentation, but I do wish they'd throw in an equation or two. When they talk about the "channel objective", which they describe as "layer_n[:,:,z]", do they mean they are finding parameters that maximize the sum of the activations of RGB values of each channel? I'm not quite sure what the scalar loss function actually is here. I'm assuming some mean. (They discuss a few reduction operators, L_inf, L_2, in the preconditioning part but I don't think it's the same thing?)

The visualizations of image gradients were really fascinating; I never really thought about plotting the gradient of each pixel channel as an image. I take it these gradients are for a particular (and same) random starting value and step size? It's not totally clear.

(I have to say, "second-to-last figure.." again.. cool presentation but being able to say "figure 9" or whatever would be nice. Not everything about traditional publication needs to be thrown out the window.. figure and section numbers are useful for discussion!)


You're right: we are taking the mean of the activations of a given channel `z` over all its `x,y` coordinates. (We could sum, but we use the mean so that step sizes are comparable between channel and neuron objectives.) Thanks for the feedback that this notation is not super clear; we will consider rewriting those expressions.
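In pseudocode, for activations `acts` of shape (height, width, channels), the scalar we optimize is roughly:

    channel_objective = acts[:, :, z].mean()   # what "layer_n[:, :, z]" denotes
    neuron_objective  = acts[x, y, z]          # a single spatial position in that channel

The visualization then ascends the gradient of that scalar with respect to the input image.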

When we do feature visualization we do start from a random point/noise. For the diagram showing steepest descent directions, however, the gradient is evaluated on an input image from the dataset, shown as the leftmost image. There's no real step size either as we're showing the direction. You can think of the scale as arbitrary and chosen for appearance.

Section numbers are on their way—and figure numbers also sound helpful! I've added a ticket. (https://github.com/distillpub/template/issues/63) For now you can already link to figures like this: https://distill.pub/2017/feature-visualization/#steepest-des...


Ah, thanks for your explanation re the gradient images; I get it now. I think it does say that more or less in the text, actually; I was just understanding it a bit wrong, my bad. For me the preconditioning part of the article is the hardest to get an intuition for.


There’s also an appendix where you can browse all the layers. https://distill.pub/2017/feature-visualization/appendix/goog...


Yes, that appendix is not to be missed. Also, clicking on an image goes deeper into an exhaustive catalog of each layer, which is an amazing and huge resource.


Are the layer names the same ones referred to in this paper? https://arxiv.org/abs/1409.4842

And how can e.g. layer3a be generated from layer conv2d0? By convolving with a linear kernel? Or by the entire Inception Module including the linear and the non-linear operations?

Thank you. Outstanding work breaking it down.

Here's another paper people might enjoy. The author generates an example for "Saxophone," which includes a player... which is fascinating, because it suggests that our usage of the word in real practice implies a player, even though the saxophone is only an instrument. This highlights the difference between our denotative language and our experience of language! https://www.auduno.com/2015/07/29/visualizing-googlenet-clas...

Also, for those curious about the DepthConcat operation, it's described here: https://stats.stackexchange.com/questions/184823/how-does-th...

Edit: I'll be damned if there isn't something downright Jungian about these prototypes! There are snakes! Man-made objects! Shelter structures! Wheels! Animals! Sexy legs! The connection between snakes and guitar bodies is blowing my mind!


This didn't include my favorite kind of visualization from Nguyen, et al., 2015: https://i.imgur.com/AERgy7I.png


I should have explained how these are made. They train another neural network[1] to produce an image that maximizes the score for each class. This acts as a prior that the image must have a very simple and regular structure. And so the results seem to be very simple and even abstract, instead of pixel vomit.

[1] Not technically a neural network, but a CPPN, which is something like a neural network with many different mathematical functions as activation functions. This allows things like a neuron with a sine-wave activation that can repeat a pattern across the image.
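To make that concrete, here's a toy sketch of the idea (my own illustration in numpy, not the paper's code): a CPPN maps each pixel's coordinates through a small network with mixed activations, so the output image is smooth and repetitive by construction.

    import numpy as np

    # Toy CPPN: map each pixel's (x, y, r) coordinates through a few layers
    # with mixed activations (tanh, sine, gaussian) to an RGB value.
    rng = np.random.default_rng(0)
    size = 128
    xs, ys = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    coords = np.stack([xs, ys, np.sqrt(xs**2 + ys**2)], axis=-1).reshape(-1, 3)

    activations = [np.tanh, np.sin, lambda u: np.exp(-u**2)]
    h = coords
    for i in range(4):
        w = rng.normal(scale=1.5, size=(h.shape[1], 16))
        h = activations[i % len(activations)](h @ w)
    rgb = 1.0 / (1.0 + np.exp(-(h @ rng.normal(size=(16, 3)))))   # squash to [0, 1]
    image = rgb.reshape(size, size, 3)

If I recall correctly, in Nguyen et al. the CPPN's parameters are then what get evolved/optimized so that the rendered image maximizes a target class score.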


The school bus one really sticks out to me. It seems all the net cares about is seeing orange juxtaposed with black. No shapes, no vehicle features, just orange and black.

Hazarding a guess with no knowledge of the subject, I wonder if that is because no other class in ImageNet can be defined by orange and black. The net simply doesn't need to learn anything more about orange and black, because on 100% of the samples it trained on, orange and black meant "school bus". Every time. So no need to learn any other features -- if you see orange and black, it MUST be a school bus, at least in the context of this data set.

I wonder whether, if we introduced other "orange and black" classes to ImageNet, it would need to learn more features about the school bus in order to identify it.


That's a good observation. But when I see that image I definitely think of a school bus. The color pattern is very distinctive (in the US anyway where all school buses are painted the same color and style.) So I can't say the NN is wrong.

It doesn't necessarily mean that color is the only feature it uses to classify school buses. Just that that feature alone is enough.


It seems neural nets are quite prone to this sort of overfitting, given the article on adversarial objects from the other day: http://www.labsix.org/physical-objects-that-fool-neural-nets...


It would be interesting to try these pictures; the stripes would compete with the other features of the picture, so it might not result in seeing a school bus.

https://static.comicvine.com/uploads/scale_large/6/67663/418...

http://www.fairmont.com/assets/0/137/13359/13476/13477/2a657...


Wow. That's incredible how psychedelic these images are. I'd be really curious to learn more about the link between these two seemingly distant subjects.


Check out this: https://www.ted.com/talks/anil_seth_how_your_brain_hallucina...

Neural nets are doing something that the mind does too: it takes visual input, then "predicts" and "fills in the blanks" (aka hallucinates) so that you "see" what you expect to see. It's why you don't notice your blind spot.

It's also interesting to note that this process appears to change as we age: https://www.npr.org/2016/04/17/474569125/your-brain-on-lsd-l...


The brain and NNs, at least in the first layers of the visual cortex and the first perceptual layers respectively, are doing the same thing: building an optimal sparse coding for the body of imagery they've been exposed to (in general it isn't limited to images; it's just that our knowledge of image processing seems to be a bit more advanced). There is work from the 1990s proving that Gabor kernels - which form the first layer in the visual cortex and which (or very similar kernels) happen to emerge in the first layers of well-trained NNs too - are optimal for a wide class of image inputs (edge density, etc.), basically the class of images naturally surrounding us. That optimality provides a good explanation of their emergence in NNs.

It seems a natural speculation that the deeper layers of the visual cortex/brain and of NNs are subject to similar optimization, and thus would have similar kernels. I think psychedelics somehow stimulate these deep layers to fire massively without waiting for the normally required input from the eyes to propagate up and selectively fire neurons in them.
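(For anyone who hasn't seen one: a Gabor kernel is just an oriented sinusoid under a Gaussian envelope, i.e. a local edge/texture detector. A quick sketch of my own, not from any of those papers:)

    import numpy as np

    def gabor(size=11, wavelength=4.0, theta=0.0, sigma=2.5, phase=0.0):
        # Oriented sinusoid under a Gaussian envelope - an edge/texture detector.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return envelope * np.cos(2 * np.pi * xr / wavelength + phase)

    kernel = gabor(theta=np.pi / 4)   # a 45-degree edge detector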


These pictures remind me of what one can see under psychedelics. All sensory input basically begins to break down into that kind of pattern, and thus reality dissolves into nothing. This is equally terrifying and liberating, depending on how you look at it. The terrifying thought is that there's no one behind these eyes and ears. The liberating thought is that if there's no one there, then there's no one to die.


> These pictures remind me of what one can see under psychedelics. All sensory input basically begins to break down into that kind of pattern

The neural structures in human brains that recognise edges/textures/surfaces are the same ones that generate trippy images when exposed to psychedelic drugs (or flickering light: http://journals.plos.org/ploscompbiol/article?id=10.1371/jou...): http://www.math.utah.edu/~bresslof/publications/01-3.pdf


Have you seen "Deep Dream"? https://www.youtube.com/watch?v=SCE-QeDfXtA and https://www.youtube.com/watch?v=DgPaCWJL7XI are excellent examples.

I believe that for anybody who has ever tried psychedelics, seeing these was a watershed moment. It seems almost impossible that an algorithm could so faithfully reproduce the experience without somehow having recreated some fundamental structure of the human brain. That is compounded by the fact that this wasn't the result of actually trying to do so, but of an open-ended experiment in creating feedback loops in NN layers.


Seems like it isn't that reality dissolves into nothing; it's just a few channels that get overly stimulated.


The best explanation I ever heard for psychedelics was that they turn up the gain (turn down the attenuation) on both thoughts and senses. So you get weird stuff which would normally be suppressed.


Hi Chris, firstly thanks for all the work you've done publishing brilliant articles on supervised and unsupervised methods and visualisation on your old blog and now in Distill.

This question isn't about feature visualisation, but I thought I'd take the chance to ask you: what do you think of Hinton's latest paper and his move away from conventional neural network architectures?


Capsules are cool! I spent several days hanging out with Geoff & co a few months back -- they're exploring really exciting directions. I don't feel like I have anything particularly exciting to add, though.


Thanks :D


Interesting that simple optimization ends up with high-frequency noise similar to adversarial attacks on neural nets.

While I agree that the practicality of these visualizations means that you have to fight against this high-frequency "cheating", I can't shake the feeling that what these optimization visualizations are showing us is correct. This is what the neuron responds to, whether you like it or not. Put another way, the problem doesn't seem to be with the visualization but with the network itself.

Has there been any research into making neural networks that are robust to adversarial examples?
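(For what it's worth, the kind of high-frequency "cheating" I mean is closely related to the fast gradient sign attack of Goodfellow et al.; roughly, for a PyTorch-style classifier:)

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=0.01):
        # One-step fast gradient sign attack (a sketch): nudge the input in the
        # direction of the sign of the loss gradient, a mostly high-frequency pattern.
        x = x.clone().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()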


There was a Kaggle competition on defenses against adversarial attacks, run by Google Brain for NIPS 2017: https://www.kaggle.com/c/nips-2017-defense-against-adversari...


The article mentions pooling layers as one source of the high-frequency patterns. Geoffrey Hinton recently introduced capsule networks (https://news.ycombinator.com/item?id=15609402), in part because he wants to get rid of pooling layers. Maybe this approach is effective at countering (at least visually indistinguishable) adversarial examples.

edit: ok, someone already tested it, and it does not seem to help that much: https://github.com/jaesik817/adv_attack_capsnet


Cool. Reminds me a bit of https://qualiacomputing.com/2016/12/12/the-hyperbolic-geomet...

(Though maybe not as symmetric?)


Is there any way to run images from a camera in real time into GoogLeNet?

E.g. if I want to scan areas around me to see if there are any perspectives in my environment that light up the "snake" neurons or the dog neurons?


The Jetson TX2 can run GoogLeNet in real time with the onboard camera, so it's definitely possible on mainstream GPUs too.

https://github.com/dusty-nv/jetson-inference

Inspecting layer activations in real-time is trickier, but presumably possible.
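If you just want something quick on a laptop rather than a Jetson, here's a rough sketch with OpenCV and a pretrained GoogLeNet (untested, and it assumes your torchvision build ships GoogLeNet and still accepts pretrained=True):

    import cv2
    import torch
    from torchvision import models, transforms

    model = models.googlenet(pretrained=True).eval()
    prep = transforms.Compose([
        transforms.ToPILImage(),
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    cap = cv2.VideoCapture(0)                       # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logits = model(prep(rgb).unsqueeze(0))
        print(int(logits.argmax()))                 # ImageNet class index of the top prediction
        cv2.imshow("webcam", frame)
        if cv2.waitKey(1) == 27:                    # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()

To look for the "snake" or dog neurons specifically, you'd register a forward hook on an intermediate layer and watch its activations instead of reading the final logits.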


You might find it interesting to look at Jason Yosinski's "deep vis" framework: http://yosinski.com/deepvis


I just found CaffeJS, which can kind of do what I want. It doesn't show the individual neurons, but it does go from webcam to classification: https://chaosmail.github.io/caffejs/webcam.html


If not in realtime, you could always take a video and post-process it offline?


Okay...maybe a stupid question.

Could they train on white noise from a television and see if the CBR shows a structure similar to the structure of the observable universe when examining the feature layers?



So can someone use this to show us where the rifle is on the turtle?


Actually I'd be really interested in seeing just that!


Awesome, but to me this stuff is also terrifying, and I can't quite place why.

Something about dissecting intelligence, and the potential that our own minds process things similarly. Creepy how our reality is distilled into these uncanny valley type matrices.

Also, I suspect it says something that these images look like what people report seeing on psychedelic trips...


I remember friends telling me back in '88 [1] about driving round London from one rave to another, high on MDMA & LSD. Back then I thought driving on acid was insane. This sounds like driving while the car is on acid! Which is a terrifying thought...

[1] https://en.wikipedia.org/wiki/Second_Summer_of_Love


No, the acid simply lets you see "how the sausage is made". You get temporary access to intermediate layers that should probably remain hidden (and for good reasons).

With AI it's trivial to get access to the inner workings of the intermediate layers.



