Nancy, one thing I love about your style is that once in a while you'll stop amidst the lecture and teach about the scientific method, critical thinking, being humble and open-minded in your findings - the whole enterprise of science. It is very inspiring.
Of course, the content is insanely interesting. I'm as hooked as I was after the first episode of Breaking Bad.
I am about 4 hours in and still into it! Thank you for sharing the lectures.
Just finished the series on Face Recognition. I was wondering: does the fusiform face area (FFA) get activated while thinking of someone's face instead of actually looking at one? It might also be interesting to explore the memory of faces in people with prosopagnosia.
Haven't actually googled these yet. Will do so in a bit, after the next series of lectures maybe.
Thank you! It means so much to me that some people are appreciating these lectures. I knocked myself out all spring preparing this course, and it was quite a gut punch to read my course evaluations a few days ago, which were pretty negative. I was so dispirited I was actually thinking of stopping posting the lectures, but if you guys/gals are into it, that is awesome, and I will keep at it!
To answer your question: yes, we showed long ago that if you close your eyes and imagine a face, you turn on the face area, and if you imagine a place, you turn on the place area - here is the article:
http://web.mit.edu/bcs/nklab/media/pdfs/OCravenKanwisherJOCN...
We and others have shown that in developmental prosopagnosia, the deficit is not just in remembering faces, but also in perceiving them.
Nancy
You mentioned David Marr's book. Given its age, I assume some parts have stood the test of time better than others - would you recommend reading it all, or would you focus just on the first part you mentioned in the lecture? And are there other books you would recommend? (I haven't got very far through the lectures yet, so apologies if you mention some later on.)
It is really the intro and first chapter of Marr's book that is still totally current. The rest of the book is brilliant but less representative of current thinking.
Hi, as a recent undergraduate at an engineering school in the Boston area, I can truthfully say these are some of the best lectures that I’ve been exposed to on a technical topic. Thanks for publishing them to the public.
What? Negative reviews? Why would anyone do that? Get fMRIs of their brains. That lump of entitlement is pressing down on important parts of the brain.
Personally I love the way you describe the brain in the simplest terms possible. You're taking what has the potential to be a very scary, hard-to-learn topic and keeping it simple. I'd liken this to how Andrew Ng manages to explain ML/DL in a very calm, simple manner.
So often lecturers hide behind terminology to make a topic seem more impressive / complex. So thank you. Plus great to see that you're on here too!
Awesome lectures! I did a cogsci undergrad many years ago. If you had to estimate how much of the brain the field of neuroscience truly understands, roughly what percentage would it be?
I remember being shocked at how little we actually know about the brain, and I wonder if we can unlock most of its secrets in our lifetimes. Thanks for being a part of the HN community :)
Thanks for all of this Nancy. I'm just a software engineer but I find the connection between technology and the brain intriguing. The brain seems incredibly complex but I'm glad we have content like yours to at least guide us in understanding it.
Thanks for the videos!
I took intermediate and advanced physiology, so this is more of a refresher. Are there videos available for the more advanced courses too?
Hey, I'm at lecture 2.6, where you're talking about the recognition of faces and its relationship to holistic image processing.
So, can I go ahead and abstract out the ability of the mind being discussed? Basically, given a category, this vision-processing module in the brain processes different features of the image (feature here in the machine learning sense). And these categories can be hierarchical. Faces, humans, creatures: this could be a hierarchy that the brain refers to when it is trying to identify a face, switching to the mode where it needs a holistic view of the image rather than isolated parts. I admit that imagining how this happens biologically (physiologically) is hard for me.
My question is, am I correct in the above inference? I want to suggest an experiment now :D :P
Sounds like you got it, more or less. Current views of how object recognition works in the brain are a lot like current deep net models of object recognition (e.g. AlexNet and beyond): a hierarchical series of processing steps in which units at successive processing stages get more selective for specific things, and more invariant to image variation (size, position, lighting, etc).
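If it helps to make the deep-net analogy concrete, here is a minimal sketch (assuming PyTorch and torchvision are installed, and using AlexNet with random weights - nothing here comes from the lecture itself) that prints how the representation changes across that hierarchy of stages:

    # Minimal sketch: walk an image through AlexNet's hierarchy of stages.
    # Assumes PyTorch and torchvision; the network is randomly initialized.
    import torch
    from torchvision.models import alexnet

    model = alexnet().eval()              # untrained AlexNet
    x = torch.randn(1, 3, 224, 224)       # stand-in for an input image
    with torch.no_grad():
        for i, layer in enumerate(model.features):
            x = layer(x)
            print(i, layer.__class__.__name__, tuple(x.shape))
    # Channel count grows and spatial maps shrink at successive stages, so
    # later units respond to more specific patterns pooled over larger
    # regions of the image: more selective, and more position-invariant.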
One view of holistic face perception is that it is just the natural consequence of having units tuned to whole faces (or to large portions of the face). But why this should be implemented in humans as a specific category-selective patch of the brain is an open and fascinating question, one that I am now hopeful network modeling may inform.
Interesting. In the past, I've read researchers commenting that "neural networks" was mostly unfortunate terminology, because the similarities between the physical wiring of the brain and the connections between nodes in a neural network were only surface-level and probably didn't offer insights into how the brain really worked.
But you're saying that there may be more similarity than we thought. I remember way back when there was some evidence of things like horizontal- and vertical-feature detection. It sounds as if there is still some evidence of this, but perhaps it is more plastic than was once imagined.
As the Marr intro chapter explains so beautifully, there are many levels of analysis in cognitive science and cognitive neuroscience. Units in deep nets are very different from actual neurons, and the backprop methods used to train deep nets have no resemblance to how human brains get wired up. But for the case of object recognition at the level of representation, there are striking similarities between deep nets optimized for invariant object recognition, and parts of the primate brain that carry out this task. See this brilliant and seminal paper:
http://www.pnas.org/content/111/23/8619.long
I work with neural networks for complex scene processing and object detection on the roads. The best part of my job is watching a network "learn/train itself" to classify various object categories.
Are there any good theories on what happens during the training process of the brain (for example, while learning a new skill or something very basic/simple) and how individual neurons are affected by this "learning" process? I understand that from a psychological perspective we see the brain as this beautiful system, but I am asking from a physiological perspective: what kind of changes can we observe in neurons when we learn something new?
P.S: Thanks a lot for your replies. Means a lot. :)
So glad to hear it, thank you. Do let me know if you encounter bits that are not clear or if you see ways the lectures could be improved.
BTW the lectures were all originally 1 hour and 25 minutes - I just broke them into pieces because everyone tells me that is what web viewers like. Is that working for you guys, or would you prefer to watch a whole 1.5 hour lecture not broken into bits?
I don't think pieces are necessarily what web viewers like, but breaking a broad lecture into tweetable, shareable soundbites increases its potential virality, which is why videos of this format have become the de facto web video format.
I personally prefer long lectures, but if social media growth is what you are after, I'd keep it as it is.
Perhaps it is my internet addled reward circuits, but I personally prefer the shorter videos for learning because it helps me compartmentalize and break down topics for understanding.
AFAIK, youtube-dl will find all videos and download them if fed a playlist URL. I'm curious if this script is a workaround for some obscure brokenness or something.
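For anyone curious, here's a minimal sketch of the playlist approach using youtube-dl's Python API (the playlist URL below is just a placeholder, not the actual course playlist):

    # Minimal sketch: download every video in a playlist with youtube-dl.
    # The playlist URL is a placeholder; substitute the real one.
    import youtube_dl

    opts = {"outtmpl": "%(playlist_index)s - %(title)s.%(ext)s"}
    with youtube_dl.YoutubeDL(opts) as ydl:
        ydl.download(["https://www.youtube.com/playlist?list=PLAYLIST_ID"])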
Watched about an hour so far and this is fabulous. The talks very clearly outline the bounds of the problem space (known and unknown) and then start going region to region. The goal is to perceive the texture of the knowledge domain. This is my favorite way to learn about a subject.
Looks like it's gonna be a nerdy Saturday night binge watching these. Too bad there isn't a Netflix for this kind of stuff
Not quite the Netflix of online lectures, but there is http://videolectures.net. There's quite a lot of good stuff on there, e.g. lectures and conference videos on AI and robotics. The website feels very web 1.0 though.
If you have Amazon Prime, they offer a channel called The Great Courses that I believe still has a free trial. It has lots of high quality lectures on a wide range of topics.
Thank you, this is the most incredible thing ever. From the introductory video, Nancy is an amazing speaker on such an interesting topic. I've just ordered pizza. I couldn't be more excited.
There's a long history of really great MIT intro (or relatively so) psychology/brain science courses. Back in the 1970s, the intro course was a hugely popular lecture taught by the head of the department.
Yes, Hans-Lukas Teuber. https://en.wikipedia.org/wiki/Hans-Lukas_Teuber What impressed me so much about the course was that it was completely focused on what, scientifically (via measurement), was known about the brain. I loved that it wasn't a 'fluff' psychology class at all.
MIT was rather unusual at the time for taking such a physiology and brain science approach to psychology. The prevalent school of thought, notably at Harvard with Skinner, was to treat the brain more or less as a black box and focus on the inputs and outputs.
Teuber joked at the time that his intro course didn't count toward the humanities distribution requirement because it wasn't irrational enough.
MIT’s Center for Brains, Minds, and Machines (CBMM) YouTube channel is a gold mine of interesting videos at the intersection of ML, neuroscience, and cognitive science. It’s definitely worth checking out.
This looks amazing! Added to my to-watch-list.
Also, if you are interested in this kind of neuro- / cogsci- stuff with a tech-twist you might want to consider attending this wonderful spring school for an incredibly immersive learning experience: https://interdisciplinary-college.de/
I'm interested to check this out. I've been trying to understand AI better--but mostly from the ML perspective. I'm very out of date on brain science though. I took 9.00 back in the dark ages :-) but I'm not at all up on current research.