The brain “doubles up” by simultaneously making two memories of events (bbc.com)
285 points by akbarnama on April 10, 2017 | 73 comments



Hypothesis: Deja Vu happens when the long term memory isn't suppressed properly during the first few days.

I often find that when I experience deja vu, something I actually did the day before feels as if I also did it a long time ago. It feels like I have two memories: one of the recent event, and one a vague recollection of something from long ago. This sounds like exactly what one would expect if two memories are recorded, one suppressed for a few days and one forgotten within a few days, but the suppression fails.

Does anyone else have Deja Vu like this?


Since our brains store a simplified version of what our sensors actually read, my WAG hypothesis is that deja vu happens when the digested patterns ready for storage collide with previous experiences already in memory. Kind of a hashing collision.
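
For what the "hashing collision" analogy could look like in code, here is a toy sketch. Everything in it (the feature names, the gist function, the events) is invented purely for illustration; nobody is claiming the brain actually works this way.

```python
def gist(experience: dict) -> tuple:
    """Reduce a rich experience to a small, lossy 'digest' of salient features."""
    return (experience.get("place_type"),   # e.g. "cafe", not which cafe
            experience.get("lighting"),
            experience.get("company"))

memory_store = {}

def encounter(experience: dict) -> str:
    key = gist(experience)
    if key in memory_store:
        # Different experience, same digest: the "collision" that might feel like deja vu.
        return f"deja vu! digest {key} collides with earlier event {memory_store[key]}"
    memory_store[key] = experience
    return f"stored new digest {key}"

print(encounter({"place_type": "cafe", "lighting": "dim",
                 "company": "stranger", "city": "Lisbon"}))
print(encounter({"place_type": "cafe", "lighting": "dim",
                 "company": "stranger", "city": "Osaka"}))
```

The second, genuinely new experience maps to the same coarse digest as the first, which is the kind of false "already seen" signal the analogy describes.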


In my experience Deja Vu doesn't happen for something I remember doing yesterday, but something I'm currently doing right now, or some combination of events that just occurred. But either way, that's a good hypothesis.


This matches my experience. Deja vu is not something I remember from a past event, but from something happening right at the moment. If I understand the article correctly, along with a candidate theory for deja vu I came across on Wikipedia, this might make sense.

I think the theory is that occasionally there are slight issues with the timing/synchronization of the memory being stored in certain parts of the brain. Since the research shows that memories are written twice, if such a desync occurred, maybe deja vu is some perception of disagreement between short and long term memories?

For some reason, I find the experience of deja vu somehow quite ... pleasing. I am not sure if this is true for everyone.


As a kid I found deja vu pleasing as well and "practiced" it for a long time, "giving myself" deja vu.

I would practice by doing the same thing I did the day before or a week before, then try to focus on the feeling and enhance it as soon as it started to show up, then kind of "let myself live in deja vu" for longer and longer periods.

Now I can give myself that feeling anytime. I first relax and look around, take in as much as I can. Then go back to doing something else, then relax again and pretend I have already been where I am.


When I have Deja Vu it's very clear to me that what I'm doing _right now_, in this second, is something that's happened before. I sometimes have experiences where the Deja Vu occurring in the present is possibly the _third_ time that it's happened as well. It just feels so familiar in multiple ways that I am reminded of something happening a while back.

Such an odd sensation when your brain is telling you you're experiencing a repeated situation, but your mind is telling you it's impossible.


Short term memory isn't a matter of days, it's a matter of seconds and minutes. But otherwise, that makes sense. In fact, I think this is already a hypothesis which has come out of fMRI?


There's a second part to the deja vu phenomenon that often gets overlooked... the feeling that you've had this experience before is coupled with the ability to predict what will happen next based on the 'memory'.

Next time you get deja vu, try to "remember" what happens next. It's scarily accurate after you willingly try it once or twice.

My overall hypothesis is somewhat in line with yours (although I don't have nearly the expertise to properly examine these things): that somehow the brain 'skips' filing the memory properly, and the experience, recalled milliseconds later as "now", is being mis-remembered as "past".

Very cool machine our brain.


Deja Vu happens when they change something in the Matrix.


This is the correct explanation

https://www.youtube.com/watch?v=z_KmNZNT5xw


I've had a few Deja Vu experiences that felt so incredibly real, like I had experienced precisely this before, but my memory tells me for sure I've never gone through anything similar.

Is this because the clearer "short term" version is completely lost?


Further supports the notion that understanding the neocortex is the key to understanding intelligence.

"It is immature or silent for the first several days after formation," Prof Tonegawa said.

What's going on during those first several days? It's probably re-arranging its model of the world to account for those new memories.

"The idea you need the cortex for memories I'm comfortable with, but the fact it's so early is a surprise."

The neocortex is constantly making predictions about the future, so it makes sense that it has some short term memory of its own to make those predictions from.


Why do we need to understand intelligence? So we can make the intelligent irrelevant. Sort of like how the evolution of the human brain made animals irrelevant.


If every human on the planet died, the planet would largely carry on; if every insect died, the planet would go through a large catastrophic change. "Animals" aren't irrelevant, and the human brain isn't everything.


I'm not sure what argument you're trying to make here. Insects are a class of which there are hundreds of thousands of species. Humans are a single species. If insects the class died that would be catastrophic. If mammals the class died that would be catastrophic as well.


Hmmm, I'm not sure about that actually.

Insects and plants are closely tied together, with insects being essential for the flowering of certain plants.

Are mammals so closely tied to any species outside of the class?



Ruminants and the related hindgut fermenters are very dependent on microbes to break down their main food sources (or a significant part).


I am not saying they are irrelevant, I am saying they are treated as irrelevant because of the immense amount of power and control that the arrival of the human brain has allowed. The same thing will happen if strong AI ever emerges: humans will start being treated as irrelevant. Sort of like how America treats the citizens of some Arab countries: irrelevant. Sort of how we treat chickens and cows: irrelevant. Power corrupts.


Understanding the workings of gross anatomy didn't make gross anatomy irrelevant.

It instead ushered in a new age of surgical intervention that saved a lot of lives and prolonged life in general. I know I'm not alone in having a relative that would have been dead twenty years ago if not for the ability to radically alter the mechanics of a malfunctioning heart.

We shouldn't cease to try to understand the world because of the risk new knowledge brings. Rather, we should also understand the risk and act to intercept and mitigate it. But don't condemn Alzheimer's patients to a slow twilight death because of the risk of strong AI; that's not an acceptable tradeoff.


> that's not an acceptable tradeoff

This depends on the estimated probabilities of worst case scenario. I don't think you can just handwave it as "always worth it".


The scenario for a lack of sufficient Alzheimer's research is currently what we're living with: 83,000 deaths a year and approximately 5 million patients living with the disease, in the US alone. Probability: 1.0. I don't know if it's "worst-case"; that depends on whether the disease rates are progressing year-over-year, and I don't have those numbers at my fingertips right now.

I have yet to hear a realistic strong AI nightmare scenario (i.e. one that doesn't presume a magically-smarter-than-all-of-us-combined AI as a McGuffin with no solid functional grounding) with a probability anywhere near offsetting that cost. Besides, stacking the constant chronic cost of a few tens of thousands of deaths against a nonzero-probability species-ending scenario is nearly an apples-to-oranges comparison---by the logic of simple probability-to-risk modeling, we should be stopping all disease research right now and pouring 100% of that money into fast asteroid detection and mitigation solutions.

Practically speaking, I think we have plenty of time while we are doing the research to cure a current and real disease to puzzle through the detection and mitigation strategies for a strong AI threat that is---at best---decades out (and, some would argue, will come to pass inevitably with or without our Alzheimer's research, yes?).


People just assume that superhuman AI will have human desires: safety, control, power. We want those things because any humans that didn't want them didn't survive to pass on their genes. But a human-built AI will be subject to very different evolutionary pressures. I don't think we can say anything for certain about what it will want.


In both cases, the planet would carry on. It depends on your perspective as to which matters more, human intelligence and knowledge, or nature. I personally believe the former is more important than the latter, although that doesn't mean I don't value the latter as well.


Evolution does not select for (y)our ideas of "importance".


So? Human beings have the ability to make a huge impact on the environment, and therefore the value assigned by people to various outcomes and tradeoffs does matter.


We need to understand intelligence because that is the only way it can be relevant.


We've known this for some time, but this is additional evidence for the notion that our brain processes and prioritizes which information gets stored in short-term vs. long-term memory.

This also explains deja vu, but possibly not as many on this thread have tried to explain.

I wrote about this about two years ago; the leading theory of deja vu then was that your brain processes and stores an experience in long-term memory before it's had a chance to properly store it in short-term memory. By the time short-term memory has caught up, the event already feels lived, because it's already been processed and stored in long-term memory.

At least, that's one theory.


> Researchers then used light beamed into the brain to control the activity of individual neurons - they could literally switch memories on or off.

So could this be extended to movie-plot-like scenarios where memories are wiped, or false memories implanted? How did they identify the memories to be targeted and where they were stored?

There's lots of exciting work to reverse-engineer and extract rules from software neural nets. Is the same possible in hardware nets too, or would attempting to measure it interfere and distort it?



> So could this be extended to movie-plot-like scenarios where memories are wiped, or false memories implanted? How did they identify the memories to be targeted and where they were stored?

The same group has done all that stuff with optogenetics and molecular techniques:

http://tonegawalab.mit.edu/wp-content/uploads/2014/06/296_Li...

http://tonegawalab.mit.edu/wp-content/uploads/2014/06/297_Ra...

http://tonegawalab.mit.edu/wp-content/uploads/2014/06/289_li...


In the article, I thought it said they monitored the brain for activity in response to the shock, and the neurons that activated were assumed to be the ones associated with the memory. This is a far cry from being able to look at a neuron in situ and say what memory or memories are associated with that neuron.

This is an old article and may be superseded by more recent research, but false memories may not have to be implanted; in some sense we implant them ourselves every time we recall a memory: http://www.smithsonianmag.com/science-nature/how-our-brains-...


A more practical use might be found in psychotherapy - recalling traumatic memories to identify the relevant neurons and then switching them off. That could be extremely useful for treating PTSD if it's feasible to do so.


"The Eternal Sunshine of the Spotless Mind" is [SPOILER ALERT] a movie about erasing the memories of a past lover, a very similar idea.


You might want to read about EMDR therapy. I believe any attempt to overcome the effects of traumatic experiences requires reactivating those neurons at least temporarily. I feel like even the use of CBT to change behavior works better if you can associate the unwanted behavior with memories of how you came to acquire that behavior - at that point you'll want to change the narrative of the old events to something that can be more easily accepted. This isn't really new.




Just hypothesizing, but could it be that the brain is structured as two adversarial networks, which train each other, e.g. during sleep?
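
To make the borrowed ML concept concrete, here is a minimal numeric sketch of what "two adversarial networks training each other" looks like in machine-learning terms (a GAN-style setup). This is purely an illustration of the ML idea, not a claim about the brain; the 1-D Gaussian "data", the linear generator, the logistic discriminator, and the learning rate are all made up for the example.

```python
import math
import random

random.seed(0)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def real_sample():
    # "Real" data the generator should learn to imitate: Gaussian around 4.0.
    return random.gauss(4.0, 0.5)

# Generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.01

for step in range(20000):
    z = random.gauss(0.0, 1.0)
    x_real, x_fake = real_sample(), a * z + b

    # Discriminator step: push D(x_real) toward 1 and D(x_fake) toward 0.
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w -= lr * (-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * (-(1 - d_real) + d_fake)

    # Generator step: push D(G(z)) toward 1, i.e. try to fool the discriminator.
    d_fake = sigmoid(w * (a * z + b) + c)
    grad_x = -(1 - d_fake) * w   # gradient of -log D(x) w.r.t. the fake sample
    a -= lr * grad_x * z         # chain rule through x_fake = a*z + b
    b -= lr * grad_x

print(f"fake samples now centred near {b:.2f} (real data is centred at 4.0)")
```

The two tiny models are trained against each other: the discriminator learns to tell real from generated samples, while the generator learns to produce samples the discriminator can no longer reject. Whether anything like this happens during sleep is exactly the open question the comment raises.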


It's always tempting to think about the body in terms of the technological concepts of the age. It's not automatically wrong, but there's nothing special about this particular era of computing that would mean we're working with the same methods as the human brain. Not even the fact that we call our systems "neural networks".


They're called neural networks because their design mirrors observations from neurons in the brain.

There's truth to your sentiment, that the current level of knowledge is always somewhat arbitrary, but to say there's "nothing" that suggests a connection between the two is far too dismissive of an interesting question posed by the original comment.


Yet in so many ways they don't act like neurons in the brain.


'neural networks' are actually very different from the neurons inside the brain. There's almost nothing in common, once you look at the mechanics of the two systems.


So true! 50 years ago they were using concepts from early computing to describe how the human brain works. With each new wave of advances in computing, there is a new set of analogies that philosophers and neuroscientists adopt to say: "See, the brain is like this." It seems problematic to me. Models are models, not reality. They may be good for descriptive purposes, but that doesn't mean they are anything other than story-telling devices.

As a tangent, this seems to me to be a problem with Daniel Dennett's ideas and why, in the end, David Chalmers seems to be gaining ground with every passing year.


That sentiment makes it easy to dismiss the models we use, but here's the thing - the technological concepts of the age limit what models people can express in that age. Better technology = better models, i.e. models that explain more and predict more.

Also this sentiment casually dismisses the fact that behind computing there's a whole lot of theoretical work that stems not from technology itself, but from how reality works (i.e. math).


If there's an optimal way to solve the problems brains (and AI) are good at, then as time passes the probability that both we and nature use these methods increases. So we're indeed special, in that we've had more time than the people who came before.


How? Why? According to what line of reasoning?

What do you mean by "we" and "nature"?


How? Because that's how "optimal" is defined.

Why? Ditto.

Line of reasoning? That evolution is an optimization process, and that human intelligence is an optimization process - hence both are likely to reach good solutions for intelligence eventually, and if there's a strong optimum in the design space of those, then both are likely to converge at least in some areas.

What's "we"? Human technological civilization.

What's "nature"? Evolutionary process that already produced working brains.


Interesting.

I'm not entirely sure that "optimal" has an agreed upon definition. At best, "optimal" is relative to the system within which it is being applied. "Optimal" in a Trump world is very different from "optimal" in a Bernie Sanders world. Optimization seems to require some objective. In a practical sense, you cannot optimize a piece of software if you don't know what you are optimizing for.

It is a bold premise that the evolutionary process and human technological civilization have the same optimization goals.


> It is a bold premise that the evolutionary process and human technological civilization have the same optimization goals.

We have similar goals when we optimize for "what works" (instead of, e.g., "what sells").

The only existing instance of what we're trying to build - intelligence - is something a dumb, random, incremental optimization process following simple rules managed to somehow stumble upon. Now, if there's a strong local optimum in the design space of intelligent machines, then it seems plausible that evolution ended up there, and that we may stumble upon it too, thus converging with the evolutionary solution somehow.

Now I'm not saying our solution will be identical to biological brains. We have different goals (hell, we have goals, nature does not). But we're likely to end up doing many aspects of it in a way that resembles biology.

The core observation here is that it's the structure of reality (implications of laws of physics) that shape the search space we're traversing. Compare flight. Yes, human planes are very different from birds - but that's because they have less efficient energy sources, and also because we want them to go faster (have you ever seen a supersonic bird?). Still, both share some aspects, like the airfoil. Both we and nature "discovered" those because airfoils are dictated by the laws of physics - that's how you do flight in gases.

--

Basically, what I'm saying is that humans always describe things in terms of the technology of their age, but that doesn't mean it's wrong. Better technology means better description. Birds are unlike planes, but analyzing them using the model of airfoil we developed is a good idea and leads to more and better understanding.

--

EDIT

> Optimization seems to require some objective. In a practical sense, you cannot optimize a piece of software if you don't know what you are optimizing for.

Yes, optimization always has an objective - that's how we define it, in contrast to complete randomness. But the objective can be implicit or explicit. Explicit goals require a mind to be involved. Evolution has only implicit objectives; human-driven processes have both (because we suck at knowing what we actually want).

But the second important part of an optimization process is the shape of the optimization space. This here is defined by laws of physics. And insofar as evolution's implicit objective is in some aspects similar to our objective, both get similarly influenced by the shape of the optimization space :).


> Basically, what I'm saying is that humans always describe things in terms of the technology of their age, but that doesn't mean it's wrong. Better technology means better description. Birds are unlike planes, but analyzing them using the model of airfoil we developed is a good idea and leads to more and better understanding.

Totally. "Better technology means better description" is a great idea/concept here.

Since reading Michel Foucault while I was studying in the UK (someone I feel I never even heard mentioned at a US university), it made me rethink the "what/essence" of the things we do/build. Just like the supposed lesson learned (or potentially still to be learned) in the finance world after the financial crisis: models are models, not reality. We develop a culture around the descriptions that we use, but in the end we aren't truly describing the essence of the "what" (i.e. the thing described). We are layering various descriptions, cultural ideas, and pre-conceptions on top of the thing itself in order to better communicate some aspect of it to other people. In a sense, different technologies give us a shared set of concepts with which we can communicate with each other.

Your point is great. Just because they are "descriptions" doesn't make them wrong or right. They are a shared language that we use to communicate complex ideas with each other and, hence, to better understand previously wishy-washy concepts.


I'm on a phone so I won't type much. From my dissertation research and reading, I've got no hint that the hippocampus and the cortex learn adversarially, at least to my limited understanding of how such networks work. The hippocampus is thought to bind together the features of events into an integrated memory representation. It may do so by relating coded indexes in the hippocampus to features that actually exist in the cortex. So for example, you observe a car and a bike hit each other. Your percepts for the car and bike (features) are in your perceptual cortices. The hippocampus generates an index that, when propagated back to the cortex, reactivates those cortices into a state similar to when you originally saw the event. That is memory... that reactivation. At least that is a popular idea in memory research that has empirical support.
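
As a loose software analogy for that indexing idea (all names invented for illustration, and very much not a neural model): the cortex holds the feature content, the hippocampus holds a compact index of pointers to those features, and "recall" means following the pointers and reactivating the features.

```python
from dataclasses import dataclass, field

@dataclass
class Cortex:
    # feature id -> feature content; the percepts themselves live here
    features: dict = field(default_factory=dict)

    def store_feature(self, fid: str, content: str) -> str:
        self.features[fid] = content
        return fid

    def reactivate(self, fid: str) -> str:
        return self.features[fid]

@dataclass
class Hippocampus:
    # event name -> list of pointers (feature ids) into the cortex
    index: dict = field(default_factory=dict)

    def bind(self, event: str, feature_ids: list) -> None:
        self.index[event] = list(feature_ids)

    def recall(self, event: str, cortex: Cortex) -> list:
        # "Recall" = propagate the index back to the cortex and reactivate the features.
        return [cortex.reactivate(fid) for fid in self.index[event]]

cortex, hippocampus = Cortex(), Hippocampus()
ids = [cortex.store_feature("percept_car", "red car, moving fast"),
       cortex.store_feature("percept_bike", "bike, swerving"),
       cortex.store_feature("percept_sound", "crash noise")]
hippocampus.bind("car_and_bike_collision", ids)

print(hippocampus.recall("car_and_bike_collision", cortex))
```

The point of the analogy is only that the index is small and content-free while the heavy representations stay in the "cortex" store, which matches the binding story described above.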


I saw Terry Sejnowski give a talk last week in San Diego, and he made the exact same suggestion. He did not provide any evidence, but it's hard not to take his ideas seriously.


Helpful for 'Memento', obsoletes 'Inside Out'.


Fortunately for 'Inside Out', the concept of memory is not the central component of the movie. Rather, it's the idea that we each have multiple competing subpersonalities inside our heads, which remains a useful model.


"The experiments had to be performed on mice, but are thought to apply to human brains too."

I look forward to human trials.


Uh oh. Looks like there was a bit of an overrun there. But you can clearly see that this is your signature on the release form, right?

We advise you to avoid calendars for a few weeks, until you adjust.


Maybe in the future they'll simply erase the memory you had of being part of the trial.


Not much real-world import, but this fixes a few aspects of the movie "Memento" I had considered plot holes. :)

(How does he remember what "remember Sammy Jankis" means? How does he remember he has short-term memory loss?)


Sammy Jankis had short-term memory loss. When he's confused about where he is or what he's doing, he sees the message on his hand, remembers Sammy's condition, and understands he has the same condition.

Yeah, it's a slight mental leap, but "remember you have Sammy Jankis' medical condition" doesn't quite have the same ring to it. :)


> ""It is immature or silent for the first several days after formation," Prof Tonegawa said. 'Strong case' The researchers also showed the long- term memory never matured if the connection between the hippocampus and the cortex was blocked. So there is still a link between the two parts of the brain, with the balance of power shifting from the hippocampus to the cortex over time."

I have no good knowledge of neuroscience, so I'll be happy to read other thoughts on this. Personally, it seems to me that this is an evolved technique to direct us to:

- act on impulses at the moment of an event, instead of contemplating rational choices that may take too much time to arrive at (thus limiting our chances of survival);

- pursue rational choices in calmer times, trusting us to draw on stored memories of similar events to help make better decisions (the purpose of deja vu?).

As for the link between the hippocampus and the cortex being required to retrieve memories: maybe those memories are useless without the emotional context, and the hippocampus is needed to relive those emotions in order to have a more accurate recollection of the memories.

It might not make sense (I know almost nothing about neuroscience), but it's a thought.


Is there a package I can install to upgrade mine? Because this one seems to favor high availability over consistency.


Just type "aptitude increase" in your brain-keyboard. You may have to read a number of books and live a number of years before the keyboard shows up though.


With the right medication, you might be able to accelerate the appearance of that keyboard, at least for a few hours. :)


This is another welcome step on our journey to bridge psychology and neuroscience.


Tons of "Improve your memory" books are irrelevant from this moment. No suprise that none of them actually worked for anyone except placebo effect.


How did you come to this conclusion? IIRC the research is about finding out how memories are stored. Not about how to recall them. Or to make sure they are formed in the first place.


Any clue as to what would work?


Check out the books of the memory champions (Tony Buzan, Dominic O'Brien, Joshua Foer). The idea is not to improve your memory, just to use it efficiently. The core is very simple: our brains have a hard time memorising abstract things. The solution is to convert them to highly vivid imagery/stories. Visualisation is step one; step two is to somehow link the resulting images. Here you have a choice of the link system, the peg system, or the memory palace/method of loci. That's basically it. You will probably also learn some system for converting numbers to images (the Major system, or the Dominic system), but that's just a tool for the first step. Also, get enough sleep.
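
As a toy sketch of the digit-to-consonant step of the Major system mentioned above: each digit maps to a family of consonant sounds, and you then pick a vivid word or image whose consonant skeleton matches. The consonant table below is the standard one; the example word/image at the end is invented.

```python
# Standard Major system: digit -> consonant sounds (vowels are free filler).
MAJOR = {
    "0": "s/z", "1": "t/d", "2": "n", "3": "m", "4": "r",
    "5": "l", "6": "j/sh/ch", "7": "k/g", "8": "f/v", "9": "p/b",
}

def consonant_skeleton(number: str) -> list:
    """Map each digit of a number to its Major-system consonant sounds."""
    return [MAJOR[d] for d in number if d in MAJOR]

# Example: 314159 -> m, t/d, r, t/d, l, p/b
# which could become an image like "meteor, tail, pie" (an invented mnemonic).
print(consonant_skeleton("314159"))
```

The hard, creative part (turning the skeleton into memorable images and placing them in a memory palace) is exactly what the books spend their pages on; the mapping itself is this simple.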


This should have a major impact on learning strategies for long-term retention. Short-term retention may not be an indicator of long-term retention, as different mechanisms are at work there. But short-term memory will, at least initially, hide any long-term memory, so how can we measure and optimize what we remember in the long run?


The 2nd copy is on tape, though.


Backup..


RAID1


More like RAM and secondary storage.



