Please please don't take away from this that there are "memory cells" that "encode memories." The article is so badly written it seems to imply that. Memory is a whole brain phenomenon, and memory formation involves connections being formed and lost constantly between all types of neurons everywhere in the brain. One small aspect of memory (episodic memory) relies more heavily on hippocampal neurons during consolidation, and a small portion of that process is what is being discussed here.
So, isn’t the nature of memory in the brain still understood pretty vaguely? The “memory is connections and involves the whole brain” is still kind of a hypothesis and less of a hard, empirically-validated theory, no? May still be the best idea we have, but from what I understand (which isn’t much), it’s not nearly that cut and dry and we don’t really know enough to exclude a lot of the other models of memory formation.
There's no single concept of nature that's understood completely. For example, there are a lot of things still understood pretty vaguely about milk. Or apples. So you can say that about pretty much anything.
>There's no single concept of nature that's understood completely. For example, there are a lot of things still understood pretty vaguely about milk. Or apples. So you can say that about pretty much anything.
But that is practically untrue (I'm not just saying there are corner cases, but that the main topic is still very poorly understood) and a cop-out.
And yet I can speak concretely about a hydrogen atom. Or even photosynthesis. Or DNA.
In DNA, there are 4 bases. We can see how they map to RNA and how that in turn codes for amino acids (exceptions exist, but we have a pretty strong grasp on it). We can even try to fold the resulting amino acid chains into shapes, although it's computationally very difficult to do so. So data storage within a cell has some concrete understanding: empirically validated and useful (this is how we were able to engineer the mRNA vaccines).
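To make that mapping concrete, here's a minimal sketch in Python (a toy codon table with just a handful of entries; the real genetic code has 64 codons, and the example sequence is made up for illustration):

    # Toy illustration of the DNA -> mRNA -> protein mapping described above.
    # Only a handful of codons are listed; the real codon table has 64 entries.
    CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys", "UAA": "STOP"}

    def transcribe(dna):
        """Transcribe the coding strand of DNA into mRNA (T -> U)."""
        return dna.upper().replace("T", "U")

    def translate(mrna):
        """Read the mRNA three bases (one codon) at a time until a stop codon."""
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            aa = CODON_TABLE.get(mrna[i:i + 3], "?")
            if aa == "STOP":
                break
            protein.append(aa)
        return protein

    print(translate(transcribe("ATGTTTGGCAAATAA")))  # ['Met', 'Phe', 'Gly', 'Lys']

Folding the resulting chain into a 3D shape is where it gets computationally hard, as noted above.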
Our knowledge of memory, of data storage in the brain, is far less concrete. And not just because it's not thought to be precisely molecular like DNA or RNA (although perhaps it could be!), but because we have a poor grasp of even where, precisely, information is stored in the connections. Sure, you can say it's not stored in a particular location in 3D space, but there must be some transformation, some (non-Euclidean) representation of the connections where memories CAN be located, because our brains are capable of recalling memories on demand. But we have only a vague understanding of how that all occurs. It's very unlike DNA or RNA or what have you in that our understanding is still primitive and vague.
Notice how the things we are able to speak most concretely about are the things which are most remote from most people’s experience?
Memory shares a property with holograms or with lenses/mirrors. If you draw a spot with a marker on one and look at/through it, you don't see the spot; rather, you see a slight loss of image quality across the whole image.
If you wipe out a bit of brain your memories degrade a bit. Somehow the encoding of memory is holographic.
We just need a different set of mathematical skills/tools to understand this property. From the point of view of traditional scientists it’s not something they know how to tackle.
Personally I think we should look at the brain (and DNA) through a signal-processing lens. How data is encoded in the brain is something engineers could answer. Biological neural networks are analog computers too, and spike signals are encoded with a variety of modulation schemes. If we understand those, we don't need to model neurons the way we do now (which is very inefficient, using floating point etc.) but could instead use custom single-transistor analog circuits. That might help advance computation in general... once we understand how the brain is doing it.
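For what it's worth, the "spikes as encoded signals" framing is often illustrated with a leaky integrate-and-fire model, a standard textbook simplification (not the custom analog circuit proposed above); a rough sketch with made-up parameter values:

    # Leaky integrate-and-fire neuron: the membrane potential integrates input
    # current, leaks back toward rest, and emits a spike on crossing threshold.
    def lif_spike_times(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
        v, spike_times = v_rest, []
        for step, i_in in enumerate(input_current):
            v += (dt / tau) * (-(v - v_rest) + i_in)  # leak + integrate
            if v >= v_thresh:                          # threshold crossed
                spike_times.append(step * dt)
                v = v_rest                             # reset after the spike
        return spike_times

    # A constant drive yields a regular spike train; the firing rate encodes
    # the input amplitude (one crude "modulation scheme" among several).
    print(lif_spike_times([1.5] * 200))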
> And yet I can speak concretely about a hydrogen atom
You can talk about some abstractions about how a hydrogen atom works, but we don't fully understand everything about atomic structure. We don't know if there is something that makes up the structures of subatomic particles because we don't have particle accelerators with enough energy to peer deeper into what quarks are made of. We don't even know if we've found all the types of particles.
Actually, we do fully understand everything about the atomic structure of the hydrogen atom, everything relevant to chemistry and how life (for instance) works. If we have not yet unified the fundamental forces, that doesn't mean we don't have an INCREDIBLY detailed and complete understanding of the hydrogen atom.
Corner cases in several layers lower than chemistry vs not even really knowing where a memory is stored!
> Actually, we do fully understand everything about the atomic structure of the hydrogen atom,
Not sure where that hubris comes from. There are physical phenomena we have not been able to test, so there will always be unknowns and assumptions based on (abstract) modeling. eg the metallic liquid hydrogen described here https://edu.rsc.org/soundbite/hydrogen-falls-apart-under-pre...
It comes from the unreasonable effectiveness of quantum electrodynamics. That we predicted a metallic state of hydrogen (a sort of molecular, not atomic, structure) under an extreme corner case (extreme pressure) over 80 years ago (well before it could be experimentally verified) just highlights what I mean.
And I don’t mean to say that neuroscientists are doing shoddy work. Far from it! The brain is a far more complex entity than RNA or a hydrogen atom. The task is MUCH harder! But I am showing that a high level of specific, concrete knowledge IS possible in the physical sciences. Memory in the brain is a physical process as well, but we have only a relatively vague understanding of the specifics of it. We can sequence DNA or RNA accurately with relative ease. We cannot do the same with memories in the brain.
You're missing the point. Treating predictions as if they were the same thing as knowledge illustrates the misunderstanding. We do not know because we cannot prove it. I shouldn't need to get into the reason we produce experimental evidence, eg https://www.forbes.com/sites/startswithabang/2019/07/06/ask-... - via the LIGO and Virgo detectors
@Robotbeat isn't the one missing the point here. The incredible accuracy of those predictions serves as a testament to just how well we do understand that particular phenomenon. Now compare to neuroscience, where we struggle to accurately make (by comparison) the most basic of predictions.
... what?! We can accurately predict hydrogen (and helium!) energy characteristics to _absurd_ accuracy levels. The issue is that even a system as simple as a single helium atom is prohibitively computationally expensive.
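As a sense of scale: even the century-old Bohr/Rydberg formula gets hydrogen's levels right to better than a part in a thousand, before any quantum-electrodynamic refinements (the sketch below ignores the small reduced-mass and fine-structure corrections):

    # Bohr-model energy levels of hydrogen: E_n = -Ry / n^2.
    RYDBERG_EV = 13.605693  # Rydberg energy in eV (infinite nuclear mass)

    def energy_level(n):
        return -RYDBERG_EV / n ** 2

    # Lyman-alpha transition (n=2 -> n=1): about 10.2 eV, i.e. roughly 121.6 nm,
    # matching the measured line to well under a percent.
    print(f"{energy_level(2) - energy_level(1):.3f} eV")  # 10.204 eV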
There's more to quantum chemistry (and hydrogen atoms) than what you're saying. If you define chemistry tautologically as what we know, then sure I guess.
To be clear, I'm not disputing unknowns in quantum chemistry in general (I lack the requisite background). But I was definitely of the understanding that single atom (and possibly somewhat larger) systems are thoroughly understood from a mathematical perspective at this point. I would appreciate concrete examples of open questions, either for hydrogen atoms specifically or for quantum chemistry more generally.
Proper title: "Neuroscientists discover one potential mechanism of memory formation and make a preliminary attempt to identify the most active regions where it occurs."
Interesting you should mention “whole brain” effects. Could it be that the whole brain is involved in memory formation or are these just ripple effects of ... something else?
Open loop systems are simple...the effects of one block can be understood in isolation. Closed loop control systems with feedback have nonlinear responses to stimuli that can cause ripple effects across the system.
By analogy, on a modern cpu with good power management, and shared memory caches, if a program heavily writes to computer memory, the effects will be on the whole computer. Not only will memory power increase, but cache use will too (causing threads on other cpus to slow down) and the chip will heat up, causing every part of the chip to throttle.
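A toy numerical version of that analogy (all numbers invented): two "cores" share one thermal budget, so loading only the first core still ends up throttling the second through the shared feedback path.

    # Hypothetical coupled system: heat from both cores feeds one temperature,
    # and that temperature throttles both clocks. Core 1 is nearly idle but
    # still slows down because core 0's load ripples through the shared state.
    temp = 40.0                  # shared die temperature (arbitrary units)
    clocks = [1.0, 1.0]          # relative clock speed per core
    load = [1.0, 0.2]            # core 0 busy, core 1 mostly idle

    for _ in range(200):
        heat = sum(l * c for l, c in zip(load, clocks))
        temp += 0.5 * heat - 0.02 * (temp - 40.0)           # heating minus cooling
        throttle = max(0.5, 1.0 - 0.01 * max(0.0, temp - 60.0))
        clocks = [throttle, throttle]                        # shared feedback path

    print(f"temp={temp:.1f}, idle core clock={clocks[1]:.2f}")  # idle core throttled too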
It is really cool that we're getting (slightly) closer to understanding how memories work.
It is still basic research, but if I understand the significance correctly, having confirmation that neurons undergo specific physical changes when forming some kinds of memories means that, at least theoretically, those memories could be extracted from pieces of inactive tissue. As a layman in the subject I've always been very interested in how much of our personality and consciousness is "encoded" in the electro-chemical activity (software) vs how things are connected and formed together (hardware). The more of "us" that is in the hardware, the more likely someone in the future will come up with a way to "scan" features of dead tissue to recover (in a simulation, perhaps) a person lost to death.
We're currently probably as far from that as a medieval battlefield "surgeon" was from a modern heart transplant, but at least we're moving in the right direction.
I'd argue that the line between software and hardware isn't as clear in neuro as someone with an IT background would like to think, specifically because the electro-chemical activities going on are both hard and soft at the same time. This isn't some abstract process running on a biological computer; the biological computer is the sum of the processes that constitute itself. (That's the fascinating thing about bio.)
To form a memory is something completely different than to store data in a db, as it involves lots and lots of encoding factors. If anything is actually stored, it's not the memory itself, but the perception at time T in relation to the expectations for T, i.e. a delta (think of the times when you listen to a song from the old days and it hurts, because the emotional connotation from back then doesn't match the current situation). Retrieving this delta would be meaningless unless you have the matching interpreter, the entire state machine.
I have a background in both. Both are implemented in physical mediums. The gating and current flow are similarly physical as are magnetic field changes on disk mediums. There is a greater diversity in the brain which uses field effects ala hormones, ion movement, transmitter diffusion, and direct electrical connection. The physical/informational duality is present in both so I would suggest that the building out of the hardware using the ambient materials (i.e. trophic factor) is the more specific difference you're looking for rather than the duality.
You transition to a difference in encoding, which is a significant but separate consideration. I think the thrust of your statement agrees with the science but quickly veers into speculation and undue specificity that to me reads as though you are declaring fact. Language and good communication are hard.
> The physical/informational duality is present in both
A very important distinction is that a computer has clean separation between physical data representation (magnetic bits) and physical device implementation (transistors & etc). As an analogy to a biological system I'd suggest an FPGA running a self modifying program that's also reconfiguring the FPGA on the fly. No sane engineer would design such a system (I hope).
Self modifying programs have totally been a thing. :D They were hard to reason about so fell by the wayside but still...
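For a trivial (and contrived) high-level example of the idea, here is a Python function that rewrites its own definition the first time it runs; classic self-modifying code did the same thing by patching machine instructions in place:

    # A function that replaces itself in the module namespace on first call.
    def greet():
        print("first call: patching myself")
        def patched():
            print("later calls: running the rewritten version")
        globals()["greet"] = patched   # overwrite our own name

    greet()  # first call: patching myself
    greet()  # later calls: running the rewritten version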
For ion flow through dendrites and axons, the information transmission is similarly separated from the protein assemblies of the neurons.
For neurotransmitters, the particles are literally packed into bundles and kept separate from the ionic transmission that causes their release, and from the cell boundaries at the synapse, until re-uptake.
For electric transmission, the electricity is definitely separate from the axon and junction.
I think the distinction is a little more in how a single computer's hardware is constrained to a (relatively) small set of finite states absent external modification. The brain can expand and contract the set of states that its physical parts can enter, on top of being, for better and worse, less discrete. If our computers could grow their silicon wafers and lay new circuits then we would have something similar.
Self modifying programs exist, sure, but (as you note) aren't much used. Meanwhile the "software" layer in biology is a tangled mess.
> If our computers could grow their silicon wafers and lay new circuits then we would have something similar.
Yes, that's why my analogy included reconfiguring the FPGA (ie hardware) that's running the program on the fly. You highlight a number of examples where a reasonably clear line between the biological "hardware" and "software" can be drawn at a given point in time. However, you fail to explicitly mention any of the biological mechanisms involved in reconfiguration, some of which I would consider to blur those lines!
The addition and removal of synapses over time is an obvious but fairly slow and boring example. The software is very slowly modifying the hardware by small amounts, but the two are still clearly separate.
Epigenetics is more interesting. Any number of convoluted signalling pathways feeding back on the machinery that controls gene expression, with (in some cases) heritable effects. The distinction here remains fairly clear to me at any given point in time. The proteins are the hardware, the DNA the storage, and the program is manipulating the storage so as to modify the synthesis of future hardware components. Easy enough.
... or is it? Often the quantity of some component that gets synthesized is itself used as a signal. Does synthesis being used in this manner mean that the hardware has become part of the software? How far do things have to go before the hardware can be considered to be part of the software? But things get even weirder!
> the separation of information transmission from the protein assemblies of neurons is similarly separated
Not always! Ever come across GPCR heteromerization? This happens at your synapses (among other places), more or less in real time, in response to various convoluted signalling pathways. It can change how the receptor responds to a given ligand, or even allow it to respond to entirely different ligands. So in some cases, an important part of the "logic" for the neurotransmitter response is being played out by the physical configuration of the receptors. From my perspective, it looks decidedly as though the receptor (ie hardware) has become an integral part of the software.
Other examples abound, but this comment has gotten quite lengthy so I digress.
Gene expression, signaling proteins, and protein synthesis or epigenetics in general are excellent examples both of the duality and of circumstances where the distinction gets fuzzier, particularly depending on the context and perspective you take.
I had not heard of GPCR heteromerization and will be reading more. I ended up going down the complexity-and-theory path more heavily and applying my brain to business software, so my knowledge of the concrete mechanisms is shallower than I'd like.
I definitely enjoyed thinking about where to more specifically pin the difference in this context, so thank you for being gracious with my attempt to offer a formulation. There's another interesting question of whether the duality divide matters at all, but perhaps another day.
If our current connectionist understanding of the brain is correct, then we should be able to recover a personality from a very detailed CT scan, and maybe in the future to revive it. (Personally, I'd prefer that in a solar-powered ball shot out of the solar system at high velocity.)
I often wonder why universities post these press releases for the general public. Epigenetic modifications and memory are still the subject of early basic research, known to play a role in memory extinction and reconsolidation for example, but this work is too preliminary to make any concrete announcements with actionable information, esp. for the general public.
Universities do research, and publications in major journals are the units of outcome. It is not surprising to put this on the university website because it is good exposure. The people who browse university websites are often other academics, so this is indeed an announcement to the broad scientific community and brings prestige to the university.
This would be comparable to winning some scientific prizes, since they are accomplishments that are hard and competitive to achieve. Of course the general public does not care about which scientists win which prizes.
This is all part of marketing to get broad exposure to the scientific community first, general public second.
I believe it’s for the same reason we are seeing it here: to reach a larger public audience than just the people at the faculty. There might be others who work on this who would otherwise never hear about it and miss out on something. If they already know the same things, well, then it was just x minutes of wasted time.
Well, people don't know what they're talking about when it comes to such specialized basic research. Universities do benefit from the publicity, but it feels like selling snake oil to naive people.
It's not really meant for "naive people": nobody expects you to change your behavior as a result of this press release.
The target audience is really science journalists on EurekaWire. They're hoping this press release will get picked up by a science writer for the New York Times or Scientific American, etc.
A parallel goal is to demonstrate that the university is doing cutting edge work, which helps with student enrollment, fundraising, and recruitment. This also applies to the funders: they want to demonstrate that donations (or tax dollars) are being put to good use.
If it directly catches the attention of science aficionados, that's mostly a bonus.
This announcement is for student candidates in the field, fellow professionals in the field, and the larger layman field of financing sources for the University.
To let the public known they are making progress and get/keep them excited so that they can keep getting funding. Also to attract the students and staff they want.
I experienced an episode of transient global amnesia [1] a couple of years ago. It lasted for 6-7 hours, during which I was transported to the hospital, but every 2-3 minutes I would ask anew "Why am I in the hospital?" — having no memory at all of being there. I still have no clue what might have caused it. I mean, scientists don't have a clue what causes it, only a wide range of speculated causes, none of which really applied to me.
Since there's no known way to prevent TGA, no treatment, and apparently no lasting damage, it has never (AFAIK) been researched, certainly not researched much, but it strikes me as the kind of thing that should be studied. If scientists could figure out how memory formation is completely blocked in episodes of TGA, it seems like that would provide useful data about how memory formation normally works.
> Since there's no known way to prevent TGA, no treatment, and apparently no lasting damage, it has never (AFAIK) been researched, certainly not researched much
I suspect the issue may be the lack of a good biological model. I'd guess it would be a very attractive and obvious research target for anyone aiming to investigate memory formation and recall. However, systematic studies generally require a model system that can be reliably reproduced.
Possibly, but I strongly suspect humans wouldn't be very good models in this case given how much we don't know about memory in general. Rather you'd want a robust non-human animal model of some sort, with a straightforward way of inducing TGA (or something that appeared to be substantially similar) on demand. Think rodent with optogenetic modifications or similar.
I suffered a serious head injury when I was a teenager and my brain lost its feeling of time for two days. I knew, logically, that lunch happened before dinner, but I couldn't _perceive_ it as such. It's hard to explain. The brain is a complex machine.
Maybe one day we'll have pills that give total recall for a few hours. Wouldn't that be cool for exams and learning languages? I'd much prefer that to a chip in my head, which Silicon Valley seems super keen on selling me.
My memory is generally terrible. One discovery that was alarming, encouraging and disturbing at the same time is that I do appear to have some form of nearly total recall buried in there somewhere.
Unfortunately it only surfaces in one way. If I listen to a podcast while I’m doing something, then listen to that same podcast again within a week or so, I will have regular, intrusive visual recall of exactly what I was doing the last time I heard whatever passage of the podcast is currently playing. The recall is synchronized down to the word with the podcast, and is vividly detailed and colorful and includes some sensation of feeling as well.
It’s weird and cool at the same time. One of these days I’d like to do an experiment where I videotape myself listening to the podcast, wait a week, then listen to it again and record myself describing what I see in my mind to test how accurate it is.
Oh definitely, making one part of the brain work together with another part helps me remember either of them when I do one or the other at a later time.
For example, I occasionally play Pubg Mobile, and while I do I like listening to podcasts on topics related to my work. When it comes time to apply something I learned from the podcast, I can recall exactly where I was in the game when that particular topic was discussed.
I've noticed a sort of similar thing, though not nearly as vivid as what you describe. What I find is that listening to a song will trigger very specific memories of the place, time and feelings that I was experiencing when I first listened to that song. Or if not the first time I heard a song it will be from when I was listening to it regularly because it was new to me at that time. When this happens it's always especially vivid and compelling memory recall that is unlike any other I've experienced.
That's sort of fascinating; it reminds me of how the ancient Romans used to memorize long epic poems etc. (walking around and making visual/spatial associations, then recalling their travel to recite it later; I might be butchering that a bit, but that's how I understood the method).
I have the same, but notice it more with audiobooks. An entire year can go by and I will get a flash of where I was at a precise moment in the book exactly as you describe.
Something similar happened to me as a teenager: I could remember when/where I learned or first heard a new word. Sadly that ability has been waning.
I never noticed or attempted to prime it. Just go listen to some episodes that you’ve listened to recently and see what happens. It’s not like rolling a projector when you hit play, it’ll fade in and out most likely.
Making the absurd "drill, exam, forget" process flawless? That would be so sad to see.
Instead, the material+process should be adapted to support long term memory. We know how to do it; we already have spaced repetition and mnemonics, but sadly teachers mostly ignore them.
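For anyone curious what "spaced repetition" looks like mechanically, here is a rough, simplified sketch in the spirit of the SM-2 family of schedulers (constants and details differ from any real implementation): each successful recall stretches the interval to the next review.

    # Simplified SM-2-style scheduler: quality is a 0-5 self-rating of recall.
    def next_review(interval_days, ease, quality):
        if quality < 3:                                  # forgot: start over
            return 1.0, max(1.3, ease - 0.2)
        ease = max(1.3, ease + 0.1 - (5 - quality) * 0.08)
        interval = 1.0 if interval_days < 1 else interval_days * ease
        return interval, ease

    interval, ease = 0.0, 2.5
    for quality in [5, 4, 5, 3]:                         # four successful reviews
        interval, ease = next_review(interval, ease, quality)
        print(f"next review in {interval:.0f} days (ease {ease:.2f})")
    # intervals grow roughly 1 -> 3 -> 7 -> 19 days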
What's interesting is that there is a category of specific neurons (called engram neurons) in which this process of chromatin-based memory formation occurs.
It would be really cool to know how these neurons are different, and how and when they differentiate.
Neuroscience is way out of my wheelhouse, but my layman's understanding of this led me to believe that it could be some indication that Randy Gallistel has been on the right track: https://ruccs.rutgers.edu/images/LangilleGallistel.pdf
The article does a pretty terrible job of explaining how memories are formed; some bits are even misleading.
If you're interested in the biological basis of memory, I suggest reading 'In Search of Memory' by Eric Kandel. It's a biographical tale of Eric Kandel winning his Nobel, and along the way you learn a ton about how the brain works.
"Notably, in line with previous publications12,13,23, our data suggest that memory formation is largely an enhancer-driven phenomenon."
This is very interesting, and for me a new take on memory formation. It seems to have the advantage over LTP that it can work as a coincidence detector over longer time spans.
Just reading the abstract, is it me, or did the authors revel in spewing a ridiculous amount of jargon instead of using the opportunity to give a simple and clear statement of the fundamental finding of the work to a general scientific audience?
"The epigenome and three-dimensional (3D) genomic architecture are emerging as key factors in the dynamic regulation of different transcriptional programs required for neuronal functions. In this study, we used an activity-dependent tagging system in mice to determine the epigenetic state, 3D genome architecture and transcriptional landscape of engram cells over the lifespan of memory formation and recall. Our findings reveal that memory encoding leads to an epigenetic priming event, marked by increased accessibility of enhancers without the corresponding transcriptional changes. Memory consolidation subsequently results in spatial reorganization of large chromatin segments and promoter–enhancer interactions. Finally, with reactivation, engram neurons use a subset of de novo long-range interactions, where primed enhancers are brought in contact with their respective promoters to upregulate genes involved in local protein translation in synaptic compartments. Collectively, our work elucidates the comprehensive transcriptional and epigenomic landscape across the lifespan of memory formation and recall in the hippocampal engram ensemble."
I think back to a historical abstract:
"We wish to suggest a structure for the salt of deoxyribose nucleic acid (D.N.A.). This structure has novel features which are of considerable biological interest."
This paper describes a process significantly more complex than the static molecular structure of DNA, and its abstract is written for the audience of researchers sharing the authors' own specialism. That's actually more useful than the converse, because it enables members of that audience quickly to evaluate the contents for relevance to their own work, in a way that would be much more difficult, perhaps impossible, were it written more concisely.
Seconding this. The abstract reads as incredibly well written and clear to me. The technical terms are relevant and appropriately used. I don't see anything I would describe as reveling in jargon, though I have definitely come across such papers before.
I really don't understand why a non-expert would expect experts who are writing for other experts to dumb things down for them. You might as well complain that a graduate level physics textbook isn't readily understandable to someone with no mathematical background!
Nature journals have strict structural requirements for submitted abstracts.
And unfortunately, because this was Nature Neuroscience, the intended audience is more specific, which is why you see more jargon there.
Here is an extracted example from the latest issue of Science, for which the intended audience is more general:
Psychotic disorders such as schizophrenia impose enormous human, social, and economic burdens. The prognosis of psychotic disorders has not substantially improved over the past decades because our understanding of the underlying neurobiology has remained stagnant. Indeed, the subjective nature of hallucinations, a defining symptom of psychosis, presents an enduring challenge for their rigorous study in humans and translation to preclinical animal models. Here, we developed a cross-species computational psychiatry approach to directly relate human and rodent behavior and used this approach to study the neural basis of hallucination-like perception in mice.
It’s not a perfect comparison because Science abstracts are apparently much larger (this is just the Intro!) but it does convey my point about language.
I guess one interesting question is whether other kinds of chromatin elsewhere encode any information by favoring expression of certain genes over others.
From the article it sounds like it may describe another level of reduction to LTP. So, rather than 'this mechanism vs. LTP' it offers a potential genetic mechanism that may explain the changes to neuronal growth during memory formation.
this seems excessively reductive. by this reasoning we should expect to find that removing neurons from the "structure" has no effect on its capacity for encoding memory.
if one reduces "memory" to anything that exhibits sustained interplay between hysteresis and goal optimization then yes, neurons may lie below the level of explanation you wish to use, and could thus be substituted with some other signaling medium, but this is not the same as claiming they are not involved at all.