There are so many strands of neuroscience that show predictive coding now. There's a really nice example in the use of common vs unusual words in sentences. If you're using fMRI to scan someone while they read these two sentences:
The jam on the motorway made everything slow.
The jam on the motorway made everything sticky.
You'll see a flash of activity when people read the word "sticky" rather than the more common word in that context "slow". The theory is that when everything in the sentence proceeds as predicted there's very little need to update prior expectations, but when you come across something that goes against previous predictions it's a signal that something needs to be actively thought about.
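In NLP terms, that "flash" tracks what language modelers call surprisal: the less probable a word is given its context, the larger the prediction error. A toy sketch with invented bigram counts (none of these numbers are real corpus data):

```python
import math

# Toy counts of words following "made everything" in traffic-jam
# sentences -- invented numbers, purely for illustration.
context_counts = {"slow": 90, "sticky": 2, "stop": 8}
total = sum(context_counts.values())

def surprisal(word):
    """Surprisal in bits: -log2 P(word | context). Rare words score high."""
    return -math.log2(context_counts[word] / total)

print(f"slow:   {surprisal('slow'):.2f} bits")    # ~0.15 bits, cheap to process
print(f"sticky: {surprisal('sticky'):.2f} bits")  # ~5.64 bits, big prediction error
```

The fMRI result is then just the claim that neural activity scales with the second number, not the first.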
Cool! This sounds similar to koans: "a paradoxical anecdote or riddle, used in Zen Buddhism to demonstrate the inadequacy of logical reasoning and to provoke enlightenment."
If you explore the meaning of language enough you'll realize it doesn't really make sense.
Everything is based on context and assumptions that we take for granted when communicating. But if you dig deep enough (and it's really not that deep anyway), you'll notice that pretty much anything you say or write can be interpreted in multiple different ways, which means there's no one absolute meaning to any expression of language.
Meaning comes from interpretation, which is subjective and unique to the person doing it.
Interesting exercise: for a day or a week, make an effort to consider how anything someone else says to you could actually be true - assume they are always right and think about how to justify it. It's really amazing: at first it's really hard, but then you discover you can always interpret what they say in a way that fits your views.
I think you can make unambiguous sentences: "I walked one mile today." Unless context specifically indicates the very rare possibility that walked is personal shorthand for another activity, it's a pretty unambiguous statement.
Exactly, and if you keep going, you'd notice that each time you need a longer, more precise description, which, paradoxically, opens the door for more ambiguity.
In that example, the first context is the English language, then knowing who "I" is, then what a mile is, and a treadmill, and today. All of those are contextual. Even mile. How long is a mile? Like any physical measure, it actually varies, because it is defined relative to something else which varies. The meaning of the English language has also varied over time; that sentence might not make any sense to someone reading it in 20 years. And then when is today? May 7, 2019 - which is also a relative measure of time, and not even that precise.
And then finally, what if you were lying? Did you really walk a mile today? How can I be sure? And how can you be sure I understand the same meaning from the words you say or write?
I understand where you are coming from, but a lot of what you just presented isn't actually relevant to the topic of ambiguity from lack of context in language. Some of it just doesn't apply. Allow me to explain...
> the first context is English language
Let's ignore this context. If you don't understand English then you shouldn't be expected to understand an English statement regardless of context.
> How long is a mile?
That depends on what time and space you are in. But that's ok. All such measurements are relative and defined by non-natural constructs. But we can be realistic and assume a mile is whatever our government has decided.
When I tell you "I ran a mile", I'm not telling someone in Australia I ran a mile. I'm telling you that. If we are speaking over the internet it might complicate things, but we only have to resolve the meaning of mile once for our relationship, and all subsequent discussions will use this meaning.
I would argue that this makes the word mile unambiguous in the sense that you are saying. If we had to assert each time what I meant by a mile, then that would be ambiguous.
> that sentence might not make any sense to someone reading it in 20 years
Evolution of language is a separate phenomenon. We don't need to consider that. Only what the language means today right now as I use it. That's the utility of language. For older English texts, we have literature which matches our common vernacular to the vernacular of the time. Foreign translations are out of scope for the same reason listed above. Since we have mappings between today's English and yesterday's English, I would again argue that this is not a source of ambiguity.
> And then when is today? May 7, 2019
That doesn't have anything to do with language. That is a chronospatial coordinate. It represents a certain degree of completion of our orbit around the sun. It isn't a measurement of time at all. It's only relative in the sense that all time and space measurements are relative to the spacetime coordinate of the Big Bang.
Time is inherently a relative concept. We don't have time unless we are comparing (relating) two different frames. We define one in terms relative to the other (motion, state, etc). In a single, non-relative frame of reference time just doesn't exist.
> And then finally, what if you were lying?
Again, this has nothing to do with language. You aren't trying to guess what I'm thinking, you're trying to understand what I'm saying. That's what language parsing is about. Discussion of motivations, misdirection, etc. is irrelevant.
Interesting. All of the above rebuttals are basically: ignore this because it's context and should be obvious.
Well, that's the problem. It might seem obvious to you now, but it is not obvious to everyone all the time, which is what makes any statement ambiguous.
Additionally, I don't know the purpose of what you are saying. For example, you might say you ran a mile because you wanted to express you were tired, or because you wanted to imply you are healthy, or who knows why, it is definitely not clear from the statement alone what you wanted to convey, which is the main problem.
We feel something internally and then we try to express that in words, hoping that those words will mean the same to the person perceiving them. That process is highly susceptible to noise in many forms, like the other person not hearing you correctly, being distracted, being stressed out, or maybe they speak a slightly different version of your language and some of the words or expressions mean something a bit different to them than to you.
In the end, any statement doesn't have meaning by itself, it only acquires meaning when someone interprets it and understands it in a certain way. That is always a subjective process.
Honestly I feel like we could have a very deep conversation about this because I certainly agree with a lot of what you're saying.
Language is certainly subjective, but I would argue that we can make certain concessions on what context truly is "obvious" and what isn't. And I think the matter of what level of precision is acceptable enough to consider a statement or idea "understood" is up for debate and is also contextual.
At some point, any extra information is just that, extra information. The core idea may have been expressed fully, even if there are certain small ambiguities or if the listener wishes to draw insight from the statement beyond the basic sentiment expressed.
Even statements like this can become ambiguous depending on the speaker. E.g. is the speaker speaking literally or metaphorically? Maybe the speaker feels like they walked a lot, and so they chose a distance which represented a lot to them. Also, is one mile a lot or a little? Maybe it's the opposite and they feel like they barely walked. Science makes every attempt to be unambiguous; people do not.
In psychology/neuroscience, many refer to this generally as heuristics. It’s astonishing how much the mind relies on these heuristics from aspects like language (as you’ve pointed out) to visual perception. Many think the main purpose is to speed up processing tasks and require less “resources” so that cognitive load is freed up for other things. One of my favorite examples of this is how car drivers seem to fall into a sort of “auto-pilot” mode freeing up resources to think about random things like what you’re going to eat for dinner once you get home.
I think the ELI5 way to look at it is pattern recognition and then using those patterns for not only prediction, much like ML models, but also in other things like physical actions, emotional response, etc.
Many theorize that these heuristics are trained and the "path more taken" is the default. It's when a stimulus doesn't fit that path that the brain has to decide:
1. Is something wrong with this?
2. Is this a new path I need to start paving to speed up future processing?
Whether or not that second choice is made is dependent on lots of things co-processing at the time: Emotion, attention, attitude, etc. How much that path is paved is also dependent on those things.
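The "path paving" idea above can be caricatured in a few lines: keep counts of what followed what, predict the most-travelled path, and flag a surprise (a candidate for paving a new path) when the prediction misses. A toy sketch of the analogy, not a model of any real neural mechanism:

```python
from collections import defaultdict

class PathPredictor:
    """Toy 'paved path' model: the more often B follows A, the more
    strongly B is predicted after A; mismatches count as surprises."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.surprises = 0

    def observe(self, prev, nxt):
        predicted = self.predict(prev)
        if predicted is not None and predicted != nxt:
            self.surprises += 1          # prediction failed: decide whether to re-pave
        self.counts[prev][nxt] += 1      # reinforce the path actually taken

    def predict(self, prev):
        followers = self.counts[prev]
        if not followers:
            return None                  # no path paved yet
        return max(followers, key=followers.get)

p = PathPredictor()
for prev, nxt in [("jam", "slow")] * 5 + [("jam", "sticky")]:
    p.observe(prev, nxt)
print(p.predict("jam"), p.surprises)   # "slow" stays the default path; 1 surprise
```

One rare counter-example doesn't flip the default; only repeated traffic re-paves the path, which matches the "how much that path is paved" point above.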
This idea is fundamental to areas of study like symbolic systems at Stanford [1].
And that flash of activity is also the basis of humor. We find unexpected things like this funny - a reward that helps our brains analyze and learn from the unexpected situation.
This reminds me of the work John Carmack did when he was writing QuakeWorld, which in 1996 was quite playable on a 56k modem! He started off predicting 300ms of lag on the client side, but it was too glitchy when the correction came in. He landed on a more conservative strategy, and his writeup is a touchstone of Computer Science lore for me:
I wonder if you could design a game around this by drawing BOTH the predicted motion and a ghost image of the last known verified server position of players, and then deliberately introduce 500ms of latency. The gameplay would then be about tricking your opponents into thinking you were going somewhere you're not.
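The actual QuakeWorld code is C and far more involved, but the idea the thread is circling can be sketched in a few lines: the client extrapolates from the last acknowledged server state (dead reckoning), and when the authoritative correction arrives it eases toward it instead of snapping - the "glitchy correction" problem. All names and numbers here are mine, not Carmack's:

```python
def extrapolate(pos, vel, dt):
    """Dead reckoning: guess where the entity is now from its last
    known server position and velocity."""
    return pos + vel * dt

def reconcile(predicted, server_pos, blend=0.25):
    """When the authoritative position arrives, ease toward it instead
    of snapping, so the correction isn't a visible glitch."""
    return predicted + (server_pos - predicted) * blend

# Last acknowledged server state: x=10.0, moving at 5 units/s.
shown = extrapolate(10.0, 5.0, 0.3)   # render 300 ms ahead -> 11.5
corrected = reconcile(shown, 11.0)    # server says 11.0; move 25% of the way
print(shown, corrected)               # 11.5 11.375
```

The "ghost image" variant proposed above would simply render both `shown` and the raw `server_pos` at once.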
This existed in Guild Wars PvP. The game had a similar motion prediction system (and I think most multiplayer RPG games do), and when teams landed on laggy servers they were often uniformly laggy for everyone in the match, and you were able to use that to confuse the enemy if you were a bit more experienced.
How much of this improved network code made it back into the original Quake engine, and therefore into Half-Life 1? To this day, I have yet to find a multiplayer engine for a first-person shooter which feels as perfect as the one in HL1. For lack of a better description, the Half-Life 1 (and mods) multiplayer gameplay just felt (and still feels) crystal clear. This has to be one of the major reasons why Counter-Strike became so big.
The engine also reuses code from other games in the Quake series, including QuakeWorld, and Quake II, but this reuse is minimal in comparison to that of the original Quake.
> I wonder if you could design a game around this by drawing BOTH the predicted motion and a ghost image of the last known verified server position of players, and then deliberately introduce 500ms of latency. The gameplay would then be about tricking your opponents into thinking you were going somewhere you're not.
This would be fun. Maybe call it "Kalman's Revenge" or something.
Thanks for linking this - had never seen it before. Fascinating to look back into the annals of gaming/software history to see the birth of ideas that we now take for granted.
Also interesting, on an unrelated note:
> Romero is now gone from id.
> There will be no more grandiose statements about our future projects.
> Imagine picking up a glass of what you think is apple juice, only to take a sip and discover that it’s actually ginger ale. Even though you usually love the soda, this time it tastes terrible
Even brains aren’t safe from speculative execution vulnerabilities
Walgreens sells a sparkling water that inexplicably has artificial sweetener, though it's not sold as "soda" but as "sparkling water." I bought it expecting, you know, sparkling water and the experience of getting a mouthful of unexpected aspartame was... memorable.
I once saw a man get another man falling-down drunk at a residential college (frat house equivalent?) bar by serving him water while convincing him it was 100% pure, tasteless alcohol from the chemistry lab. (De-ionised water, probably.) Served from a conical glass beaker with hushed tones: "I don't want to get kicked out for this, so don't tell anyone." The full con job. They were and are friends and it was utterly hilarious. The drunk guy was not insulted nor upset in the aftermath and still tells the story.
Falling off the stool, uproariously laughing drunk. Someone else who was in on it (I wasn't; I was getting drunk for real on beer) produced an old Nintendo Game & Watch, where the water-drunk guy got the top score of any of us. Apparently being drunk like this didn't affect his reaction time at all. No hangover; he immediately sobered up post-reveal, and he claims he felt totally drunk. (Then almost immediately drank a family-sized rum or something - kids those days, sheesh.)
It'd be great if you could get that enjoyable drunkenness feeling that easily, rather than by pouring booze down your neck...
"Ladies and gentlemen, pivot your startups, go!"
"We're the world leaders in getting neural nets drunk, then making them classify with a hangover..."
I’ve noticed during meditation on external sounds that for any repetitive sound, my brain immediately starts reproducing the sound internally, and then stops paying attention to that external sound and starts scanning the background for other sounds. This is the case for anything from air conditioner whine to birdsong. The meditation then becomes to interrupt the cached reproduction and really listen to every sound, including repetitive ones, forcing my brain to expect surprising variations in the same sounds if I can listen really closely.
My wife always plays guitar, and even after she stops I hear the song she was playing for the rest of the day. It's very unnerving. What other parts of reality am I just filling in?
Although they just discovered the mechanism, we've known that this happens for quite some time. It's something you can take advantage of in UX design, for example, because if you provide a subtle queue before something new happens to trigger the brain, the new thing will be perceived faster.
You're totally right. I'd like to blame autocorrect, except that I typed it on my laptop. Instead I'll blame the fact that I was writing about queuing theory just before I made that comment.
In cognitive psychology, I've worked with a method known as "masked priming" [0], which is relatively well known in the field to influence reaction times. Reaction time is a complex measure, as it includes both "processing speed" and physical reaction time. The "prime" is considered subliminal, as one would not necessarily notice it being presented (although the mask definitely is noticeable). Interestingly, at short enough time scales, e.g. 16 ms durations, the effect of the priming reverses: it makes reaction times increase [1].
In 2006 I worked on analyzing data that showed this. The principal investigator was plagued by health problems and I didn't know how to connect what I was doing with the previous literature, so we didn't end up publishing anything. I felt like the idea was inevitable, but I don't know to what extent others believed this.
My model, which was basically guessed from common-sense principles (what would a flip-flop's output look like if there were no clock and the "update" signal were not discrete?), used equations that were similar to what would eventually become the GRU update equations. And the fit with the data was gorgeous: it took one parameter (a time constant that scaled how quickly sensory information was integrated with the current prediction) and the R^2 was north of 90%.
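For the curious, a one-parameter model of that kind - a time constant gating how quickly new sensory input overwrites the running prediction - is a leaky integrator, which looks like a GRU with a fixed update gate. This sketch is my reconstruction of the general idea, not the original model:

```python
def leaky_integrate(inputs, tau, dt=1.0, state=0.0):
    """One-parameter predictor: each step blends the new sensory input
    into the running prediction. alpha = dt/tau plays the role of a
    fixed GRU update gate z: h = (1 - z)*h + z*x."""
    alpha = dt / tau
    trace = []
    for x in inputs:
        state = (1 - alpha) * state + alpha * x
        trace.append(state)
    return trace

# A step input: the prediction converges toward 1.0 at a rate set by tau.
out = leaky_integrate([1.0] * 5, tau=4.0)
print([round(v, 3) for v in out])   # [0.25, 0.438, 0.578, 0.684, 0.763]
```

Fitting only `tau` to reaction-time data is what makes a >90% R^2 with one free parameter so striking.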
Yep, this is why sometimes when I'm reading some text and press Space to scroll the page, my brain has already started "reading" from the top of the next page; when the page has hijacked Space and didn't actually scroll, the words don't make sense - they're not a continuation of what I was reading - so it takes me a second to reorient my eyes around the page and realize what's wrong.
This is hugely noticeable playing/practicing/learning music.
Just yesterday, playing guitar, I was fighting playing the wrong note because my brain was guessing that was the next note - in so much music it would be. It was something like 3 notes in a particular scale, and my brain really wanted to play the next note in the scale instead of the correct next note.
In all activities our brain internalizes how to do things and we can suddenly do them without having to think consciously about how it's happening.. huge in sports and anything with motor skills... music, video games, etc, etc..
I play Dance Dance Revolution competitively and it’s extremely important there as well. A lot of people ask me how I’m able to “read the notes that fast”. The answer is always that you build an intuition about what patterns follow certain patterns, and that you never read each note individually.
It’s a great combination of some of the sensations you get from playing music, and executing things well in a video game.
> A lot of people ask me how I’m able to “read the notes that fast”. The answer is always that you build an intuition about what patterns follow certain patterns, and that you never read each note individually.
osu! - and most rhythm games, I would think - is the exact same way.
Guessing what's next is the only way the brain knows how to play music. If you aren't hearing the music you want to play correctly, you cannot play it. A lot of music education gets things pretty messed up by ignoring this factor.
In similar fashion, people who are deaf and cannot hear their own voice eventually start speaking with strange mannerisms - too loud, too quiet, weird intonation, etc. But I don't think that's about predicting what comes next; it's more about the feedback loop allowing you to control your movement better.
I don’t think this is the same thing as guessing what’s next. He’s just talking about developing a powerful aural imagination, just as excellent painters have to envision the scene they’re painting. That said, prediction certainly comes into play when you’re listening to someone else’s creation.
No, it's pretty clearly prediction. People play music by hearing what they're about to play, and then their body does whatever it needs to in order to make that sound happen.
It's when you've done something so many times your brain has pathways for it so you can do it unconsciously.
It's like riding a bike. There are like 10 things going on at the same time and you don't even think about them once you know how to ride a bike.. your brain is doing a lot of the work in the background.
Heck even using your balance to walk around is the same thing.
How timely! I recently attended a nephew's soccer game. Having never played, I didn't really know what was going on. It seemed to me that I had a much harder time perceiving what happened; I was forever asking someone to tell me what they saw, even though I was looking at the same thing. If a lot of perception is predictive based on previous experience, it makes sense that I felt like I was half blind.
Your perception system doesn't report some sort of fixed reality. It surfaces the information that is most relevant to you and what you want to do. For example, better batters playing baseball will literally perceive the ball as larger.
A really interesting example of this in my life is racquetball. When you first start playing, it can be difficult to know if the ball hit the wall or the ground first. With experience it's pretty easy to tell. You are still looking at the same thing. But some inexperienced players I play with ask whether it hit the ground or not. Interestingly, for me at least, I couldn't tell you exactly where it hit or even how it bounced getting there; I just know whether it hit the ground or not.
I don't think a part of the brain tries to exfiltrate information from another by using moments when predictive coding is invalidated. Those moments when prediction fails are very useful, we learn from them.
We learn how to create representations of the present by conditioning them on correct prediction of the future. We build world models and thus get to imagine future scenarios, very useful in planning our actions both for achieving higher rewards and avoiding dangers.
So we get sensory representation and world modelling for free - no need to be taught explicitly, the teaching signal is just the link between present and future.
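That "teaching signal is just the link between present and future" is exactly self-supervised next-step prediction. A minimal version with a single learned coefficient - a toy, nothing like a real world model:

```python
def learn_next_step(series, lr=0.05, w=0.0, epochs=200):
    """Learn to predict x[t+1] from x[t] with one weight, using only
    the prediction error as the teaching signal (no labels needed)."""
    for _ in range(epochs):
        for t in range(len(series) - 1):
            pred = w * series[t]
            err = series[t + 1] - pred     # surprise = teaching signal
            w += lr * err * series[t]      # adjust to reduce future surprise
    return w

# Data generated by x[t+1] = 0.8 * x[t]; the weight should recover ~0.8.
xs = [1.0]
for _ in range(20):
    xs.append(0.8 * xs[-1])
w = learn_next_step(xs)
print(round(w, 3))   # -> 0.8, recovered purely from prediction errors
```

No one ever told the model what the "right" coefficient was; the future itself supervised it, which is the point of the comment above.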
Would caching also be a form of "guessing" what's next? Cache hits could be considered guessing correctly, while cache misses are guessing incorrectly.
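Something like that: a cache is a bet that the recent past predicts the near future (temporal locality). A toy counter that makes the analogy concrete:

```python
class GuessingCache:
    """A cache as a predictor: a hit means the 'guess' that this key
    would be asked for again was right; a miss means it was wrong."""
    def __init__(self):
        self.store, self.hits, self.misses = {}, 0, 0

    def get(self, key, compute):
        if key in self.store:
            self.hits += 1               # guessed right: answer is already there
        else:
            self.misses += 1             # guessed wrong: pay the full cost
            self.store[key] = compute(key)
        return self.store[key]

c = GuessingCache()
for k in ["a", "b", "a", "a", "c", "b"]:
    c.get(k, str.upper)
print(c.hits, c.misses)   # 3 3
```

Hit rate is then literally the cache's prediction accuracy, just as the "flash of activity" in the article is the brain's miss penalty.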