> More generally, the information throughput of human behavior is about 10 bits/s.
I'm sorry, I just can't take this article seriously. They make a fundamental encoding mistake: they assume that information comes discretized into word-sized or action-sized chunks.
A good example is a seemingly discrete activity such as playing a musical instrument, like a guitar. A guitar has frets and strings, so it appears to have a small, finite set of notes it can play. It would seem a perfect candidate for discretization along the lines of the musical scale. But any guitar player or listener knows that a guitar is not a keyboard or a MIDI synth:
1. The attack velocity and angle of the pick convey aggression and emotion, and not just along a few prescribed lines like "angry", "sad", "loud", or "quiet".
2. Timing idiosyncrasies: playing slightly ahead of or behind the beat, speeding up or slowing down, or even going arrhythmic; the entire expression of a piece of music is changed by subtleties in phrasing.
3. Microbends. The analog nature of strings cannot be hidden entirely behind frets. Differences in finger pressure, how close to the fret the finger lands, and slight bends of the string (intentional or unintentional, static or dynamic) all change the pitch of the note.
4. Non-striking sounds: palm muting, pick scrapes, tapping, and sympathetic vibrations.
And of course there are plenty of other things. All of these make the difference between a master guitar player, say Hendrix, and someone merely playing the same notes.
And yes, of course we can treat the audio coming out of the guitar as the information, at a much higher bitrate. But what about the facial expressions, body language, etc.? There are tons of channels coming off a musician, particularly in a live performance.
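To put rough numbers on the gap (a back-of-envelope sketch; the note rate and position count are my own illustrative assumptions, not figures from the article):

```python
import math

# A quantized "note event" encoding, roughly the kind the article assumes.
# Assumed numbers (mine, for illustration): ~3 note events per second,
# each drawn from ~50 distinguishable string/fret positions.
events_per_sec = 3
positions = 50
quantized_bps = events_per_sec * math.log2(positions)  # ~17 bits/s

# The raw audio channel the guitar actually produces (CD-quality mono).
sample_rate_hz = 44_100
bit_depth = 16
audio_bps = sample_rate_hz * bit_depth  # 705,600 bits/s

print(f"quantized note encoding: ~{quantized_bps:.0f} bits/s")
print(f"raw audio channel:       {audio_bps:,} bits/s")
```

Raw audio obviously overcounts (much of it is redundant or perceptually irrelevant), but the point stands: the number you get depends almost entirely on which encoding you decide to measure.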
This entire article misses all of that by picking a quantized encoding of information that, of course, has a low bitrate. In short, they are missing bazillions of channels, not least of which are expression and timing.