Sure. Let's say a five-second EEG segment, recorded from up to 200 electrodes. I bet that by 2017 we will not be able to accurately detect who a person would like to call out of a phonebook of 50 people. Specifically, in a setting where each person is equally likely to be called (equal prior probability), I bet the detection accuracy will not exceed 4% on average with a phonebook of 50 people. I'm talking about the BCI understanding a "call mom" thought, not detecting it through some other means like a movement (though I don't expect that to work by then either).
My point was more about precision when making a bet, but yeah, like I said, I agree in sentiment. That being said, two-bit encoding is exactly what Stephen Hawking uses with his thumb, so it's not inconceivable to use it on a neural level as a last resort, however impractical. And in some cases that kind of last resort is exactly what happens: paralyzed individuals often use their tongue to manipulate a cursor, but this can cause all sorts of problems, like abnormally enlarged tongues due to muscle growth.
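To give a concrete sense of how painful that last resort would be, here's a toy sketch (entirely my own illustration, not how Hawking's system or any real BCI actually works) of spelling text through nothing but binary yes/no choices:

```python
# Toy illustration: spelling text with nothing but binary choices,
# by repeatedly halving the candidate alphabet.
# At most ceil(log2(27)) = 5 choices are needed per character.
ALPHABET = list("abcdefghijklmnopqrstuvwxyz ")

def choices_for(char):
    """Return the sequence of 0/1 choices that narrows the alphabet to `char`."""
    lo, hi = 0, len(ALPHABET)
    target = ALPHABET.index(char)
    path = []
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if target < mid:
            path.append(0)   # pick the lower half
            hi = mid
        else:
            path.append(1)   # pick the upper half
            lo = mid
    return path

message = "call mom"
total = sum(len(choices_for(c)) for c in message)
print(f"{total} switch presses to spell {message!r} "
      f"(~{total / len(message):.1f} per character)")
```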
And I work in a lab that does BMI work, and we couldn't do the "call mom" command in the sense lars means, even though we use more spatially precise recordings (multi-electrode, chronically implanted arrays). So I'm with him on that. OTOH, we can do some cool things like control a computer cursor or TV remote with motor commands like "left-up", etc. Subjects reported that after a while they would cease to "translate" thoughts from movement commands into "BMI" commands like "change channel". It stands to reason they might be able to do the phonebook thing in that case.
Of course, few people find it worthwhile to get chronically implanted electrodes placed in their motor cortex, soooo.
Why would "call mom" be more difficult than "Left left up up"? I'm guessing left, right, up, down, would map to certain EEG patterns. Why would it be more difficult to map call and mom to their patterns?
It seems to me that if you can map patterns for four directions, you should also be able to map patterns for 50 different phone book entries and several verbs.
I'm not being clear: I don't think that thinking the words "left left up up" would be detectable through EEG.
When I say detecting movement, I mean things like imagining moving a hand, a foot, or the tongue. These movements engage distinct areas of the brain, so you can distinguish between them by looking at where on the scalp the change occurred. Current methods do this with close-to-perfect accuracy.
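To make "where on the scalp the change occurred" concrete, here is a minimal sketch of that kind of pipeline on purely synthetic data: per-channel band power as features, fed to a linear classifier. The channel roles (C3, C4, Cz are the usual 10-20 motor-cortex sites) and the simulation are made up for illustration, not a real recording setup.

```python
# Minimal sketch: classify imagined movements purely from *where* on the
# scalp the signal changes, using per-channel mu-band (8-13 Hz) power.
# Everything below is simulated.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, n_epochs, n_channels, n_samples = 250, 200, 3, 5 * 250   # 5 s epochs
CHANNELS = ["C3", "C4", "Cz"]   # left hand / right hand / foot areas

def simulate_epoch(label):
    """Background noise plus extra 10 Hz activity on the class's channel."""
    x = rng.normal(0, 1, (n_channels, n_samples))
    t = np.arange(n_samples) / fs
    x[label] += 2.0 * np.sin(2 * np.pi * 10 * t)   # channel index = class
    return x

labels = rng.integers(0, n_channels, n_epochs)
epochs = np.stack([simulate_epoch(y) for y in labels])

def band_power(epoch, lo=8, hi=13):
    """Mean 8-13 Hz power for each channel of one epoch."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs)
    band = (f >= lo) & (f <= hi)
    return pxx[:, band].mean(axis=1)

features = np.array([band_power(e) for e in epochs])
scores = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5)
print(f"imagined-movement accuracy: {scores.mean():.2f}")
```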
However, you probably couldn't use scalp location if you wanted to distinguish "call mom" from "call John", as they would presumably activate the same area of the brain. There are of course other things one could look at, and I obviously can't prove that it can't be done. But at the same time I have never seen any kind of positive result for an EEG classification task at this level of detail.
Well, all you would really need are a few easily detectable signals. So if "move hand", "move foot", "move eyes", etc. are all easily detectable, then you have the basis for an interface. After that it's just a matter of building the interface around those limitations. It wouldn't be mind reading (not even close), but it seems like you should be able to get a reasonably good UI going.
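That is essentially how existing assistive interfaces are built: a handful of reliable commands driving a scanning menu. A toy sketch (the command names and phonebook are hypothetical) using just two detectable commands, "next" and "select":

```python
# Toy sketch of a UI built around a tiny command set: the user only ever
# issues "next" (say, imagined hand movement) and "select" (imagined foot
# movement); the interface does the rest by scanning through options.
PHONEBOOK = ["Mom", "John", "Alice", "Work", "Pizza place"]

def pick_contact(command_stream):
    """Scan through PHONEBOOK, advancing on 'next' and dialing on 'select'."""
    index = 0
    for command in command_stream:
        if command == "next":
            index = (index + 1) % len(PHONEBOOK)
            print(f"highlighted: {PHONEBOOK[index]}")
        elif command == "select":
            return PHONEBOOK[index]
    return None

# In a real system the commands would come from the EEG classifier;
# here we just replay a fixed sequence that lands on "John".
print("calling:", pick_contact(["next", "select"]))
```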
I doubt it. Firstly, you would have to provide a way to distinguish real movements from merely intended ones. Sensors on a few muscles might help, but sticking them on your skin every morning would not do much for the goal of a "reasonably good UI".
Secondly, I am not sure one can learn to think about specific movements of body parts almost unconsciously. Chances are this will keep requiring too much of one's attention.
Thirdly, I think the temporal resolution will be awful. Even if you can learn to think about, say, 3 movements simultaneously, I doubt you will get above a byte per second of bandwidth. Written text carries around a bit per character, so that would likely be way below even slow speech.
Most of this is opinion/guessing, so feel free to correct things.
Rather than thinking "call mom", could you instead think of moving your arm in a certain direction and pushing something? Instead of literal mind-reading, you would essentially have a touchscreen without the physical touch.
It definitely isn't possible now, and I wouldn't expect it in five years either. If you look at [1], you can see the areas of the motor cortex. With today's methods we can do an acceptable job of separating, for example, hand from foot movement. These methods work in the spatial domain, and do so nearly perfectly. As you can see, there is a fair amount of distance between those areas on the scalp, while the fingers all sit in the same area.
So you couldn't distinguish individual fingers with today's technology. If it were ever to be done, I'd expect it to be done with the same algorithms we use today, but with much denser electrodes. If I were to bet, I'd bet that this is physically impossible, but I'm not as confident about that as I am about saying we won't be able to detect who I want to call.
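For reference, "work in the spatial domain" typically means something like Common Spatial Patterns: learning spatial filters whose output variance separates two classes of epochs. Here's a bare-bones sketch on synthetic data; it's an illustration of that family of algorithms, not of any particular lab's pipeline.

```python
# Bare-bones Common Spatial Patterns (CSP): find spatial filters w that
# maximize var(w @ x) for one class relative to the other, which is what
# "looking at the spatial domain" boils down to for two-class motor imagery.
# Synthetic data; a real pipeline would add band-pass filtering, etc.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 100, 8, 500

def simulate(boost_channel):
    """Epochs in which one channel carries extra variance (the 'active' area)."""
    x = rng.normal(0, 1, (n_epochs, n_channels, n_samples))
    x[:, boost_channel, :] *= 3.0
    return x

class_a = simulate(boost_channel=2)   # e.g. "hand" area more active
class_b = simulate(boost_channel=5)   # e.g. "foot" area more active

def mean_cov(epochs):
    """Average per-epoch spatial covariance, normalized by trace."""
    covs = [e @ e.T / np.trace(e @ e.T) for e in epochs]
    return np.mean(covs, axis=0)

Ca, Cb = mean_cov(class_a), mean_cov(class_b)

# Generalized eigenproblem Ca w = lambda (Ca + Cb) w: the eigenvectors with
# the most extreme eigenvalues are the most class-discriminative filters.
eigvals, eigvecs = eigh(Ca, Ca + Cb)
filters = eigvecs[:, [0, -1]].T        # one filter favoring each class

def features(epoch):
    """Log-variance of the spatially filtered signals (the usual CSP feature)."""
    return np.log(np.var(filters @ epoch, axis=1))

fa = np.array([features(e) for e in class_a])
fb = np.array([features(e) for e in class_b])
print("class A mean features:", fa.mean(axis=0).round(2))
print("class B mean features:", fb.mean(axis=0).round(2))
```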