I know nothing about neuroscience, but that sounded like a recipe for failure to me. You obviously have to use real interactions: ask the test subject to choose between foods and actually bring the food they requested, so that the next time they are asked about food choices they know it's a real interaction and have to make a real decision. The same goes for everything else that is tested; make it as real as physically possible.
You are missing the point: they are not decoding semantics. The signals being decoded correspond to the imagined motor activations needed to produce the word. It's similar to asking you to mentally rehearse throwing a ball.
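To make that concrete, here's a toy sketch of what "decoding imagined motor activation" means in practice: a plain classifier over motor-cortex features, where the labels happen to be words but the inputs are activation patterns, not meanings. Everything here (array shapes, random data, the choice of LDA) is made up for illustration; it is not the actual pipeline from the article.

    # Toy sketch: classify which word a subject *imagined speaking* from
    # motor-cortex features. X is a stand-in for, e.g., binned firing rates
    # from speech-motor electrodes; y is the word label per trial.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_features, n_words = 200, 64, 5   # assumed sizes
    X = rng.normal(size=(n_trials, n_features))  # fake neural features
    y = rng.integers(0, n_words, size=n_trials)  # fake word labels

    # The decoder only ever sees motor activation patterns, never "meaning";
    # any off-the-shelf classifier over those patterns illustrates the idea.
    clf = LinearDiscriminantAnalysis()
    print(cross_val_score(clf, X, y, cv=5).mean())  # ~chance on random data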
I don't think it is anything like simulating a mechanical action like throwing a ball. Just saying a word affects brains in completely different ways depending on the person. For example, saying "spider" reminds me of Spider-Man, and there is little I can do to stop that thought from happening; to someone else it may bring up something completely different, and even my own associations with that word may differ from one day to the next.
To actually capture words, you have a much better chance reading the brain while writing the word with a pen, because you are actually sending a signal from the brain to your hand. That is what they did here (even though he doesn't actually have a hand to move, the brain can still emit the very same commands): https://www.cnet.com/google-amp/news/brain-implants-let-para...
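For what it's worth, the usual trick behind that kind of motor BCI is continuous decoding: regress movement (e.g. pen-tip velocity) from neural firing rates, then integrate to get a trajectory. A rough sketch with synthetic data is below; the linked work actually decoded attempted handwriting with a recurrent network, so treat this only as the general idea.

    # Rough sketch of continuous motor decoding: linearly regress pen-tip
    # velocity from (fake) firing rates, then integrate into a trajectory.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(2)
    T, n_units = 500, 96                        # timesteps, assumed channel count
    vel = np.cumsum(rng.normal(size=(T, 2)), axis=0) * 0.01  # fake (vx, vy)
    W = rng.normal(size=(n_units, 2))           # fake neural tuning
    rates = vel @ W.T + rng.normal(scale=0.5, size=(T, n_units))

    model = Ridge(alpha=1.0).fit(rates[:400], vel[:400])   # calibration data
    traj = np.cumsum(model.predict(rates[400:]), axis=0)   # decoded pen path
    print(traj[:3])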
>I don't think it is anything like simulating a mechanical action like throwing a ball. Just saying a word affects brains in completely different ways depending on the person
It's exactly like imagining throwing a ball. A disproportionately large amount of the motor cortex is used for facial muscle and tongue control. Look up the "cortical homunculus".
>For example, saying "spider" reminds me of Spider-Man, and there is little I can do to stop that thought from happening
These BCIs can't tell whether or not you're thinking about Spider-Man; otherwise they'd be used on terrorists to extract information. Instead, they're mainly focused on broadly fitting the synchronisation of M1 neurons (which indicates rest).
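For a rough picture of what that rest signal looks like: at rest the sensorimotor mu rhythm (~8-12 Hz) is strong, and movement or motor imagery desynchronises it, so a simple band-power measure already separates "rest" from "active". Everything below (the synthetic signal, sampling rate, threshold) is invented for illustration.

    # Toy illustration of mu-band synchronisation as a "rest" marker.
    import numpy as np
    from scipy.signal import welch

    fs = 250                                   # assumed sampling rate (Hz)
    t = np.arange(0, 2.0, 1 / fs)
    # Synthetic "resting" channel: strong 10 Hz mu rhythm plus noise.
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(1).normal(size=t.size)

    f, psd = welch(eeg, fs=fs, nperseg=fs)
    mu_power = psd[(f >= 8) & (f <= 12)].mean()
    rest = mu_power > 0.05                     # arbitrary; calibrated per user in reality
    print(f"mu-band power: {mu_power:.3f} -> {'rest' if rest else 'active'}")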
>To actually capture words, you have a much better chance reading the brain while writing the word with a pen, because you are actually sending a signal from the brain to your hand
Real movement does produce much more consistent results than imagined movement, but it doesn't translate well to the target market for BCIs (people with severe motor disabilities).
>even though he doesn't actually have a hand to move, the brain can still emit the very same commands
It doesn't work like that in real life. With no feedback, we're back to square one with the "imagined movement".
Sure, a word can evoke all sorts of meanings, other brain processes, etc. But the decoding principle leveraged here is exactly as I described: it's decoding the motor activation associated with imagining speaking a word, from the speech-motor cortex.
Given the diversity of pronunciation even within one local group, and the fact that everyone learns to make the sounds in their own mechanical way, how does this generalize between individuals?