> Rather than train the AI to recognize whole words, the researchers created a system that decodes words from smaller components called phonemes. These are the sub-units of speech that form spoken words in the same way that letters form written words. “Hello,” for example, contains four phonemes: “HH,” “AH,” “L” and “OW.”
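To make the phoneme idea concrete, here's a minimal Python sketch of reassembling words from a decoded phoneme stream. The dictionary entries and the greedy matcher are purely illustrative (the real system decodes phonemes from neural activity and feeds them through a language model, not a lookup table):

```python
# Toy illustration only: a tiny ARPAbet-style pronunciation dictionary
# and a greedy matcher that reassembles words from a phoneme stream.
PRONUNCIATIONS = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}

def words_from_phonemes(phonemes):
    """Greedily match the longest known phoneme sequence at each position."""
    words, i = [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):  # try longest match first
            if tuple(phonemes[i:j]) in PRONUNCIATIONS:
                words.append(PRONUNCIATIONS[tuple(phonemes[i:j])])
                i = j
                break
        else:
            i += 1  # skip an unrecognized phoneme
    return words

print(words_from_phonemes(["HH", "AH", "L", "OW", "W", "ER", "L", "D"]))
# -> ['hello', 'world']
```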
This is nifty. Also I'm oddly comforted by the fact that this system doesn't "read thoughts". It just maps slightly upstream from actual speech to the relevant speech/motor production regions of the brain. So no immediate concern for thought hacking...
Separately, this makes me wonder what such a system would look like for deaf people (with signing ability) who have lost their ability to move their arms. I imagine, optimistically, that one could just attach the electrodes to a slightly different area in the motor cortex and then once again train an AI to decode intent to signs (and speech). So basically the same system?
I think not all deaf people have the same mapping from words and their precise phonemes to the typical expected muscle movements. If this mapping differs from a hearing person's, this system would not be that useful on the speech-interpretation side. On the other hand, I think we've had pretty good gesture recognition for a while. I bet it's possible to decode individual signs right now, but sign language also has a different grammar from typical spoken English, and a lot of meaning is context-based, so it might be tricky in that way; it's more of a translation problem.
Oh yeh, definitely. I meant more specifically: might it be possible to capture the electrical signals (much like this current system) from the parts of the motor cortex that create the series of muscle movements that form a 'sign', create a 2D projection/display of those muscle movements, and then, downstream, feed that into a gesture recognition solution as you mention (a big downstream challenge)?
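Sketching the stages out (everything below, the electrode count, the linear maps, the placeholder classifier, is invented for the thought experiment, not taken from the article):

```python
import numpy as np

# Hypothetical three-stage pipeline: cortex -> joint angles -> 2D keypoints -> sign.
rng = np.random.default_rng(0)
W_decode = rng.normal(size=(22, 253))      # assumed: 253 electrode channels -> 22 joint angles
W_project = rng.normal(size=(2 * 21, 22))  # assumed: joint angles -> 21 hand keypoints in 2D

def decode_motor_cortex(ecog_frame):
    """Stage 1 (assumed): electrode signals -> intended joint angles,
    analogous to the article's cortex-to-articulator mapping."""
    return W_decode @ ecog_frame

def project_to_2d(joint_angles):
    """Stage 2 (assumed): joint angles -> 2D hand keypoints, i.e. the
    'projection/display' of the intended muscle movements."""
    return (W_project @ joint_angles).reshape(21, 2)

def recognize_sign(keypoint_frames):
    """Stage 3 (assumed): conventional gesture recognition over the
    keypoint sequence; the big downstream challenge lives here."""
    return "HELLO" if keypoint_frames else ""

frames = [project_to_2d(decode_motor_cortex(rng.normal(size=253))) for _ in range(30)]
print(recognize_sign(frames))  # placeholder output from the stub classifier
```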
It sounds like a lot. It was just a thought experiment about how such spinal blocks/paralysis would affect deaf people and how they'd be able to continue to communicate with their deaf partners. Definitely niche, but nonetheless interesting, and I think possible using the same general approach as the OP article.
But yeh, translating gestures to speech is a distinct and incredibly challenging problem on its own, as you allude to. Perhaps in the future they can tap into signing/speaking translators' brains and have AI learn those mappings in a fuzzy way.
> I think not all deaf people have the same mapping from words and their precise phonemes to the typical expected muscle movements
In fluent sign language, there is something analogous to phonemes. In linguistics these days, they're just called phonemes, and considered equivalent to spoken language phonemes. They're a fixed class of shapes and locations. They combine in certain ways that make up morphemes, which then make up words. It does work very similarly, perhaps identically, to spoken language.
The distribution of handshapes and the way they interact resembles spoken language. For example, it's somewhat hard to say "strengths", and people often produce a slurred "strengfs". The way it slurs together is rather predictable. It's very hard to say "klftggt", so it just doesn't occur in natural language. Same with sign languages and hard-to-sign combinations.
Phonemes have an exact realization, but they also exist relative to each other; the distance and direction between them is important. This is probably part of why an American can fairly easily understand New Zealand English, despite nearly all of the vowels being different. Another analogy: in tonal languages, if there's a low flat tone, then three rising tones, then a low flat tone, that final low tone may be quite a bit higher than the first low tone, but it will still be interpreted as a low tone, because it is judged relative to the previous rising tone. Vowel qualities besides tone work the same way. And so do hand gestures.
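As a toy illustration of that relativity, you can classify a pitch as "low" or "high" against a running local baseline instead of an absolute threshold (all numbers invented):

```python
# Relative tone classification: each pitch is judged against the mean
# of the few pitches before it, not against a fixed cutoff.
def classify_tones(pitches_hz, window=3):
    labels = []
    for i, p in enumerate(pitches_hz):
        context = pitches_hz[max(0, i - window):i] or [p]
        baseline = sum(context) / len(context)
        labels.append("low" if p <= baseline else "high")
    return labels

# A final "low" at 140 Hz after three rising tones is still heard as low,
# even though it sits above the utterance-initial low at 120 Hz.
print(classify_tones([120, 130, 145, 160, 140]))
# -> ['low', 'high', 'high', 'high', 'low']
```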
There is a lot of variation by dialect/region/community in sign languages. More than in a language like English. This makes it more complicated, but it shouldn't be insurmountable. And of course, not all deaf people speak sign languages as their native language. They would struggle just as people who learn any other language later in life do.
>I think not all deaf people have the same mapping from words and their precise phonemes to the typical expected muscle movements. If this mapping differs from a hearing person's, this system would not be that useful on the speech-interpretation side.
Actually, I think it's been found recently that the idea that everyone has an "inner voice" is a presumption by those who do. Apparently only around 25% of the population actually have an inner monologue.
The topic got revitalised again recently, resulting in much commotion from people on either side of the fence: shock that someone might never hear their own voice or talk to themselves inside their head, and, for somebody who has only known silence, shock that someone might have this voice talking to them.
I think there are a couple studies that back it up, but a lot is anecdotal as people describe their side of the fence.
I have an inner voice myself, but I have thoughts and sensations that my inner voice does not voice, and some where it does.
But as far as I know no motor signals are sent to my mouth when I talk to myself (i.e. internal monologue), so this wouldn't read my thoughts. I'm not sure what you're saying.
An ML system which skipped the brain and just read the physical movements and converted them to voice would go a long way.
(I understand there are camera and glove based apps that can do this, but I'm not sure what the accuracy is like)
The woman from the title is from Regina, Saskatchewan, Canada, and the CBC did a feature on her story. Her husband is pictured at her side in a Saskatchewan Roughriders tee shirt and Toronto Blue Jays ball cap, having dressed with his Canuckness set to 11.
The 5th amendment protects you from being a witness against yourself [0]. So to me it seems pretty clear that in the US this would not be allowed.
But then again, it seems as though being forced to reveal your password is not necessarily a violation of the 5th amendment [1]. I can't quite understand why the Supreme Court hasn't made a decision on this one yet; there are a lot of conflicting decisions now.
0: `nor shall be compelled in any criminal case to be a witness against himself`
The thought process behind being compelled to provide a password is that you can be compelled to provide it to an (indiscriminate) computer, which is currently treated much like being subpoenaed: opposing it would be contempt of court or obstruction of justice.
Forcing someone to do something without payment and to their own detriment should run afoul of the 13th and 5th amendments, respectively. But if it was already reasonably and obviously known that an encrypted drive contained CSAM or national security secrets, and you had already been duly convicted of that, then the 13th would not apply ("except as a punishment for crime whereof the party shall have been duly convicted"), and you could be coerced into the "labor" of decryption. Double jeopardy would seem to apply, though, so you could not be further charged for anything found once it was decrypted.
I suggest making your passwords themselves incriminating, just to throw in another constitutional hiccup.
Not that any of this would matter in practice, but it is quite a legal thought experiment.
Ah but isn't the third party doctrine predicated on the data being voluntarily given to a third party? "a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties."
I'm not sure that right will protect you. The right applies to "something you know" not "something you have". You have a brain.
A good analogy is smartphone passwords. Authorities can't make you share your password (something you know), but they can make you unlock your phone with a fingerprint (something you have).
That is an interesting thought experiment. IANAL, but I'm pretty sure putting something on your head to essentially coerce information out of you would be a violation of the right to remain silent. I don't know whether it would fall on the fingerprint or the spoken-password side in terms of being subject to a search warrant.
Fortunately this tech is currently very person-specific and has to be trained on each person. So to thwart it you'd just have to think "applesauce" over and over.
In France it's common for the court to ask a psychoanalyst or a psychologist (not a psychic) to tell what the defendant was "really" thinking and feeling, in order to decide whether the defendant was responsible for his actions or not.
“There is no crime or offense when the defendant was in a state of insanity at the time of offense, or when he was constrained by a force he could not resist.”
This is also possible (not "common", but possible and no one bats an eye as it happens multiple times a year) in America, it's called an insanity defense and it requires expert testimony by a psychologist.
Yes, just like it prevents the prosecution from bringing in a cop that will testify against you to the court.
Yes, just like it prevents the prosecution from bringing in a hair expert that will testify to the court that your hair was found at the crime scene.
Yes, just like it prevents the prosecution from bringing in a blood spatter expert to testify to the court that only you could have made this specific spatter.
As you can see, truth or efficacy isn't a prerequisite for being admitted by the court.
Currently, they can compel you to give up your fingerprint, but can't force a password. So in that respect I think thoughts would count as a biologic rather than a generated phrase.
I noticed that she selects characters by using her glasses as a pointing device and moving her head. Surely they could use an eye tracking device like Tobii instead?
Maybe there is some medical reason not to for her. But as a healthy user of head tracking for gaming, I would rather have head tracking, so I can move my eyes without interacting with the screen.
Maybe it also doesn't work well with multiple people watching?
ML is part of the larger field of artificial intelligence, which includes all techniques for making a computer do anything we consider part of intelligence.
It is not the same species of thing: these systems have to be trained on every individual brain. They are not reading some objective signal that is the same for everyone.
> For weeks, Ann worked with the team to train the system’s artificial intelligence algorithms to recognize her unique brain signals for speech. This involved repeating different phrases from a 1,024-word conversational vocabulary over and over again until the computer recognized the brain activity patterns associated with all the basic sounds of speech.
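As a toy sketch of why that per-subject training matters, here is a synthetic experiment in which a decoder fit to one "brain" scores near chance on another. The feature dimensions, random prototypes, and logistic regression are all invented for illustration and bear no resemblance to the real ECoG pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
PHONEMES = ["HH", "AH", "L", "OW"]

def synthetic_subject():
    """Each 'subject' gets a private random mapping from phoneme to
    neural-feature pattern, standing in for individual brain anatomy."""
    prototypes = rng.normal(size=(len(PHONEMES), 64))
    X = np.vstack([prototypes[i] + 0.3 * rng.normal(size=(200, 64))
                   for i in range(len(PHONEMES))])
    y = np.repeat(np.arange(len(PHONEMES)), 200)
    return X, y

X_a, y_a = synthetic_subject()
X_b, y_b = synthetic_subject()

clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)
print("subject A accuracy:", clf.score(X_a, y_a))  # ~1.0
print("subject B accuracy:", clf.score(X_b, y_b))  # ~0.25, i.e. chance
```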
To use this type of system for lie detection, if such a thing is possible, you'd have to get each subject to give you thousands of example statements with truth/lie labels. This obviously defeats the purpose, and also doesn't really seem possible - does lying for a training exercise produce the same brain patterns as lying to actually cover something up? Probably not.
I would presume that someone being subject to a lie detector may have different incentives than those running the lie detector, and they may intentionally taint the data.
It still seems trivial to game. Make up some "tell" for your lies. Maybe every time you need to lie during the training period you squeeze your muscles, or think of hard math problems. It would be very hard for the AI not to fit to this fake "tell."
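Here's a toy version of that attack. The features and model are invented, but they show how a single deliberate "tell" can dominate what the classifier learns:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1000
X = rng.normal(size=(n, 32))             # 31 uninformative "brain" features
y = rng.integers(0, 2, size=n)           # 0 = truth, 1 = lie (random labels)
X[:, 0] = y + 0.1 * rng.normal(size=n)   # feature 0: the deliberate muscle-squeeze tell

model = LogisticRegression().fit(X, y)
weights = np.abs(model.coef_[0])
print("weight on the tell:", weights[0])           # large
print("largest other weight:", weights[1:].max())  # small
# At test time, just stop squeezing and the "lie detector" sees nothing.
```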
There are new infrared sensors in development that should see much more complex behavior in the brain than, say, EEG. I'm sure even those could be gamed theoretically, but it will clearly become increasingly difficult as the technology improves.
I was thinking it might be something everybody does in highschool or something. Every morning you spend an hour training your AI shadow. Everybody gets one. So useful, like a cell phone.
If that's the sensitivity, what's the specificity? How well does it translate from the population it's trained on to other populations? In what contexts is the type of lying it detects useful to detect? I would assume using it on someone in a criminal investigation context without their permission would be a 5th and 6th amendment violation (as it would almost entirely subvert the usefulness of legal representation).
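For concreteness, a quick worked example (all numbers invented) of why specificity and base rates matter as much as sensitivity:

```python
# Confusion-matrix arithmetic: 1000 statements, only 50 of them lies,
# run through a detector that is roughly 95% sensitive and 90% specific.
tp, fn = 47, 3     # 47 of 50 lies flagged
tn, fp = 855, 95   # 855 of 950 truths passed, 95 falsely flagged

sensitivity = tp / (tp + fn)  # lies correctly flagged
specificity = tn / (tn + fp)  # truths correctly passed
precision = tp / (tp + fp)    # flagged statements that really are lies

print(sensitivity, specificity, round(precision, 2))
# -> 0.94 0.9 0.33: two-thirds of "detected lies" are actually truths
```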
That seems like a pretty big leap. This doesn't require language understanding, just a translation between muscle movements and sounds. Lie detection is way more complicated.
There can never be a 100% accurate lie detector, only a 100% "thinks they're telling the truth" detector. Ultimately human memory is deeply flawed and we're capable of having entirely false memories and swearing on our lives to things that never occurred. A machine which can perfectly read our brains can only get this messy imperfect mix.
Even in the sense of political intrigue, is it so hard to imagine someone so brainwashed they truly believe the lie they are telling you?
No, this is not conceptually related to lie detection. Yes, it uses ML to decode something, but that's where the similarity ends. This is decoding the patterns of brain activity used to generate speech.