
>Chang’s team asked Bravo-1 to imagine saying one of 50 common words nearly 10,000 times

I dropped out of a PhD back in 2018 that was all about applying machine learning to neural signal decoding, and that quote perfectly sums up everything wrong with the state-of-the-art approach.

A bizarre prompt, an infeasibly boring task, and they somehow expect machine learning to magically smooth everything over, despite the fact that the subject isn't even generating a consistent signal to start with.
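
To make that concrete, here is roughly what "decoding" means mechanically. This is a minimal sketch with simulated data; the array sizes, the mean/variance features, and the LDA classifier are my assumptions, not what Chang's lab actually ran. Each imagined-word trial gets reduced to per-channel features and fed to an ordinary classifier over the 50-word vocabulary.

    # Minimal sketch of an imagined-word decoding pipeline (assumed, not the paper's code).
    # Shapes are placeholders, scaled down from the ~10,000 trials in the article.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_samples, n_words = 2_000, 64, 100, 50

    # Stand-in for recorded neural data: (trials, channels, time) plus a word label per trial.
    X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
    y = rng.integers(0, n_words, size=n_trials)

    # Collapse each trial to simple per-channel features (mean and variance over time);
    # real systems use things like high-gamma band power over sliding windows.
    features = np.concatenate([X_raw.mean(axis=2), X_raw.var(axis=2)], axis=1)

    # A plain linear classifier over the 50-word vocabulary.
    clf = LinearDiscriminantAnalysis()
    scores = cross_val_score(clf, features, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.3f} (chance is {1 / n_words:.3f})")

On inconsistent (here: random) signals a pipeline like this sits at chance, which is the point: the model can only smooth over so much.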




I would imagine that after about the 300th word, they'd start to lose all meaning and their associated signals would collapse much closer together. It definitely does feel like an extremely optimistic experiment. Do you think there's a good way to capture enough useful data in a lab environment, short of neural implants + microphones with 24/7 recording?
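
One way to test that "collapse" hunch offline, at least in simulation (the fatigue decay and feature dimensions below are invented assumptions): track how far apart the per-word signal centroids sit in the first half of the session versus the second half.

    # Sketch: does between-word separability shrink over the session? (simulated data only)
    import numpy as np

    rng = np.random.default_rng(1)
    n_words, reps_per_word, n_features = 50, 200, 64

    def mean_centroid_distance(X, y):
        """Average pairwise Euclidean distance between per-word centroids."""
        centroids = np.stack([X[y == w].mean(axis=0) for w in range(n_words)])
        diffs = centroids[:, None, :] - centroids[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        return dists[np.triu_indices(n_words, k=1)].mean()

    # Simulate a session where the word-specific component fades as the subject tires.
    X, y, block = [], [], []
    for rep in range(reps_per_word):
        fatigue = 1.0 - 0.8 * rep / reps_per_word     # assumed decay of signal strength
        for w in range(n_words):
            template = np.zeros(n_features)
            template[w % n_features] = 3.0 * fatigue  # toy word-specific pattern
            X.append(template + rng.standard_normal(n_features))
            y.append(w)
            block.append(rep < reps_per_word // 2)    # True = first half of session
    X, y, block = np.array(X), np.array(y), np.array(block)

    print("early-session centroid spread:", round(mean_centroid_distance(X[block], y[block]), 2))
    print("late-session centroid spread: ", round(mean_centroid_distance(X[~block], y[~block]), 2))

With this toy fatigue model the late-session centroids sit noticeably closer together; whether real subjects actually drift that way over ~10,000 repetitions is exactly what more data collection would have to answer.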


>Do you think there's a good way to capture enough useful data in a lab environment, short of neural implants + microphones with 24/7 recording?

It's a hard problem; even neural implants and mics aren't enough. I had two months to come up with some ideas about how we could get a human to produce reliable signals, and I failed completely. That was when I walked away and got a job.


I know nothing about neuroscience, but that sounded like a recipe for failure. You obviously have to use real communication, like asking the test subject to choose between food options and actually bringing the food they requested, so that the next time they are asked about food they know it's a real interaction and have to make a real decision. The same goes for everything else that is tested: make it as real as physically possible.


You are missing the point: they are not decoding semantics. The signals being decoded correspond to the imagined motor activations needed to produce the word. It's similar to asking you to mentally rehearse throwing a ball.


I don't think it is anything like simulating a mechanical action such as throwing a ball. Just saying a word alters the brain in completely different ways depending on the person. For example, saying "spider" reminds me of Spider-Man, and there is little I can do to stop that thought from happening; to someone else it may bring up something completely different, and even my own thoughts about that word may differ from one day to the next.

To actually capture words, you have a much better chance reading the brain while writing the word with a pen, because you are actually sending a signal from the brain to your hand, which is what they did here (even if he doesn't actually have a hand to move, the brain can still emit the very same commands): https://www.cnet.com/google-amp/news/brain-implants-let-para...
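
For what it's worth, a common way motor BCIs exploit that brain-to-hand signal (a generic, hedged sketch with simulated data, not the specific method of the linked study) is to regress continuous movement variables, such as pen-tip velocity, straight from the neural features with a linear model:

    # Sketch: linear decoding of 2-D pen velocity from simulated neural features.
    # Generic motor-BCI approach; not the method of the linked study.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n_timesteps, n_channels = 5_000, 96

    # Assumed ground truth: a slowly wandering 2-D velocity, with channels linearly tuned to it.
    velocity = np.cumsum(rng.standard_normal((n_timesteps, 2)), axis=0)
    velocity -= velocity.mean(axis=0)
    tuning = rng.standard_normal((2, n_channels))
    neural = velocity @ tuning + 2.0 * rng.standard_normal((n_timesteps, n_channels))

    X_train, X_test, y_train, y_test = train_test_split(
        neural, velocity, test_size=0.2, shuffle=False)
    decoder = Ridge(alpha=1.0).fit(X_train, y_train)
    print("held-out R^2 of the velocity decode:", round(decoder.score(X_test, y_test), 3))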


Grandparent here, the one who was doing the PhD.

>I don't think it is anything like simulating a mechanical action such as throwing a ball. Just saying a word alters the brain in completely different ways depending on the person

It's exactly like imagining throwing a ball. A disproportionately large amount of the motor cortex is used for facial muscle and tongue control. Look up the "cortical homunculus".

>For example, saying "spider" reminds me of Spider-Man, and there is little I can do to stop that thought from happening

These BCIs can't tell whether or not you're thinking about Spider-Man; otherwise they'd be used on terrorists to get information. Instead, they mainly look at broad firing synchronisation of M1 neurons (which indicates rest).
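
For context on what "indicating rest" looks like in practice (a hedged sketch with simulated signals; the amplitudes and the single-channel setup are invented): sensorimotor mu/beta rhythms are strong when the motor system is idle and desynchronise during real or imagined movement, so a crude rest-vs-intent detector can be built from band power alone.

    # Sketch: rest vs. imagined-movement detection from mu-band (8-13 Hz) power.
    # Simulated signals only; real pipelines add spatial filtering, artifact rejection, etc.
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(3)
    fs, duration = 250, 2.0                      # 250 Hz sampling, 2-second epochs
    t = np.arange(int(fs * duration)) / fs

    def epoch(is_rest: bool) -> np.ndarray:
        """One channel over motor cortex: strong mu rhythm at rest, suppressed during imagery."""
        mu_amp = 4.0 if is_rest else 1.0         # assumed event-related desynchronisation
        return (mu_amp * np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
                + rng.standard_normal(t.size))

    def mu_power(x: np.ndarray) -> float:
        freqs, psd = welch(x, fs=fs, nperseg=fs)
        band = (freqs >= 8) & (freqs <= 13)
        return psd[band].mean()

    rest_power = [mu_power(epoch(True)) for _ in range(20)]
    move_power = [mu_power(epoch(False)) for _ in range(20)]
    threshold = (np.mean(rest_power) + np.mean(move_power)) / 2
    print(f"mean mu power at rest: {np.mean(rest_power):.2f}, during imagery: {np.mean(move_power):.2f}")
    print(f"simple detector: label 'rest' when mu power > {threshold:.2f}")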

>To actually capture words you have much better chances reading the brain while writing the word with a pen because you are actually sending a signal from the brain to your hand

Real movement does produce much more consistent results than imagined movement, but it doesn't translate well to the target market for BCIs (people with severe motor disabilities).

>even if he doesn't actually have a hand to move, the brain can still emit the very same commands

It doesn't work like that in real life. With no feedback, we're back to square one with the "imagined movement".


Fascinating, thanks for the explanation.


Sure, a word can evoke all sorts of meanings, other brain processes, etc. The decoding principle leveraged here is exactly as I indicated: it's decoding the motor activation associated with imagining speaking a word, from the speech-motor cortex.

The study you quoted uses similar principles.


Given the diversity of pronunciation even within one local group, and that everyone learns to make the sounds in their own mechanical way, how does this generalize between individuals?


Maybe our thoughts aren't reducible to material reality?


Maybe, just like everything else in the history of mankind, it is a material reality and we just don't understand it yet. We didn't understand how to fly for millions of years; maybe it will take a few more to read a brain.


> Maybe, just like everything else in the history of mankind, it is a material reality

There are at least a gazillion things in the history of mankind that we have not been able to reduce to a material reality.


Just like people used to believe the sun was a god or something magical for centuries, and now we know it's a mass of gas undergoing nuclear fusion. In general, believing that something we don't understand happens due to magic (or whatever you want to call non-material reality) has a bad track record.


Use the phrase 'god' or 'something magical' as a synonym for 'something beyond our understanding and control' and you'll realize we're a lot closer to that level of understanding than we are to having figured everything out.

Abstraction makes it much easier to reason about complex systems, such as the world :)


Don't you have knowledge of immaterial realities, like the lines, points, and angles of geometry? How is that possible if all you've got to think with is matter?


The machinations of the brain are as immaterial as the bytes on a hard drive; which is to say, technically speaking, both are material, given that the information there is a physical property of matter.


On the contrary - the more we focus on technology and treat everything as if it's a machine, the less we understand the true nature of the human mind. That's why the general trend in literature, art and architecture is down instead of up.



