> A visual cortical prosthesis (VCP) has long been proposed as a strategy for restoring useful vision to the blind, under the assumption that visual percepts of small spots of light produced with electrical stimulation of visual cortex (phosphenes) will combine into coherent percepts of visual forms, like pixels on a video screen. We tested an alternative strategy in which shapes were traced on the surface of visual cortex by stimulating electrodes in dynamic sequence. In both sighted and blind participants, dynamic stimulation enabled accurate recognition of letter shapes predicted by the brain’s spatial map of the visual world. Forms were presented and recognized rapidly by blind participants, up to 86 forms per minute. These findings demonstrate that a brain prosthetic can produce coherent percepts of visual forms.
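The dynamic-tracing idea in the abstract is easy to picture in code: rather than flashing every phosphene of a letter at once, the electrodes are stimulated one after another along the letter's strokes. Here is a minimal Python sketch of that scheduling idea; the 3x3 electrode grid, the 50 ms dwell time, and the "Z" stroke order are all my own illustrative assumptions, not details from the paper.

```python
# Toy sketch of "dynamic" stimulation: instead of activating all
# electrodes for a letter simultaneously, stimulate them in sequence,
# tracing the letter's strokes over time.

def trace_schedule(stroke_points, dwell_ms=50):
    """Turn an ordered list of (x, y) electrode positions into a
    (start_time_ms, electrode) stimulation schedule."""
    return [(i * dwell_ms, pt) for i, pt in enumerate(stroke_points)]

# The letter "Z" on a hypothetical 3x3 grid, traced as
# top bar -> diagonal -> bottom bar.
Z_STROKE = [(0, 0), (1, 0), (2, 0), (1, 1), (0, 2), (1, 2), (2, 2)]

schedule = trace_schedule(Z_STROKE)
# 7 points at 50 ms each: the last stimulation starts at 300 ms,
# so one form takes well under a second to present.
```

With presentation times in this range, rates like the abstract's "up to 86 forms per minute" become plausible, though the real timing parameters are of course the paper's, not this sketch's.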
Years ago I read about something like this: a blind man was even able to drive a car (I think it was the researcher's) around a parking lot. IIRC it had a limited framerate, so he wouldn't have been able to actually drive anywhere with it.
I find the encoding of such a signal to be fascinating. Somewhere someone has mapped out the 'protocol' of the eye-brain interface! I would love to know more.
> Somewhere someone has mapped out the 'protocol' of the eye-brain interface!
I think it's more accurate to say they've tinkered their way into figuring out a few basic commands that can make the system predictably do a handful of things. Still no color, very limited resolution, and low framerate as you mentioned.
We've probably got over 99% more of that 'protocol' yet to discover. But at least we've got our foot in the door.
Reminded me of this NPR story (https://www.npr.org/programs/invisibilia/378577902/how-to-be...) in which they show that a blind person can technically see through hearing (echolocation), and then pose the (philosophical) question: what is seeing? Pretty fascinating.
I think seeing is pretty easy to define, it's the decoding of high-density two-dimensional information encoded in a wave. High-density differentiates it from feeling/smelling, and two-dimensional differentiates it from hearing/smelling. Is that definition missing anything relevant?
Yes: wide acceptance. Google the definition of seeing; does it resemble what you wrote?
Ask a lot of people what seeing is, would they give the same definition as yours?
Historically, seeing has been mostly regarded as what we are capable of doing with our eyes.
Also, in your definition, what is doing the decoding? Is the retina decoding? The optic nerve? Each individual cell, or the individual neurons? And what does it mean to decode?
Definitions require agreement, not just a seemingly factual description. At least if you want the definition to be accepted. Which is in the end what defines a definition.
I am totally convinced that we will essentially invent telepathy within the next few decades.
We've already made strides in interacting with a computer via brain signals; if we can build simple text interfaces (like this) that connect directly to the brain, we'll have all the pieces needed to silently send text messages using only thought.
My mind immediately goes to horrible implications like telepathic spam. Conversations getting interrupted with ads, screaming directly into your brain.
> Leela: Didn't you have ads in the 20th century?
>
> Fry: Well, sure, but not in our dreams. Only on TV and radio. And in magazines and movies and at ball games, on buses and milk cartons and T-shirts and bananas and written on the sky. But not in dreams. No, sir-ee!
I think you could argue that we already are. Imagine you're an alien who discovers Earth. You see all these humans everywhere and wonder how they might be related. So you show yourself to a big crowd somewhere like Beijing or Delhi. You immediately see activity from all over the world. Looks like a hive mind to me.
The difference you're describing is how far apart our brains are. This strikes me as an optimization over a phenomenon that the internet has already facilitated.
There is no incentive for the brain-interface companies to offer an opt-in device. It will be opt-out (though it may take an AI lawyer to grok the privacy policy).
I totally agree, I think we're going to have a multi-decade period of technological improvement in this area from the first useful implementation to full sensory immersion.
Thank you for posting! It always fascinates me that we're still figuring out how the brain works. I always get the feeling there is so much left to discover about the human body :)
For those who haven't read it, the 'Wait But Why' extensive blogpost on Neuralink (warning: it's super long; I think the word count would qualify it as a novel) is probably the most fascinating thing I've read in the last 5 years:
A lot of it is speculation by necessity, and I'm sure some is outdated now, but it's also packed with facts and the reasoning seems sound. If anyone knows whether any of it has been debunked, I would be interested in that as well.
I hope to see a non-invasive way to do this soon, whether via nanobots or some form of magnetic stimulation. Our hard disks got smaller years ago; it's time for MRI and transcranial stimulation to follow.