Disclaimer: I work in a similar lab (my adviser was advised by the senior author on the paper, who runs the lab) on a brain-machine interface project funded by DARPA.
What makes this awesome from a scientific point of view:
1) 2-dimensional decoders, where an individual controls a mouse cursor with their mind, have existed for a while. What makes this really cool is that a robot arm has quite a few more degrees of freedom, but they're basically using the same hardware. So the influence of the software/algorithms in this case is pretty fundamental (see the sketch after this list). There are a lot of papers on improved neural-decoding methods that at first glance appear dry and boring, not the 'sexy' kind of science with huge breakthroughs, but they end up being crucial to good performance as the complexity of the robot grows.
2) One participant was implanted with the electrode array 5 years before the study, and had the injury 10 years before that. Usually the signals don't last that long in monkey models. And we know that your cortex changes with disuse, so it's awesome that they were able to get usable signals so many years later.
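To make the software point in 1) concrete, here's a minimal sketch (purely illustrative, not from the paper; all data, names, and shapes are made up) of why the output dimensionality is a software choice: the same 96-channel recording can be regressed onto however many kinematic dimensions the device needs.

    import numpy as np

    def fit_linear_decoder(firing_rates, kinematics):
        # Least-squares map from binned firing rates (T x 96 channels)
        # to kinematics (T x D). D is whatever the device needs: 2 for
        # a cursor, 7+ for an arm; the recording hardware is unchanged.
        X = np.hstack([firing_rates, np.ones((len(firing_rates), 1))])  # bias column
        W, *_ = np.linalg.lstsq(X, kinematics, rcond=None)
        return W

    def decode(firing_rates, W):
        X = np.hstack([firing_rates, np.ones((len(firing_rates), 1))])
        return X @ W  # (T x D) decoded kinematics

    # Same synthetic 96-channel recording, two output dimensionalities:
    rates = np.random.poisson(5.0, (1000, 96)).astype(float)
    W_cursor = fit_linear_decoder(rates, np.random.randn(1000, 2))  # 2-D cursor
    W_arm = fit_linear_decoder(rates, np.random.randn(1000, 7))     # 7-DoF arm

The real decoders are far more sophisticated than plain regression, but the division of labor is the same: the electrodes stay put, only the software grows.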
A couple of things still need to get better:
First, these decoders aren't perfect yet. What the Wired article didn't tell you is that the woman's implant achieved successful trials only around 20-50% of the time (still awesome compared to no interaction at all)[1].
Second, incorporating sensory feedback is another challenge that is really hard to address, but also very important. Imagine building a robotic controller in which the only information you received about the robot's position was visual. That's the way this works. If we find a reasonable way to mimic sensors of muscle extension (a proxy for joint angle), then we can create more controllable devices.
[1] http://www.nature.com/news/mind-controlled-robot-arms-show-p...
The feedback is something that a couple of friends of mine and I are working on.
A few of the things we've talked about are:
1) Heat. Patches that can be placed on areas of your skin and heat up or cool down quickly as a way of giving force feedback. This was just something we talked about briefly, and I really don't know how workable it is. I don't know offhand of any materials that can rapidly go from hot to cold and back again.
2) Electrodes. This is difficult because skin conditions change. There is [from what we've looked at] a smallish envelope between "I can feel this" and "this is potentially dangerous". The envelope can change with skin conditions [and people]. I've built a little prototype that sits on my tongue and gives me feedback, but that is clunky, and something we'd like to avoid.
3) Vibration. The problem here is that it turns out to be a bit tough to judge different levels of vibration; it ends up as an on/off sensation. What we've talked about is using different patterns of on-off, or a variable frequency of on-off. (Here is one of the motors walking through the different freqs. It sounds kind of funny: http://www.youtube.com/watch?v=C_bGb2Xij8I&feature=youtu...)
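Roughly, the on-off frequency idea in code (just a sketch; set_motor is a placeholder for whatever actually drives the motor):

    import time

    def buzz(intensity, duration_s=1.0, set_motor=lambda on: None):
        # Encode a 0..1 intensity as the FREQUENCY of on-off pulses
        # rather than as vibration amplitude: amplitude mostly registers
        # as just "on" or "off", but pulse rate is easy to distinguish.
        freq_hz = 1.0 + intensity * 9.0  # map 0..1 onto 1..10 pulses/sec
        period = 1.0 / freq_hz
        deadline = time.time() + duration_s
        while time.time() < deadline:
            set_motor(True)
            time.sleep(period / 2)
            set_motor(False)
            time.sleep(period / 2)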
----
There is a ton of interest in this stuff, at least in my circle.
This was something I put together over an evening, so it's really rough (just a PoC). What I was actually doing here was tying the electrodes to different events on my webserver, the idea being that the "force feedback" could indicate to me how much traffic was hitting different areas.
I almost constantly tail -f /var/log/apache/access_log. This was supposed to be a wearable version of that :).
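In Python, the shape of that looks something like this (an illustrative sketch, not the actual code; the log path and the URL-to-area mapping are simplified):

    import re
    import time

    def follow(path):
        # Generator yielding lines as they are appended -- i.e. tail -f.
        with open(path) as f:
            f.seek(0, 2)  # jump to the end of the file
            while True:
                line = f.readline()
                if line:
                    yield line
                else:
                    time.sleep(0.1)

    hits = {}
    for line in follow("/var/log/apache/access_log"):
        m = re.search(r'"GET (/\w*)', line)  # crude URL prefix -> site "area"
        area = m.group(1) if m else "/"
        hits[area] = hits.get(area, 0) + 1
        # each area's hit rate would then set the drive level of the
        # electrode/motor assigned to it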
(The project, for me, changed a bit after this. What I'm working on now is a general purpose haptic device that can be tied to arbitrary inputs around your environment)
What about vibrating motors placed in different locations? I've always been intrigued by this kind of vibrating compass-belt: http://www.gradman.com/hapticcompass .
Ah man, I'm sorry, but I have to rip into you now...
How the hell is this area of science NOT sexy? Are you kidding me? IMHO, this is the ultimate in science: interfacing the human being directly with technology. That is as "sexy" as it gets; it's right up there with CERN and spacey stuff. Stockings and suspenders couldn't make this more sexy.
And if that ain't good enough for "us", then I merely have this to say: "think Russian".
Not sexy, pah!!!!
Not being in this field is possibly the only professional regret I have. I bow to you and your colleagues. Keep up the SEXY work.
No doubt this is amazing. But a question strikes me... why is a military organisation funding this research? By their very nature, are they not in the business of disabling enemies (a.k.a. people)?
Just strikes me as odd but I can see the potential of weaponizing this technology.
Dismemberment and lost limbs are fairly rare among the civilian population. That is not the case with the military - it makes perfect sense that they would be the ones funding this, as they are regularly sending home guys with fewer limbs than they went out with.
I think they should be funding this - they owe it to their disabled veterans.
DARPA has always engaged in long-range, seemingly "blue sky" research. They've pioneered autonomously driven vehicles and they invented the internet. To some degree they recognize that pushing the envelope will result in related breakthroughs that are eminently practical.
Militarily the primary goal is likely improving the lives of soldiers who have been maimed and dismembered in battle. This is even more of a concern today because the improvements in battlefield medicine have resulted in more and more soldiers surviving with wounds that previously would have been fatal, leading to more veterans with very serious disabilities.
Certainly there is a potential to weaponize this technology, but you have to keep in mind the reasons why that's not such a big concern, at least in the near term. First, this requires pretty invasive brain surgery. Second, it has a fairly low success rate at the moment. Third, the amount of control available is significantly diminished compared to functional nerves and muscles. Overall, there aren't any good reasons why you would want to switch to a system like this for controlling a tank or a fighter jet, so it's questionable what sort of battlefield potential the technology has now. When we get to the point where these systems can match flesh and blood, that will change, but that's a much larger can of worms than merely military applications.
This is extremely impressive, what a beautiful piece of work.
The smile on that lady's face says more than any number of facebook 'likes' or google +1's ever could. That's a real and measurable quality-of-life improvement for a single person; this really makes a difference.
The engineering on that arm is pretty heavy duty. I wonder what it weighs, how fast it can move, and what kind of safeguards are built in to keep the operator from injuring themselves due to glitches in the system. If you look at the way the arm moves, it is actually quite comparable to the arm of a baby moving an object to its mouth for the first couple of times.
With one big difference: a baby may get it wrong, but it is not strong enough to do too much damage; even if it pokes itself in the eye every now and then, it is usually with very little force. This arm, however, looks engineered to be strong enough and fast enough to do real damage to the operator or its environment. If it's servo-driven, there has to be a whole slew of safety systems in case a driver hooks a motor to V+ or V- because of a blown FET.
In any real-world machinery situation where the machinery is under software control and capable of doing real harm (for instance: machine shops with CNC gear), there are normally countless interlock systems that you'd have to bypass before you could come into contact with a piece of it moving under computer control. In this case the operator is extremely exposed, and I cringed when the arm sped up towards her face with the bottle. I also found myself sort of 'willing the bottle' in the right direction, the way some movies will have you react to something on the screen. Hard to describe.
This is not a soft hand moving, either; it looks like it is made of pretty strong aluminum and is fairly heavy.
They're using a Kuka DLR lightweight arm. It's a rather high-end piece of equipment. It's strong enough to do a pull-up, fast enough to catch a ball, and every joint incorporates a six-axis force-torque sensor. It's definitely strong enough to do damage. Of course, that just means that Kuka loaded it up with safety systems. The FT sensors let it know exactly how much force it's applying, and if they read too high a value the brakes slam on. I've never worked with those arms before, but I'd bet that there's a similar system implemented completely in hardware. You'll also notice that the operator's left hand and the entire back half of the room are never seen; there are at least two emergency stop buttons there, probably more, and one of them is wired directly to the arm's power supply.
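The software side of such a force limit is conceptually simple; here's a toy sketch (not Kuka's actual implementation; the threshold and the callables are placeholders):

    FORCE_LIMIT_N = 30.0  # illustrative threshold, not Kuka's real value

    def watchdog_step(read_ft_sensors, engage_brakes):
        # One iteration of a force-limit check: if the force magnitude
        # at any joint exceeds the limit, latch the brakes. Real arms
        # run this at kHz rates, backed by an independent hardware path.
        for joint, wrench in enumerate(read_ft_sensors()):
            fx, fy, fz = wrench[:3]  # force part of a 6-axis FT reading
            if (fx * fx + fy * fy + fz * fz) ** 0.5 > FORCE_LIMIT_N:
                engage_brakes()
                raise RuntimeError("force limit exceeded at joint %d" % joint)

The hard part is everything around it: sensor validity checks, redundancy, and making sure the check still fires when the component that failed is the one doing the checking.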
Really, though, all that engineering is of secondary interest. That look on her face is exactly why I got into robotics. And we've only been at it for 15 years; imagine what we'll be able to do in another 15.
This is an area where hackers can really make an impact.
We tend to forget how necessary movement and control are for our daily routines. There are people out there who lack even the ability to turn the TV on or off. A little hacking with an Arduino and some buttons can make that possible for them.
On hackaday.com, one consistently sees hacks for the disabled. Special game controllers for people missing an arm or some other limb are popular and simple to re-create.
I do have a personal anecdote.
I was once buying some resistors at the local RadioShack. While I looked through the mess of tiny plastic bags, another person started to browse for electronic bits. I started to talk to him and found out he was a doctor who worked with disabled people. He had learned how to work with embedded electronics and fiber composites in order to build prosthetic limbs for impoverished patients.
> This is an area where hackers can really make an impact.
One of my friends has coauthored several papers with some of the BrainGate folks.
We spent a road trip a couple of years ago discussing his research. One thing that jumped out is how similar many of the tools used in this field are to those in a startup. In particular, many big-data analysis tools and techniques that can be developed, optimized, and funded within the context of the web sphere are applied to bioengineering problems. A few months ago an article floated around HN on how the current tech bubble, if it is one, lacks a "byproduct" that will benefit society when the scraps are cleaned up. You can make a pretty good argument that it's this.
Once the photo sharing app market dries up, people will have to look into other areas. I know that in the next ten years we will have startups (even 1-2 person teams) building much more than simple web apps. Innovation will slowly seep to other areas.
(And thus, Jeff, that is why people need to learn how to program)
I also work in a similar lab as a PhD student. Kevinalexbrown's points are spot on. Currently, the main challenges for implementing this clinically are:
1) stability of the electrode array over long timescales.
2) increasing the number of degrees of freedom of robust control that we can decode from the neural data. Activity in the motor cortex inherently lives in a (very) roughly 10-dimensional space, which is a bit of a mystery since we use this activity to control hundreds of muscles independently. What this means is that when you record from 96 electrodes simultaneously, many of the neurons picked up by the array are correlated, such that the resulting dimensionality is much lower than the number of neurons (a toy illustration of the estimate appears after this list).
3) An important step forward will be to develop optogenetic (or other) sensory write-in, using pulses of light to activate neurons in specific patterns that mimic proprioceptive signals from your limbs. This will increase the speed and robustness of movement via faster closed-loop feedback.
4) Processing power. Currently it's typical to run decoders on real-time PCs. Creating an embedded processor capable of sufficiently low-power operation with enough processing power to run the decoder is non-trivial, especially if you want to implant it in the brain.
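To illustrate point 2, here's a toy version of how that dimensionality estimate is typically made: run PCA on the binned firing rates and count the components needed to explain most of the variance. The data below is synthetic, built to be ~10-dimensional by construction, so it only demonstrates the method:

    import numpy as np

    # Synthetic stand-in for binned firing rates: T bins x 96 channels,
    # generated from 10 latent dimensions plus noise, mimicking the
    # correlated activity the array actually records.
    T, channels, latent_dim = 2000, 96, 10
    latents = np.random.randn(T, latent_dim)
    rates = latents @ np.random.randn(latent_dim, channels)
    rates += 0.5 * np.random.randn(T, channels)

    # PCA via SVD: count components explaining 90% of the variance.
    rates -= rates.mean(axis=0)
    s = np.linalg.svd(rates, compute_uv=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    dim = int(np.searchsorted(explained, 0.90)) + 1
    print("~%d components explain 90%% of the variance" % dim)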
It is very unlikely, as suggested elsewhere, that current web tech will impact BMI research at all. The primary interesting tech is in the decode algorithm, which is a modified form of the Kalman filter. There is, however, a lot of room for hackers in research. I recently made the switch from working in defense, building robots and designing sensor fusion algorithms, to a phd in neuroscience. Probably won't pay as well long term, but it's far more rewarding and interesting! I'm surrounded by scientists that need better (software and hardware) tools and analysis methods. Also, an infusion of ideas and values aligned with open access publishing (or changing the scientific publishing model altogether), open source software, data and code sharing, etc. would generally benefit all and accelerate scientific research, but that's another topic worthy of more discussion elsewhere.
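For reference, the textbook Kalman filter step that those decoders build on looks like this (just the standard filter, sketched; the clinical versions modify this basic recursion in ways not shown here, and all variable shapes are illustrative):

    import numpy as np

    def kalman_decode_step(x, P, y, A, W, C, Q):
        # x, P: kinematic state estimate (e.g. position + velocity) and
        #       its covariance from the previous time bin
        # y:    this bin's vector of firing rates
        # A, W: state transition model and its noise covariance
        # C, Q: observation model (rates ~ C @ state) and its covariance
        x_pred = A @ x                        # predict state one bin forward
        P_pred = A @ P @ A.T + W
        S = C @ P_pred @ C.T + Q              # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
        x_new = x_pred + K @ (y - C @ x_pred) # correct with neural data
        P_new = (np.eye(len(x)) - K @ C) @ P_pred
        return x_new, P_new

Run once per time bin, this turns noisy spike counts into a smooth kinematic estimate.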
If you're interested in helping out with the next stage of this research and you know someone with limited or no ability to use both hands who might also be interested, please visit http://www.braingate2.org/clinicalTrials.asp and let them know.
I know I know, it's not good etiquette here to make short, emotional comments, but -
By thinking about moving her own paralyzed arm, one woman in the experiment used an artificial limb to serve herself coffee for the first time in 15 years.
Seeing her smile at the end of that video just made my otherwise terrible day rather special. And it's such great validation for the team that's working on this technology, I hope they continue to see much success in the future.
The real deal is that this needn't be confined only to people with disabilities. We all could use an extra limb or two. But I think it will take a little more time for this to become such a sci-fi toy; the urgent needs of disabled people come first. Still, the size of the market, if we look at this as applicable to everyone, is huge.
The only person in this thread that thinks dirty is you. Really, someone is capable of some level of direct control over her environment for the first time in years and that's the only thing you can think of? Look at the video and watch that lady smile at the end and then ask yourself what your comment said about you instead of about her.
I think grantgrant has a point. I wonder what it's like to use an arm like that - she seems to be focusing hard on thinking "up" "down" "left" "twist" but as this stuff gets more powerful, how do you separate the signal from the noise? I have many thoughts bouncing around my head, but somehow my brain knows not to act on them.
Your muscles respond to your thoughts not because they pick out some random thought from all of them and decide to act on it. That wouldn't work at all. Your muscles are controlled by your brain in a feedback loop where your desire to make a particular movement gets compared with the actual movement you are making and then that movement is corrected. For something new or exceptionally delicate that is a full-time conscious job.
For something that you've done a hundred times before you more than likely have abstracted it away and have it 'on call' to the extent that it can be done as a subconscious task.
I don't know if you have a driving license or not, but because driving is a skill that many people learn only well after they've become conscious of their environment, it is my favorite example of this sort of abstraction. When you first learn to drive, it is difficult, and each and every movement is something your conscious mind is fully engaged with. If you learn to drive in a stick-shift car, it is not rare to have an instructor operate the clutch and the brake for the first couple of lessons, because there is already so much for you to pay attention to.
Fast forward a decade or so and you're listening to your favorite tune, possibly working out some problem in your head, while driving on the highway at 90 miles per hour.
That's the power of how your brain is organized: it has a built-in process for abstracting skills that allows you to build on top of things you've previously acquired, but still allows you to focus on those same things long after they've been abstracted away, if required.
The brain is a "parallel processor" in that distinct regions are responsible for different tasks. These arrays are recording from the primary motor area (M1) which directly controls output muscles. While this does require conscious control right now, it's not the case that irrelevant thoughts are likely to significantly alter the control signals sent to the arm.