Will This “Neural Lace” Brain Implant Help Us Compete with AI? (nautil.us)
80 points by dnetesn on April 5, 2018 | 42 comments



I love Elon, but this notion of a “bandwidth problem” in UI is what happens when someone with no training in UI whatsoever tries to extrapolate.

Imagine the neural lace already existed. Close your eyes, and picture what you would experience. Would it be 3D? Probably. Our brains have spatial hardware. Would it be auditory? Probably, our brains have hardware for that too. Would there be language? Ya, that’s part of our hardware too.

So it’s an experience of sights and sounds and language...

But think about an iPad... that already has sights and sounds and language. Actually it’s capable of beaming far more of all three to you than it does. And it’s capable of taking in far more input than you generally use. It can do 10 finger multitouch, plus sound recording. Newer devices will have full body tracking.

And yet... we don’t use all that available bandwidth. For the most part we stare at a few words, some boxes and lines... why?

If Elon is right that more bandwidth is the problem, why aren’t there more high bandwidth user interfaces on existing devices?

The answer is: we don’t have a bandwidth bottleneck. We have a design bottleneck. And if I dropped a fully functional neural lace at Elon’s office, he’d have exactly the same design bottleneck. And he’d realize that if he wants to solve his problem, he needs to hire 10 really good UI designers, and then 10 more, and then 10 more, and it’s going to be decades before they max out even the bandwidth of an iPhone.


There's a huge bandwidth bottleneck on both input and output sides. Your imagination of what a neural lace would do is too limited. It wouldn't be 3D, it wouldn't be auditory.

It would be having a perfect recollection of every single moment of your life.

It would be knowing the entire contents of wikipedia off by heart.

It would be understanding and speaking every language on the planet.

It would be looking out the window and seeing exactly where a friend living 1000km away is in your field of view.

It would be sharing thoughts and feelings with other people in the literal sense.

It would be the ability to suppress short term urges by being constantly aware of your long term goals and your progress towards them.

It would be the ability to open an enterprise project you're working on, and instantly know the layout of its codebase.

If you're imagining the neural lace interface to be sensory, you're way off the mark. Sensory interconnect is just the beginning. The real revolution is giving your brain a low level IO bus that allows a computer to transparently extend it beyond the physical limits of whatever number of neurons are in your head.


All this speculation ignores the fact that we don't have a practical theory of how the mind works, and therefore no way to know what, specifically, would need to be done to produce a specific outcome. For example, Lieber talks about the development of 3D transistors, so they can be implanted in neurons, but he doesn't say what they would do once there.

To be fair, pharmacology, in its application to mental issues, is at about the same level, but it is also struggling to demonstrate unambiguous successes. As research tools, these devices are great, but it is premature to suggest that we have a technology that is going to change the way we think.


But that's the point of basic research, no?

Minds are things that function in a physical universe. At some point in the future there will be a description of function, followed by a plan for modification or augmentation of existing minds, as well as bottom up design of new minds.

All minds are matter performing some computation. Molecules move around and change in time. In concert with this physical process, subjective experience takes place. There will come a time when physical systems are designed with the goal of creating a particular form of subjective experience.


> Sensory interconnect is just the beginning. The real revolution is giving your brain a low level IO bus that allows a computer to transparently extend it beyond the physical limits of whatever number of neurons are in your head.

The point is that you need some way of interfacing. A fast bus doesn't do me any good if I don't have drivers and then interfaces to interact with the thing on the other end of that bus.

I mean, yes, ideally we'd have some kind of thought-based interface... but you've gotta design that, too.


I think the solution might even turn out to be something like build-it-and-the-drivers-will-come. Neuroplasticity is enormous. There are reports about a conjoined twin pair in Canada who supposedly can "think inside the other's head" (they're connected at the head!).


Agree absolutely; this is a step toward new research related to "drivers". I think it will be interesting how age might affect early usage of such a neural lace. Perhaps a link operated from a young age (even birth) will operate better, as the baby's other "drivers" (for the usual IO operations like sight and hearing) haven't developed as much. Also for AI-safety it is important to begin interfacing between digital and neural as early as possible.


>Also for AI-safety it is important to begin interfacing between digital and neural as early as possible.

I see no evidence that we're going to get anything like machine consciousness in the near future. Moore's law was going to get it for us, but... that isn't looking so clear anymore. Sure, machines can do more and more tasks that used to be human-only... but there's not a clear path from that to consciousness.

On the other hand, speculative brain surgery is super dangerous, and shouldn't be done to people who are too young to give informed consent.

All that said, I'd personally be willing to take some significant risk myself, if it gave me a credible chance at a useful neural interface. But I'm an adult; I think it would be ridiculously unethical to make that decision for an infant. Even as an adult, I would personally need a lot more education than I currently have to decide what was 'credible' at this point.


The point, though, is that the bandwidth of your memory is substantially higher than that of reading.


Sure, I'm not saying it wouldn't be great. But my point is that you still need an interface, and my understanding is that we're pretty far away from building an interface that feels like it's just your memory. I mean, it'd be great, sure, if you could do it... but that's an interface that needs to be designed, and it's an interface that we, as humans, have no idea how to design.

How would you record a memory? How would you replay that memory? How would you index the memory?

I mean, sure, the idea is to emulate how the brain works now... but how does the brain work now? I don't think we really have a very clear idea on that level.


I agree with your point, and that is probably a longer-term milestone that has to be achieved for it to work well and integrate seamlessly.

I think the short-term idea is to use the extreme adaptability of the human brain to reprogram itself to send and receive data from external machines. There are already prototypes of robotic arms that are not only controlled via a brain interface, but also give sensory feedback via it.


A practical way to imagine the use here would be snapshotting your short term memory state.

Rather than trying to remember where your keys are, you just load the last few snapshots until where you put them is suddenly fresh in your mind.
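
If you want to make that concrete, here is a minimal toy sketch in Python of the "record and replay the last few snapshots" idea; the snapshot contents, the buffer size, and the whole interface are made up for illustration:

  from collections import deque
  from dataclasses import dataclass
  import time

  @dataclass
  class Snapshot:
      timestamp: float
      contents: dict  # whatever a "short term memory state" would serialize to

  class MemoryBuffer:
      """Toy ring buffer: keep only the most recent N snapshots."""
      def __init__(self, capacity=32):
          self._buf = deque(maxlen=capacity)

      def record(self, contents):
          self._buf.append(Snapshot(time.time(), contents))

      def replay(self, last_n=3):
          # most recent snapshots, newest first
          return list(self._buf)[-last_n:][::-1]

  memory = MemoryBuffer()
  memory.record({"keys": "left on the kitchen counter"})
  for snap in memory.replay(last_n=1):
      print(snap.contents)  # -> {'keys': 'left on the kitchen counter'}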


In short, it'd be magic.


Violation of conservation laws would be magic. This is just intricate information processing.


Just a hint: there are two letters in IO - Input and Output.

If I imagine a complex animated 3D shape in my head, I have like really no easy way to "pull it out" and share it with anyone else, besides spending up to ~1 week sketching/drawing it in either AutoCAD or Blender/Maya. The out part is the problem; this is where we have a bottleneck.

> iPad [...] it’s capable of beaming far more of all three to you than it does. And it’s capable of taking in far more input than you generally use. It can do 10 finger multitouch, plus sound recording. Newer devices will have full body tracking

Really? You think tapping a screen or moving our bodies is a good way to output data?! There's already a huuuuuuuuuge bottleneck between my body and my brain. I don't want to force my interaction with a computing device to go through this bottleneck too. Fuck my slow ass annoyingly useless body, I want something that can read with at least a few megapixels per second from the part of my brain that's doing the visual thinking.

Even language is a huuuuge ass bottleneck. If you're someone thinking visually like me, you sometimes just give up sharing an idea when you're like "man, I'd need to write a 500 page novel to put it into words".

I wish Elon the best of luck in the work of developing any kind of interface that could help us bypass either the "language bottleneck" or, even better, the "body bottleneck" (because language is still about using part of your body - the vocal cords). I'm passionate about UI/X design myself, but I've never taken it up professionally because I find it pointless: the only real way to improve an interface is to (1) improve the "business-logic data model" and (2) increase bandwidth by nuking the entire interface away and letting people interact with the "core data" itself. This is what Git and Emacs org-mode have at their core: the result is amazing UX, horrible UI and "just kill yourself"-hard learning curves... but if I can get the first one, I can put up with the second two until I can afford to hire an army of UI and learnability experts later. Excel/spreadsheets are other such examples of letting people "touch the data" and forgetting about anything else.


With input bandwidth I agree with you. But output bandwidth is another matter. Currently we use keyboard, mouse, stylus, and touch panels.

We're already seeing some changes here with the new VR sculpting movement, but for output a direct brain interface would drastically increase our output bandwidth.


No UI on an iPad could allow me to input some things as fast as the split second it takes me to imagine them (think structures and scenes)


You're assuming the status quo of input and then suggesting that it is a given, and thus any technology will in the end be identical to the status quo.

Let's consider chess for a minute. When a strong player looks at a chess position they see something very different than what a casual player sees and, in fact, they actually use different parts of their brain. They will be able to see ideas and concepts in a fraction of a second that a casual player could spend hours searching for, and failing to find. Yet paradoxically, show these ideas to the weaker player and they'll understand them easily - it's not magical.

I find chess a good example since most people are not very good, but do know how to play, so they can understand the analogy from the amateur side easily. But you can also see things from the side of the 'master' in language. Look at your post. It's made up of 1,499 characters and hundreds of different combinations of these characters. Yet I'm able to parse it in very little time and you wrote it with no significant effort. Think about the immense amount of processing that's going on there, without ever really realizing it.

So I see no reason to believe that this sort of invisible processing need be limited. Is there any reason I can't consume the entirety of e.g. Wikipedia or the entire catalog of astrophysics journals? Even going full Neo, the difference between a master jiujitsu practitioner and somebody who's never done it at all is almost entirely in the brain. Why can't that information be consumed in direct fashion? It doesn't really seem all that much more spectacular than consuming the entirety of dictionaries and the countless nuanced exceptions and rules of construction between those words, or somehow being able to instantaneously grasp, to at least a very good degree, a practically infinite variety of chess positions.

I also remain cynical on the neural lace, but because I think its reach far exceeds its grasp. I think to really make progress you would effectively need to begin breaking down the man-machine interface in ways beyond clever signal correlation, and we haven't even scratched the surface there. By analogy, it'd be like trying to become a fluent speaker, without accent, in Ancient Egyptian by studying hieroglyphics.


FWIW, the term "neural lace" was coined by Iain Banks.

From Excession:

One of the exhibits which she discovered, towards the end of her wanderings, she did not understand. It was a little bundle of what looked like thin, glisteningly blue threads, lying in a shallow bowl; a net, like something you'd put on the end of a stick and go fishing for little fish in a stream. She tried to pick it up; it was impossibly slinky and the material slipped through her fingers like oil; the holes in the net were just too small to put a finger-tip through. Eventually she had to tip the bowl up and pour the blue mesh into her palm. It was very light. Something about it stirred a vague memory in her, but she couldn't recall what it was. She asked the ship what it was, via her neural lace.

~ That is a neural lace, it informed her. ~ A more exquisite and economical method of torturing creatures such as yourself has yet to be invented.

She gulped, quivered again and nearly dropped the thing.

~ Really? she sent, and tried to sound breezy. ~ Ha. I'd never really thought of it that way.

~ It is not generally a use much emphasised.

~ I suppose not, she replied, and carefully poured the fluid little device back into its bowl on the table.


That’s briefly mentioned near the end of the first paragraph:

... solution to this unappealing fate is a novel brain-computer interface similar to the implantable “neural lace” described by the Scottish novelist Iain M. Banks in Look to Windward, part of his “Culture series” books. Along with serving as a rite of passage, it upgrades the human brain to be more competitive against A.I.’s with human-level or higher intelligence.


The only things in that paragraph that are correct are:

1. the name neural lace

2. Iain M. Banks coined it

3. He was Scottish.

It wasn't first described in Look to Windward. It didn't serve as a rite of passage in the Culture. It doesn't upgrade the human brain, and certainly not to the level of the Culture's ruling machine intelligences.

And at no point does the article quote the bit that I did, which serves as a cautionary note: the neural lace is the most effective implement of torture ever.

Once upon a time, the Geek Code had an entry "c++++ I'll be first in line to get the new cybernetic interface installed into my skull." It only took a little while for everyone sensible to amend that to "Waiting for skull interface 3.1, with security patches and a really good firewall."


That’s disappointing. I thought the quality of Nautilus was better. I actually pay for a digital subscription.


I first encountered the "neural lace" idea in Peter F Hamilton's Night's Dawn Trilogy, where it is called "nanonics".

My favourite imagining of such a device is in Peter Watts's "Echopraxia", where it's used to create a hive mind by physics monks.


Excession... Always a great re-read. Its construction, character layout, and plots resonate really deeply with my technical, introvert self.

RIP Mr Banks.


The fundamentals of this idea are so shockingly hand-wavey, that I find this to be something of a hazard to Elon Musk’s reputation.

It’s far enough afield that, as blue sky projects go, this is farther away from us now than cold fusion was in the 1980s. Hint: we’re still not sure if cold fusion will arrive very soon.

It’s great to see caution in light of powerful technologies, but this blue sky concept is thrown into the mix as if to say “try anything to make it work,” and in so doing it leaves the overall prophetic narrative of AI dominance as inexorable.

To hear this idea floating around is like saying:

  So, let’s do something to our brain. Anything. 
  Sprinkle wires on it, and maybe those wires will 
  help. What’s in them? Who knows! But golly, anything 
  to gain an edge, because who knows!

  But yeah, AI is already here to stay, and there’s no 
  controlling it, so just route around it.

There’s no plan here. Just add in the “make-brain-smart-wires” and pray. No one explains how we’ll be smarter, which is especially bothersome, given that we can’t quantify subjective experience, motivation and emotional reaction.

Are terrible people, criminals, dictators, and third-world warlords getting these things too? Why will it make them do good? Why is it better to make an evil person more powerful than an artificial entity which we fear?


I don't think it's quite as far-fetched or far out.

The fundamental point of the neural lace is increased bandwidth OUTPUT from the human mind so that it can be INPUT for some other process.

Bear with me...

Right now I am using 2 thumbs to output this message. Not very efficient. I could hop on a keyboard and up my output bandwidth to 10 fingers. That would be faster but still bottlenecked on the mechanics of my hands. Sure, I could use voice, but it's loud and mostly words only. Not great for quiet rooms or coding with special characters.

Now imagine an integrated set of circuits directly connected to my brain. It wouldn't have to be many. Let's say 10. Now with training I could learn to activate each circuit. Even if the training was only for a digital signal of 0 or 1, that's 2^10 unique signals. Not to mention these could be modifiers of other outputs. And the real kicker is this signal generation happens at the speed of thought. Imagine coding in an IDE at that speed!
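
To make the 2^10 arithmetic concrete, here is a minimal sketch in Python of reading the 10 hypothetical circuits as one chord and mapping it to a symbol; the symbol table and the whole encoding are made up for illustration:

  # Hypothetical: each of the 10 "circuits" contributes one bit of a chord.
  SYMBOLS = ["if", "else", "for", "while", "def", "return", "(", ")", "{", "}"]  # ...up to 1023 entries

  def chord_to_index(bits):
      # bits: sequence of 10 ints, 0 or 1, one per circuit
      index = 0
      for bit in bits:
          index = (index << 1) | bit
      return index

  def decode(bits):
      index = chord_to_index(bits)
      if index == 0:
          return None  # no circuits fired
      if index <= len(SYMBOLS):
          return SYMBOLS[index - 1]
      return "<unmapped chord %d>" % index

  print(decode([0, 0, 0, 0, 0, 0, 0, 0, 1, 0]))  # -> "else"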

I didn't touch on the whole AI threat but the point is simply that if we need to be defensive or offensive against an AI we need to be able to output meaning quicker than running to the server room and unplugging HAL.


Even getting reliable and useful output from the brain is still much much harder than you think it is.

> Now imagine an integrated set of circuits directly connected to my brain. It wouldn't have to be many. Let's say 10.

10 circuits connect to what, exactly? How? Where? Attaching electrodes to existing cells is invasive and uncomfortable (to say the least), and temporary because scar tissue forms and severs the connection in time.

But suppose you developed some way of reliably, durably, and non-invasively attaching external artificial neurons into your brain (to be sure, a huge jump). Where would you put these neurons, exactly? What would they synapse to? Just one or two particular neurons? How would you locate them during "install time"? How would the user access them during "run time"? Most brain-machine interfaces use motor neurons because they're easy to access. Maybe that'd be a good route, but again, doing this reliably and non-invasively is really hard, especially if there isn't a clear feedback loop.

I don't know why, but a lot of people seem to think that neuroscience is a lot closer to being "solved" than it actually is. Truth is, we barely have any idea how the brain works, let alone how we can meaningfully augment and interact with it.


I agree. It's certainly not a plug-and-play device from your PC and there are hurdles to cross for sure. But, over time, you could learn to control signal output just as a baby does. The mapping will be chaotic and unique to each person, but every brain is different. When you imagine a yellow elephant it is probably slightly different from my process. You don't know how you do it but you do. If a reward mechanism is tied to signal output, a growing brain will figure it out.


> And the real kicker is this signal generation happens at the speed of thought. Imagine coding in an IDE at that speed!

I already do. I type at virtually the same speed I internally verbalize what I'm typing, so any bandwidth increase would have to bypass the inner voice part of the brain, if that's even possible. There might be some room for improvement in other areas, like "go to definition", but hardly revolutionary.


This may be the tail wagging the dog.

It wouldn't surprise me if the internal voice slows to match the typing speed. People that have gone blind have learned to hear and process language at astonishing speeds that an average human couldn't begin to interpret.


AI, like any other militarily-beneficial technology, will advance whether some societies want it to or not. Nuclear weapons, drones, armored vehicles... These are all similar examples where being the ethical "sucker" is extremely hazardous to the survival of your state.

The access people have to possible neural enhancements will be dependent on the rules of their society: Chinese social ratings, US economic status, Russian political connections, etc. Will a criminal have a harder time getting neural enhancements in China or the US? Our definition of what a criminal is will be wildly different in edge cases.

I think an interesting, current corollary is nootropics and meditation. These are not going to inherently make you a "good" person, and they are easily accessible to anyone who wants them. If you want to train your brain and increase your self-control, it's ultimately up to you what you do with those advantages. Formal consequences emerge when new behaviors are perceived as detrimental to society.

In terms of how neural enhancements will make us smarter, does it really matter? If the output is what you intended, at what point do you need to focus on how you got there? Anyways, pick a challenge and put an enhanced and unenhanced human head-to-head. We'll discover our new capabilities through experimentation, just as we always have.


Elon Musk has talked a bit about this company. The goal is to develop a device which will help increase our low output "bandwidth", not make us smarter.

Has the feasibility of such a thing even been explored before? I wouldn't be so fast to dismiss it; I know some types of information have already been decoded from the brain, so at least parts of their goal have been proven to be conceptually possible.

https://thenextweb.com/artificial-intelligence/2018/03/06/mi...


> The fundamentals of this idea are so shockingly hand-wavey, that I find this to be something of a hazard to Elon Musk’s reputation.

The rest of the comment implies that you aren't joking.


>Hint: we’re still not sure if cold fusion will arrive very soon.

I thought cold fusion was confirmed nonsense?


There's a limit to how quickly we think (I think I remember reading during driver's ed that we had an average decision-making bandwidth of ~2^7 bits per second). For most people there really isn't enough going on in their head that they couldn't just spend a few hours typing it out. If you want to see what happens when people type faster than they think, then visit any online forum.
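
For scale, here is the back-of-envelope arithmetic comparing that quoted figure with ordinary typing; all of these numbers are rough assumptions rather than measurements, and whether the gap matters depends on whether you care about instantaneous rate or total volume over a few hours:

  decision_bandwidth = 2 ** 7   # ~128 bits/s, the figure quoted above
  wpm = 80                      # a reasonably quick typist (assumption)
  chars_per_word = 5
  entropy_per_char = 1.3        # bits; Shannon's estimate for English text is roughly 1-1.5

  typing_bits_per_second = wpm * chars_per_word / 60 * entropy_per_char
  print("typing: ~%.1f bits/s vs deciding: ~%d bits/s"
        % (typing_bits_per_second, decision_bandwidth))
  # -> typing: ~8.7 bits/s vs deciding: ~128 bits/s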

IMHO: good Direct Neural Interfaces probably won't be anything more interesting than the invention of GUIs. It will be neat, it will make computers more intuitive (via neural plasticity) and maybe let us directly share thoughts with each other (really neat! totally useless), but nothing really new will probably ever come of something like this. In fact, if it became popular I think it might make learning some things harder. Software engineering, for example, is entirely about communicating precisely (with others, yourself, and your computer). If you live your whole life communicating telepathically you might never learn to communicate well.


> and maybe let us directly share thoughts with each other

...and various advertising corporations.


Ada Palmer's (imo fantastic) Terra Ignota series has this concept, among many others. She envisions a "set-set", a child who is trained to use "neural lace"-like implements from a very young age and barely uses their actual limbs, and these set-sets are the most adept at running computer systems. In the world of Terra Ignota, set-sets are highly controversial.


"Terra Incognita takes place in 2454" I'd be surprised if humans ran any computer systems by then


Neural Lace explained on Wait, but why? https://waitbutwhy.com/2017/04/neuralink.html


How misleading. The article is about the challenges, and possible diagnostic and repair benefits, of creating electronic circuits that don't trigger the immune system. Not "competing with AI".


EEG is now "neural lace", mkay, good job naming things!


The media tends to push binary stories of humans vs AI. Science fiction, on the other hand, has long explored the complexity inherent in human, AI and cyborg interactions.



