
Interestingly, this is not a new result; people have been doing stuff like this since at least the 90s, most notably Steve Potter at GA Tech and Tom DeMarse in Florida.[1][2] (I built a shitty counterstrike aimbot using a cultured neural network in college based on their papers.)

There was a lot of coverage back in 2004 when DeMarse hooked it up to a flight simulator and claimed it was flying an F-22 [3] (lol, but I don't blame him too much...)

The basic idea is that if you culture neurons on an electrode array (not that hard) you can pick some electrodes to be "inputs" and some to be "outputs" and then when you stimulate both ends the cells wire together more or less according to Hebb's rule[4] and can learn fairly complex nonlinear mappings.
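
For anyone who wants the flavor of that learning rule in code, here's a minimal rate-based sketch (purely illustrative: a plain outer-product Hebbian update with decay in NumPy; the electrode counts and target mapping are made up, and the real MEA protocols are far messier than this):

    # Toy rate-based Hebbian sketch (not an actual MEA protocol).
    # Electrode counts, rates, and the target mapping are all made up.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 8, 4                         # hypothetical "input"/"output" electrodes
    W = rng.normal(scale=0.01, size=(n_out, n_in))
    eta, decay = 0.05, 0.01                    # learning rate, weight decay

    def costimulate(x, y):
        """Drive inputs (x) and outputs (y) together; weights grow in
        proportion to pre*post correlation ("fire together, wire together")."""
        global W
        W += eta * np.outer(y, x) - decay * W  # decay keeps the weights bounded

    for _ in range(1000):                      # "training": paired stimulation
        x = rng.random(n_in)
        y = np.tanh(2 * x[:n_out] - 1)         # some nonlinear mapping to absorb
        costimulate(x, y)

    x_test = rng.random(n_in)                  # "readout": stimulate inputs only
    print(W @ x_test)                          # activity read off the outputs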

On the other hand, these cultures have essentially no advantage over digital computers and modern machine learning models. Once you get through the initial cool factor, you realize it's a pain to keep the culture perfectly sterile, fed, supplied with the right gases, among many other practical problems, for a model which is just much less powerful, introspectable, and debuggable than is possible on digital computers.

[1] https://bpb-us-w2.wpmucdn.com/sites.gatech.edu/dist/f/516/fi...

[2] https://potterlab.gatech.edu/labs/potter/animat/

[3] https://www.cnn.com/2004/TECH/11/02/brain.dish/

[4] https://en.wikipedia.org/wiki/Hebbian_theory




Once AI gets over the initial cool factor that humans are wet-tech, they'll realize it's a pain to keep the human culture perfectly sterile, fed, supplied with the right gases, among many other practical problems, only for a model which is just much less powerful, introspectable, and debuggable than is possible on digital computers.


That was beautiful.


I was with you up to “once you get over the cool factor.” It seems impossible to get over how cool it is to have a minibrain playing video games. Having one of those at home must really impress the girls.


Moreover, if there are girls not impressed by this, you will know, and have really dodged a bullet.


Wisdom


"played video games" is overstatement. There was a slight increase in the performance with the particular setup that they used. It was not as straightforward as it sounds. This kind of science is still in its infancy


“Meet my brother. He’s adopted”


All he does is lie around playing Pong


[flagged]


What do you mean? You don't want the comment section on HN to be reduced to low effort, repetitive humor for the purpose of karma whoring?


I mean ... it does sound fun when you put it that way...


I'm in, I only post to lose rep, it's a race to zero...


One of the early Cortical Labs founders here. This is like dissing AlphaZero because "this is not a new result; computers have been playing chess since the 50s!" We are standing, as always, on the shoulders of giants. Steve Potter is one of our advisors.

We've improved on every axis 10x. We process over 1000 signal channels in real-time and respond with sub-millisecond latency from our simulated environment. We've recorded thousands of hours of play time from mouse and human neurons. We're investigating biological learning with top neuroscientists from around the globe. This is by far the most rigorous, extensive and technologically advanced work on in-vitro learning ever produced.

Our work goes well beyond Hebbian "fire together, wire together" learning. We have follow-up papers in the pipeline that study internal non-linear dynamics and show how whole-network dynamics change during game play and learning. Being able to observe and measure cognition has huge applications to drug testing and discovery.

For background, frisco (the above commenter) helped start Neuralink. Consider this: our DishBrain is a completely reproducible, highly controlled test bed for brain-computer interfaces. This will massively accelerate neural interface development, all without sacrificing any chimpanzees.

> On the other hand, these cultures have essentially no advantage over digital computers and modern machine learning models

The brain is the single existing example of general intelligence. A human brain can do more computation than our largest supercomputers on 20 W of power (a million times more efficient). Trillions of interacting synaptic circuits, rewiring themselves at the molecular level. Biological learning is the only game in town, honed by eons of evolution. There are fundamental physical limits to hot slabs of silicon. Do you have a single credible proposal for building such a machine that isn't growing one?

> (I built a shitty counterstrike aimbot using a cultured neural network in college based on their papers.)

Nice humblebrag. I trained neural networks from my bedroom in high school in 2002. There is a long road between a cool university project and building a world-class neuroscience R&D company; you know that!

Cortical Labs is always open to collaborations. We're here to talk when you want to integrate some of our cutting-edge neuroscience technology with your work. Instead of grumbling about the '90s, let's look forward to what neuroscience looks like in the 2030's.


> The brain is the single existing example of general intelligence.

This is incorrect. It is not pedantic to point out that we have never interacted with a "brain" in isolation: the human brain is an organ of the human organism. The human being is the single existing example of general intelligence.

> let's look forward to what neuroscience looks like in the 2030's

This is very interesting science without question. Are there existing ethical and moral frameworks guiding the development of your field?


All I’m saying is that I think it will be challenging to produce a commercial product that achieves product-market fit for an application other than basic neuroscience research. It’s a cool tool but the practical drawbacks are myriad, and when you say “the brain is the single existing example of general intelligence,” that’s true of the whole thing, with glial ion buffering, ephaptic coupling, global oscillations, and so much more. We should be honest here: the system being studied in DishBrain is very far removed from that, so it’s tough to use the existence proof like you are doing.

I hope I don’t come across as uncivil, but you guys alienated a lot of people both in how you talked about “sentience” and also seemed to heavily hype this as totally novel.

I would never root against cool progress in neural engineering, but I would be curious as to what you think your first big product will be based on this. Past attempts have usually ended up pivoting to stuff like artificial noses.

Edit: I tried to ignore it, but regarding the bad-faith attack on Neuralink (which, look, I have complicated feelings about too): you should know the animal-use data in the press is extremely out of context (to the point of simply being wrong), and also Neuralink has had zero chimpanzees in its entire history.


> “the brain is the single existing example of general intelligence,” that’s true of the whole thing, with glial ion buffering, ephaptic coupling, global oscillations, and so much more. We should be honest here: the system being studied in DishBrain is very far removed from that, so it’s tough to use the existence proof like you are doing.

Our vision is incredibly ambitious. We can't build a whole brain yet, only small 2D fragments. We have a roadmap that goes all the way to a complete synthetic biological intelligence. The short- and medium-term milestones are concrete, achievable, and valuable. The long-term goals are more speculative; we're clear about that. It's a path, a tightrope, but still a path.

> [...] but you guys alienated a lot of people both in how you talked about “sentience” and also seemed to heavily hype this as totally novel.

We clearly defined our terms, and our paper was accepted, after a long peer review process, into a prestigious academic journal. We coauthored it with multiple top neuroscientists from around the world. Our discussion section alone has more citations than most entire papers. If scientists are "alienated" by this, it's a grievance that we cannot remedy.

Our work was hyped, we hyped it, it deserves to be hyped. Can you cite an example in our own words where we claim our work is totally novel?

> I tried to ignore it, but regarding the bad-faith attack on Neuralink (which, look, I have complicated feelings about too): you should know the animal-use data in the press is extremely out of context (to the point of simply being wrong), and also Neuralink has had zero chimpanzees in its entire history.

Please accept my apologies; it was meant to be more collaborative. I really do think that our system could be used to reduce the need for animal sacrifices, and that is a good thing. I also believe you take the decision to sacrifice animals seriously.


Can you provide the missing context re Neuralink animal usage?


Unfortunately I can’t share specifics about Neuralink. But the general points I will make are:

- in this field, monkeys are high value animals and experimenters will often work with the same ones for many years; they are not, generally speaking, a high throughput model.

- to the extent a company does need to go through a large number of animals for a study, the way this works is you start by figuring out all of the problems you might be worried about, and choosing some rarity threshold to verify absence of (safety against), and then animal numbers are derived from the power calculation. For example, to rule out a potential complication to no more than 1% of patients with 95% confidence… you need a lot of animals, especially considering multiple study arms. This is the values tradeoff we as a society have chosen to make and empower our regulators to enforce. There is often a negotiation for the least controversial species to use that will satisfy the scientific goals.
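
To make that power calculation concrete, here's the textbook zero-events version of the arithmetic (illustrative only; the 1% / 95% numbers are taken from the example above, not from any actual study, and this is per study arm):

    # How many animals, with zero observed complications, are needed to claim the
    # true complication rate is below 1% with 95% confidence? (Illustrative only.)
    import math

    p_max, confidence = 0.01, 0.95
    # With zero events in n subjects, P(seeing none) = (1 - p_max)^n; require <= 1 - confidence.
    n = math.ceil(math.log(1 - confidence) / math.log(1 - p_max))
    print(n)  # 299; the "rule of three" shortcut gives 3 / 0.01 = 300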


> A human brain can do more computation than our largest supercomputers on 20 W of power

The power needs of the human brain have likely been measured quite accurately.

The same is not true of the "amount of computation" performed by the brain. How are you measuring that?


We can estimate the amount of information processed. Vision alone is something like 10 Mbit/s [1]; with the other senses it might be up to 100 Mbit/s. Just doing comparable sensor fusion and real-time feature extraction on a computer requires more power. There's also symbolic processing; doing something similar again requires much more power on a computer. And then there's other work, such as maintaining homeostasis, that we don't really know how to compute yet.

[1] https://www.eurekalert.org/news-releases/468943


> We can estimate the amount of information processed.

I'm not sure this makes sense. Here is a simple dynamic programming problem from Project Euler: https://projecteuler.net/problem=67

You can estimate the amount of information being processed in a few different ways. But that's not really relevant; the whole point of solving this problem is that you can do the same job with less computation than it looks like you need.

There is no particular connection between "amount of information processed" and "amount of computation performed".


There is a connection. How can any computation be done without moving information around? In the absence of a better measure, we can roughly estimate the computational complexity of a black box by looking at its input and output.

Whether the brain's job could hypothetically be done by some optimized system using a picowatt is irrelevant. We don't have such a system.


> On the other hand, these cultures have essentially no advantage over digital computers and modern machine learning models.

Absolutely false. While they're indeed hard to keep alive, real neurons are far more sophisticated than AI researchers assume. Modern digital so-called neural networks are built on an outdated and oversimplified neuron model that is almost a century old by now.


Modern CMOS transistors are extremely sophisticated devices (you need a very complicated model with hundreds of parameters to simulate all kinds of quantum effects to predict its behavior). Yet all it does is one simple function - it's an on/off switch.


Evolution would not allow wasting energy and complexity on just one on/off switch. Inefficient things die out over millions of years. Neural tissue itself is far older than humanity, so it has had much more time to be perfected.


> Evolution would not allow wasting energy and complexity on just one on/off switch. Inefficient things die out over millions of years. Neural tissue itself is far older than humanity, so it has had much more time to be perfected.

Sorry, but that's not how evolution works at all. You are essentially postulating that evolution produces efficient outcomes given enough time, whereas there are many, many examples of evolution delivering results that are clearly sub-optimal. It's not a given that evolution will lead to an efficient solution; it's not even a given that it will lead to a solution at all.


That’s not true at all. If something is a waste but doesn’t meaningfully change your odds of survival, “evolution” won’t care.


Out in the wild, food and energy are often scarce. In that case, evolution does care about efficiency quite a lot. True, there are ecosystems with plenty of free food, but they are a rarity.


Evolution doesn't care about anything.


it is possible this is not quite true


No, that is not possible.


At least in plants, there is some evidence that the mutations produced and then acted on by natural selection are not fully random. It is a long-held assumption that this should not be possible, but there are interesting lines of evidence suggesting it may be. It would open up the possibility of a kind of (limited) underlying logic.


https://www.nature.com/articles/s41586-021-04269-6.pdf

This is an example: epigenetically driven evolution in Arabidopsis, which protects certain regions from mutation. In a very limited sense, evolution might be said to "care" about something here, as it is kind of taking direction from the environment, not simply acting on uniformly random mutations. Nothing like this is known in animals or most anywhere else afaik.


Peacocks, antlers, art.


It is arguable all three have evolved for the same reason.


"Sophisticated" is not a scientific word. Neurons are complex and complicated, and the voltage dynamics across their elaborate membranes take a lot of compute to simulate. But we don't really know what a neuron is doing or whether it is particularly sophisticated. Nature has found a lot of complex solutions to simple problems because it does not know better. We don't know how well it did with intelligence.


We already do know [1] that a single neuron has the same level of complexity as a multilayered digital "neural network".

[1] https://www.youtube.com/watch?v=hmtQPrH-gC4


There are different studies proposing a 2- or 3-layer network for representing the input-firing curve of neurons (usually hippocampal). Of course, neural networks are arbitrary approximators, so the size of the network determines the fidelity of the reproduction. But it's not clear what the firing does, or how much of the complexity in the firing code is redundant versus useful for building AI systems.


What is clear, however, is the power savings of implementing cultured neural networks vs. digital ones for a given network capacity.


Even that is not clear. A model like GPT-4 can read an entire book in seconds, and produce an intelligent answer about its content [1]. A human would need at least several hours to perform the same task.

[1] https://www.anthropic.com/index/100k-context-windows


> for a given network capacity

You'd be hard-pressed to find an expert who believes any of the current crop of LLMs have a similar capacity to human brains.


Unlike the capacity of human brains, the capacity of ML models has been growing very fast over the last 10 years. The number of tasks AI cannot do is shrinking fast.


Power consumption is what's relevant to this discussion thread. We're not talking about possible capacity but possible efficiency for a given capacity, as implemented in analog vs. digital circuits.


If a "given capacity" is the ability to read books and answer questions about the content, then LLMs are more power efficient (4kW for 20 seconds beats 20W for 3 hours). That's on standard power hungry GPUs (8xA100 server consumes about 4kW). If we switch to analog deep learning accelerators, we gain even better power efficiency. There's simply no chance for brains to compete with electronic devices - as long as we match brain's "capacity".


Ok so it's clear you are not familiar with the vocabulary of this domain. "Capacity" means something specific (if extremely hard to measure precisely) about the informational throughput of a system, not just "the ability to do high-level task X as judged by a naive human observer".

I urge you to study more seriously the things you seem so eager to speculate about before publishing underinformed opinions in public spaces.


Were you going to provide the definition? What is this “capacity” you are talking about?


You have my permission to use your favorite search tool to answer that question.


Tried googling for "ML model capacity" - only found informal handwaving. The closest to a formal definition is VC dimension: https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_di...

Is that what you mean when you say "capacity"? Does not seem very relevant in the context of our discussion. If it's not what you have in mind, I'd appreciate a link to the Wikipedia article on "ML model capacity" or whatever specific term experts use to represent the concept.


> The basic idea is that if you culture neurons on an electrode array (not that hard) you can pick some electrodes to be "inputs" and some to be "outputs" and then when you stimulate both ends the cells wire together more or less according to Hebb's rule[4] and can learn fairly complex nonlinear mappings.

This is fascinating; can you clarify it a bit? Do you 'stimulate', i.e. apply electrical potential to, both the inputs and outputs to represent each instance of training data, without any physical distinction between input and output at that stage? And then, if you apply the potential to only the inputs, can you read predictions off the outputs?


What I always wonder about with these systems is how feedback was delivered to the cultured neurons. How do we tell them they're doing things correctly? Or is this some form of unsupervised learning with them?


The original paper is available at https://www.cell.com/neuron/fulltext/S0896-6273(22)00806-6

They used a specific region of the electrode array to deliver the "reward" signal, which was a regular, predictable pulse pattern. An error was represented with unpredictable activity.
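
If it helps, here's a rough sketch of that feedback idea (purely illustrative; the electrode timings, sample counts, and pulse shapes below are made up, not the paper's actual parameters):

    # Sketch of the feedback scheme: a hit gets a regular, predictable pulse train,
    # a miss gets unstructured random stimulation. All parameters are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)

    def feedback_stimulus(hit: bool, n_samples: int = 1000, pulse_hz: float = 10.0):
        t = np.arange(n_samples) / n_samples
        if hit:
            # "Reward": periodic pulses the culture can predict.
            return (np.sin(2 * np.pi * pulse_hz * t) > 0.95).astype(float)
        # "Error": random, unpredictable bursts.
        return (rng.random(n_samples) > 0.95).astype(float)

    print(feedback_stimulus(True).sum(), feedback_stimulus(False).sum())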


Wait, wtf, sentience? “In vitro neurons learn and exhibit sentience when embodied in a simulated game-world”


The paper used unfortunate terminology throughout, even though there were negative comments about it at the preprint stage. It caused a number of reactions: https://pubmed.ncbi.nlm.nih.gov/36863319/


Do you have a writeup or video of the aim bot you made? Would love to see it!


We have lots of these cultures around for drug testing. I wonder if the “brain” playing pong affects the tests in any way.


Tangent: I was thinking it would be cool if you had a biomass that could connect to a PCIe slot and act as a graphics card. That would be some really impressive tech. Build circuits in the goo with floating particles.



