In vitro biological system of cultured brain cells has learned to play Pong (nature.com)
144 points by areoform on June 2, 2023 | 109 comments



Interestingly, this is not a new result; people have been doing stuff like this since at least the 90s, most notably Steve Potter at GA Tech and Tom DeMarse in Florida.[1][2] (I built a shitty counterstrike aimbot using a cultured neural network in college based on their papers.)

There was a lot of coverage back in 2004 when DeMarse hooked it up to a flight simulator and claimed it was flying an F-22 [3] (lol, but I don't blame him too much...)

The basic idea is that if you culture neurons on an electrode array (not that hard) you can pick some electrodes to be "inputs" and some to be "outputs" and then when you stimulate both ends the cells wire together more or less according to Hebb's rule[4] and can learn fairly complex nonlinear mappings.
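In toy form, that update rule looks something like this (a minimal sketch with made-up sizes and constants; real cultures are far messier):

    import numpy as np

    # Toy Hebbian model: treat "input" and "output" electrodes as binary
    # activity vectors; weights between co-active units are potentiated
    # ("fire together, wire together"), with decay to keep them bounded.
    rng = np.random.default_rng(0)
    n_in, n_out = 8, 4
    W = rng.normal(0, 0.1, (n_out, n_in))

    def hebbian_step(W, pre, post, lr=0.05, decay=0.01):
        return W + lr * np.outer(post, pre) - decay * W

    # "Stimulate both ends": repeatedly pair an input pattern with an
    # output pattern, as if driving both sets of electrodes at once.
    x = (rng.random(n_in) > 0.5).astype(float)
    y = (rng.random(n_out) > 0.5).astype(float)
    for _ in range(200):
        W = hebbian_step(W, x, y)

    # Now stimulating the inputs alone recalls the paired output pattern.
    print((W @ x > 0.5).astype(float), y)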

On the other hand, these cultures have essentially no advantage over digital computers and modern machine learning models. Once you get through the initial cool factor, you realize it's a pain to keep the culture perfectly sterile, fed, supplied with the right gases, among many other practical problems, for a model which is just much less powerful, introspectable, and debuggable than is possible on digital computers.

[1] https://bpb-us-w2.wpmucdn.com/sites.gatech.edu/dist/f/516/fi...

[2] https://potterlab.gatech.edu/labs/potter/animat/

[3] https://www.cnn.com/2004/TECH/11/02/brain.dish/

[4] https://en.wikipedia.org/wiki/Hebbian_theory


Once AI gets over the initial cool factor that humans are wet-tech, they'll realize it's a pain to keep the human culture perfectly sterile, fed, supplied with the right gases, among many other practical problems, only for a model which is just much less powerful, introspectable, and debuggable than is possible on digital computers.


That was beautiful.


I was with you up to “once you get over the cool factor.” It seems impossible to get over how cool it is to have a minibrain playing video games. Having one of those at home must really impress the girls.


Moreover, if there are girls not impressed by this, you will know, and have really dodged a bullet.


Wisdom


"played video games" is overstatement. There was a slight increase in the performance with the particular setup that they used. It was not as straightforward as it sounds. This kind of science is still in its infancy


“Meet my brother. He’s adopted”


All he does is lie around playing Pong.


[flagged]


What do you mean? You don't want the comment section on HN to be reduced to low effort, repetitive humor for the purpose of karma whoring?


I mean ... it does sound fun when you put it that way...


I'm in, I only post to lose rep, it's a race to zero...


One of the early CorticalLabs founders here. This is like dissing AlphaZero because "This is not a new result; computers have been playing chess since the 50s!". We are standing, as always, on the shoulders of giants. Steve Potter is one of our advisors.

We've improved on every axis 10x. We process over 1000 signal channels in real-time and respond with sub-millisecond latency from our simulated environment. We've recorded thousands of hours of play time from mouse and human neurons. We're investigating biological learning with top neuroscientists from around the globe. This is by far the most rigorous, extensive and technologically advanced work on in-vitro learning ever produced.

Our work goes well beyond Hebbian "fire together, wire together." We have follow-up papers in the pipeline that study internal non-linear dynamics and show how whole-network dynamics change during game play and learning. Being able to observe and measure cognition has huge applications for drug testing and discovery.

For background, frisco (the above commenter) helped start Neuralink. Consider this: our DishBrain is a completely reproducible, highly controlled test bed for brain-computer interfaces. This will massively accelerate neural interface development, all without sacrificing any chimpanzees.

> On the other hand, these cultures have essentially no advantage over digital computers and modern machine learning models

The brain is the single existing example of general intelligence. A human brain can do more computation than our largest supercomputers with 20 W of power (a million times more efficient). Trillions of interacting synaptic circuits, rewiring themselves at the molecular level. Biological learning is the only game in town, honed by eons of evolution. There are fundamental physical limits to hot slabs of silicon. Do you have a single credible proposal for building such a machine that isn't growing one?

> (I built a shitty counterstrike aimbot using a cultured neural network in college based on their papers.)

Nice humblebrag. I trained neural networks from my bedroom in high school in 2002. There is a long road between a cool university project and building a world-class neuroscience R&D company, you know that!

CorticalLabs is always open to collaborations. We're here to talk when you want to integrate some of our cutting-edge neuroscience technology with your work. Instead of grumbling about the '90s, let's look forward to what neuroscience looks like in the 2030s.


> The brain is the single existing example of general intelligence.

This is incorrect. It is not pedantic to point out that we have never interacted with a "brain" in isolation: the human brain is an organ of the human organism. The human being is the single existing example of general intelligence.

> let's look forward to what neuroscience looks like in the 2030s

This is very interesting science without question. Are there existing ethical and moral frameworks guiding the development of your field?


All I’m saying is that I think it will be challenging to produce a commercial product that achieves product-market fit for an application other than basic neuroscience research. It’s a cool tool but the practical drawbacks are myriad, and when you say “the brain is the single existing example of general intelligence,” that’s true of the whole thing, with glial ion buffering, ephaptic coupling, global oscillations, and so much more. We should be honest here: the system being studied in DishBrain is very far removed from that, so it’s tough to use the existence proof like you are doing.

I hope I don’t come across as uncivil, but you guys alienated a lot of people both in how you talked about “sentience” and also seemed to heavily hype this as totally novel.

I would never root against cool progress in neural engineering, but I would be curious as to what you think your first big product will be based on this. Past attempts have usually ended up pivoting to stuff like artificial noses.

Edit: I tried to ignore the bad-faith attack on Neuralink, which, look, I have complicated feelings about too. But you should know the animal-use data in the press is extremely out of context (to the point of simply being wrong), and also that Neuralink has had zero chimpanzees in its entire history.


> “the brain is the single existing example of general intelligence,” that’s true of the whole thing, with glial ion buffering, ephaptic coupling, global oscillations, and so much more. We should be honest here: the system being studied in DishBrain is very far removed from that, so it’s tough to use the existence proof like you are doing.

Our vision is incredibly ambitious. We can't build a whole brain yet, only small 2D fragments. We have a roadmap that goes all the way to a complete synthetic biological intelligence. The short- and medium-term milestones are concrete, achievable and valuable. The long-term goals are more speculative; we're clear about that. It's a path, a tightrope, but still a path.

> [...] but you guys alienated a lot of people both in how you talked about “sentience” and also seemed to heavily hype this as totally novel.

We clearly defined our terms, our paper was accepted via a long peer review process into a prestigious academic journal. We coauthored with multiple top neuroscientists from around the world. Our discussion section alone has more citations than most entire papers. If scientists are "alienated" by this, it's a grievance that we cannot remedy.

Our work was hyped, we hyped it, it deserves to be hyped. Can you cite an example in our own words where we claim our work is totally novel?

> I tried to ignore the bad-faith attack on Neuralink, which, look, I have complicated feelings about too. But you should know the animal-use data in the press is extremely out of context (to the point of simply being wrong), and also that Neuralink has had zero chimpanzees in its entire history.

Please accept my apologies; it was meant to be more collaborative. I really do think our system could be used to reduce the need for animal sacrifice, and that this is a good thing. I also believe you take the sacrifice of animals seriously.


Can you provide the missing context re Neuralink animal usage?


Unfortunately I can’t share specifics about Neuralink. But the general points I will make are:

- in this field, monkeys are high value animals and experimenters will often work with the same ones for many years; they are not, generally speaking, a high throughput model.

- to the extent a company does need to go through a large number of animals for a study, the way this works is you start by figuring out all of the problems you might be worried about, and choosing some rarity threshold to verify absence of (safety against), and then animal numbers are derived from the power calculation. For example, to rule out a potential complication to no more than 1% of patients with 95% confidence… you need a lot of animals, especially considering multiple study arms. This is the values tradeoff we as a society have chosen to make and empower our regulators to enforce. There is often a negotiation for the least controversial species to use that will satisfy the scientific goals.
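To give a feel for the arithmetic, here is the simplest zero-event case (a sketch; real power calculations also account for study arms, dropout, and more):

    import math

    # If zero complications are observed in n animals, the chance of that
    # outcome under a true complication rate p_max is (1 - p_max)^n. To
    # rule out rates above p_max at confidence conf, solve for n.
    def animals_needed(p_max=0.01, conf=0.95):
        return math.ceil(math.log(1 - conf) / math.log(1 - p_max))

    print(animals_needed(0.01))  # 299 animals for a 1% threshold
    print(animals_needed(0.05))  # 59 animals for a 5% threshold

And that is per endpoint and per study arm, which is how the numbers grow so quickly.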


> A human brain can do more computation than our largest supercomputers with 20 W of power

The power consumption of the human brain can be, and likely has been, measured quite accurately.

The same is not true of the "amount of computation" performed by the brain. How are you measuring that?


We can estimate the amount of information processed. Vision is around 10 Mbit/s [1]; with other senses it might be up to 100 Mbit/s. Just doing similar sensor fusion and feature extraction in real time on a computer requires more power. There's also symbolic processing; doing something similar also requires much more power on a computer. Then there is other work, such as maintaining homeostasis, that we don't really know how to compute yet.

[1] https://www.eurekalert.org/news-releases/468943
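Back of the envelope, taking the 20 W figure and that (loose) ~100 Mbit/s upper bound:

    # Rough energy-per-bit estimate under the assumptions above.
    brain_power_w = 20.0
    input_rate_bps = 100e6  # assumed aggregate sensory input rate
    print(brain_power_w / input_rate_bps)  # 2e-07 J, i.e. ~200 nJ per bit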


> We can estimate the amount of information processed.

I'm not sure this makes sense. Here is a simple dynamic programming problem from Project Euler: https://projecteuler.net/problem=67

You can estimate the amount of information being processed in a few different ways. But that's not really relevant; the whole point of solving this problem is that you can do the same job with less computation than it looks like you need.

There is no particular connection between "amount of information processed" and "amount of computation performed".
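For the curious, the standard trick for that problem is bottom-up dynamic programming (a sketch; Problem 67's actual input is a 100-row triangle):

    def max_path_sum(triangle):
        # Each cell absorbs the better of its two children, so the work
        # is linear in the number of cells rather than the 2^99 routes a
        # brute-force search would enumerate.
        best = list(triangle[-1])
        for row in reversed(triangle[:-1]):
            best = [v + max(best[i], best[i + 1]) for i, v in enumerate(row)]
        return best[0]

    # The 4-row example from the problem statement; the answer is 23.
    print(max_path_sum([[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]))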


There is a connection. How can any computation be done without moving information around? In the absence of a better measure, we can roughly estimate the computational complexity of a black box by looking at its inputs and outputs.

Whether the brain's job could hypothetically be done by some optimized system using a picowatt is irrelevant. We don't have such a system.


> On the other hand, these cultures have essentially no advantage over digital computers and modern machine learning models.

Absolutely false. While they're indeed hard to keep alive, real neurons are far more sophisticated than AI researchers think they are. Modern digital so-called "neural networks" are built on an outdated and oversimplified model of the neuron, almost a century old by now.


Modern CMOS transistors are extremely sophisticated devices (you need a very complicated model with hundreds of parameters, simulating all kinds of quantum effects, to predict their behavior). Yet all a transistor does is one simple function: it's an on/off switch.


Evolution would not allow a waste of energy and complexity on just one on/off switch. Inefficient things die out over the course of millions of years. Neural tissue itself is far older than humanity, so it has had much more time to be perfected.


> Evolution would not allow a waste of energy and complexity on just one on/off switch. Inefficient things die out over the course of millions of years. Neural tissue itself is far older than humanity, so it has had much more time to be perfected.

Sorry, but that's not how evolution works at all. You are essentially postulating that evolution produces efficient outcomes given enough time, whereas there are many, many examples of evolution delivering results that are clearly sub-optimal. It's not a given that evolution will lead to an efficient solution; it's not even a given that it will lead to a solution at all.


That’s not true at all. If something is a waste but doesn’t meaningfully change your odds of survival, “evolution” won’t care.


In the wild, food and energy are often scarce. In that case, evolution does care about efficiency a lot. True, there are ecosystems with plenty of free food, but they are a rarity.


Evolution doesn't care about anything.


It is possible this is not quite true.


No, that is not possible.


At least in plants there is some evidence that the mutations produced, and then acted on by natural selection, are not fully random. It is a long-held assumption that this should not be possible, but there are interesting lines of evidence suggesting it may be. It would open the chance of there being a kind of (limited) underlying logic.


https://www.nature.com/articles/s41586-021-04269-6.pdf

This is an example: epigenetically driven evolution in Arabidopsis, which protects certain regions from mutation. In a very limited sense, evolution might be said to "care" about something here, as it is in a way taking direction from the environment, not simply acting on uniformly random mutations. Nothing like this is known in animals or most anywhere else, AFAIK.


Peacocks, antlers, art.


It is arguable all three have evolved for the same reason.


"Sophisticated" is not a scientific word. Neurons are complex and complicated, and the voltage dynamics across their elaborate membranes take a lot of computing power to simulate. But we don't really know what a neuron is doing, or whether it is particularly sophisticated. Nature has found a lot of complex solutions to simple problems because it does not know better. We don't know how well it did with intelligence.


We already know[1] that a single neuron has the same level of complexity as a multilayered digital "neural network".

[1] https://www.youtube.com/watch?v=hmtQPrH-gC4


There are different studies proposing a 2- or 3-layer network to represent the input-firing curve of neurons (usually hippocampal). Of course, neural networks are arbitrary function approximators, so the size of the network determines the fidelity of the reproduction. But it's not clear what the firing does, or how much of the complexity in the firing code is redundant versus useful for making AI systems.


What is clear, however, is the evident power savings in implementing cultured neural networks vs. digital ones for a given network capacity.


Even that is not clear. A model like GPT-4 can read an entire book in seconds, and produce an intelligent answer about its content [1]. A human would need at least several hours to perform the same task.

[1] https://www.anthropic.com/index/100k-context-windows


> for a given network capacity

You'd be hard-pressed to find an expert who believes any of the current crop of LLMs have a similar capacity to human brains.


Unlike the capacity of human brains, the capacity of ML models has been growing very fast over the last 10 years. The number of tasks AI cannot do is shrinking fast.


Power consumption is what's relevant to this discussion thread here. We're not talking about possible capacity but possible efficiency for given capacity as implemented in analog vs digital circuits.


If a "given capacity" is the ability to read books and answer questions about the content, then LLMs are more power efficient (4kW for 20 seconds beats 20W for 3 hours). That's on standard power hungry GPUs (8xA100 server consumes about 4kW). If we switch to analog deep learning accelerators, we gain even better power efficiency. There's simply no chance for brains to compete with electronic devices - as long as we match brain's "capacity".


Ok so it's clear you are not familiar with the vocabulary of this domain. "Capacity" means something specific (if extremely hard to measure precisely) about the informational throughput of a system, not just "the ability to do high-level task X as judged by a naive human observer".

I urge you to study more seriously about the things you seem so eager to speculate about before publishing underinformed opinions in public spaces.


Were you going to provide the definition? What is this “capacity” you are talking about?


You have my permission to use your favorite search tool to answer that question.


Tried googling for "ML model capacity" - only found informal handwaving. The closest to a formal definition is VC dimension: https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_di...

Is that what you mean when you say "capacity"? Does not seem very relevant in the context of our discussion. If it's not what you have in mind, I'd appreciate a link to the Wikipedia article on "ML model capacity" or whatever specific term experts use to represent the concept.


> The basic idea is that if you culture neurons on an electrode array (not that hard) you can pick some electrodes to be "inputs" and some to be "outputs" and then when you stimulate both ends the cells wire together more or less according to Hebb's rule[4] and can learn fairly complex nonlinear mappings.

This is fascinating; can you clarify it a bit? Do you 'stimulate' (e.g. apply electrical potential to) both the inputs and outputs to represent each instance of training data, without any physical distinction between input and output at that stage? And then if you apply the potential only to the inputs, you can read predictions from the outputs?


What I always wonder about with these systems is how feedback was delivered to the cultured neurons. How do we tell them they're doing things correctly? Or is this some form of unsupervised learning with them?


The original paper is available: https://www.cell.com/neuron/fulltext/S0896-6273(22)00806-6

They used a specific region of the electrode array to deliver the "reward" signal, which was a regular, predictable pulse pattern. An error was represented with unpredictable activity.
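Schematically, the two feedback signals might be generated like this (a sketch with made-up parameters, not the authors' code):

    import numpy as np

    rng = np.random.default_rng()

    def reward_stimulus(n_pulses=10, freq_hz=100.0):
        # Predictable "reward": a regular burst; returns pulse onsets (s).
        return np.arange(n_pulses) / freq_hz

    def error_stimulus(duration_s=4.0, mean_rate_hz=5.0):
        # Unpredictable "error": Poisson-timed pulses over the window.
        n = rng.poisson(mean_rate_hz * duration_s)
        return np.sort(rng.uniform(0.0, duration_s, n))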


Wait, wtf, sentience? “In vitro neurons learn and exhibit sentience when embodied in a simulated game-world”


The paper used unfortunate terminology throughout, even though there were negative comments about it on the preprint. It caused a number of reactions: https://pubmed.ncbi.nlm.nih.gov/36863319/


Do you have a writeup or video of the aimbot you made? Would love to see it!


We have lots of these cultures around for drug testing. I wonder if the “brain” playing pong affects the tests in any way.


Tangent: I was thinking it would be cool if you had a biomass that could connect to a PCIe slot and act as a graphics card. That would be some really impressive tech. Build circuits in the goo with floating particles.


Here's a short video, included in the full paper, of the system playing pong (plus a visualization of measured neural activity):

https://www.cell.com/cms/10.1016/j.neuron.2022.09.001/attach...


Does it play pong well? Random noise can "play" pong, in that it can be translated into commands to erratically move the player's paddle and occasionally hit the ball by chance. Can anyone with access to the full paper report on how impressively the DishBrain plays? I'm not necessarily expecting human-level skill, but at least something to indicate that it's responding appropriately to the game's state and working towards winning, rather than just wobbling the paddle around randomly.


The paper is open access https://www.cell.com/neuron/fulltext/S0896-6273(22)00806-6

It's not "playing well". The experiment is set up so as to reinforce (through LTP) its dynamics when it performs correct rather than wrong. This ends up "wobbling the paddle around" with a statistically significant better performance than untrained . This is not a brain that knows what it's doing, but a few thousand neurons apparently able to undergo plasticity


> a few thousand neurons

Does that come from somewhere in the paper? I'm seeing "10^6 cells plated onto each MEA" and trying to determine if that's the size of each "DishBrain" or if the number is pared down before actual tests are carried out.


I see in the paper that they used 800K and 1M cultured neurons. The device in question, however (the MaxOne), has an area of about 8 square mm. I'm not sure all the cells they plated can actually be recorded/stimulated, as the device is rather high resolution, with 26,400 densely packed electrodes that can record from multiple spots on a single neuron. Typical recordings from these devices involve far fewer neurons in culture.


How does it reinforce and produce LTP?


It isn't clear what makes their reward signal predictable, but 10 pulses at 100 Hz looks like an LTP-inducing stimulus.


Really philosophically interesting that a consistent, rhythmic signal induces reward. Music is pleasurable even at a neural level?


It's a mechanistic thing though: the stimulation is strong enough to depolarize the postsynaptic cell and cause plasticity.


It learns over time and shows statistically significant improvement. It’s still pretty terrible, but to answer your question: yes it’s not random.


This is amazing. Our brain sizes will no longer be limited to the size of our skulls. We can have a giant pulsating brain organoid the size of a meat locker, with oxygenated hemosynth, glucose, antibiotic, and cooling tubes going in and out of the room. The room is non-ferrous: it's actually housed inside a MegaTesla fMRI that monitors the organoid, and this is in fact how the output is read from it. Input to the organoid is visual; the organoid's surface is spotted with light-sensitive patches of proto-retinal tissue in varying degrees of organization and development.


If you're interested in exploring direct neuron + computer integration, Justin Atkin from The Thought Emporium is attempting to grow neurons on an electrode array. Here are two episodes that cover his work:

Growing Human Neurons Connected to a Computer - https://www.youtube.com/watch?v=V2YDApNRK3g

Connecting Neurons to a Computer: The New Plan - https://www.youtube.com/watch?v=p1C0qpqpAWc


Are these human brain cells? And how many do you have to grow before it becomes an ethics issue, i.e. until it grows into a proper brain that has subjective experience?


> Are these human brain cells?

A goal of the paper is to produce a system that can be used by researchers to experiment on neurons from different sources. It mentions its use of a few different cell lines, from mice and humans both, so at least some DishBrain runs used human cells. The human cells were from cell line ATCC PCS-201-010, which are stem cells derived from the dermal cells of a newborn baby's foreskin. The authors coaxed those stem cells to differentiate into neurons.

> how many do you have to grow before it becomes an ethics issue

The paper says they used methods to "achieve a dorsal forebrain patterning" and plated 10^6 cells on each electrode array used for testing (about 5x the brain cells of a mosquito, or a tenth of the cells in a mouse brain). Note that I have no idea if all million of those cells survived until the experiments were carried out, or if they were all actually used in each "brain" (maybe they were segmented out?)

I think the ethical issues are going to depend largely on how much the structure of the neurons affects awareness. A pile of 10 million random mouse neurons on a plate probably doesn't have anywhere near the same value of experience that a 10 million neuron actual mouse brain has. But I'd say if you're okay with experimenting on mice, 1 million neurons in a petri dish is very ethically safe in comparison. Agreed that we're starting to get to the point where it's more questionable though.


> A pile of 10 million random mouse neurons on a plate probably doesn't have anywhere near the same value of experience that a 10 million neuron actual mouse brain has.

In particular, I question the significance of neural activity in the absence of sensory organs.

One might argue that at some critical mass / neuron count, presumably well above mouse-level, there is the potential of sentience. OK, but what is sentience in the absence of sensation?

I don't know the answer to that question! But I'm fairly confident I don't place its well-being high on the ethical totem pole.


Right, there seems to be distance between "Watson plays Jeopardy!" and "neural net plays [virtual simulated] Pong", since a large part of Pong involves Mom's favorite term, "hand/eye coordination", where your eyes are sensing and interpreting light, and your hands/arms are occupied with gripping and twisting the paddle controller.

Please just wire up some GPS and LIDAR to humans and make us better drivers.


The whole field of biological intelligence needs a ChatGPT moment. There's at least four or five different companies running around trying to do this with biological neurons, but unfortunately pong just isn't a spectacular demo.


It's fascinating how brain cells outside of their host can do simple tricks like playing pong. Perhaps these scientists need a new marketing department, because "DishBrain" isn't quite appetizing.


DishBrain's elevator pitch: Neural networks meet fava beans and a nice Chianti.


Maybe they could rebrand to "Brainlent Green".


Yeah, this is ethically-sourced meat, right? You could have a whole pipeline that trains neural nets to farm gold, and then slaughter it for a steady supply of cabeza to every taco truck in these United States.

https://xkcd.com/2173/


I would dare to say that's not even an outrageous proposal, more of a modest one.


This month I shall begin to identify as prepackaged cultured meat with a fully-functional neural net.

(It's OK, I'm an expert at Pong.)


That's liver.

When it comes to brains, canonically Hannibal is fond of eating them straight out of the skull, with the owner still alive and conscious.


Maybe they can do more than play pong. Is "the host" something more than machinery, a vehicle, for a big clump of brain cells? Is there a soul? Or is it all an illusion?

I've tried cow brain twice and it was delicious.


One advantage of this is that you could theoretically build a few cubic meters of brain tissue that would potentially be far smarter than a human.


In Peter Watts' Rifters trilogy (a series of sci-fi novels) there are large cultured masses of neurons used as a form of AI and nicknamed "head cheese."


The beginning of Krang.

I really hope there are tiny turtles learning martial arts right this time.


We should stop doing things like this.


Stop experimenting on in vitro biological systems of cultured brain cells? Why?


Yes, what possible moral problems could there be from using the one substance in the universe that we know produces sentience to build machines to play Pong?


A lawsuit by Nolan Bushnell, with the settlement to be paid in quarters.


Because eventually you have Blade Runner.


Why?


said every luddite for the last 10 thousand years.


How many brain cells do you string together before the DishBrain gets some semblance of sentience? How many before it becomes conscious?

We don’t know, of course. Perhaps consciousness isn’t possible without a live body. Or perhaps it takes just a few hundred thousand neurons.

In the meantime, you run the risk of torturing a potentially conscious entity for…what gain exactly?


It’s just repeating statistical patterns in its input-output mapping.


I don't think the Luddite philosophy applies in this case; I think OP may have been speaking about the morality or ethics of growing a brain in a dish. Luddism is more about the capitalist class stealing worker productivity gains for itself, and it's a pretty valid philosophy for today, when the capitalists are trying to replace thinking humans wholesale with computer AI.


TIL what a luddite is: apparently they started in 1811 and were named after Ned Ludd, a textile worker in England. So what was the characteristic called in 1810?


Maybe the concept is not much older than that. Luddite implies resistance to technological change and the disruptive consequences of those changes. Before the early modern era, you might go your whole life never encountering/seeing/being replaced by a disruptive new technology. If you've never experienced repeated new changes of that sort you could hardly develop a dislike for them.


Really good point!


According to the commenter you're replying to, they're actually 10,000 years old and complain about scientific research, rather than about being pushed out of their textile jobs by machinery.


Excuse me while I roll my eyes directly into the back of my head. Would you prefer I used the correct Sanskrit term?


Properly, they complained about who had control of, and would get the benefits from, the new tech.

In earlier times, the ownership came out differently.


That's what she said :)


Wish I could read this whole paper. DishBrain sounds absolutely horrifying.


Do they use gradient descent? Otherwise how do they train?


From the paper, it looks like they apply a long random stimulus when it misses the ball, and a short predictable stimulus when it doesn't. The theory is that the neurons will try to organize in a way that avoids the random stimulus.
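In spirit, the closed loop looks something like this (a runnable sketch; FakePong and FakeCulture are stand-ins for the game and the MEA interface, not a real driver):

    import random

    class FakeCulture:
        # Stand-in for the MEA: real I/O means decoding spikes from the
        # output electrodes and delivering electrical stimulation.
        def read_paddle_move(self):
            return random.choice([-1, 0, 1])
        def stimulate(self, **pattern):
            pass

    class FakePong:
        def step(self, move):
            return random.random() < 0.3  # did the paddle hit the ball?

    def play_episode(game, culture, steps=1000):
        hits = 0
        for _ in range(steps):
            move = culture.read_paddle_move()
            if game.step(move):   # hit: short, predictable stimulus
                culture.stimulate(kind="regular", n_pulses=10, freq_hz=100)
                hits += 1
            else:                 # miss: long, unpredictable stimulus
                culture.stimulate(kind="random", duration_s=4.0)
        return hits

    print(play_episode(FakePong(), FakeCulture()))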


Cool, TL;DR appreciated.


I’m surprised no one has touted power consumption as a huge benefit of using these neural networks.

GPU supercomputers no longer necessary. Just a BrainStation or two.


Can it play Doom?


We need this as an AWS service; maybe call it “Real Brain”. You train it, except you cannot untrain it.


But you can click "Delete".



