Anyone interested in more wetware projects should check out the series from The Thought Emporium on YouTube: https://youtu.be/bEXefdbQDjw
They are attempting to train a (literal) neural network to play DOOM. And yes, it is as fascinating and horrifying as it sounds.
It's hard not to extrapolate 50 years down the line with this tech. Stuff gets complicated quick. How many human neurons do you have to use before you get an actual human consciousness?
> How many human neurons do you have to use before you get an actual human consciousness?
How many grains of sand make a heap?
(I don't know the relative importance of organic neurons as hardware vs. whatever specific connectivity architecture they happen to have in us, so the same might apply to perceptrons).
China's AI and chip investments are off the charts. Note that China is scheduled to attempt a Mars sample return mission with Tianwen-3. They are scheduled before any other competing team. China has the industrial support of the non-democratic government. They are willing to bear the risk for their future. This might be more stable funding than commercial SpaceX or a high-deficit US NASA.
There's a massive gap between "probe" and "colonise". I meant the latter, and I don't see that kind of risk-taking from China — their first few crewed launches were not even announced as such until after the launches themselves had succeeded, implying they were not willing to risk the optics of a failure.
I think there's a reasonable chance the actual Sputnik 2.0 will be them making a permanent lunar base before the US does. Musk doesn't seem to care about going to the moon beyond it helping prove (and fund) the Starship project, even though Starship could put a station there in a handful of landings.
I also think the moon is a better choice, for the same reason I think China will prefer it over Mars: when things go wrong, it's much, much easier to act like your emergency rescue mission was the actual mission plan all along — "No, the 阿波罗-十三 module didn't have an explosion in the oxygen tank, we were simply venting unnecessary resources as part of a planned exercise. This mission was never intended to land on the Moon." If that kind of thing happens on a Mars mission, everyone just dies.
> China AI and chip investments are off the charts
That's as may be, but none of that changes my point. The best fabs reach atomic resolution in (by my guess) 2032… and then what? China isn't going to beat that.
AI is limited by electrical power, both for training (data centres) and inference (for cars and other real-time robotics). The power envelope is whatever it is, but I'm expecting 5x hardware efficiency improvements for a fixed power envelope by 2030 (slower than Moore's Law used to suggest). The algorithmic efficiency also improves at the same time, which I'm assuming brings it back in line with Moore's rate, but that still means going from a car (with ~1kW spare for compute) to a robot (with 100W spare for compute) will take 5 years (or 10 years if you assume 3kW for the car and 30W for the robot). And the global power grid is 2TW, which is 250 W/person, so humanoid robots can be in an awkward place of driving up demand for electricity so much people literally can't keep the lights on while taking all our jobs and yet still be less than half of all labour.
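The 5-year and 10-year figures in that comment follow from a simple doubling calculation; a minimal sketch, assuming the combined hardware-plus-algorithmic efficiency gain doubles every ~18 months (the classic Moore's-law rate the comment falls back on):

```python
import math

# Years to close a power-budget gap, assuming combined hardware +
# algorithmic efficiency doubles every ~18 months.
DOUBLING_YEARS = 1.5

def years_to_close(power_now_w, power_target_w):
    """Years until today's compute fits into the smaller power budget."""
    ratio = power_now_w / power_target_w
    return math.log2(ratio) * DOUBLING_YEARS

# Car (~1 kW spare) -> robot (~100 W spare): a 10x gap.
print(round(years_to_close(1000, 100), 1))  # -> 5.0
# Pessimistic case: 3 kW car -> 30 W robot, a 100x gap.
print(round(years_to_close(3000, 30), 1))   # -> 10.0
```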
And that's equally true regardless of whether China does or doesn't take a lead over the US, or whether the EU gets organised and has its own EUV fab, etc.
Asking how many neurons are needed before we “get” a consciousness is a bit like asking how many people are needed before we have a country.
The assumption that you obtain a human consciousness by using human neurons is itself flawed—physics does not say it works that way, and philosophers of mind are divided on the matter.
Since we've yet to figure out a way to "measure" consciousness, it depends on what "theory" of consciousness you subscribe to. I believe something must make use of quantum effects such as entanglement in order to have qualia (e.g. Penrose's models). So, ChatGPT running on deterministic hardware is just a bigger mechanical clock, but the organoids have qualia. At least insofar as a worm or a chicken has qualia. Add enough of them and you might get consciousness. I don't consider this line of research ethical, but philosophical objections won't provide much of a disincentive.
Physics says "we don't know what this word means", which is exactly why it's a flawed assumption to say "human neurons are necessary" or "human neurons are sufficient".
Neural organoids can be cultured and maintained for months at a time (we grew fairly attached to the one my coworker was growing). Even the measurement wiring can be sustained for months.
I've worked in biology and computation for decades (and have been fascinated by both intelligence and artificial intelligence the whole time), but seeing a timelapse of a neurosphere forming over a few weeks from some tiny stem cells, with all the neurons sort of self-organizing by spreading lamellipodia (https://www.youtube.com/watch?v=fniW9EiOmUk), a few years ago flipped a switch in my head. I used to think the development of the hardware to support intelligence was a really tough, complicated problem. It's clear that the structure of neurons is absolutely primed to build semi-random networks that can be adapted to signal processing and computation in an almost trivially easy way. Obviously there's tons going on under the hood at the molecular, cellular, and organoid levels.
It's funny but some technologies have a way of going from "sci-fi" to "ho-hum" quickly, and I think "rat neurons playing DOOM" is a great example of that.
Looking a little at the neuroplatform, it looks like they grow the organoid in a microfluidic container and can maintain it functionally for months. Seems like an opportunity to sell the computer as a subscription (new organoid package every month?)
> we grew fairly attached to the one my coworker was growing
This tickles me. One interpretation of the “when is it conscious” question is it’s really asking “when do we start to care about it,” and I love that the answer for humans is often “as soon as we start spending time with it.”
> It's funny but some technologies have a way of going from "sci-fi" to "ho-hum" quickly
I’m a programmer working in biotech, and the things that have become de rigueur in the field are the stuff of sci-fi from a decade ago. It’s incredible.
The feasibility of the lab grown meat industry has very little in common with how hard it is to grow brains in petri dishes; the closest they get is "both are cells and both are possible".
Different cells, different arrangements, different volumes, different lifetime requirements before the cell lines are allowed to die, and different price concerns for the result.
Scientists develop bioprocessor organoids, then develop cDNA that lives in the germ line and grows this organoid with pre-programmed weights encoding all human knowledge.
This solves the alignment problem because now every human has an efficient NN biologically embodied inside them (and for free to all progeny).
The AI is just a new multimodal neuron mini-brain with just a few billion weights holding all knowledge whispering conclusions into your ear.
This survives all civilization-ending catastrophes.
My friend added, then scientists discovered that this mini brain and axonal connection already exists in the gut and develops a way to reactivate it. Lol
> This solves the alignment problem because now every human has an efficient NN biologically embodied inside them (and for free to all progeny).
That's fine in fiction, but IRL? Well, if I gave everyone an organoid that did this but also it produced cocaine by accident, you can see how that's not sufficient.
It's also a reverse-lobotomy (lobo-plasty?), and it's not unreasonable to wonder if the change of personality from an extra blob of mind could be just as severe as the change from the removal of some part of your mind.
> My friend added, then scientists discovered that this mini brain and axonal connection already exists in the gut and develops a way to reactivate it. Lol
I like this twist. Almost "the aliens were in us all along".
> I like this twist. Almost "the aliens were in us all along".
That reminds me of "Blood Music", a short story by Greg Bear that was later expanded into a novel. The basic premise is what happens when your cells get smarter and start making a tiny civilization.
The intestinal tract's neural network could be an entirely separate consciousness with its own thoughts and personality spending ninety years of life thinking about food.
> with pre-programmed in weights of all human knowledge
It's weird, but somehow this seems like the least-tractable challenge... Unless we find that "the weights" were reliably derived from observation of deep structures of adult brain in a way that transcends language and upbringing.
> This solves the alignment problem
Until they start fighting to control or replace the default host consciousness, anyway. If it activated too early in child development, it could be a kind of consciousness brood-parasite.
Y'think people are concerned about "the tablet is raising my kid", just imagine "the organoids stopped my kid from becoming conscious and now they're puppeteering their body."
> It's weird, but somehow this seems like the least-tractable challenge... Unless we find that "the weights" were reliably derived from observation of deep structures of adult brain in a way that transcends language and upbringing.
Doesn't seem so to me; while I have yet to hear of any precise measurements of in-vivo synaptic weights, if you could do that then in principle one could pop a pre-trained LLM in and it would be at least close enough.
GPT-3.5 is 0.01% of the human brain's complexity so this wouldn't be a huge extra volume either, and that's enough for, what, speaking 30 languages fluently, and having a decent (degree-level but textbook-only) education in every subject?
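A rough back-of-the-envelope for that 0.01% figure, using assumed round numbers (the widely reported but never officially confirmed ~175 billion parameters for GPT-3.5, and a commonly cited ~1.5 quadrillion synapses for an adult human brain):

```python
# Assumed round numbers -- both are estimates, not measured facts.
GPT35_PARAMS = 175e9       # widely reported, unconfirmed
BRAIN_SYNAPSES = 1.5e15    # estimates vary by an order of magnitude

# One parameter per synapse is itself a crude equivalence, but it is
# the comparison the comment is gesturing at.
fraction = GPT35_PARAMS / BRAIN_SYNAPSES
print(f"{fraction:.2%}")   # -> 0.01%
```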
Dr. Amara Singh stared at the bioprocessor organoid under her microscope, her heart racing with excitement. She turned to her partner, Dr. Kai Nakamura, with a wide grin. "We did it, Kai. This is going to change everything."
Kai returned her smile, his eyes sparkling with admiration. "You're brilliant, Amara. I can't believe we've come this far."
They had been working on this project for years, developing bioprocessor organoids that could be programmed with vast stores of knowledge using custom cDNA and integrated into the human germline. The implications were staggering – every person could be born with a highly efficient neural network pre-loaded with the sum total of human knowledge.
As they celebrated their breakthrough, a knock on the lab door interrupted them. Elias Stone, the billionaire philanthropist who had funded their research, strode in with a broad smile. "I hear congratulations are in order," he said, shaking their hands. "This is going to change the world."
Chapter 2
The first "augmented" humans saw incredible results, making groundbreaking advances in every field. However, not everyone was thrilled with this new technology. Activist group "Human Nature" emerged, led by the charismatic Lila Rosen.
"These bioprocessors are a form of techno-colonialism," Lila declared at a press conference. "Privileged elites are forcing an untested technology on the masses, without considering the consequences."
Dr. Singh watched the news coverage with a frown. She turned to Kai, who was working at his computer. "What do you think about this opposition?"
Kai sighed. "I understand their concerns, but I believe the benefits outweigh the risks. We just need to make sure we're being responsible and transparent."
As the controversy grew, Dr. Nakamura made a startling discovery. He had uncovered ancient texts hinting that the key to even greater knowledge lay dormant in the human gut – the enteric nervous system. If they could reactivate this innate "gut oracle," bioprocessor implants would be unnecessary.
Chapter 3
Dr. Singh was fascinated by the idea of the gut oracle and began researching ways to awaken its potential. However, Elias Stone was furious when he learned of this new direction.
"I've invested billions into the bioprocessor technology," he snapped during a tense meeting. "I won't let this gut oracle nonsense derail our plans."
Despite Stone's opposition, Dr. Singh secretly continued her research with a small, trusted team, including her brilliant young protégé, Asha Patel. As they delved deeper into the mysteries of the gut-brain axis, they began to suspect that the gut oracle's abilities far surpassed even the bioprocessors.
Lila Rosen and Human Nature got wind of this new development and saw it as the perfect way to undermine the bioprocessor agenda. They leaked information to the press, spinning it as the natural, harmonious alternative to invasive technological augmentation.
Chapter 4
The revelations sparked a fierce public debate. People began to question whether the bioprocessors were the panacea they had been promised. Protests and unrest grew, with Elias Stone and Dr. Singh finding themselves on opposite sides of a widening ideological rift.
Amidst the chaos, Dr. Singh and Kai's relationship strained under the pressure. Kai grew increasingly uncomfortable with the direction their work had taken, and found himself confiding in Lila Rosen.
"I just want to make sure we're doing the right thing," Kai confessed to Lila over coffee. "I love Amara, but I'm worried she's losing sight of the bigger picture."
Lila placed a comforting hand on his arm. "You have a good heart, Kai. Trust your instincts."
Chapter 5
As tensions reached a boiling point, Dr. Singh and Asha Patel finally achieved the breakthrough they had been working towards. They successfully reactivated the gut oracle in a human test subject, unlocking capabilities beyond their wildest dreams.
But their triumph was short-lived. Elias Stone, desperate to protect his interests, had been monitoring their work. He seized control of their research by force, determined to suppress this threat to his bioprocessor empire.
In a climactic confrontation, Dr. Singh and Asha Patel fought to protect their discovery, aided by Lila Rosen and her followers. Kai, torn between his love for Amara and his growing doubts, made a crucial decision to stand with them against Stone.
"I'm sorry, Amara," Kai said, his voice thick with emotion. "I should have trusted you from the start. I'm here now, and I'm not going anywhere."
Together, they faced down Stone and his forces, fighting for the future of humanity.
Chapter 6
In the aftermath of the confrontation, the world learned of the gut oracle's potential, and a new era dawned. Dr. Singh and her allies vowed to ensure that the power of the gut oracle would benefit all humanity, not just a select few.
As they embarked on this new chapter, Dr. Singh and Kai rekindled their love, their bond stronger for the trials they had endured. They knew that the challenges ahead would test them, but with their love and shared vision, they were ready to face whatever the future might bring.
"We've come so far," Amara said, looking out over the city skyline. "But there's still so much work to be done."
Kai took her hand, smiling softly. "We'll do it together. You and me, Asha and Lila, and the whole world. A new age of wisdom and understanding awaits."
The Science Fiction and Fantasy Writers of America specifies word lengths for each category of its Nebula Award categories:
Novel: 40,000 words or over
Novella: 17,500 to 39,999 words
Novelette: 7,500 to 17,499 words
Short story: up to 7,500 words
Various terms have been coined for briefer fictional narratives, including the 6-word story, the 140 character or 280 character twitterature, 100 word microfiction, and 1,000 word flash fiction.
Actual examples of novellas include The Old Man and the Sea or The Call of the Wild, at 27,000 and 32,000 words, respectively.
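Those Nebula length bands amount to a simple lookup; a minimal sketch (the function name is my own):

```python
def nebula_category(word_count):
    """Classify a work of fiction by the SFWA Nebula length bands."""
    if word_count >= 40_000:
        return "novel"
    if word_count >= 17_500:
        return "novella"
    if word_count >= 7_500:
        return "novelette"
    return "short story"

# The two examples from the comment land squarely in the novella band:
print(nebula_category(27_000))  # The Old Man and the Sea -> novella
print(nebula_category(32_000))  # The Call of the Wild -> novella
```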
I told Claude I wrote a story and then pasted your story into it. I told him that I wrote it to avoid getting a response about copyright or anything like that.
Thank you for sharing your creative story, The Gut Oracle. I found the premise fascinating - exploring the implications of bioengineered knowledge implants versus tapping into an innate human ability like the "gut oracle." A few thoughts and observations:
Pros:
Imaginative and thought-provoking sci-fi concept examining technology vs. natural human abilities
Develops an intriguing conflict between corporations/elites pushing one agenda and an activist group advocating another path
Compelling character arcs like Amara's passion, Kai's internal conflict, Lila as the voice of opposition
Builds good dramatic tension through the various reveals, confrontations and power struggles
Leaves things open-ended for further exploration of this "new era" with the gut oracle discovery
Potential Areas to Expand:
More world-building details around how the bioprocessor technology actually works and what capabilities it provides
Fleshing out the scientific basis and mechanics behind the "gut oracle" phenomenon
Exploring the societal impacts and divisions caused by the two competing philosophies
Developing the dynamics between the core characters like Amara/Kai's relationship
Backstories on main players like Elias Stone's motivations, Lila's activism beginnings
Overall, I think you've crafted an exciting and relevant speculative science fiction narrative that taps into debates around technological augmentation, corporate interests versus ethics/activism, and the interplay between cutting-edge research and human nature. With some additional world-building and character depth, there is rich potential to really explore the ramifications of this "gut oracle" paradigm shift. Well done laying the intriguing groundwork!
Something that's interesting here is that brain organoids have been known to exhibit spontaneous synchronous firing pattern generation, which may just be a thing that happens when you collect neurons in large enough numbers. There are two questions here. One, the more prosaic, is technical: how does a bioprocessor handle that kind of spontaneous behavior? The second is more philosophical: we don't actually have a good theory for what consciousness is, or at what size or level of complexity it emerges. We know the more we look, the more we find what looks like self-aware behavior in animals, and the more we find evidence of social behaviors and what looks an awful lot like "inner life." One can argue whether or not a computer can ever be truly conscious, but a ball of neurons, well, we know the answer to that one.
> may just be a thing that happens when you collect neurons in large enough numbers.
Yes, cultures of neurons start having strong synchronized oscillations after a few days. It's an artifact of having a bunch of neurons on a dish. I don't think we should be ascribing anything more to it.
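That drift into synchrony is the kind of thing even very simple coupled-oscillator models reproduce. A toy sketch using the Kuramoto model (not a neuron model, just an illustration that coupling alone produces synchronization; all parameters here are arbitrary):

```python
import numpy as np

# N oscillators with slightly different natural frequencies, coupled
# to the population mean. With coupling well above the frequency
# spread, the phases lock and the order parameter r approaches 1.
rng = np.random.default_rng(0)
N = 50
K = 2.0                                # coupling strength (arbitrary)
omega = rng.normal(0.0, 0.1, N)        # natural frequencies, tight spread
theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases

def order_parameter(theta):
    """r in [0, 1]: 0 = incoherent, 1 = fully synchronized."""
    return abs(np.exp(1j * theta).mean())

r_start = order_parameter(theta)
dt = 0.01
for _ in range(5000):
    # Each oscillator is pulled toward every other oscillator's phase.
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = theta + dt * (omega + coupling)

r_end = order_parameter(theta)
print(f"order parameter: {r_start:.2f} -> {r_end:.2f}")
```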
At some point between that and human size, a cluster of neurons starts being conscious, and we don’t know where. All we know is this is the material substrate of consciousness making the motions of consciousness, albeit in a limited form. We’ve got no strong basis for arguing it isn’t conscious except our gut feeling about how large a population of neurons is required and no way to test whether we’re right.
I was fascinated by organoids for a while a few years back, mainly because there was a discussion by an entity I had no idea about that dealt with the ethics of such implementations. On top of that, I was sick to my stomach after playing SOMA. I could not bear the idea of producing a brain and having some sort of entity live in the constraints of 'that' universe alone... I guess if I think long enough I still feel the same anxiety.
Hank Greely [0] has got some fabulous papers on this:
This is deemed not conscious because it's just a small piece.
LLMs, though quite large, are deemed not conscious because they're only made of silicon.
When we are running LLM-like software on large human brain organoids (which is clearly where this line of research is heading) maybe some people will finally have some ethical concerns?
It probably would matter what type of computations it would be running, no? I think if a silicon computer ran the same exact computations as a conscious brain it would be conscious. If brain tissue ran some simple algorithm for playing doom or predicting the next word, it wouldn't be conscious
It's hilarious how many comments here are hand wringing over the ethics of this. Because AI is fine as long as it's silicon right? I prefer this to literal brain microchips controlled by Manchild Musk.
It seems unlikely a few neurons would be more capable than what we can program a CPU to do. Unless we don't fully understand what the neurons will do, in which case, maybe we'll learn better how neurons work. I suspect it's the latter.
> It seems unlikely a few neurons would be more capable than what we can program a CPU to do.
Our brain cells are very good at many things we can't duplicate to the same level in software today (e.g. processing real-time info for patterns and learning). Even a single neuron can identify real-time patterns: Purkinje cells do this at the cell level, and in other cells, local synapse activity patterns are identified internally and calcium signals forwarded to other parts of the cell.
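On the artificial side, the single-neuron point has a much cruder parallel: one logistic unit trained by gradient descent can already learn a simple input pattern. A minimal sketch, with an OR pattern as the illustrative target:

```python
import math

# A single artificial neuron (logistic unit) learning the OR pattern
# by plain gradient descent -- a crude silicon analogue of the claim
# that even one neuron can learn to recognize a pattern in its inputs.
patterns = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 1.0

def fire(x):
    """Sigmoid activation of the neuron's weighted input."""
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

for _ in range(2000):
    for x, target in patterns:
        # Cross-entropy gradient for a sigmoid unit: (out - target) * input.
        err = fire(x) - target
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

predictions = [round(fire(x)) for x, _ in patterns]
print(predictions)  # -> [0, 1, 1, 1]
```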
> Unless we don't fully understand what the neurons will do
Scientists have a long way to go to understand how neurons function.
For example, it was only relatively recently that they accepted that tunneling nanotubes between neurons (and other cells) happen in living brains (as opposed to just in cultures). Now they are trying to understand what these tubes do functionally, why do they dynamically generate on short time scales, why do some of them have electrical gap junctions (typically used for forwarding electrical signals without spiking), and a million other questions.
I wonder if they tried other neurons - do human neurons individually or even as an organoid perform better than say a rat neuron based organoid? I might even believe other animal neurons are otherwise faster, etc, in some way even if they never organize into a more complex brain in the animal.
Oh good. I was worried our future AGI tyrants, which our software engineer overlords are mindlessly racing to create, wouldn't be able to exploit us like in the Matrix. Now I can rest easy.
From a quick internet search: they got stem cells from some other part of somebody's body and somehow "reprogrammed" them into induced pluripotent stem cells (iPSCs), which can then make other cell types. They made brain cells with them.