
With a few minutes of Googling, the only reference I can find says that tardigrades have approximately 200 neurons in their brains [1] and their dry weight is on the order of one microgram [2]. They also have a lower count of synaptic connections (from the same paper):

>Thus, miniaturisation of the tardigrade nervous system is achieved largely by reducing cell numbers, neuropil branches and the number of synaptic connections, while, at the same time, increasing functional specificity. Nonetheless, given the proximity of synaptic neighbours, these adaptations are unlikely to compromise functionality to any great degree.

Now go to YouTube and search for videos of tardigrades operating in their environment. Here are a few examples:

https://youtu.be/bVDtzHM2-QQ

https://youtu.be/kux1j1ccsgg

https://youtu.be/slYQTMUz62s

I'm blown away by the complexity of their interaction with the environment and the stark simplicity of their neurological wiring relative to anything else that I can think of. For me it also raises the question of whether they have any subjective experience approximating what we describe as consciousness, or if the entirety of their life is just a long form electro-chemical reaction of deeply nested feedback loops programmed into ~55 million base pairs.

Either way it’s simultaneously amazing and humbling.

1 - https://bib-pubdb1.desy.de/record/401705/files/Martin_et_al....

2 - https://benthamopen.com/contents/pdf/TOZJ/TOZJ-3-1.pdf



The amoeba has zero neurons, and yet it can hunt prey and can reproduce sexually.

Our understanding of intelligence is so laughably primitive that it is "not even wrong".


Makes me think of the classic video of a white blood cell chasing a bacterium. You can sense the frantic desperation of the bacterium as it tries to get away from something monstrous that will inevitably swallow it whole.

https://youtu.be/JnlULOjUhSQ


I love this video, and I agree that we tend to make entirely arbitrary decisions about what is and is not intelligent, which is far more a reflection of our own preconceptions than of the nature of intelligence. But I would note that bacteria have a completely different experience of their environment than we do of ours, and always have a bit of a frantic vibe to them.

On their scale, they are moving through molasses while being buffeted by particles causing them to turn and spin; they are entirely incapable of moving in a straight line, and their motility is fundamentally random. So they always have a sort of vibrating, frenetic quality to them.
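
That run-and-tumble style of motility can be pictured with a toy random walk. This is a minimal sketch, not a model of any real bacterium: the speed, time step, and tumble probability below are illustrative guesses.

```python
import math
import random

def run_and_tumble(steps=1000, run_speed=20e-6, dt=0.1, tumble_prob=0.1):
    """Toy 2D run-and-tumble walk: straight 'runs' interrupted by
    random reorienting 'tumbles'. Units are nominally meters and
    seconds, but all parameters here are illustrative, not measured."""
    x = y = 0.0
    heading = random.uniform(0, 2 * math.pi)
    path = [(x, y)]
    for _ in range(steps):
        if random.random() < tumble_prob:        # tumble: pick a new direction
            heading = random.uniform(0, 2 * math.pi)
        x += run_speed * dt * math.cos(heading)  # run: move straight ahead
        y += run_speed * dt * math.sin(heading)
        path.append((x, y))
    return path

path = run_and_tumble()
# Net displacement grows roughly like sqrt(steps): the signature of a
# random walk, i.e. the bacterium cannot hold a straight line for long.
```

Real chemotaxis adds one twist on top of this: the tumble probability drops when conditions are improving, which biases the random walk toward food without any straight-line steering.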


I think we're well aware that systems without intelligence can act as-if they have it. It is on this basis humans have been superstitiously imparting souls to any-old-thing for thousands of years.


What makes you say that an amoeba doesn't have intelligence?


Intelligence, if it is to mean anything, is a property of agents who act guided by explicit interior representations of their external environment.


Are you implying amoebas don't have some sort of interior representation of their external environment?

It would be very hard to do what they do without one.


Well there's implicit and explicit. In a sense, a boiling pot of water has an implicit representation of the heat of the gas flame -- in that the state of the water tracks the state of the flame.

The question is whether the system is able to perform any explicit inference with those representations. I.e., is the representation explicitly available, as a representation, such that it can be employed to imagine the world being a different way (the heart of solving problems using intelligence)?
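
The contrast can be made concrete with a toy sketch (everything here is made up for illustration: the linear "pot" model and the `Planner` class are hypothetical, not anyone's actual theory):

```python
def pot_of_water(flame_heat):
    """Implicit representation: the water's temperature merely tracks
    the flame. The 'representation' exists only as correlated state;
    the pot cannot do anything with it."""
    return 20.0 + 0.8 * flame_heat  # temperature rises with the flame

class Planner:
    """Explicit representation: holds a model it can query about
    counterfactual worlds -- 'what would the temperature be IF the
    flame were set to X?' -- without X ever actually occurring."""
    def __init__(self, model):
        self.model = model

    def imagine(self, hypothetical_flame):
        return self.model(hypothetical_flame)  # no real flame required

    def choose_flame(self, target_temp, options):
        # Solve a problem by simulating alternatives before acting.
        return min(options, key=lambda f: abs(self.imagine(f) - target_temp))

agent = Planner(pot_of_water)
best = agent.choose_flame(target_temp=100.0, options=[0, 50, 100, 150])
# The agent picks the flame setting whose *imagined* outcome is closest
# to the goal -- the pot alone could never do this.
```

The pot's state tracks the flame just as faithfully as the planner's model does; the difference is that only the planner can detach the representation from the present moment and run it forward on hypotheticals.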

It's worthwhile distinguishing a certain pun on the word "intelligence". We don't call gangsters intelligent and physicists dumb, despite gangsters being much more adept at most "physical skills". The esteem for "intelligence" comes from its association with The Intellect (i.e., cognition and imaginative simulation, the place of explicit representation), not from adept procreation, at which the coronavirus would presumably then be almost "the most intelligent lifeform".

An amoeba doesn't represent the world to itself. It's an interesting biochemical system which interacts with its environment in a way which seems as-if intelligent. Digital computers likewise seem as-if intelligent. I.e., they seem as-if they explicitly represent the world to themselves -- but they don't.

Many things seem to be doing what we do, and many things which don't, can (i.e., we can automate some problem solving so that it doesn't need intelligence). But this is only seeming. These systems are nothing like us at all; they are radically alien -- they're, in any sympathetic sense, dead. There is "nothing that they are doing" -- there's no "they" there.


I don't know the technical answer to your question but I do think that us humans have continuously underestimated the complexity and intelligence (in the broadest sense) of animals.

First, we see them as emotionless objects. Then we acknowledge that some animals seem quite smart, such as dolphins, dogs, pigs and cows, and of course other primates.

Then we discover that far more animals have complex systems to communicate by language, signs or color, have advanced courtship rituals and actual emotions, such as an elephant mourning.

And still we keep getting it wrong. Recently it was discovered that tiny earthworms, considered some of the "dumbest" and most primitive life forms, show complex behavior and have actual learning ability. For example, they drag leaves into their underground "den" to feed on them, and over time they learn which way is the correct way to drag the leaf, as the other way would block the den. That's a stunning interaction for such a simple creature.

Because it isn't a simple creature. Nothing is.


"I do think that us humans have continuously underestimated the complexity and intelligence of animals... First, we see them as emotionless objects."

No premodern culture thought so. As far as I know, it's only from Descartes on that the opinion you note existed.


I think a little earlier. Religion, in particular Christianity, is responsible for creating a massive conceptual divide between mankind, created in God's image, and everything else, lower life forms at our disposal.

Another pivotal moment would have to be the invention of agriculture, some 10K years ago. Only from this point on begins the idea of animals as property.


Define "we". Because the smartest people I know are the ones most likely to underestimate as you've described and look down upon people that anthropomorphize. Further, it seems to me that the smartest people I know fall prey to reductionism when it comes to the subject of intelligence or philosophy in general.


It's convenient to clamp our understanding of reality. If I convince myself that cows are baser creatures, it makes it a lot easier to enjoy this juicy burger made from its child.


> For me it also raises the question of whether they have any subjective experience approximating what we describe as consciousness, or if the entirety of their life is just a long form electro-chemical reaction of deeply nested feedback loops programmed into ~55 million base pairs.

Maybe they are the same thing. Adapting to the environment in order to survive is also the role of consciousness and intelligence in humans. The struggle for life provides a learning framework for the brain, and an intrinsic space of values and emotion, but it is different for each species depending on their ecological niche.

I often see people musing about consciousness as something supernatural (a "hard problem" for science), but forgetting to consider the formative role of the environment and the tight coupling of consciousness with goals, which are attainable in the environment. So the macro is being overlooked for the micro.

We can explain much by just looking at the environment-body interaction without the need to identify the pixie dust in the brain. I think consciousness is a process encompassing the interplay between the body and part of the environment, not just the brain. We can't find it in the brain or in the neural net alone, but we can find it in a game of goal maximization of a learning agent. That would include a tardigrade[1].

[1] Aversive conditioning in the tardigrade https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6776688/


But that still doesn’t explain how we, or other living beings, are able to see and feel things from our own perspective. It is genuinely a hard problem, especially if you try to understand it from a purely naturalistic point of view.


You can try to explain it:

- we form unsupervised representations for vision, hearing and the other senses (first person perceptions)

- we have goals that paint any situation into good or bad, any action into useful or not (emotions associated to the relation between perceptions and goals)

- we form memories and our experience is dependent on past actions (semantic, episodic and procedural memory)

- our environment is full of other humans so we have to predict their behavior as well, and we get to see ourselves as we think they see us (the self)

It suggests a lot about how we get to "see and feel things from our own perspective". We have actual brain circuits dedicated for that, and a supportive environment for cultivating them.


Or they are a case for this idea of the universe itself being conscious and beings tap into that via neurological mechanisms we do not understand well.

It may be that a base level of complexity / capability is needed to tap into the "field of consciousness", which could explain bees, these things, and other seemingly too-small living systems doing more than their apparent weight class would indicate is possible.


There's no reason to believe this is true about the universe, but it is a fun idea to think about. We know so little about consciousness, intelligence, and the brain in general. It is amazing how much insects seem to accomplish with so little.


Just read past this again, and you probably won't see my reply, but our brief exchange had triggered another fleeting thought perhaps worth sharing.

There is no reason to believe...

I'm not sure what context you were speaking in, but I have a fairly conservative frame I work with, and that is we know almost nothing. The scientific method does bring us understanding and that's basically a monkey see, monkey do kind of thing where we understand doing a is going to yield b.

Understanding is predictive to a point, and at that point, error exceeds understanding for whatever reason, and we simply don't understand enough to proceed, which drives new science and ultimately, new and improved understanding.

None of that actually invalidates prior understanding. Where understanding is predictive, and it has been validated against the authority, namely mother nature, it's good understanding! And that all means knowing the limits of understanding is as important as the understanding itself. Applying our understanding outside its scope can be worse than ignorance.

Perhaps this is all well trodden ground for you, and I apologize in advance, but it did also highlight "reason to believe", and I just generally don't. For me, there just isn't a lot of room for "believe" in all this. Knowing things is a tall order. Confidence in understanding may just be a technical, formal expression of believing too.

Where we lack understanding, it's a guessing game. We may make pretty good guesses, depending on what understanding we do have and our own nature, intuition and such, and that's fine, even ideal depending on what the scenario is.

Maybe I would go down this road saying something more like, "I suspect" or "these ideas are suggestive", hinting at whatever basis I might have to give an idea greater weight or worth, but indeed! There is no reason to believe anything is true about the universe on basic principle, knowing our current limits and how we arrive at pretty much anything useful.

Later, below you used "unlikely" which suggests to me we get after thoughts like this in a similar way, and it's OK if not, but somehow I felt compelled to just get these reasoning basics out there, out of my head.

Happy New Year, and thanks in advance for entertaining me.


For sure. The significant point is we just don't know much about consciousness.

Given that state of affairs, I find it useful to be aware of the various major, core ideas. Anything could lead us to some greater understanding.

It is a fun idea to think about, like a field of sorts that can be tapped into.


It's a fun idea, but a big stretch to imagine we're tapping into some field where that takes place completely outside the realm of the forces and particles we understand (but initiated from us in the realm of forces and particles we do understand.) Not impossible, just very, very unlikely.


I tend not to give the various ideas much weight right now. The reason I do that is we really don't know a damn thing about consciousness.

For all we know there are new physics out there.

The other idea that I find compelling, and this is my own idea, is that to get consciousness of some kind, the organism or mechanism must be complex enough to differentiate itself very fully from the world around it.

We may end up making a smart brain in a box, but I don't think it will become conscious unless it is complex enough to be aware of itself completely. And I'm sorry, I'm having a hard time articulating this this morning, but just think about your fingertip: we've got nerves and everything going all the way down there, so that the whole mechanism is a closed loop. It's kind of like we're built to know what is us and what is not us, completely, with high fidelity.

I think we're going to find that is a requirement for a self to manifest or in some way defines a being from a mechanism.

Mechanisms will not be conscious, beings will.

In any case we're pretty ignorant right now, meaning that Consciousness is Magic, and we can talk about all sorts of stuff and not be out of bounds.


Do you have some resources you'd like to share around this? Would love to read about it.



https://www.nbcnews.com/mach/science/universe-conscious-ncna...

I have seen this idea pop up a few times and find it intriguing. I have read much better, but struggle to find links right now.

That said, this backgrounder / brief type piece should point you to the rabbit hole.

To me, in the most general sense, it's similar to the idea of the universe being cyclic, rather than having some definitive beginning and/or end. It may just always be.

In this context, consciousness may be an emergent artifact of how living systems function, or maybe it's a thing of its own, or... Who knows? I don't, but I consider basic branches of thought like this important, if nothing else to stimulate possible lines of thought easily passed by otherwise.


Thank you, I always had some thoughts in this direction, but never read about it somewhere else (never put time into it).

I definitely think there's a lot more beyond what we are able to sense or even grasp (if that's the appropriate word); in a similar fashion to those Flatland examples, about how 2D beings are unable to grasp our 3D world, perhaps only slices of it at a time ...

What if consciousness is (or lives in) some sort of different dimension that we only get a glimpse of. What if, on that higher dimension, consciousness behaves like matter, has a definite structure, etc...

Amazing stuff to think about.


I agree! Perhaps one of us stumbles on a new branch of understanding one day.

Have fun.


What says our consciousness is not just a long form electro-chemical reaction of deeply nested feedback loops programmed into millions of base pairs?


AFAIK that’s the only explanation we have any evidence for at this point.

It’s easy for me to think there’s magic buried in the hundred billion neurons and hundred trillion synapses in the human brain. But in the case of tardigrades or wasps, it becomes much less obvious where the consciousness would ‘live’ with such limited real estate.


A wasp is at least an order of magnitude more complex than a tardigrade, though (maybe even two orders?), and flying requires quite complex intelligence, so I would not rule out some form of consciousness for wasps. Though I would believe that their “brains” are entirely different from mammal brains, because to be honest, some mammals are really, really dumb (and for a long time they were believed to be only “biological robots”?).


The basis for believing this is the inference that because the physical world insofar as we have knowledge of it obeys physicalist principles, so must the human mind.

But we can't really claim to know based on our investigation of the mind itself. If we entertain theories of consciousness like dualism, monism or emergentism, it seems unlikely that we have the categories of understanding and analysis to decisively confirm or deny them.


It is, and it's a very immersive and persistent illusion.


They seem to occasionally seek the company of other tardigrades, and hug.

It could be purely mechanistic stimulus response behavior, but I'd like to think they're experiencing some sort of life, and that it's mostly a good one.

That second YouTube video is great!


> They seem to occasionally seek the company of other tardigrades, and hug.

I would assume that this feature is necessary for reproduction.


For contrast, ants have around a quarter million neurons.


We have mud dauber wasps around the house. I can't find a good reference for neuron count... the only one I see is for super tiny parasitic wasps, which I can't imagine is accurate. Let's assume they are on the order of honey bees, at 500k-1M neurons.

This past fall I saw one on the ceiling of my front porch. It had a rear leg pinched between the soffit and the edge of a can light. It was pretty obviously struggling, pulling away with its remaining legs and every now and then buzzing its wings. It was interesting but kind of a dumb 'more powah' response that could be the result of a biological integrator of some form.

I then saw one of the most remarkable things that I've personally witnessed in nature. Another wasp of the same type was randomly flying by, about 6" below the level of the ceiling and along a path that would have brought it no closer than a foot away from the stuck wasp. After flying past the stuck wasp, it took a hard right, circled around and landed near the stuck one. It then proceeded to walk up, grab the stuck ankle (?) of the other wasp and help it tug. They both worked on it for a few seconds, broke the first wasp free (not sure if the foot was still intact), then they both flew away in different directions.

I can build a mental model of a biological automaton lacing together finely tuned feedback loops that cause it to 'act' in a way that supports itself. Extending that to a point where it appears to identify, empathize, re-plan, solve a physical problem, then resume its prior plan starts to stretch my ability to imagine without some kind of subjective consciousness backing it all.


> I can't find a good reference for neuron count

https://doi.org/10.1371/journal.pone.0059090

    DNA labelling and analyses of confocal z-series reveal that the brain of Macrobiotus cf. harmsworthi contains ∼200 nuclei that are arranged in a bilaterally symmetric pattern (Figure 2D and Figure S1). Since some of these nuclei are likely to be from non-neuronal cell types, such as glia cells, the tardigrade brain might contain fewer than 200 neurons.

Some more reading:

Martin et al. "The nervous and visual systems of onychophorans and tardigrades: learning about arthropod evolution from their closest relatives": https://bib-pubdb1.desy.de/record/401705/files/Martin_et_al....

Mayer et al. "Selective neuronal staining in tardigrades and onychophorans provides insights into the evolution of segmental ganglia in panarthropods": https://doi.org/10.1186/1471-2148-13-230

Also note that tardigrades apparently have a constant number of cells throughout their life, so instead of increasing the cell count, they just increase cell size. The cells can have increased complexity anyway. All it really means is that the nervous system has about 200 nuclei. Also don't forget that even single-celled organisms are capable of classical behaviours like hunting for food, running away from predators, forming colonies (e.g. bacteria), etc.


I'm pretty sure he was referring to the neuron count of the wasp.


Oh right, good point.


I know next to nothing about a bug’s life, but interesting to consider distress signals being tuned into otherwise solitary creatures. For colony wasp/bee I could see why distress would bring help (bring more venom!) but solitary… maybe it was kin?


But what he describes means there's also some problem solving there. I mean, the wasp that came over grabbed just the ankle that was stuck. She didn't pull the other wasp by any other part of her body.


I had a light bulb moment a few years ago when I saw a seagull come in with the wind, turn against it and land perfectly at the top of a pole.

Since then everything AI is still impressive to me but it just doesn't feel very advanced.

Within a skull smaller than my fist, these birds have routines for repowering, communication, duplicating themselves, attack and defense.

That is advanced.


200 neurons, yet still more AI-complete than any concocted deep learning attempt with way more of the so-called "neurons".


A 'neuron' might be a kind of biological equivalent to the CPU or microcontroller.

So saying that the Tardigrade has 200 CPUs sounds pretty good, actually.

If you build a drone for example with 200 CPUs, you'd expect it to be pretty capable I'd imagine, including some form of AI or machine learning.

And it seems to me neurons by default are machine learning "CPU"s...


Really depends, doesn't it? 200x something like an ATtiny13 is still kind of useless in that regard.


I've never tried to build a tardigrade with 200 ATtinys, so I'm not quite sure.... ESP32s with 8MB of PSRAM on the other hand.... would it be enough?


Well, not really. They are neither mere nodes in an ML algorithm nor complete processors. They are somewhere in between, where they can do some limited “signal processing” (e.g. throttle signal frequency, negate it, and even have some form of primitive memory), but they are likely not Turing complete in themselves.
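
A leaky integrate-and-fire toy model is one way to picture that "somewhere in between" role: more than a weighted-sum node in an ML net, far less than a CPU. Purely illustrative; the leak and threshold values below are made up, not biophysical.

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire sketch: the membrane potential
    integrates incoming signal, leaks over time, and emits a spike
    when it crosses threshold. Parameters are illustrative only."""
    v = 0.0
    spikes = []
    for i in inputs:
        v = v * leak + i      # leaky integration of the input
        if v >= threshold:
            spikes.append(1)  # fire and reset
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# A steady weak input fires only every few steps: a crude frequency
# throttle, with the decaying potential acting as primitive memory.
print(lif_neuron([0.4] * 10))  # -> [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Even this little loop shows the frequency-throttling and short-term-memory behaviors mentioned above, while obviously being nowhere near a general-purpose processor.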


Just dumping this somewhere, pretty informative video on how neuronal signalling works.

https://www.youtube.com/watch?v=ZKE8qK9UCrU


Great video! Love the visuals -- thanks:)


Would they need to be Turing complete? Seems to me they operate on a completely different principle.

Anyway, my statement was analogical -- and as we know, analogies are merely tools for extending thinking from one paradigm to another. Best not to take it TOO literally.


Evolution is really very clever, not just in how our neurons control our bodies but also in how our physical forms make the motions we need to perform natural. As an extreme case, there's the famous video of a fish able to swim upstream despite being dead [1]. Working with robotic grasping professionally has made me acutely aware of how awesome my hands are and how much easier they make daily life compared to simpler alternatives. All of which is to say that we shouldn't think about the tardigrade's locomotion just in terms of the neurons it's using.

[1]https://fyfluiddynamics.com/2018/07/when-i-was-a-child-my-fa...


It's safer to assume all life is conscious than the common assumption that we are somehow special in that regard.


All life is conscious, but is all consciousness alive?


Is a primitive bacterium in any way or shape conscious? I think that would be way too generous a usage of the word.


A bacterium is a single cell. The neurons in our brains are also single cells. If we think consciousness is in the brain, then isn't it also in the neurons? If it's in the neurons, is it in a neuron?


No? Emergent properties are a thing.


Though the ant example may be more interesting, humans have ~86 billion neurons.



