> Going back to the 1940s, researchers have exposed spiders to caffeine, amphetamines, LSD and other drugs, attracting plenty of media attention along the way. Unsurprisingly, these spiders make addled, irregular webs.
Huh. Photos of these webs were featured in a Time-Life book on drugs my parents left out for me in the 1960's. My brother and I were entranced, perhaps not the intended effect. Each web did convey the drug in question.
Years later I described these spiderwebs to the storyboard artists for "A Beautiful Mind", influencing the "mad shack" set design.
This is actually a very compelling paradigm to consider, and on the surface it looks like an issue of semantics, but the change in viewpoint can change how you look at cognition considerably.
I was pondering the nature of consciousness, particularly the barriers between a social species, a eusocial species, and a singular mind. My brain is made up of neurons, but there's no discernible consciousness to them individually; there's just my mind. Is a herd of cattle a mind in a similar way? What about a bee hive? Is the distinction simply a function of bandwidth or latency? Is it based on some organizational principle? Something else?
What this article kind of argues (or the argument it frames on someone else's behalf) is that there is no fundamental distinction, just an apparent and illusory one. If you offload memory and information-processing tasks externally, for example by making a list or using an abacus, do those tools become a part of your mind? If you cooperate with another person, do you both become part of each other's minds? The perspective laid out in the article argues yes: when you offload cognitive tasks outside of your brain, the tools with which you do that are a part of your mind. It is very interesting to consider.
Our minds don't exist independently of each other to begin with; we are not so much separated as connected. We are all part of a system, just as our brain has parts.
We don't see it this way for some evolutionary reason, but disrupt the communication between the two brain hemispheres and you start to see how much of our concept of an indivisible, independent self is made up after the fact: by parts that create coherent pictures of the work of other parts, which takes time, and by parts that reflect on this, which takes even more time. By the time this coherent self emerges, most of the work is already done, and a large amount of it is not initiated by it, merely observed. We are of course capable of setting things in motion with this slow, coherent self, but effective behavior comes from learning to push as much of the work as possible to the faster parts, which are either already specialized (cognitive biases, heuristics), can be specialized (learning, habits), or are even external, like tools and people.
Organizations can be seen as an extension of this, like a frontal lobe. Organizations are basically slow thinkers, with procedures to control cognitive biases.
Societies are even more complex and slow, but they too mirror this hierarchical organization.
Yes! This is the nature of our species. We externalize thought.
Calculators make people dumber. A slide rule is like a trombone; you slide to your best guess, then listen to the sound and home in on the note. An abacus trains the mind to no longer need an abacus.
In the future the most capable humans will be those who coevolve with external tools. This has long been the case, but we're entering a realm where those tools are increasingly cognitive. Mathematics is a nice microcosm for the tension here: it attracts people with unusual gifts who are certain those gifts are sufficient to proceed as if today were 1790. Others, perhaps not so gifted but with greater brain plasticity, who welcome these external tools, are having a field day.
> Octopuses are famously smart, but their central brain is only a small part of their nervous systems. Two-thirds of the roughly 500 million neurons in an octopus are found in its arms.
OK...
> For the octopus, with thousands of suckers studding symmetric arms, each of which can bend at any point, building a central mental representation of how to move seems like a computational nightmare. But experiments show that the octopus doesn’t do that. “The brain doesn’t have to know how to move this floppy arm,” Cheng said. Rather, the arm knows how to move the arm.
How is this supposed to explain anything? The computational problem is the same whether you solve it with neurons located in the head or neurons located in the arm...?
Not sure if the analogy I'm about to put forward is oversimplified, but it feels akin to the CPU instructing the GPU to "display Thing on Screen" and the GPU doing all the heavy lifting to work out what Thing is and ultimately display it on Screen.
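Something like this toy sketch, if I translate the analogy into code (everything here is made up for illustration; it isn't a model of a real GPU driver or of octopus neurology, just the shape of the delegation):

```python
def central_brain(goal):
    # The "CPU"/central side only states what it wants, not how to achieve it.
    return {"command": "reach", "target": goal}

def local_controller(command, position=0.0):
    # The "GPU"/arm side does the heavy lifting with its own feedback loop.
    target = command["target"]
    trajectory = []
    while abs(target - position) > 0.1:
        position += 0.5 * (target - position)  # simple proportional step toward the target
        trajectory.append(round(position, 3))
    return trajectory

if __name__ == "__main__":
    cmd = central_brain(goal=3.0)
    print(local_controller(cmd))  # the local side works out all the intermediate steps
```

The central part never sees the intermediate steps; it only hands over a goal, which is roughly the "the arm knows how to move the arm" idea from the article.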
If this is the case, then ants have a similar characteristic. A newly decapitated ant body will stand upright. It will stand steady against a breeze, stick to an inverted surface and right itself if tipped over.
It essentially stands waiting for instruction to walk.
It's beneficial to do it like this for the octopus given the constraints of its environment. Have a close run-in with a seal or a shark or some other predator, and lose an arm, and you aren't wasting energy maintaining resources for the missing arm in the central head brain. Take two octopuses that are equal aside from a central brain versus a limb-based system, chop off an arm, and the limb-based octopus will survive and eventually outcompete the centralized-brain octopus due to lower total metabolic costs from its more modular brain system. The central-brain octopus needs to eat more food to pay for a brain it isn't using to its fullest extent.
Humans are like the central-brain octopus. We have experiences like the phantom limb phenomenon after amputation: the signalling network set up for the missing limb is still there, and keeping up those connections in the brain is still a metabolic cost even with the limb no longer present on the body.
> It's beneficial to do it like this for the octopus given the constraints of its environment. Have a close run-in with a seal or a shark or some other predator, and lose an arm, and you aren't wasting energy maintaining resources for the missing arm in the central head brain.
This is a perfectly reasonable thing to say, but it tends to conflict with the article's own interpretation of what it's saying.
For example, the image captioned "Octopus movements are too complex to be centrally coordinated."
Their point does not appear to be that you can save on energy budget by storing the arm software in the arm, so that it goes away when the arm does. Rather, they seem to be saying that the arm software can solve problems that would be too computationally complex to solve if you tried to solve them inside the head instead of the arm.
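To make that scaling point concrete, here's a back-of-the-envelope sketch in Python (the segment and angle counts are invented; only the shape of the scaling matters):

```python
# Back-of-the-envelope only: the numbers are made up, the point is the scaling.
SEGMENTS = 20           # pretend one arm can bend at 20 points
ANGLES_PER_SEGMENT = 8  # coarse discretisation of how each point can bend

# A central planner that represents every whole-arm configuration:
central_states = ANGLES_PER_SEGMENT ** SEGMENTS

# Local control that only needs a small rule per segment:
local_rules = SEGMENTS * ANGLES_PER_SEGMENT

print(f"central representation: {central_states:.2e} configurations")
print(f"local control:          {local_rules} rules")
```

The combinatorics blow up centrally but stay roughly linear locally, which is one way to read "too complex to be centrally coordinated".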
To be fair, I don't think the article makes very many conclusive points; it's more just probing an interesting question about offloading biological compute. But this is certainly an energy question. Everything in biology is due to entropy. The article even concludes by saying that the ideal experiment would be to measure the hypothetical metabolic cost of web building versus having more brain tissue and no web building, if such a setup of otherwise equivalent spiders were possible.
I think it's just an example of how it might be simpler to distribute the load. It's usually simpler to build multiple cheap things than one big expensive thing. We see this with CPUs: our supercomputers aren't single-core monsters, but thousands of desktop-class CPUs strung together in parallel, and we distribute our computing jobs across an array of them because this is much cheaper.
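As a minimal sketch of that split-the-job-up idea (nothing beyond stock Python, and obviously not a claim about how brains do it):

```python
from multiprocessing import Pool

def chunk_work(chunk):
    # Each worker handles its own slice independently, like one node in a cluster.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]      # split the job 8 ways
    with Pool(processes=8) as pool:
        partials = pool.map(chunk_work, chunks)  # run the pieces in parallel
    print(sum(partials))                         # combine the partial results
```

Many cheap workers plus a cheap combine step, instead of one expensive monolith.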