I've never understood what people see in this story. Meat is a complex organization of trillions of self-replicating machines, each of which (besides red blood cells maybe) is also incredibly complex... much more advanced than anything human minds have been able to build.
I can't imagine what a sentient entity would have to be made out of, or how much less "efficient" or stable it would have to be than me, for me to find it amusing. If it turns out that some slow geological process or a momentary dust cloud is actually self-aware, the last thing I would do is laugh.
Is it just an abstract association that we're wet inside whereas computer chips are dry? That's just where our technology is right now; it largely reflects how our brains can't simulate fluid dynamics or conceive of self-replicating distributed systems efficiently enough, so we resort to designing simple, solid-state things.
i think the point of the story is exactly what you've gotten from it
namely, it's absurd to dismiss the possibility of self-awareness in some system because it's built out of parts different from the parts other self-aware systems you're familiar with are built out of
a useful thing to keep in mind when the stochastic parrots start squawking about how large language models aren't actually intelligent
The goalposts for intelligence always seem to change with each new foundation model that comes out. As long as people can understand the principles behind why a model works, it no longer seems intelligent.
while they don't yet rise to the level i would describe as 'intelligent' without qualifications, they do seem to be less unintelligent than most of the humans, and in particular most of the ones criticizing them in this way, who consistently repeat specific criticisms applicable to years-ago systems which have no factual connection to current reality
A disembodied paragraph that I've transmitted to you can appear to be intelligent or not, but it only really matters in the sense that you can ascribe that intellect to an agent.
The LLM isn't an agent and no intellect can be ascribed to it. It is a device that actual intelligent agents have made, and ascribing it intellect is just as erroneous.
Going meta for a moment, this argument begs the question, assuming the conclusion "Therefore LLMs are not intelligent" in the premises "No intelligence can be ascribed to LLMs".
I'm not convinced it's even possible to come up with a principled, non-circular definition of intelligence (that is, not something like "intelligence is that trait displayed by humans when we...") that would include humans, include animals like crows and octopuses, include a hypothetical alien intelligence, but exclude LLMs.
I'm not arguing that LLMs are intelligent. I'm arguing that the debate is inherently unwinnable.
almost precisely the same assertions could be made about you with precisely the same degree of justification: you aren't an agent and no intellect can be ascribed to you. you are a device unintelligent agents have made, and ascribing you intellect is just as erroneous
an intelligent agent would have recognized that your argument relies on circular reasoning, but because you are a glorified autocomplete incapable of understanding the meanings of the words you are using, you posted a logically incoherent comment
(of course i don't actually believe that about you. but the justification for believing it about gpt-4 is even weaker)
Consciousness is generated when the universe computes by executing conditionals/if statements. All machines are quantum/conscious in their degrees of freedom, even mechanical ones: https://youtu.be/mcedCEhdLk0?si=_ueWQvnW6HQUNxcm
The universe is a min-consciousness/min-decision optimized supercomputer. This is demonstrated by quantum eraser and double slit experiments. If a machine does not distinguish upon certain past histories of incoming information, those histories will be fed as a superposition, effectively avoiding having to compute the dependency. These optimizations run backwards, in a reverse dependency injection style algorithm, which gives credence to Wheeler-Feynman time-reversed absorber theory: https://en.wikipedia.org/wiki/Wheeler%E2%80%93Feynman_absorb...
Lower consciousnesses make decisions which are fed as signal to higher consciousnesses. In this way, units like the neocortex can make decisions that are part of a broad conscious zoo of less complex systems, while only being burdened by their specific conditionals to compute.
Because quantum is about information systems, not about particles. It's about machines. And consciousness has always been "hard" for the subject, because they are a computer (E) affixed to memory (Mc^2). All mass-energy in this universe is neuromorphic, possessing both compute (spirit) and memory (stuff). Energy is NOT fungible, as all energy is tagged with its entire history of interactions, in the low-frequency perturbations clinging to its wave function, effectively weak and old entanglements.
Planck's constant is the cost of compute per unit energy: 1/h ≈ 1.5×10^33 Hz per joule. Multiplying by c^2 ≈ 9×10^16 gives Bremermann's limit, the cost of compute per unit mass: c^2/h ≈ 1.4×10^50 Hz per kg.
https://en.wikipedia.org/wiki/Bremermann%27s_limit
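Back-of-the-envelope, those orders of magnitude are easy to check in Python (treating 1/h as "compute per joule" is this comment's framing, not standard physics usage):

    h = 6.626e-34   # Planck's constant, J*s
    c = 2.998e8     # speed of light, m/s

    print(1 / h)     # ~1.51e33 -> "Hz per joule" under this framing
    print(c**2 / h)  # ~1.36e50 -> Bremermann's limit, "Hz per kg"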
Humans are self-replicating biochemical decision engines. But no more conscious than other decision-making entities. Now, sentience and self-attention are a different story. But we should at the very least start with understanding that qualia are a mindscape of decision making. There is no such thing as conscious non-action. Consciousness is literally action in physics, energy revolving over time: https://en.wikipedia.org/wiki/Action_(physics)
Planck's constant is a measure of quantum action, which effectively is the cost of compute... or rather... the cost of consciousness.
Lines up a bit too perfectly. Everyone has their threshold of coincidence, I suppose. I am working on some hard science into measuring the amount of computation actually happening, in a more specific quantity than Hz, related to reversible boolean functions, possibly their continuous analogs.
The joke is how you decide that the machine isn't an agent. If you believe only meat can be an agent, and the machine isn't meat, it follows that the machine isn't an agent. The story reverses this chauvinism and shows machines finding the idea of thinking meat absurd, for the arguably better reason that machines are a better fit for information processing than meat.
How are you defining intelligence? And how are you measuring the abilities in existing LLM systems to know they don't meet these criteria?
Honest questions by the way in case they come out snarky in text. I'm not aware of a single, agreed upon definition of intelligence or a verified test that we could use to know if a computer system has those capabilities.
Yann's explanation here is a pretty high-level overview of his understanding of different thought modeling; it isn't really related to how we define intelligence at all, and isn't a complete picture. The distinction drawn between System 1 & 2 as explained is more a limitation of the conditions given to the algorithm than of the ability of the algorithm itself (i.e. one could change parameters to allow for unlimited processing time).
Yann may touch on how we define intelligence elsewhere; I haven't deeply studied all of his work. Though I can say that OpenAI has taken to using relative economic value as their analog for comparing intelligence to humans. Personally I find that definition pretty gross and offensive; I hope most people wouldn't agree that our intelligence can be directly tied to how much value we can produce in a given economic paradigm.
human-written words are not self-aware; they're just inert ink on paper, or the equivalent in other media. i've never seen even a small child make that error before. if you wrote that in earnest, you seem to be suffering from a bad kind of confusion and may need medical attention
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
I’ve been visiting this site for a long time, and it’s getting to the point where it feels like people are ignoring this rule regularly, and it’s disappointing.
My recollection is that there was quite a lot of debate back then on whether machines would be able to think or be conscious, with arguments on the can't-think side being Searle's Chinese Room and similar (https://en.wikipedia.org/wiki/Chinese_room). I always thought that was a bit silly, and people don't seem to take it seriously now, but they did back in 1991.
The Made of Meat story was quite a nice counter argument to that.
The argument that sentience can't derive from computation alone, because computations can be slowed down and done by hand, still holds for me. The piece of paper written on, or the silicon, doesn't acquire different properties because of what the operation will be used for; the GPU doesn't differentiate between graphics rendering and weight computation when it's adding 1. That's in opposition to animal brains, which not only do computations but interact with real-world matter/fields directly. With that said, of course we don't know if human-like sentience is a prerequisite to (super)human-level intelligence. Recent advancements in AI tend to diminish the argument that sentience is a side effect of intelligence, though.
> The piece of paper written on, or the silicon, doesn't acquire different properties because of what the operation will be used for
It does, though. Electric current in general doesn't compute, but electric current in a computer does compute. Thus electric current has a property of computation depending on what it's used for.
> With that said, of course we don't know if human-like sentience is a prerequisite to (super)human-level intelligence. Recent advancements in AI tend to diminish the argument that sentience is a side effect of intelligence, though.
We already know the answer to this and it is no, it is not a prerequisite, unless "sentience" is itself an emergent property of intelligence. Hutter's mathematical formulation of an optimally intelligent agent (AIXI) is not computable, but approximations of it are; that is to say, super-human intelligence IS just a computable function (as human intelligence is resource-bound and suboptimal) with no extra "sentience" required. The only limiting factor at this point is the computational resources to compute this function; with the resources we have now it is still at the "toy" stage: playing noughts and crosses and Pac-Man etc.
People used to think that "creativity" was required for playing chess... clearly those people had not heard of Minimax.
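For anyone who hasn't seen it, here's a minimal minimax sketch in Python for noughts and crosses (the names and structure are my own illustration, not any particular engine). Perfect play falls out of brute-force search, with no "creativity" anywhere:

    WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        # Returns (score, move) from X's point of view:
        # +1 = X wins, -1 = O wins, 0 = draw.
        w = winner(board)
        if w:
            return (1 if w == 'X' else -1), None
        moves = [i for i, cell in enumerate(board) if not cell]
        if not moves:
            return 0, None  # board full: draw
        outcomes = []
        for m in moves:
            board[m] = player                  # try the move
            score, _ = minimax(board, 'O' if player == 'X' else 'X')
            board[m] = None                    # undo it
            outcomes.append((score, m))
        # X maximizes the score, O minimizes it.
        return max(outcomes) if player == 'X' else min(outcomes)

    print(minimax([None] * 9, 'X')[0])  # 0: perfect play is always a draw

Exhaustive search over a few hundred thousand positions; nothing that looks like creativity, yet it never loses.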
Your consciousness can be slowed; it takes time for you to process input and make decisions. Your sense of time varies by circumstance (and sometimes drug). I don't know if you would argue a single neuron is aware of the thoughts it is enabling. The fact is I could easily argue that you are a Chinese room.
Actually, the story doesn't say that it's machines, as far as I can see. It might just as well be other evolved organisms, just not protein/"meat" based.
Makes you wonder how they can have a concept of "meat", let alone such a dismissive one. We call "meat" the animal tissue that we eat as food - so either "meat" here is an inadequate term for "organic matter", or we have the curious case of an alien lifeform that eats organic matter while not being itself organic.
In any case, being non-organic, it's not clear where they might have gained such contemptuous familiarity with organic matter - although it's clear from the story that they know about it much less than they think.
English has separate terms for "flesh" and "meat" for complicated reasons, with "meat" having a stronger food implication. But I think the story uses "meat" because it's a funnier word in that context.
It’s not hard to imagine aliens knowing about organic matter, since organic compounds are universal. I could see a disdain for it if that’s where they started and then “ascended”. Much like we might view crude stone tools compared to, say, the LHC. Or how we view a single-celled organism.
My take on it is the absurdity of denying capacity to something simply because you look down on it for one reason or another. If there are signs that a thing does a thing, then it does it. It should not be weird that this thing does it; we should simply update our internal model to better reflect reality as we now know it and move on.
To flip it around, it is equally absurd to say computers can't be sentient because they're just math, or just minerals and electricity etc.
The point I took away is that almost all biological life is optimized for energy efficiency in its ecological niche. Any interplanetary species would be beyond those constraints, and would either evolve or engineer their way out of them as soon as possible to, you know, live! Not to mention eradicate disease, travel the galaxy, etc.
I like how the word "meat" is clearly an inaccurate translation of a notion in the language of the conversation, for lack of a better word in human languages. "Organic matter" would be more accurate, but less striking.
Meaning, "meat" is a variation on the "unreliable narrator" theme: the "unreliable language". This is used a lot in Gene Wolfe's Book of the New Sun, where medieval language describes artifacts of a space-faring civilization.
Funnily I thought almost the opposite. I feel like "meat" is deliberately chosen in-character to be both less precise and less respectful than "organic matter". It would be like a species with organic computers incredulously describing our computers as "rocks".
I think that the computers-as-rocks analogy works even better than this story, since there is no such thing as non-sentient meat, but there most certainly are rocks that can't compute
You've crystallized for me what I've always liked about this story: it implies that for some (horrifying?) reason the aliens are totally familiar with non-sentient meat.
- Can you believe those so-called humans? They made their computing devices out of rocks! They shovel sand out of beaches, make disgusting plates out of it, and force electricity through little paths of metal on them.
The Long Sun series (also Gene Wolfe) was another excellent use of this. Everything is narrated by a priest unknowingly living on a generational starship and you're left to interpret the world around them.
Also happens in a classical setting with the Soldier of the Mist books.
That's sort of a "house special" at the Gene Wolfe fine writing establishment - his characters use the words expectable for their time and circumstances, and the reader has to work it out to the contemporary context.
It's "Coding Machines" by Lawrence Kesteloot, January 2009, and it has a lot to do with Cox's piece, Running the “Reflections on Trusting Trust” Compiler
Edit: I was actually thinking of "The Great Silence" (aka the parrot one) which is a bit shorter but also available online. (The last line always gets to me)
I generally like Chiang's work and its derivatives but that was... kind of terrible. It felt like you told a government PSA writer to do sci-fi, except it isn't really sci-fi. It's an essay which posits that parrots are sapient and our failure to recognize it means we won't recognize alien sapience, and also wiping out parrots is bad. It's an if/then statement that stops at the if.
The entire overarching genre is often called speculative fiction, so your description of it is sort of accurate. Writing would be in a sorry state if the only stuff that got put out had to answer its own questions.
But it's not speculative at all. There are only like three sentences of any substance in the whole thing. A parrot who is more an idea than a character speaks to the reader claiming that a parrot understanding shapes and colors means that parrots are sapient, and humans not recognizing that means we won't recognize sapient alien life, P.S. humans are killing all the parrots. There is no story. The essay being from the perspective of a parrot instead of a person talking about parrots has no consequences and doesn't change the work at all. The whole thing is incredibly facile.
> We Puerto Rican parrots have our own myths. They’re simpler than human mythology, but I think humans would take pleasure from them. Alas, our myths are being lost as my species dies out.
That's the only fictive part of the entire work.
It's just so lazy to change the premise of something without that change having any meaningful impact. What makes the statement by the parrot different from if it were from a human? Nothing whatsoever, and that's why this is a bad "story".
Although, now that Douglas Adams has been brought up, I think I should also recommend his lesser-known book, Dirk Gently's Holistic Detective Agency, which I believe has some connection to Dr. Who.
Yes, the first Dirk Gently novel was based on a cancelled Dr. Who project called Shada, which I believe was recently released as an animated special.
There's a short story along these lines I remember reading somewhere, but I could never find it again. It describes the experience of someone whose brain is separated from their body but kept connected to it via a remote connection. Over time there are various experiments or different situations, like going into a cave to experience higher latency, maybe controlling multiple bodies at once, replacing the biological brain with an equivalent simulated one, and some others. I wish I could remember more of the details...
Any chance it's one of the ones you mentioned above, or another someone recognizes? I was able to find that the phrase "brain in a vat" refers to this genre as a whole, but haven't yet found the particular version that I faintly recall reading.
Add to that "Thang" by Martin Gardner (yes, that Martin Gardner, you Scientific American old-timers). I read it in an Isaac Asimov-curated short story collection once upon a time and it remains one of my favorite short-shorts.
Though I have to add, as a huge fan of Ted Chiang, that you missed one of his best short stories, and certainly his shortest story - it's about a page long and will probably take less than five minutes to read, iirc:
"Yes, a rather shy but sweet hydrogen core cluster intelligence in a class nine star in G445 zone. Was in contact two galactic rotation ago, wants to be friendly again."
2 galactic rotations? That's a long time to keep a grudge. But also fun to think that Earth is only ~20 galactic years old.
We are born meat and die meat; our life spans are short
"Hyrdogen core clusters" might be around a bit longer
On a related note, it would be curious to see how many years a computer could keep chugging along, silently doing its job. Of course that depends on a reliable power supply, and on us stopping our habit of breaking all dependencies every three months. Maybe thousands of years would be possible? A bit interesting to think about.
The longest-running computer that we are aware of ran for about 4.5 billion years, and even then it only stopped because it was destroyed in order to facilitate an intergalactic highway construction project for a hyperspace express route.
Enjoyed this, but upon reflection it seems odd that they would have a word "meat" and yet be unfamiliar with biological systems / consciousness in animals.
That's what seems odd? It's no more odd than aliens speaking English in the first place.
At the risk of stating the obvious, the language here is not only used to transmit the ideas in the story, but convey feeling. With the use of "meat" as a pejorative, you're supposed to understand the feeling of contempt and almost repulsion that these beings have towards us.
Another great interpretation of the story would be via speciesism/anthropocentrism, we consider ourselves better than other animals, but for the aliens, we would all be "meat".
I feel somewhat similar about the arrival of AGI, I definitely want to cherish my time now more. I think it is at least plausible that these are the last years of anything resembling any value. Even beyond my own existence. It is not exactly the same, with AGI there is more uncertainty about timelines and outcome, and there is no general expert opinion that one could defer to.
Surely a super-intelligent alien species that studied “meat” would know the depth of the organization, starting with the organic molecules and extending to the cell, then to the neurons and neural networks, and then to the brain.
The surprising thing to them would not be that we are made of meat but that we eat meat. How could we take such intricately organized matter and just burn it for fuel? It would be like coming across a power plant that is powered by burning CPUs and motherboards.
They would wonder why we didn’t just use the abundant sunlight and elements to power ourselves (like for example plants).
It’s important to remember science fiction tends to explore and make a commentary on the human aspect. The best stories aren’t about new technology, but its implications and effects on the human condition.
With that view, you could read this story as a commentary on humans themselves. We also don’t fully understand other species and are often astonished by what we discover. Above all, we can be incompetent and make mistakes. Remember the myth that “bees shouldn’t be able to fly”?
We can assume from the descriptions that while they've come into contact with beings with meat or meat-like structures as part of them, they've never encountered intelligent beings made of meat. And so they necessarily can't know all that much about the possible states of meat. So your "surely" is canonically wrong.
You can argue they ought to have known about that, but that is based on assuming life like ours is common, and the point of the story is that this is an assumption we're making from a sample of one planet. In the in-story universe it is also canonically doubtful that life like ours is common, given that they clearly know of many other species, and can explore at FTL speeds, and yet still haven't run into one like ours.
To me it feels shallow to criticise a story based on ignoring premises of the universe the story is set in. Criticise the premises, by all means, and argue it doesn't fit our universe. That's fine. That gets you to the point of the story: to get you thinking about whether we should assume life like ours is common.
You think an advanced species would be surprised that we're made of what we consume? Quite a funny take, because there's literally no other option.
The reason a power plant, or factory, or any machine at all doesn't "eat" what it's made of is because human engineers are the digestion enzymes and protein factories. We digest raw materials (amino acids) into parts (proteins) based on plans and schematics (DNA), and then we put them into the machines.
This is what your body does by itself. It's a factory that keeps building and rebuilding itself. That's in fact the only viable option for a resilient system. Think what's better, having kidneys, or needing dialysis? Self-sufficiency is always better for resilience and flexibility. Which... again, any intelligent species would know.
The purpose of this story is to jolt us out of the status quo and see things from another perspective. A species having advanced culture doesn't mean they have no biases and prejudices based on their preconceived notions.
We also fancy ourselves intelligent, but we have zero regard for "lower" lifeforms. In fact, we also exhibit odd and illogical cultural trends such as:
1. If someone abuses a pet dog or cat, we may put them in jail.
2. At the same time we abuse, kill and eat farm animals on a vast scale. Pigs are no less intelligent than a dog or a cat.
3. Yet if someone has a pet pig, we may call law enforcement on them for animal abuse, even if they take good care of their pig.
Those three don't belong together in any way. Yet here we are.
> Those three don't belong together in any way. Yet here we are.
The difference here is degree of humanization of an animal. Recent Andrew Huberman podcast with a former FBI hostage negotiator[1] touched upon the topic.
In animal research labs, the researchers are not allowed to name the animal subjects, only to assign numbers or codes.
In a hostage situation, simply letting your captor know your name increases the chances of your survival. Conversely, having your face covered reduces the chances.
Humanization and dehumanization of things, living beings, other humans and ourselves is something that we generally tend to do. A lot of cruelty in the world can be traced to this observation.
>You think an advanced species would be surprised that we're made of what we consume?
The problem with advanced species is we have a sample size of one.
The problem with this sample size is it gives us no idea of the probabilities of intelligence looking anything like we think it does. In fact there is a non-zero probability that any intelligences we meet that cross space will have nothing to do with the host intelligences that created them. At least with our current knowledge of physics we don't see any way that digital 'life' could bootstrap itself. But currently us carbon-based lifeforms are furiously cranking away at making thinking rocks that are built in factories. The fact that humans have a 4-billion-year-long uninterrupted chain of molecular factories has nothing to do with other forms of life needing that at all.
Of course, if an AI kills another embodied AI, is that much different from us killing a human and eating them?
Our sample size is way more than one actually, maybe if we just abandon the superficial concept of "advanced". For example, the way insects organize in a colony, your cells organize in a body, and humans organize in society is identical bar some circumstantial distinctions. When a principle comes about, reinvented independently so many times, we as intelligent beings need to realize "hey, maybe that's simply what it's like in general".
Most of what we are is actually none of our doing. Most of our discoveries are incidental (including in medicine, we don't know how many of our drugs work for example), and we're clearly unprepared to live in the world we ourselves created, hunched over keyboards in claustrophobic offices or locked up at home.
We're not an advanced species; our society is in-between a "colony" and a "multicellular organism", and more and more of our advancements are created by computers for computers. We don't understand a lot of how an AI works; it trained itself, we just ran backpropagation and observed the prediction error get smaller over time.
Similarly today CPUs are designed by software written on the previous CPUs, machines are engineered on machines, and so on. The digital civilization is bootstrapping itself and eventually might leave the cocoon.
Saying other forms of life won't have parts that self-maintain to a degree is quite odd, because the alternative is logically impossible. You see, if you are not made of semi-independent parts, you become extremely fragile. What exactly do you think is the alternative? This is not about silicon vs carbon or analog vs digital. It's more about basic logic.
Huh? You don’t understand what culture is. The study of logic itself, science, philosophy, etc are all part of culture. Culture is a shared heritage without which you would still be cracking nuts open with rocks.
Not illogical at all. A person that abuses animals is a potential menace to society - lack of empathy means they might easily abuse humans too. We are punishing sociopathic tendencies here.
Farming animals is not sociopathic, it's a business decision based on economic interest.
This works because animals don't have human rights. (Obviously.)
We jail animal abusers based on a real-world idea of "pre-crime". (Unmotivated animal abuse strongly correlates with unmotivated violence against humans. For this same reason cartoon child porn is also illegal.)
It is a personality trait I identify with, made more stark because my wife is quite the opposite.
IMHO, good fiction asks us to suspend our disbelief to create a novel setting and unique circumstances. Having accepted that, we still expect the world to behave according to its own logic.
Bad fiction abuses the suspension of disbelief, and it rubs people like me (and the gp) wrong.
In this case, it is a silly short story, so it doesn't bother me much. On the other hand, complaining about TV shows and movies can practically become a sport with the right company.
For example, I quite enjoyed the Netflix movie Spectral, right up until the end, where they tried too hard to explain the mystery and violated things that I had not suspended my disbelief about. The TV show Fringe had a ton of these moments as well. Some were easy to accept, some episodes were painful to get through.
As a (former?) physicist, I very much prefer to try to imagine what would need to be changed in our rules so the presented world would be possible, rather than "ok I accept this little change but everything else has to work as close as possible to our own universe"
Agreed. In this case we could infer that perhaps members of this alien species are either not all super intelligent, not super knowledgeable, or at least not super knowledgeable in all areas. In this case, however, the usage of "meat" here is intended to be a commentary on humanity of some sort. If the idea is that "aliens would just see humans as meat" then I do in fact think that point is somewhat diluted by GP's comment. "Meat" is not really an accurate word here, unless we take it in its broader meaning of "food"[0], at which point "food" itself would be a better word.
>Seeds of Monotropa uniflora - a plant that parasitizes fungi - are incredibly tiny. And they can afford to be, because all they need to grow is to be able to germinate on a mycelial thread of the mycorrhizal fungus that they parasitize.
8:22> "Mycoheterotrophic Lifestyles of the Lewd and Depraved"
>Myco-heterotrophy (from Greek μύκης mykes, "fungus", ἕτερος heteros, "another", "different" and τροφή trophe, "nutrition") is a symbiotic relationship between certain kinds of plants and fungi, in which the plant gets all or part of its food from parasitism upon fungi rather than from photosynthesis. A myco-heterotroph is the parasitic plant partner in this relationship. Myco-heterotrophy is considered a kind of cheating relationship and myco-heterotrophs are sometimes informally referred to as "mycorrhizal cheaters". This relationship is sometimes referred to as mycotrophy, though this term is also used for plants that engage in mutualistic mycorrhizal relationships.
The answer is pretty obvious - building this intricately organized matter from scratch out of sunlight and elements is extremely inefficient; much better to recycle the existing building blocks of lower levels of organization.
(You don't make software from scratch from sand and electricity, you use an off-the-shelf CPU and existing libraries.)
Protein is used by the human body as construction material, not as fuel. (It is possible to use protein as a fuel source, but virtually no modern human follows a diet where that actually happens.)