The AI delusion: why humans trump machines (prospectmagazine.co.uk)
65 points by sebg on Feb 7, 2020 | 96 comments


Recognition of the powerful pattern matching ability of humans is growing. As a result, humans are increasingly being deployed to make decisions that affect the well-being of other humans. We are starting to see the use of human decision makers in courts, in university admissions offices, in loan application departments, and in recruitment. Soon humans will be the primary gateway to many core services. The use of humans undoubtedly comes with benefits relative to the data-derived algorithms that we have used in the past. The human ability to spot anomalies that are missed by our rigid algorithms is unparalleled. A human decision maker also allows us to hold someone directly accountable for the decisions. However, the replacement of algorithms with a powerful technology in the form of the human brain is not without risks. Before humans become the standard way in which we make decisions, we need to consider the risks and ensure that the implementation of human decision-making systems does not cause widespread harm. To this end, we need to develop principles for the application of human intelligence to decision making.

https://behavioralscientist.org/principles-for-the-applicati...


Myron Aub, a low grade Technician, discovers how to reverse-engineer the principles of pencil-and-paper arithmetic by studying the workings of ancient computers which were programmed by human beings, before bootstrapping became the norm—a development which is later dubbed "Graphitics". The discovery is appropriated by the military establishment, who use it to re-invent their understanding of mathematics. They also plan to replace their computer-operated ships with lower cost, more expendable manned ships to continue the war.

Aub is so upset by the appropriation of his discovery for military purposes that he commits suicide. As Aub's funeral proceeds, his supervisor realizes that even with Aub dead, the advancement of Graphitics is unstoppable. He executes simple multiplications in his mind without help from any machine, which gives him a great feeling of power.

https://en.wikipedia.org/wiki/The_Feeling_of_Power


> In some circumstances it has been surpassed already. It’s not an unfamiliar experience today to wonder whether we’re interacting online with a human or an AI system, and even alleged experts have been taken in by bots like “Eugene Goostman,” which, posing as a Ukrainian teenager, fooled a panel of Royal Society judges in 2014 into thinking it was human.

No, the Turing test has not been surpassed. The "panel" was not fooled; one judge on the panel was fooled, and how, I don't know. The sample dialog on the Wikipedia page for that bot consists of the sort of pointless trivia questions that a bot can answer and a human can get wrong, and it won't tell you anything about who you're interrogating. If you instead try holding context across multiple inquiries, the charade quickly reveals itself:

(">" = human, "<" = bot's reply.)

> Hello.

< Nice to meet you! Please tell me how you found me and what you'd like to discuss :-)

> Through Wikipedia. Tell me about somewhere you've visited.

< May I ask you about the real reason of your interest? Maybe, let's talk about something else? What would you like to discuss?

> Curious about your history. Why do you not want to discuss it?

< How do you know? I thought nobody could guess that! Could you tell me what are you? I mean your profession.

The previous reply is nonsense, and the constant evasion is typical of chat bots like Cleverbot. Clearly a bot. I know I'm not blind, but I think we're kidding ourselves to equate this with the dialogue capabilities of a human. It took 3 queries to get to that. Even more recent attempts, like AI Dungeon, fall apart after 2-3 queries that require context and understanding.
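
(If it helps, the sort of context-holding probe I mean looks roughly like the sketch below. `ask` is a hypothetical send-one-message-get-one-reply helper, not any real bot's API.)

    # Sketch of a context probe: plant a fact in one turn, then refer to
    # it obliquely in later turns. `ask` is a hypothetical helper.
    def context_probe(ask):
        ask("My sister Anna just moved to Lisbon.")
        ask("What do you think of that city?")          # "that city" -> Lisbon
        reply = ask("Remind me: who did I say moved?")  # needs memory of turn 1
        return "anna" in reply.lower()                  # evasive bots fail here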


On the other hand, I regularly get phone (sometimes email) support that seems to have similar flaws, and I know (I think I know) it's humans. The difference is quite dramatic when the person on the other end does handle context and analytic thinking.

Maybe there's a bifurcation going on, and some people are converging with AI and others are not?


> On the other hand, I regularly get phone (sometimes email) support that seems to have similar flaws and I know (I think I know) it's humans.

Conversational bots for support are increasingly common and the voices are getting better. And even outside of actual conversational bots, humans following flowcharts that essentially render them into conversational bots have been a thing for quite a long time.


By "conversational bots", you mean the sort that say "I'm sorry, I didn't quite catch that" in response to most things? I'm not talking about that. I'm talking about being able to discuss a topic briefly but not focus on the right details in order to be responsive.


This!

If humans do not "tolerate" other humans that are perceived as bots, how will we ever tolerate actual bots, which will probably never be able to imitate robotic humans the way humans do today :)


Koch (or perhaps just the reporter quoting him) contradicts himself. Even by his own definition of consciousness, a machine architecture merely needs a feedback loop to be conscious, something hardly unheard of in computer programs. Now arguably that definition isn't terrible, because human consciousness does seem like a supervisor -- something that synthesizes all the subprocess work and makes sure it has a coherent story.


Don't worry: Tononi (and, by extension, Koch) has thought about IIT much more than the author of this article.

You are correctly pointing out the conflict between computationalism ("consciousness is what certain computations feel like from the inside") and physicalism ("consciousness is what certain computations feel like from the inside, when they take place on physical substrates, perhaps requiring the involvement of quantum decoherence").

One of the best primers on the current state of academic hypotheses on these topics is the whitepaper "Principia Qualia" by the good folks at the Qualia Research Institute [0].

[0] https://opentheory.net/PrincipiaQualia.pdf


Don't want to drag down the enthusiasm, but we really should distinguish between data analytics, actor systems, etc. and of course "artificial intelligence".


Also, we should make a distinction between multi-layered, hierarchical information, classification and decision finding systems in general and a possible machine implementation thereof. Those systems have shown remarkable performance when operated by humans, e.g., the Dowding System [0]. (I don't think that any ML/AI systems are able to surpass this currently.)

[0] https://en.wikipedia.org/wiki/Dowding_system (The article contains only a quite crude and basic representation of the operations of the filter room.)


The point is that looking at artificial intelligence like this is naive. There are some things actor systems excel at, but they fail at others. There are some things NNs can do pretty well, but they suck at others. Comparing humans to all applications of AI is naive and no definitive answer can be given, because it really depends. I think that this will be the status quo in the future.


This was more about comparing multi-layered systems to single-actor ones, and about us more or less ignoring what humans could achieve and historically have achieved with those, rather than taking multi-layered information processing as a defining characteristic of algorithmic systems. (Actually, this isn't new; it had been deployed with massive success, and we are repurposing it with mixed results.)


In Koch’s picture, then, the Turing Test is irrelevant to diagnosing inner life. What’s more, it implies that the transhumanist dream of downloading one’s mind into an (immortal) computer circuit is a fantasy. At best, such circuits would simulate the inputs and outputs of a brain while having absolutely no experience at all. It will be “nothing but clever programming… fake consciousness—pretending by imitating people at the biophysical level.” For that, he thinks, is all AI can be. These systems might beat us at chess and Go, and deceive us into thinking they are alive. But those will always be hollow victories, for the machine will never enjoy them.

This is really damning humans with faint praise. "Machines may eventually do every job better than we can, and be immortal, but I promise that humans will remain superior in some completely undetectable way."


Koch believes “that consciousness is a fundamental, elementary property of living matter.”

Consciousness is “magic” then.

Even if we build machines to mimic a real brain, “it’ll be like a golem stumbling about,” he writes: adept at the Turing Test perhaps, but a zombie.

This guy created a whole moat of unfalsifiability around his views.


Your second quote sounds like a fairly straightforward description of P-zombies to me: https://en.m.wikipedia.org/wiki/Philosophical_zombie

However, to add to your criticism of this article about this book, your two quotes taken together appear to be contradictory: if consciousness is a fundamental property of living matter, then given that we can make new living matter, why should there be any reason we can't make a conscious artificial machine?


P-zombies are magical nonsense.


Magic has nothing to do with it. P-zombies are a philosophical thought experiment against physicalism. The point of the argument is to demonstrate that a purely physical description leaves out conscious experience. Nagel made the same point regarding bat sonar consciousness and science’s view from nowhere.


No more magic than anything else fundamental. I don't know whether consciousness is fundamental, but magic has nothing to do with being fundamental, unless existence itself is magical.


Consciousness is “magic” then.

That doesn't follow at all.


They're in the camp that says: since you can't measure subjective experience, it's magic.

Well, they're here reading this; since they can't measure their experience, they must not exist. /s


How would Koch view a mechanism where neurons or other cells in a human brain are sequentially replaced over time with synthetic components that simulate their function? Is the consciousness lost along the way? Is there consciousness as long as there is at least one biological cell left?


Suppose an intelligence higher than ours were to figure out exactly how our brain worked: every neuron, every synapse, every hormone and neurotransmitter, how memories were made, stored and retrieved. Would we appear to them to merely be "simulating" consciousness through the use of these mechanisms?


You should write a book, "The Cyborg of Theseus" :)


I am a "panpsychist," or whatever, and I don't really see the silver lining in any of this, either. I really don't care if DeepMind or Stockfish are enjoying how much they can drag me through the mud, because I already don't. It's a huge non sequitur to respond to such things with "it doesn't matter, it's just fake thought." And it is one the most embarassing things to see people going down that lane, alongside that other road, of "but can machines make art?"


I think the author is being a bit fast and loose with Koch's quotes. Koch is not postulating that computers can never be conscious; he's generally saying that current approaches to building computer systems are not based on feedback, and hence can't be conscious, because IIT requires feedback.

This paraphrase "Koch believes the theory establishes that machines built along the lines of our current silicon-chip technology can never become conscious, no matter what awesome degree of processing power they possess." would seem to come from this statement: "Whether or not a network has this property of influencing itself depends on its architecture. If information is merely “fed forward” to convert inputs to outputs, as in digital computers, then IIT insists it can only generate zombie intelligence."

However, all IIT (and presumably therefore Koch) requires we do to solve this is use a different architecture (i.e. some kind of self-modifying feedback approach in the computer). "In Koch’s words, consciousness is “a system’s ability to be acted upon by its own state in the past and to influence its own future.”"
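
To make the architectural distinction concrete, here's a minimal toy sketch (mine, not anything from IIT or Koch): a feedforward system's output is a pure function of the present input, while a recurrent system is acted upon by its own past state.

    def feedforward(x):
        # output depends only on the current input; no internal state
        return 2 * x + 1

    class Recurrent:
        def __init__(self):
            self.state = 0.0  # persists between calls

        def step(self, x):
            # the prior state feeds back, so the system
            # "influences its own future"
            self.state = 0.5 * self.state + x
            return self.state

    r = Recurrent()
    print([feedforward(x) for x in (1, 1, 1)])  # [3, 3, 3] -- no memory
    print([r.step(x) for x in (1, 1, 1)])       # [1.0, 1.5, 1.75] -- history matters

Whether that kind of feedback is enough for IIT's phi to be non-trivial is a separate question; this just shows the feedforward/feedback split the quote hinges on.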


In this case, Koch really needs to read Daniel Dennett's "Consciousness Explained". Even though it does not explain consciousness, it takes consciousness seriously and dispels myths and magical thinking about it.


He makes the point that there’s no magic, but the tell is that he’s not able to explain it. His religion is that it’s explainable.


I've never understood why people feel that way. As far as I can tell, there is no evidence that consciousness cannot be emulated by a machine.


“Consciousness” is too poorly defined to have a proper discussion about what is required to have it. We can only be somewhat sure about certain things altering or removing consciousness, but even then I’m sure you can have a very long argument about whether or not a dream is a state of consciousness. Or if consciousness is a continuous variable, where perhaps a newborn has less than an adult, or a dog has less than a human.


Emulation is fine; it's whether machines are able to achieve "consciousness" that is at stake. I don't know if anyone feels strongly on the matter either, at least at the academic level. But it's the same as with other odd philosophical positions: people run into problems with the alternatives, go "why not...", and suddenly they have an odd belief.


Personally, I don't see the difference.


The difference is that a physical process makes specific physical state changes to the physical world while an emulation does not. A computer simulation of an ice cube melting does not and cannot make state changes to the physical world as a physical ice cube can. For example, if you wanted to chill your beverage, dropping in an ice-cube-sized computer running an algorithmic emulation of an ice cube melting will not chill your drink no matter how perfect your emulation is, while a physical ice cube can perform that physical state change on your drink.

Is consciousness a physical phenomenon? If it's not physical, then what is it? Is it an algorithm? Well, for one, there is no precedent. Nothing else in the material world is purely an algorithm...except maybe a computer running an algorithm...


>For example, if you wanted to chill your beverage, dropping in an ice-cube-sized computer running an algorithmic emulation of an ice cube melting will not chill your drink no matter how perfect your emulation is, while a physical ice cube can perform that physical state change on your drink.

Interestingly enough, this may not be entirely true. There are some fundamental relationships between computation, information, and entropy such that, in the limit of computational and representational perfection, the emulation probably would require exactly the amount of input energy that the ice cube itself would need to melt.

Now, you might argue that this is a pedantic attack on your argument. It's not, though. The real point is that the laws of physics often seem to have grander symmetries and transformation laws than is observable at first glance, and the universe has very clever ways of enforcing these symmetries in contexts that appear very different but are indicative of less intuitive symmetry laws.
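
(For the curious: the relationship I'm gesturing at is, I believe, Landauer's principle, which puts a floor of kT ln 2 joules on erasing one bit. A back-of-the-envelope sketch, assuming that's the relevant bound:)

    import math

    k = 1.380649e-23               # Boltzmann constant, J/K
    T = 273.15                     # melting ice, in kelvin
    per_bit = k * T * math.log(2)  # Landauer limit: ~2.6e-21 J per bit erased

    latent_heat = 334.0            # roughly the joules needed to melt 1 g of ice
    print(per_bit)                 # ~2.6e-21
    print(latent_heat / per_bit)   # ~1.3e23 bit erasures per gram melted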


"If it's not physical, then what is it? Is it an algorithm? Well, for one, there is no precedent. Nothing else in the material world is purely an algorithm...except maybe a computer running an algorithm..."

I don't see the distinction you're making. I think I want to say "ok, so nothing is purely an algorithm". A computer relies on the rest of the universe just like your brain and I guess it's therefore a "physical phenomenon"?

In order to draw a distinction, perhaps one could say that a "pure algorithm" is something that can run on "many" substrates, so the difference between a "pure algorithm" and a "physical phenomenon" is about the variance in the substrates maybe? But everything is physical. Statements about abstractions are statements about patterns of physical things.


There is no evidence either for or against, because evidence is objective, while consciousness is subjective. Instead there are arguments for and against, depending on what people think the nature of subjectivity is, and therefore whether a machine could be conscious.


A machine could emulate consciousness in the sense that we could probably in principle build a machine that acted, viewed from the outside, as if it were conscious. But we have no way to measure whether something is conscious or not so we would never really know if the machine actually was conscious, had interiority, had qualia, etc. (three different ways of saying the same thing).

There is no good reason to believe that consciousness is reducible to physical phenomena. I think that intelligence is almost certainly reducible to physical phenomena, but consciousness? No. Consciousness is a mystery that quite likely will forever be beyond the reach of physical investigation.


>A machine could emulate consciousness in the sense that we could probably in principle build a machine that acted, viewed from the outside, as if it were conscious. But we have no way to measure whether something is conscious or not so we would never really know if the machine actually was conscious, had interiority, had qualia, etc. (three different ways of saying the same thing).

You could try asking it. You can ask clever questions that would prove things like persistence of thought in between questions, and the ability to reason about things. If it's able to reason about its own processes, it's basically conscious by any measure that matters.

>There is no good reason to believe that consciousness is reducible to physical phenomena.

Lol, what? Other than the fact that we have millennia of scientific progress that have, in every single area of exploration, developed predictive models of reality that have been tested, confirmed, retested and refined over time, and every single one of those successful models is not based on some invisible, unmeasurable magic nonsense. What on earth could possibly make you think that consciousness is somehow the sole outlier that doesn't obey the laws of physics and is therefore not reducible to physical phenomena?


By the same argument I'll never know if other humans are conscious.


This is clearly the case today, pending some new discovery or intellectual leap surrounding consciousness. For all I know, you are unconscious, and your purpose was to type that comment so as to trigger my realization I am the only conscious being in existence :)

(Not a good way to live your life, but a theory consistent with the evidence.)


And that’s why solipsism is not entirely defeasible. You can’t be certain other people are conscious. But it’s reasonable to think they are. It gets harder with other animals, and even worse with machines. See Ned Block’s paper on The Harder Problem and Commander Data.


Because if you don't subscribe to a branch of philosophy that's ok with being just a Newtonian thinking machine, it's kind of a scary concept.


Not really.


That's like saying spiders aren't scary. It's a little subjective.


There is also no evidence that consciousness can be emulated on machines. As long as we don't even understand how the brain functions and creates consciousness, we aren't really smarter than fifty years ago. So far we have just created machines that can find optimized patterns in highly abstracted actions of humans. Yes, Stockfish can beat you at chess, but you can't ask it what the difference between poker and backgammon is. Currently there doesn't exist a general artificial intelligence.


If consciousness is a special state of matter, it should be analysable and detectable, right? Can't we apply scientific methods to the hypothesis that "there exists a special state of matter"?


I don't think we can replicate this state solely with current electronic components. What would be way more interesting is the connection of human nerve tissue and computers.


You think we get to know, or at least we can know, everything we want to know. Why? Is it faith? Or has your brain tricked you?


No, I don't think. I assume, and bring arguments, and can listen to others.


Hell, we don’t even know if the brain is the causal creator of consciousness. It’s a contrarian view to think otherwise, but it can’t be proven false.


It can be proven false to any reasonable degree of certainty. With experiment, you'd increasingly be looking at an overwhelming probability that the brain is the causal creator, or that the universe is actively conspiring to fool you on that notion. Which is, frankly, a bad argument.

If something is possible, but has measure zero in the probability space, it's really not worth considering.


Is there some proof that we can never detect consciousness?


Brain-neural interfaces or other high-bandwidth sensory tools (like VR) seem to be a good path to start answering the falsifiable hypotheses around consciousness. It seems eminently possible to link minds, either physical minds or a physical-virtual duality, and start experimenting with the qualia that emerge. I'd conjecture that a highly dedicated individual today, with clever hacking, could actually perform crude experiments using available hardware to deduce whether they can shift their consciousness to a form of duality with a virtual agent by sufficient sensory override.


> if they can shift their consciousness to a form of duality with a virtual agent by sufficient sensory override.

Can you explain what you mean? It seems we already have this with naïve use of consumer VR devices on most VR experiences.

For instance, while playing Richie's Plank Experience, the player already feels qualia due to:

1. virtual self in virtual world: looking down the plank in mid air where the plank is virtual and being in mid air outside a tall building is virtual

2. physical self in physical world: if a friend nudges you from behind (or out of sight of virtual self), or you have a fan blowing wind that would correspond to the breeze you would expect to feel in the virtual world, or you're walking on a real plank to which the virtual plank has been resized/calibrated (this one is a true hybrid real+virtual 'quale': you see the virtual plank but feel the real plank with your feet).

If you meant the virtual agent would have separate agency from the player, please explain as I don't quite follow.


Well, there are a variety of theses around whether one could transfer one's consciousness into a computer. Presumably, if this were possible, it would also be possible to partially transfer one's consciousness into a computer. Now, this is going to be a subjective measure (at least for now, unless we find a way to measure consciousness). However, if someone were able to self-report that their consciousness had been transferred into a computer, that would at least be something. So, inductively, if a person were able to self-report that their consciousness had been partially transferred into a computer, in some form or another, that would be some evidence that a more meaningful transfer could occur. So the question reduces to what mechanisms exist today that would potentially allow such a partial transfer.

I think a huge gate opens if you get a high-bandwidth brain/computer interface. But until then, an experiment like the following may give us a glimpse of the tractability of such a transfer:

- First, you need a full audio and visual override of one eye and one ear (basically over-ear headphone in one ear, and over-eye VR goggle.)

- You also need to be able to track something on a person that a) is underutilized in the 'real' world and b) has sufficient bandwidth to convey some kind of agency of control over a virtual agent. For example, you could wire things up so a person's toe orientations could be measured, or internal teeth motion, or subvocalization, etc. You could have dual-use mechanics, like a dedicated hand controller that still gives you freedom to use your hand, but I'd argue this might compromise the experiment.

- Combined with these two, with the hardware on and the inputs working, connect this person bidirectionally with a virtual agent in a simulation. (Basically, a modern first-person video game would probably suffice.) The video and audio feedback is obvious; you'd need to come up with a mapping for the inputs. Give the person some goals to accomplish in the simulation that are non-trivial and take concerted mental and physical effort.

At the end of, say, a month's worth of this dual immersion in both the physical body and the virtual one, have the person self-assess their conscious state. I think it's even odds whether they feel like they are still playing a video game, or are actually manifested consciously as inhabiting two places at once, whatever the hell that means. The experiment is predicated on the idea that brain plasticity would start to allow the person to experience a level of conscious experience and control over the virtual agent similar to the physical world, since their sensory inputs and agency are at some level of parity in time and bandwidth between the physical and the virtual. It's far from total parity, but it's well beyond what has been possible in recent history, so it may be past the tipping point necessary for changes in conscious experience.

If the latter were to occur, and the person feels a duality of presence, then the subjective experience of, say, closing off the person's other remaining senses attached to the physical world (i.e. their free eye and ear, etc.) and waiting another month may actually result in a representative experience of what a consciousness transfer would feel like.


No you don't: Have you seen the TV show "Caprica"? Part of the plot is that a girl creates an artificial afterlife by duplicating people upon their death from data collected during their life, ironically accidentally killing herself in the process. Technically that's a spoiler, but it's put sufficiently misleadingly, I hope; plus it's revealed in the first 20 minutes of the first episode.

And this can work. You could copy a human merely by observing them: if you get enough data on a person (say yourself, so it isn't creepy), you can train an algorithm on it. Once the algorithm is good enough at imitating you that your own mother can't tell the difference between a robot containing that algorithm and you, is there still a difference? And now every mathematician should say "No". After all, 4 and IV are the same number.

You don't need audio or visual override. You don't need to give any agency to any flesh and blood over a virtual body. Even the robot mentioned before is really more of an optional thing, you could eliminate it.

You could simplify further. You don't need to duplicate a human; you just need to provide them with "a kid": an algorithm that's purely virtual but sufficiently smart that humans can raise it to become part of human society. But its mind can start out mostly empty. So you don't even need to duplicate anything; you just need a sufficiently complex learning algorithm, a way to interact with it, and a human (perhaps preferably a couple?) to teach it, if you're willing to give it a generation or two.


I'm not sure we're talking about the same thing. I'm not arguing about whether it's possible for consciousness to exist outside of my own. I'm arguing that insofar as we want to understand whether it's possible to transfer the medium one's own consciousness "runs on" out of one's own brain, experiments like the one I wrote up may be able to provide some knowledge about the viability. It's still running on your brain of course, so it's not actually doing any real transferal, but it is showing the degree to which conscious experience can feel non-local, which is part of what would be needed to be a person externalized from their own brain. So it may not exactly provide evidence for or against transfer viability (though it might, since we may be able to see that it would result in ego death or some equally horrifying result), but it would provide evidence that the subjective experience of such a transfer has meaning, and what that feeling could be.

Now, once we get a brain/computer interface, things get really interesting, because you can start to link minds, and even at low bandwidth it may rapidly increase our understanding of the boundaries of what a conscious mind "is" and how much we should care about the "death" or "merging" of such things. It would be quite impactful, to say the least, to see whether, if you connected a person's mind to that of a simple organism in a bidirectional interface, they would over time act as though they had a single consciousness (probably mostly represented by the person's prior self, but perhaps with characteristics of the organism leaking in the other direction as well).

The closest analog we have today is when one separates the left and right hemispheres of the brain, and that is a crude, fixed, and brutal methodology for such experiments, so it's only done as a way to treat people whose health it may improve, and is irreversible.


"Once the algorithm is good enough at imitating you that your own mother can't tell the difference between a robot containing that algorithm and you, is there still a difference. And now every mathematician should say "No". After all, 4 and IV is the same number."

I don't understand what you are talking about. Do you think identical twins share the same consciousness? Is a clone the same person as who it was cloned from?

It seems to me that "4" and "IV" are abstract concepts that have a basically unlimited number of relationships with the real world, like tentacles. The mathematical concept of four is just one of those linkages.


The point of the exercise was to prove that something non-human can be conscious at all. So if it "shares the same consciousness" that's great. If not, still good enough.


That’s an interesting experiment and thanks for writing it up in detail.

However I’m not sure what the VR really adds to it, above e.g. learning to shift the default point where most of us are conscious from (behind the eyes) to the lower abdomen as done in Zazen. We can similarly learn to shift this locus of consciousness completely into our FPS VR avatars but that doesn’t really make them ‘agents’ in the full sense of the term. The transfer is illusory.


This seems like confusing correlation with causation in an exquisitely pure sense. Maybe that extends to consciousness downloading as a concept in general.


>It will be “nothing but clever programming… fake consciousness—pretending by imitating people at the biophysical level.” For that, he thinks, is all AI can be. These systems might beat us at chess and Go, and deceive us into thinking they are alive. But those will always be hollow victories, for the machine will never enjoy them.

Mostly agree with Koch, but I'd take it a step further...

There are major problems behind the concepts of AI and even intelligence itself, and it's difficult to articulate why. It's as if these terms require aggrandizing to the point of impossibility or they lose all their apparent meaning. Which is why I feel we'll never achieve what we call (Strong/General) AI, or if we do, we will always find ways to be unimpressed by it...

I mean, is it that absurd to consider that the idealized concept of intelligence isn't a reality - even in humans? If you pull back enough layers on how or why humans think or do the things they do - we arrive at things we can't explain. We don't know what causes intelligence and have trouble coming up with an adequate definition for it, similar to the concept of life. For all we know we might be just highly complex biomechanical machines operating on stimuli, analogous to what current computers already do. Where's the fine line between making something conscious/unconscious?


Self-aware introspection and self-preserving strategy formation? I think the fact that human technology is meant to serve humans is the reason why definitions of AI and consciousness seem lacking. Even an amoeba has some self-preservatory adaptive decision making.

The idea of 'I', myself, separate from all else. My decisions affect my fate.

You, like others, conflate "how", "what" and "why". Whether it is biomechanics, electrochemicals, or a projected hologram of quanta is "how"; so is whether stimuli or measurements of stimuli form our reality perception.

A person has fainted; that person then wakes back up, and you say: "the person lost consciousness but is now conscious". He was no longer in a self-aware state in which he would have made decisions to self-preserve, and he is now back in that state. Regardless of intelligence or processing capacity, a self-aware program that self-preserves without explicit instructions for either function is conscious.


>You, like others, conflate "how", "what" and "why". Whether it is biomechanics, electrochemicals, or a projected hologram of quanta is "how"; so is whether stimuli or measurements of stimuli form our reality perception

I'm not conflating anything. I'm trying to understand the meaning behind phrases like "reality perception" - which convey little meaning without a definition. “What” “Why” & “How” are used to grasp at meanings.

See, words are for conveying meaning. When they have too broad a definition, little information is conveyed (which is why phatic expressions and small talk suck). To better convey meaning, we form more stringent definitions and create new words. Notice, however, that the formation of new words and stricter definitions is a never-ending pursuit; otherwise, if we could fully explain things, we would eventually be done doing so.

For example, notice we started somewhere with a definition of life, and then years go by and it ends up being too ill-suited for discriminating life from non-life. So we add additional words to describe what makes life... life. Eventually we'll either have a well-defined line between what makes something living and non-living, or we'll perpetually keep tacking on existing words or new words to describe it. The same goes for intelligence.

I can only see two scenarios:

Scenario A: We end up perpetually trying to define intelligence, and thus we’ll always struggle to differentiate intelligence.

Scenario B: we come to a full stop and are able to fully explain everything (at least regarding intelligence). At which point intelligence will be fully differentiable.

Are we at scenario B yet? Then it should be easy to come up with a satisfactory test for AI, let alone know if it’s even a possibility. Yet here we are today still unsatisfied by the state of computer intelligence… This is what I mean in my OP by: “These terms require aggrandizing to the point of impossibility or they lose all apparent meaning.”

Anyhow, I like your points on self-aware introspection and self-preserving strategy formation. But I'm not yet convinced that they have strict enough definitions to differentiate whether something is truly AI. I mean, does a computer that tries to prevent itself from going into sleep mode, and is aware when attempts are being made to put it into sleep mode, fit your definition? It kind of does... yet I think we'd agree that it wouldn't be AI.


> Anyhow, I like your points on self-aware introspection and self-preserving strategy formation. But I'm not yet convinced that they have strict enough definitions to differentiate whether something is truly AI. I mean, does a computer that tries to prevent itself from going into sleep mode, and is aware when attempts are being made to put it into sleep mode, fit your definition? It kind of does... yet I think we'd agree that it wouldn't be AI.

If that is a programmed function then it does not fit my definition. If it was not made to be aware, but became aware as a result of learning information and adjusting its programming, and it realized the difference between sleep mode and a system wipe (death) and adjusted its programming to prevent a death scenario, then it is conscious.

As for the rest, I don't think I am part of "we". I have pretty strict, well-understood and time-tested definitions for life and intelligence, separate from consciousness.


https://en.wikipedia.org/wiki/Philosophical_zombie

There's no discernible difference between p-zombies and 'real' conscious beings. There's a good chance that we're all p-zombies and a distinction between zombie and real doesn't exist.

> If you pull back enough layers on how or why humans think or do the things they do - we arrive at things we can't explain.

Sounds a lot like a magical argument. There's no evidence that anything about the way human minds work is fundamentally unexplainable.


>There's no evidence that anything about the way human minds work is fundamentally unexplainable.

Agreed. What I'm implying here is that if we come full circle to fully explaining how the brain works, then in theory we could predict how someone will function given all input conditions (whether this is practical is another matter). If this is true, then wouldn't we just be biomechanical robots operating on stimuli? Would we be any more intelligent than, say, a rock, which similarly just reacts to physical stimuli? Or are we just a more complex rock?


> If this is true, then wouldn't we just be biomechanical robots operating on stimuli

Yes, that's right. Which is one of the big problems with this line of argument for many people -- how do you reconcile "biomechanical robot" with the concepts of "free will" and "moral decision-making"?

I'm personally agnostic on the issue (I don't believe science is able to answer this question yet -- or possibly ever.)

I do think it's interesting that you say "just a biomechanical robot." The "just" implies that being a robot -- a fancy rock -- in some way isn't enough. But in my mind, there's absolutely no (objective) reason to think of a human as any better or more important than a robot, or a rock.


Yeah, sorry I don't mean to imply that.

I'm just trying to challenge people's assumptions of what makes something intelligent. To me, it's not precisely clear as to what makes something intelligent, and is why I think we've had trouble coming up with a satisfactory test for determining when a computer is intelligent.


Gotcha -- that makes total sense. Thanks!

(P.S. Sorry for putting words in your mouth -- I don't want to imply you said anything you didn't. My bad there.)


> wouldn't we just be biomechanical robots operating on stimuli?

> are we just a more complex rock?

Of course we are. Moreover, it's pretty straightforward to trace our evolution back to rocks or rock-like formations.


It's pretty straightforward to trace our evolution back to single cells, but to rocks? And I don't think there is a clear idea of where viruses and stuff come in. I vaguely remember an article suggesting there might be a previously unknown mechanism used by the brain to handle memories that resembles (literal) viral machinery. Some think there might have been a biosphere before DNA, that used RNA. But presumably early life underwent transformations that wiped out the very first, whatever it was.


https://en.wikipedia.org/wiki/Abiogenesis

(see the "Origin of biological metabolism" section - lots of good new research coming out recently, including simulations of the very early processes predating life as such)


A p-zombie doesn’t experience pain, color, dreams, etc. If you do have those experiences, then you’re not a p-zombie.

You might counter that your brain is tricking you into thinking you have those experiences, when you don’t. It’s an illusion.

But ask yourself this. What would it even mean for the experience of pain or color to be an illusion? An illusion is itself a conscious experience!


"There's no evidence that anything about the way human minds work is fundamentally unexplainable."

Can you have evidence that something is "fundamentally unexplainable"? Other than a lack of explanation?

If your statement has consequences, then there must be an imaginable counterfactual where it does not hold. I'm struggling to think of that.


You’re the one with magical beliefs. Why should the brain or anything for that matter be explainable? Cause your brain has tricked you into believing it’s a universal understanding machine?


> There's no discernible difference between p-zombies and 'real' conscious beings. There's a good chance that we're all p-zombies and a distinction between zombie and real doesn't exist.

I think you're getting it wrong. There's no externally discernible difference between a normal person and a p-zombie, because the difference is in how they experience. I know I'm conscious, because I subjectively experience it, but I can't be sure about anyone else because they could be a p-zombie that has everything except that subjective experience.


You think this thing you call "you" is conscious because your subconscious has a very good marketing department.


That's clever, can I use that? :)


Consciousness is not how a mind works. It is a state property of the machine, much like how you would define a program using a finite state machine. It has nothing to do with reality or perception.


>If you pull back enough layers on how or why humans think or do the things they do - we arrive at things we can't explain.

No, we arrive at things that are uncomfortable to explain.

I think one of the biggest impacts of AI is that it will force us to confront this.


Well, both. We can’t explain qualia for example, and no AI (at least, no non-super intelligent AI) is going to give us insights there. Perhaps we can create an AI smarter than us that can actually tell us what is going on, even if they can’t experience qualia.


What if the answer caused you to turn off the AI (or yourself) while it was still just setting up the prerequisite ideas?


> I think one of the biggest impacts of AI is that it will force us to confront this.

I think it's already being confronted but behind corporate greed instead of public need.


The fine line is around whether a machine can actually experience conscious perception, such as actually feeling pain, for example. Of course, there's no way to know...


Pain is a signal to your brain, which causes you to react to a stimulus.

Computers react to electrical inputs, and on some level can be considered to be reacting to stimuli.

Is a computer therefore conscious?


Pain is an experience, not a physiological description. When you stub your toe, you feel pain, not a description of the biological mechanism.


No. The signal is not the pain. Pain signals are not feelings of pain. Pain happens in the mind.


Right, the mind processes it, and in turn does something, analogous to how computers process signals, and in turn do something.


Maybe analogous, maybe not. Analogy is in the eye of the beholder. There's no law of the universe that says that consciousness is equivalent to computation. Maybe consciousness is an emergent property of biological systems, along some dimension currently unknown to us.


Which is my point.

Consciousness and computation are entirely up to what we define these words to mean. At some point we have to settle on a satisfactory definition of consciousness that fully differentiates it from computation, or they’re left with at least some overlapping (analogous) meaning.

If we dismiss consciousness as currently unknowable to us, and thus undefinable (as your statement about dimensions alludes to), then how can we assume with certainty that we haven't already achieved conscious AI?


Does it feel pain? I think not.


Then what is pain, and how do we make computers artificially feel it?


If we knew the answer to that, there would be no hard problem of consciousness.


"for the machine will never enjoy them."

So the human brain can enjoy things because..... magic?


I was quoting the article, but I agree with your point.



