Artificial Intelligence: What’s to Fear? (the-american-interest.com)
36 points by the-enemy on Oct 11, 2019 | 65 comments



"The sea makes people feel as if they are in a whirl of action ... But this is wrong thinking. We say waves “cause” a ship to break up ... but the wave itself is passive and without the power to act."

Nope, the wave is energy transmitted from the sun, to the wind, to the sea ... to the ship.

In the same way, a mind is a wave, constrained and channeled by its neural substrate and by external stimuli.

The article is nonsense to me: there is no such thing as passive; the universe is all active, seething.


I think the main point is that a "wave" does not have any agenda or any purpose. Atoms cannot "decide" by themselves to cause a chain reaction. All of this is just "programmed".

This kind of topic is taken on by Stanislaw Lem in https://en.wikipedia.org/wiki/The_Invincible

Really liked this book.


To add to my previous comment on "What's to Fear": if the author read that book, he would notice there is still a lot to fear. A synthetic life form could be dangerous without even mimicking humans. The same goes for waves in the ocean: even if those waves have no agenda to kill us all, when it's force 10 on the Beaufort scale you'd better stay away from them.


Grrrr. This could have been interesting if Umberto Eco were arguing semiosis, instead of this poor exercise in semantics. As a concrete example, consider "The AI therapy app is not a mind. It cannot emote. It cannot feel the warmth behind its praise or the sense of urgency behind its criticism. It is passive." Yah, no. I can also do sentiment analysis on the user's comments, and the behavior of the system can be driven by a goal to change that sentiment analysis, so I've got me an active participant in a feedback loop. Yes, there is no there there, it's just a numbers game, but this article is still complete twaddle.


> Even lovemaking is at risk, as artificially intelligent robots stand poised to enter the market and provide sexual services and romantic intimacy.

This perfectly encapsulates why I'm so skeptical of AI alarmists. They tend to boil down the human experience to such coarse terms that it undermines their legitimate issues.

> AI is a collection of passive silicon and metal. It does not speak. It has no mind. Like an ocean wave, it forms part of an ordered sequence of passive realities moved by an agency not its own

No, AI is a series of numbers. I just wrote about this misconception the other day [0]. It would be great if people stopped anthropomorphizing AI just because it has the word "intelligence" in it and humans are said to be intelligent.

[0] https://medium.com/ml-everything/artificial-general-intellig...


Don't let bad, misguided, or low-quality alarmism distract you from legitimate concerns or cautions.

Dworkin's critique is ... frankly, fairly bizarre to me. He himself is an anesthesiologist who's studied political philosophy, and his floundering in deep waters here shows. He's well out of his depth.

That does not mean, however, that there aren't very legitimate concerns to be raised, and they have been and are being raised. Cathy O'Neil, Nick Bostrom, Jonathan Zittrain, Zeynep Tufekci, and others have raised numerous salient points.


It's not clear to me from your post or essay whether you are saying the current state of AI is absurdly overhyped and is not even remotely close to producing actual intelligence (I agree strongly); or if you are claiming it's not possible with computers even in principle because "numbers", in which case I don't understand the justification at all, as it implies a mystical quality to human intelligence you rightly dismiss in the current hype.


Definitely your first point.

As for the second point, I don't think there is a mystical quality to human intelligence. I guess it's mystical in the sense that no one really understands consciousness just yet. But I think something else is going on other than the brain being a rote, inefficient chemical cruncher. I find the whole AGI, singularity, and AI hysteria reductive.


Yeah, I'd agree with most of that; consciousness (as opposed to cognition) is definitely fascinatingly different to ponder.


And humans are not just a series of numbers?


No. There is absolutely no empirical evidence that consciousness is just a series of numbers.

There's an assumption that consciousness is just a series of numbers, and some more or less credible philosophical arguments for/against this point of view.

But believing there's more than that is faith-based, not empirical.


> No. There is absolutely no empirical evidence that consciousness is just a series of numbers.

The Bekenstein Bound says that any finite volume of space must contain finite information. Finite information can be fully encoded as a number. Ergo, human consciousness is a number.

So unless you're going to suggest that the Bekenstein Bound has no empirical support, which would be a pretty wild claim, then yes, there is indeed empirical evidence that consciousness is just a number.
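
For reference, the bound being invoked, with back-of-the-envelope brain numbers that are my own assumptions rather than anything from the article:

  I \le \frac{2 \pi R E}{\hbar c \ln 2} \approx 4 \times 10^{42} \text{ bits for } R \approx 0.1\,\mathrm{m},\; E = mc^2 \approx 1.35 \times 10^{17}\,\mathrm{J}\; (m \approx 1.5\,\mathrm{kg})

Whatever values you plug in, the result is finite, which is all the argument needs.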


> The Bekenstein Bound says that any finite volume of space must contain finite information. Finite information can be fully encoded as a number. Ergo, human consciousness is a number.

That would be "many numbers" not "a number." So many numbers that it's possible no digital machine can simulate it.


> That would be "many numbers" not "a number."

A distinction without a difference, honestly. Your computer conceptually has "many numbers", but its addressable storage is finite, and so is really just one big number.

> So many numbers that it's possible no digital machine can simulate it.

The very fact that the brain exists is proof that a machine of this size fully mimicking the brain's capabilities is feasible. Encoding information digitally is at best a constant factor difference from other numerical bases.

And this is assuming a quantum-level reproduction, which very likely isn't necessary.


No, a set of "many numbers" trivially maps 1:1 to a single number with the same information content. Constructs like space-filling curves are a manifestation of this relationship. The property is occasionally exploited in algorithm design in order to change the number of elements in a set.
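
To make that concrete, here's a minimal sketch of one such bijection, the Cantor pairing function (the helper names are mine):

    import math

    def cantor_pair(a: int, b: int) -> int:
        # Bijection from pairs of naturals to a single natural.
        return (a + b) * (a + b + 1) // 2 + b

    def cantor_unpair(z: int) -> tuple[int, int]:
        # Inverse: recover the original pair, losslessly.
        w = (math.isqrt(8 * z + 1) - 1) // 2
        b = z - w * (w + 1) // 2
        return w - b, b

    assert cantor_unpair(cantor_pair(12345, 67890)) == (12345, 67890)

Iterate the pairing and any finite tuple of numbers collapses into one number with nothing lost.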


If human reasoning can fit in say a 2m^3 object, you can (in theory) always build a machine that encodes the information in 2m^3 or less.


But probably not a digital machine using today's form of digital technology.


We can't even in principle encode it as a numeric copy; it's not fully measurable: position or momentum, not both.

The mind isn't a closed system unless it's drawn in light cones.


>No, AI is a series of numbers.

Roughly as much as human reasoning is a series of numbers, yes.


AI is the perfect term for what we have now - it's artificial intelligence. It _seems_ intelligent, but the reality is that all we have now is pretty good pattern recognition coupled with an index of an incredible amount of information. That's not intelligence.

Sure, it _seems_ smart when it can tell you the volume of the moon or whether a tomato is a fruit or a vegetable, but ask it to explain a joke if you want to see just how unimpressive the current state of the art is.

Intelligence is the ability to observe patterns and integrate that information into abstract concepts and models that describe (and predict) reality.

You'll know we've cracked General Intelligence when your Tesla knows to slow down tonight because it's Halloween and there was a new Star Wars movie this year, so there are going to be a lot of kids running around in hard-to-see Darth Vader costumes...


I'm uncomfortable with any term involving "intelligence" just because it's so misleading when describing the current state of affairs with computers.

Also, should machines ever actually become sentient to our level (and beyond), the "artificial" is then misleading - and their intelligence, and their default right to be considered human (or human-equal), are undermined by implication.


It's a nice dovetail with American prudishness. The same utterance, with only a few words changed, could justify an anti-prostitution stance.


Who said anything about prudishness? I'm saying that there is a deeper connection people make with other human beings, rather than some inanimate piece of matter.


However, making a real connection is difficult, and there's a huge profit potential in selling people (usually, but not exclusively, men) a fake connection. At what point do AIs start passing the "phone sex Turing test"?

That is one of the things people are explicitly worried about: the possibility that they might make a connection with someone they're communicating with over the internet and then discover it's not a real person.


I don’t see why there is any need to worry about that; you can make fake connections with a real person too. They might be your world, but on their end they might not actually give a fuck about you; they just pretend they do. Ever get worried about that?


This article completely misses what I see as the biggest problem with intelligence in machines: corporations can use them to be unaccountable. Insider trading? No, that was our "algorithm". Racist policies? Oops, sorry, that was just our bot. Inhumane practices? Sorry, we are still "working" on our artificial intelligence since, you know, it's very difficult.

Where most people see mysterious advanced help in day to day tasks, some corporations are seeing obedient, unaccountable and un-auditable workers. They are already starting to use them to perform their dirty deeds, and hiding behind the black box nature of these complex algorithms.


> Corporations can use them to be unaccountable. Insider trading? No, that was our "algorithm". Racist policies? Oops, sorry that was just our bot.

I would say the opposite. Insider trading? No, here are the inputs to our algorithm. Racist policies? No, here are the inputs to our algorithm.


How would that solve anything? Most machine learning algorithms being used today are black boxes, and the people implementing them often have little understanding of exactly what transformations the model ends up making between the inputs and the outputs. It's entirely possible for seemingly innocuous inputs to pick up outright problematic features (e.g. skin color from a photo ID) or indirectly problematic features (e.g. income level and an address in an area predominantly populated by minorities) that its outputs hinge on.

If anything, the issues with machine learning are even more problematic than what the OP posted, as something like systemic racism in a machine learning model can be very hard to identify.
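
A minimal sketch of that failure mode, with made-up data: drop the protected attribute entirely, and a correlated "innocuous" input still carries the signal.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical protected attribute the model never sees.
    group = rng.integers(0, 2, n)
    # "Innocuous" input (say, a neighborhood code) that happens to
    # match the protected attribute 90% of the time.
    neighborhood = np.where(rng.random(n) < 0.9, group, 1 - group)

    # The proxy carries most of the protected signal anyway.
    print(np.corrcoef(group, neighborhood)[0, 1])  # ~0.8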


"The input to our algorithm is the entire content of the Alexa top 1,000 websites over the last year". Good luck finding the secret signal in that one.

Similarly, there's plenty of evidence that algorithms can reconstruct discrimination from inputs other than skin color - names are the obvious example, but address seems a likely risk too. Algorithms need anti-racism quality control on their output.


And tell me, how can you verify that these were the inputs used? How can you check to see how those inputs were used within the algorithm? How can you audit such a system?


You rebuild the system using the same inputs and initial conditions and observe the outcome. Thank you, next.


Talk about underplaying the difficulties. Computer science needed a whole new package system (Nix) just to guarantee reproducible builds. Compilation is pretty much the definition of a pure input/output function, and most builds are simply not reproducible. If even builds aren't reproducible, good luck reproducing anything more complex like machine learning.
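
For a sense of scale, here's a minimal sketch of what "same inputs and initial conditions" already demands in a typical PyTorch setup, and even this is documented as insufficient on some hardware:

    import os
    import random
    import numpy as np
    import torch

    def make_deterministic(seed: int) -> None:
        # Pin every source of randomness we know about.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Fail loudly on ops with no deterministic implementation.
        torch.use_deterministic_algorithms(True)
        # Needed for deterministic cuBLAS; set before CUDA work starts.
        os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"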


Auditing is a complicated process, if you don’t want to do the work you’ll just have to take their word for it.


The problem is the feasibility of auditing, not whether the motivation exists. And no, taking their word for it is not adequate.


Then what do you propose be done? If you accuse someone of having an extremely racist AI you need a way to prove they set out to make it that way. You don’t just get to say “It’s racist” and shut it down.


You look at the outcome distribution.
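
Concretely, the simplest version of that audit is positive-outcome rates per group; a sketch, with a hypothetical data shape:

    from collections import defaultdict

    def outcome_rates(decisions):
        # decisions: (group, approved) pairs, where 'group' is audit
        # metadata the model itself may never have seen.
        totals, positives = defaultdict(int), defaultdict(int)
        for group, approved in decisions:
            totals[group] += 1
            positives[group] += int(approved)
        return {g: positives[g] / totals[g] for g in totals}

    # A large gap between groups is the red flag.
    print(outcome_rates([("A", 1), ("A", 1), ("A", 0),
                         ("B", 1), ("B", 0), ("B", 0)]))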


The danger here is of assuming the conclusion, in this case, that race doesn't correlate with anything meaningful. This is still a hotly disputed point.

Ultimately, transparency and reproducibility are the key criteria I think, like it is in science.


And then what? Shut it down if it doesn’t conform to your expectations or world view?


Biased algorithms are a thing. We are often blind to that realization because we are often blind to our own biases.


"Our" algorithm ? I was under the impression that AI were generating their own algorithms.


In a sense you are building an algorithm, just by different means from the usual ones: it is built through compilation of a large amount of data using well-known machine learning algorithms. To be honest, especially in this case, these are vocabulary distinctions we should not get stuck on.


It's an algorithm, just at a higher level of abstraction. Turtles all the way down.


This already happens. It's actually bureaucracy, but it's rarely called that.

Try dealing with YouTube after a fraudulent copyright strike, or getting money from Amazon or PayPal after your account has been suspended for an arbitrary reason.

AI has the potential to automate bureaucratic corporate hostility and indifference to weaponised levels.


See also the unfolding Internet of Shit saga. While everybody is afraid of artificial intelligence, I am currently more afraid of Artificial Stupidity...


This was an absolutely fascinating read...

> An atomic bomb is a frantic explosion. Yet none of these phenomena is active; they are all passive. They lack the power of cause.

To me it’s completely incomprehensible that an obviously intelligent and well educated (MD PhD) person can think like this... How is it possible to deny that a thermonuclear explosion can cause... a lot of things...?

I’ve lately come to realize that there is much more variation in how people see the world and reason than I thought possible. I would genuinely like to understand those “modes of thinking” that are most far removed from my own. Can anybody explain how this particular worldview works or point me to something I can read?


I think the author is distinguishing between intent/agency/free will in the case of mental action and the empty, meaningless quality of material "causes".

For causality and the slipperiness of it, read Hume.


Hume the empiricist... yeah, I find this philosophical view incredibly difficult to understand. I naturally gravitate to the hypothetico-deductive way of thinking, and find it very hard to break out of.

I don’t think going straight to Hume will be very meaningful for me. Is there anybody who has explained empiricism from a more hypothetico-deductive perspective?


Yet not only does AI win at cards now, it also creates art, writes poetry, and performs psychotherapy.

Those are shockingly broad claims that are not even close to being accepted by even a minority of people.


AIs can absolutely do all those things. The question is if they can do it well enough to be of any value.


I believe this is a category error. AI is not sentient or aware; it definitionally cannot “do” anything.


I see no reason why "doing something" requires sentience.


https://www.cs.utexas.edu/users/EWD/transcriptions/EWD09xx/E...

Here's the relevant Dijkstra. To be the agent of an event requires agency. We have it; principal component analysis doesn't.

I'm firmly of the opinion that anthropomorphization of code is a category error and a bad idea.


> To be the agent of an event requires agency. We have it, principal component analysis doesn't.

Agency remains undefined, just like "sentience" from your other post. Unless you actually define it, how can you possibly know that we have it and principal component analysis doesn't?

Even if we take a connotative definition of agency, i.e. that it is some quality or behaviour that humans exhibit, then it remains to be shown that code cannot possibly have agency in this exact sense.

Certainly not all code will have agency, just like all code does not have a "sorter" property exhibited by sorting algorithms. I also agree with you that current forms of AI don't have what we recognize as agency, but I don't think we're as far off as some think, and I definitely don't see how it's a category error to talk about AI in this fashion. Whatever qualities you think humans have that grants agency, it might not be nearly as complicated as you assume.


My problem with these articles isn't that they overestimate AI, but that they underestimate humanity. Yes, many millennials have a serious electronics addiction, but many Gen Z kids are really good about managing their screen time, and try to make face-to-face communication happen more and more.

tl;dr: Humanity is really good at adapting to coexist with technology. Stop Panicking.


What about all the other technological controversies? Not everything is about screen time.


I'm using a specific example of a panic to point out that the panic is often unfounded.


I waited for something insightful at the end but didn't find anything there. Just a bunch of "metal and silicon isn't intelligence." Ok, fine, but a whole article without any reference to the Turing test? That should have been the starting point.


Sure, let's dive into the philosophy of causation and completely butcher it. Very clear article /s


steam engine - what to fear?

combustion engine - what to fear?

computer - what to fear?

internet - what to fear?

robots - what to fear?

artificial intelligence - what to fear?

?????? - what to fear?

Maybe there is no point in fearing?


I have a few thoughts about doomsday level AIs:

1. We _will_ eventually crack "general intelligence". If nature is able to do it with 9 months of build time and a few pounds of meat, there's no physical reason we can't do it on silicon (or whatever substrate is next)...

2. When we finally develop a half-way decent general intelligence, much like human intelligence, we're not going to have any idea how it works.

3. It will probably end up employing a lot of the same cognitive shortcuts that humans use (see https://en.wikipedia.org/wiki/List_of_cognitive_biases). And in doing so we're likely to end up with a general intelligence that, on the whole, is capable of much greater calculation, but ultimately will be a disaster as it will suffer from the same sort of psychological flaws.

Human survival is probably going to come down to humanity trying to talk a super-intelligent AI out of some crazy ass conspiracy theories. It will be like the ultimate Thanksgiving with your crazy uncle.


> If nature is able to do it with 9 months of build time and a few pounds of meat, there's no physical reason we can't do it on silicon (or whatever substrate is next)...

Nature had a couple billion years to figure it out.


Human intelligence isn't just the biological process - you can't have a baby in a vacuum and it becomes intelligent. You also need an intelligent culture to learn from and within.


Our Tower of Babel _will_ eventually reach heaven.


And mankind will never be able to soar like the birds.

shrug

We've seen this play out many, many times before...


To "soar like the birds" would mean to be able to fly at will and land at will, wherever and whenever you desire, as effortlessly as walking. You're absolutely right, mankind will never be able to soar like the birds. Even if we could figure out the technology, it would be a privacy/safety/liability nightmare and they'd make laws against doing it.



