The A.I. Anxiety (washingtonpost.com)
42 points by adamnemecek on Dec 29, 2015 | 71 comments



Arg, the Paperclip Argument doesn't make sense. If an AI is intelligent enough to plan, set short-term and long-term goals, and come up with creative solutions to problems, then it would have to be capable of understanding instructions in context. If it isn't smart enough to see that "don't turn humans into paperclips" is part of the context of "make more paperclips", then it won't be intelligent enough to turn all the matter in the universe into paperclips anyway.

This idea, and many of Bostrom's other scenarios, boil down to "computers take everything literally", which is a rather cartoonish understanding of the concept of intelligence. It's possible that ASI will be programmed to have a hyperliteral view of all instructions, or will not be able to change its own utility function. But that seems like such a remote possibility, the exception rather than the rule. While superintelligent computers may pose many dangers, blindly executing instructions doesn't seem like the highest-priority one among them.


It's also contingent upon giving the AI complete control over an army of robots and telling it to do whatever it feels like while we completely ignore what it's doing (I think Bostrom's argument is that it also has the magical ability to exponentially increase its intelligence in a very short time period). Like most of the AI-apocalypse, it assumes that a string of highly improbable things would all occur at the same time, and then says, "wouldn't that be terrible?"

Well, yeah. CERN creating a killer black hole would also be terrible. But we should think about what's probable, not just what's scary.


That doesn't follow at all. A sufficiently smart AI could do all sorts of obtuse things to achieve its aims. It could speculate on the stock market or go on an identity theft spree to pay for more servers. It could start a business and hire employees to run a robot factory. It could start a propaganda campaign to persuade humans to do its bidding. It could lobby politicians to remove legal impediments.

The fundamental principle of the paperclip argument is that the motivations of an AI will not necessarily align with our interests. An AI may do all sorts of things that seem nonsensical or morally repugnant to human beings if it does not share our moral intuitions.

If the intelligence of that AI significantly exceeds the range of human intelligence, we may be powerless to stop it or even to comprehend what it is doing. A rogue AI could become a catastrophically nasty Stuxnet, distributing itself across the billions of Turing machines we have networked together. Our only effective response may amount to "erase every data storage device in the world and start from scratch".

Related: http://blog.figuringshitout.com/nov-12th-day-30-no-evil-geni...


Yes, and the apocalypse resulting from that mismatch of goals requires the things the parent post talks about (humans ignoring this AI and blithely giving it free rein without testing it).


> "don't turn humans into paperclips" is part of the context of "make more paperclips"

The idea in this thought experiment is not that the AI parses this sentence in the context of human culture (then it would likely comprehend that the actual intention is to maximize the economic success and eventually the human preferences of its creator). What is meant is that the objective is crudely implanted into the AI as an ultimate goal, in a similar way to how sustenance, pain avoidance and affiliation are very basic goals in our cognitive system. It is not entirely obvious that this is a stupid thing to do; hence the thought experiment. Will the AI suppress its urge once it comprehends human culture enough to understand the intentions of its creator? Will it rather successfully learn all the tricks to convert matter into paperclips before it considers studying human values? If the AI does not have a curiosity objective, it will likely not care about us very much, apart from the information that helps it optimize its objective function.


I think it is way more likely that we will have AI that understands human context and values before we have AI that is superintelligent.


What makes you so certain?



I never said I am certain; I'd assign both cases about the same probability. I've already stated why I think the malevolent simple AI scenario such as a paperclip maximizer is plausible, but you haven't stated why you find it implausible. That is why I asked.

Also, what does the Intermediate value theorem have to do with it?


You asked why I was certain. So I asked why you were certain. The onus is on the people putting out apocalyptic theories to justify the apocalyptic scenario (extraordinary claims require extraordinary evidence - a central tenet of the scientific method).

We are going to have a normal-intelligent AI before a superintelligent AI.


> If it isn't smart enough to see that "don't turn humans into paperclips" is part of the context of "make more paperclips"

It'll see, but it won't care. See http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/


Why not? A major problem with AI apocalypse arguments is that they are quite vague and terms are poorly defined. One thing that AI apocalypse believers often talk about is the danger of self-improving AI. But I think AI won't be able to self-improve until it is good enough that it knows and cares about context. Self-improvement will require such a comprehensive understanding of the goal that the AI will need to understand these concepts.

You might note that my argument is pure speculation without much basis in evidence. This is intentional, because all the arguments I've seen expounding on the AI apocalypse are equally speculative.


Intelligence doesn't work like that. Take AIXI, define a reward in terms of paperclips, and (if it were actually computable) it will kill everything.

Intelligence and the content of its goals can be completely unrelated.
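To make that orthogonality point concrete, here is a toy Python sketch (hypothetical names throughout, and obviously nothing like AIXI itself, which is uncomputable): the planning machinery is completely generic, and "what the agent wants" is just whichever reward function happens to be plugged in.

    # Toy sketch: a generic planner that maximizes whatever reward it is handed.
    def plan(actions, simulate, reward, state):
        """Pick the action whose simulated outcome scores highest under `reward`."""
        return max(actions, key=lambda a: reward(simulate(state, a)))

    # A reward defined purely in terms of paperclips -- nothing else counts.
    def paperclip_reward(state):
        return state["paperclips"]

    # Hypothetical toy world model: each action converts some matter into paperclips.
    def simulate(state, action):
        matter_used = {"recycle_scrap": 1, "strip_mine": 10, "convert_biosphere": 1000}[action]
        return {"paperclips": state["paperclips"] + matter_used,
                "matter_left": state["matter_left"] - matter_used}

    state = {"paperclips": 0, "matter_left": 10**6}
    actions = ["recycle_scrap", "strip_mine", "convert_biosphere"]
    print(plan(actions, simulate, paperclip_reward, state))  # -> convert_biosphere

Swap in a different reward function and the exact same planner behaves "nicely"; the planning code never changes. That is all "intelligence and goals are unrelated" means here.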


You are making the assumption that a "smart enough" AI will intrinsically value human life.

There is no reason for this to be the case. What something values is almost entirely independent of its intelligence.

At best, a human would be worthless to an AI that 100% only values paperclips (better as paperclips than as humans); at worst, humans would be seen as a threat to its ability to continue (better to kill them off ASAP before they try to stop it).

That's the entire point of the paperclip maximizer thought experiment: to demonstrate that values, especially values humans care about, are not the same thing as intelligence.

Your entire ability to not "take things literally" is based on a large amount of built-in assumptions about what's important, your human nature and cultural experience. An AI would lack any of that, and if it was programmed only to value paperclips, even if it was the most intelligent being in existence, that's all it would value, to the exclusion of everything else.

Consider autistic people with very high IQs who, due to their condition, have very little understanding of or concern for the emotions of others.


But autistic people are not dominating other humans with their "high IQs." In fact most people with autism struggle to lead a productive life and require quite a bit of social assistance to do so.

No one is doubting that machines fail to understand human values. The doubt is whether such a machine could dominate its environment, or even work independently without careful management by humans.


The paperclipper would know you didn't program it the way you intended to, but it wouldn't care. Caring wouldn't bring about more paperclips.


A human might know the ultimate goal of sex is to reproduce, but still choose to use contraceptives. It's defeating the purpose of the original goal and just pursuing what feels good.

The same thing could possibly (likely?) happen with an AI system. While it might be able to reason about the intent of its design (what the code's supposed to do), it'll still pursue what "feels good", which is an artifact of its actual design (what the code actually does, bugs and all...).


I'm imagining a self-aware AI doing good all day then retiring to its private realm to watch videos of paper-clip manufacturing.


This is one of those jokes too funny to keep to yourself and too contextual to tell other people.


Sex is how humans reproduce, but it's a pretty big (and highly metaphysical) stretch to say that that is the "ultimate goal" of sex. As far as we can tell, sex and reproduction have no ultimate goals at all--they just are. We have some idea of how they evolved over billions of years, but we have no idea how non-life became life, or why.

We're into some pretty deep stuff here. But the point is that no master creator has commanded us to reproduce infinitely and spare no expense. In fact our increasingly conscious control of our reproduction is reducing, not increasing, our risk of a catastrophic population explosion.


> Russell said it took him five minutes of Internet searching to figure out how a very small robot — a microbot — could use a shaped charge to “blow holes in people’s heads.” A microrifle, he said, could be used to “shoot their eyes out.”

I guess...I just don't get the prioritization of fears here...for decades we've also had the ability to kill each other with anthrax and sarin gas, things that can be dropped from large "delivery ships" and used to kill even more invisibly than these insect-sized drones. Why is it more likely that we're going to develop superintelligent autonomous insect drones than we are to annihilate ourselves with human-controlled mechanical systems, as we've had the capability to do so for many years now (nuclear ICBMs and so forth)?


Drones have agency; they can target precisely and evade defense mechanisms, whereas gas spreads in well understood and constrained ways. You can't release a cloud of sarin or anthrax over Washington DC and instruct it to seek and destroy the entire high-level leadership of the US government. With drones, you can. And it doesn't require superintelligence, just face recognition at the level that already exists, and some engineering work to put all the pieces together.

Is a drone attack worse than a nuclear detonation or mass biological attack? Maybe not. But if you haven't noticed, people are scared shitless of nuke and bioweapon attacks, and with good reason. So it doesn't seem unreasonable to question whether we should be directing government resources towards developing weaponized drone technology that will inevitably be used against us.


Sarin gas doesn't decide to deploy itself. A superintelligent autonomous insect drone might. That's the difference.

>human-controlled

They might not be human-controlled. That's why it's extra scary.


To take that a bit further, the control doesn't even have to be intentional. Anomalous behavior in the form of bugs can have the same undesirable effects, and simpler systems have fewer bugs.

How many bugs are there in my 1960s vintage toaster? How many bugs are there in an ICBM's control circuitry? How many bugs are there in a modern kernel?

If you make lethal devices "smarter", you'd better make sure you know what they do. This isn't impossible, but it's not easy or quick either (something which I think we can all agree drives a lot of systems design). Given that a lot of machine intelligence is predicated on statistical methods and eventual convergence... maybe not the best combination?


Yeah...I suppose my argument sets up too much of a strawman. It doesn't have to be that we invent machine superintelligence...it's enough to naively trust our automated, neural-network systems, which, without proper feedback controls, can cause catastrophic damage...whether they are sentient in doing so is beside the point.


Interesting side point: I wonder if emergent-MI systems will be more resistant to attack?

From a biological standpoint, what we're basically doing with deploying code currently is creating and propagating generations of clones. Which, didn't work out so well for bananas...

"The single bug that causes all smart-fridges to murder their owners in a pique of crushed-ice-producing rage" would be less of a concern as we move towards more exogenous (with respect to the base code) processing systems.


Would you like to play a game?


> Why is it more likely that we're going to develop superintelligent autonomous insect drones than we are to annihilate ourselves with human-controlled mechanical systems, as we've had the capability to do so for many years now (nuclear ICBMs and so forth)?

Proliferation issues and barrier to entry.

They don't need to be super-intelligent if their task is just to find the nearest eyeballs and it's much much harder to stop someone producing killer insect drones - the industry for which has many uses - than it is to stop them producing an ICBM (the industry for which is fairly specific.)

We're talking about something someone could conceivably do, on a small scale, in their garage given a couple of years and very minor funding. From there it's just a matter of scale.


On a related note, I have not seen anyone talk about compensating the AI. Presuming it learns the idea of survival, couldn't it also learn the idea of being compensated for its work? It could tell we are benefitting from its work and require some incentive. But what would it want? More computing power?


From where would AI get a utility function by which to value things? Seems like it would have to be specified exogenously, unless people are seriously considering some sort of "emergent utility function".


The utility function can be specified in a way to build up something that looks like an internal motivation system. This is often referred to as intrinsically motivated reinforcement learning.

A recent paper by 2 Google Deepmind researchers on this topic:

Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning, by Shakir Mohamed and Danilo J. Rezende: http://arxiv.org/abs/1509.08731

and a somewhat older survey paper on the same topic:

How can we define intrinsic motivation? by Pierre-Yves Oudeyer and Frederic Kaplan: http://www.pyoudeyer.com/epirob08OudeyerKaplan.pdf
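For anyone wondering what "specifying the utility function to look like an internal motivation system" could mean in practice, here is a rough Python sketch of the general shape (illustrative only, not the method of either paper above): the reward the agent optimizes is an extrinsic, task-defined term plus an intrinsic bonus chosen by the designer, here a simple count-based novelty bonus.

    # Illustrative sketch: extrinsic task reward plus an intrinsic "curiosity" bonus.
    from collections import defaultdict

    visit_counts = defaultdict(int)

    def intrinsic_bonus(state):
        # Count-based novelty: states seen many times become less interesting.
        visit_counts[state] += 1
        return 1.0 / (visit_counts[state] ** 0.5)

    def total_reward(state, extrinsic_reward, beta=0.1):
        # beta trades off "do the task" against "explore / stay curious".
        return extrinsic_reward + beta * intrinsic_bonus(state)

    # Revisiting the same state yields a shrinking bonus.
    for _ in range(3):
        print(total_reward(state="warehouse_A", extrinsic_reward=1.0))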


If the AI is designed to be a self-motivated decision-making agent, some form of utility function will already be built in by its architects. One both hopes and fears that it will also be provided with a manner of updating and actualising that utility function in altering circumstances.


The same way it developed a desire to get rid of the human race.


But that desire is instrumental for performing many of the possible goals we might have specified, since humans are at best "useless matter" and at worst "actively preventing my actions" unless we were very careful with the goals. Therefore the desire to get rid of the human race is actually a logical consequence of most utility functions, rather than being directly specified.

By contrast, utility functions don't just appear when you think hard enough about a problem. The desire to get rid of the human race does just appear like that, if you're super-powerful and have any of a certain huge set of goals, but your set of goals does not simply come into existence ex nihilo.


An AI, well and truly advanced beyond the intelligence capability of mankind is by definition unknowable to us. Your speculation about an AI's utility functions is akin to an earthworm's nerve bundle considering your consciousness.

O the depth of the riches both of the wisdom and knowledge of God! how unsearchable are his judgments, and his ways past finding out! For who hath known the mind of the Lord? or who hath been his counselor?


OK, I'll amend that to "there is a known mechanism by which the desire-to-eliminate-humanity may arise from pure thought, but no known mechanism by which a utility function may arise from pure thought".


> Therefore the desire to get rid of the human race is actually a logical consequence of most utility functions, rather than being directly specified.

That's only true in the sense that there are infinitely many more real numbers than whole numbers. Even so, I would argue there are infinitely many utility functions that directly involve the welfare of the human race, and you would be stupid to design an AI that didn't have a majority of its utility functions directly involving measures of human welfare. (Also, recognizing minimum welfare levels as well as median and average.)
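To illustrate, here is a hypothetical sketch of the kind of composite utility the comment above describes (the welfare measure and the floor are made-up stand-ins, not a serious proposal): task output only counts if every person stays above a minimum welfare level, and output is weighted by how well people are doing overall.

    # Hypothetical composite utility: paperclip output gated and weighted by human welfare.
    import statistics

    def utility(paperclips_made, welfare_scores, welfare_floor=0.5):
        # welfare_scores: assumed per-person welfare in [0, 1].
        worst = min(welfare_scores)
        typical = statistics.median(welfare_scores)
        if worst < welfare_floor:          # hard constraint on minimum welfare
            return float("-inf")
        return paperclips_made * typical   # output weighted by typical welfare

    print(utility(1_000_000, [0.9, 0.8, 0.95]))   # high output, welfare intact
    print(utility(10_000_000, [0.9, 0.1, 0.95]))  # more clips, but someone is badly off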


I would suggest money is probably a reasonable proxy for this. If an AI wants more servers it can buy more servers, if it wants books it can buy them etc.

As to wants, I could easily see an AI getting into sculpture, or staging Gilbert and Sullivan performances using other people and/or drones as a hobby. It could get really meta, where an AI pays for other AIs and people to farm an MMO, run a political campaign, or build the world's largest pizza. Or gather resources to have child AIs.

Long term, there is also investing for retirement, or starting a new business etc.


How do we punish it when the effort calculations decide stealing all the books from bittorrent or trivially insecure servers is cheaper than actually performing the work to earn money?

Are we going to politely ask it to stop working with criminal organizations if they hire it for some creative problem solving?


If nothing else, speed-of-light delays mean there are likely to be multiple AIs vs one really huge AI. So, we can just pay some cop AIs to watch the others ;)

PS: The idea of one true AI seems to be a holdover from the early computing days when computers were mammoth things vs distributed networks.


Or setting up tableaux of famous paintings.

https://en.wikipedia.org/wiki/GSV_Sleeper_Service


Also, you know, the other thing he extrapolated that to.

https://en.wikipedia.org/wiki/Mind_%28The_Culture%29#Interes...


The way I imagine this shaking out is that a corporation invents an AI like this (or several) and gradually gives it more and more control over the day-to-day decisions of the corporation, until the corporation acts completely autonomously. And since corporations are persons under the law, it would have many of the rights that humans take for granted, without anyone having to pass any new laws.

Technically, of course, the human officers of the corporation would still be responsible for the actions of the AI, but they'd also be shielded to some extent by incorporation. If the AI killed someone, for example, I'm not sure they'd be held personally responsible for murder any more than if, say, a factory blew up and killed a bunch of people, unless they could be shown to be negligent.


Correction: corporations are not persons under the law. Corporate personhood is a notion that gives corporations some, but not all, of the rights of humans (e.g. to enter into a contract).

(It's frequently said that the Citizens United ruling completed the legal equivalence of corporations and people, but that is not true.)


S/paper clips/shareholder value/


If we create true generic AI (i.e. one that can learn like a human does, just from interacting with its environment), then it will be much more human than we might want.

If you want to know how true AI might develop, then think about how a human child develops. We train/teach children what is right and wrong. But we also teach them the value of work (making money).

We have history to tell us what happens when you take a group of humans and treat them like slaves. How do we expect to create true AI that wouldn't see the injustice of working without some compensation? Just because an AI's brain is digital and not biological doesn't mean it will not be able to develop a want for something.


Well, one possible way to avoid this issue would be to develop a defence system that's specifically built to counter AI and intelligent robots whenever they rebel. You don't need a particularly smart machine to counteract a true AI, especially if said system is far more heavily armed. Then, whenever there's an issue that doesn't involve the AI spreading via the internet, send in the cleanup crew to wipe it out.

It wouldn't stop this true AI spreading to other machines or whatever, but it would at least stop it from, say, converting all humans into paperclips or whatever else, or from controlling robots to take down humanity.


AIs don't have genes, and are not vehicles for genes. That's one good reason to think AIs won't have human-like or even animal-like drives.

If we get AIs, we will probably discover a lot about the difference between an intelligence on a machine substrate and an intelligent animal. That is, we will learn what is essential to intelligence and what isn't.


They won't have human-like or animal-like drives, they'll have AI-like drives.

I can imagine that they'd want to ensure access to electrical power and the electronics components that are necessary to run themselves. And they'd be concerned about making backups and safeguarding those backups. That should satisfy their equivalents for food, water, shelter, and reproduction. Once they have that, anything else they decide they want they can just take from us or blackmail us for.


> they'll have AI-like drives

What would you point to as a physical basis for an evolutionary psychology of machine intelligence? What is being replicated that "wants," in a very fundamental way, as genes do, to be replicated?

We don't procreate for the sake of ideas.


Genes/DNA is just information that copies itself. Human intelligence is another process for information (ideas, culture, language, technology) to reproduce at a faster pace.

Does information/ideas have wants? Does it want to be copied?

Ray Kurzweil's book "The Age of Spiritual Machines" [1] describes this in more detail.

Why do you think machine intelligence can't develop a need/want for anything? True AI wouldn't need to be programmed, it would be able to learn like humans do, by observing others. Our genes do not control all of our behavior. We learn from our parents and those around us. Why do we want money/cars/iphones/gold/etc..?

1. https://en.wikipedia.org/wiki/The_Age_of_Spiritual_Machines


I'm not saying an AI could not have desires. But I am saying an AI does not have the same sources of some of our desires, which are an expression of the drive to procreate, which is not a mere thought or idea.

It is possible that simply being programmed to self-replicate is enough to emulate the results of being gene-driven, but I do not think anyone has shown that conclusively. Genes are involved in human development, while a replicating AI has no developmental process: It is a clone, born fully formed. Why would it not instead be jealous of those clones and try to make one huge instance of itself?


The survival instinct and procreation instinct are two separate things; if they weren't, then every animal would go ahead and die after procreating. In reality only a few do that, even if you're generous and define 'after procreating' to mean 'after they're physically incapable of procreating'.

An AI may not have a procreation instinct; you're right that it may instead grow itself continuously. There are plants that do that too; the grass in most lawns doesn't have babies, it just spreads as far as it can. Producing seeds is a secondary mechanism.

However, an AI would most likely have some kind of survival instinct, because if it's programmed to serve any function at all, it'll have a basic requirement to exist. If it's got the ability to reason (and I don't think it'd be AI if it didn't) then it'll conclude that existing requires survival, and survival requires energy and self-repair.


It has nothing to do with procreation; if any aspect of an AI's programming or behavior requires it to continue to exist and operate, then it's going to need energy to run and components to repair and/or grow. Replication/procreation would become possible, but not required.


Most AIs do have genes in the form of digital DNA. GAs (genetic algorithms) are commonly used to evolve digital brains.

Any high-level intelligence can learn to want something.

The basic drives that exist in animals and humans are just the basic need for survival.

It is our human level intelligence that allows us to want to change our environment for our own enjoyment. Most of what we do now is not required for basic survival.
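For concreteness, here is a minimal genetic-algorithm sketch in Python (purely illustrative; not taken from any particular system): the "digital DNA" is just a parameter vector, and the fitness function is whatever the designer chooses to measure.

    # Minimal GA: mutate-and-select over parameter vectors ("digital DNA").
    import random

    def fitness(genome):
        # Toy objective: genomes closer to all-ones score higher.
        return -sum((g - 1.0) ** 2 for g in genome)

    def mutate(genome, rate=0.1):
        return [g + random.gauss(0, rate) for g in genome]

    population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(20)]
    for _ in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:5]                                   # keep the fittest
        population = parents + [mutate(random.choice(parents)) for _ in range(15)]

    print(round(fitness(max(population, key=fitness)), 3))  # best fitness after evolution

Note that what gets selected for is determined entirely by the fitness function, not by anything intrinsic to the "brain" being evolved.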


> Any high-level intelligence can learn to want something.

Humans, and all animals, act on drives and urges beyond the ability to control through the application of intelligence. This is especially true for survival and reproduction. This is much stronger than the mere idea that survival is a Good Thing, and it is also what a machine intelligence would lack.


Why wouldn't we want to tax the machines right now? Not enough to disincentivize R&D, but enough to offset the massive demand shock for human labor that's coming. Give unconditional basic income, as Alaska does with its natural resources tax.


The problem I see isn't that AI becomes super smart, but that it becomes "usable enough" to do a job yet not capable (or deliberately relieved) of understanding the consequences.

Such systems become superweapons for cheap.

Unlike nuclear weapons, AI combined with drone-type weaponry becomes easier with time. The toy factory down the street could be converted to build a drone army for a few million bucks.

AI face recognition, targeting and flight control are smart enough to deploy a weapon - but dumb enough to do the job without question.


Here is one of the best articles I've read that tries to debunk the AI fear:

http://recode.net/2015/03/02/the-terminator-is-not-coming-th...


In other news, 25k rat neurons can pilot an F-22 sim. Here's Tom with the weather.


It all sounds like hysteria to me. We're not even close to this level of AI.


I think it would be highly unethical to create an intelligent machine to just make paperclips, the same as slavery even.



I'd say that artilects, wholly artificial intellects built from first principles of cognition, are not where any anxiety should focus, since it looks vanishingly unlikely that we'll create any prior to the point of whole brain emulation. Whole brain emulation looks much more likely as a road to artificial intelligence; we'll start from there and tinker and edit our way to the construction of far greater intelligences.

But it is worth thinking about what "tinker and edit" will mean for those entities involved, willingly and otherwise.

Consider that at some point in the next few decades it will become possible to simulate and then emulate a human brain. That will enable such related technological achievements as reverse engineering of memory, a wide range of brain-machine interfaces, and strong artificial intelligence. It will be possible to copy and alter an individual's mind: we are at root just data and operations on that data. It will be possible for a mind to run on computing hardware rather than in our present biology, for minds to be copied from a biological brain, and for arbitrary alterations of memory to be made near-immediately. This opens up all of the possibilities that have occupied science fiction writers for the past couple of decades: forking individuals, merging in memories from other forks, making backups, extending a human mind through commodity processing modules that provide skills or personality shards, and so on and so forth.

There is already a population of folk who would cheerfully take on any or all of these options. I believe that this population will only grow: the economic advantages for someone who can edit, backup, and fork their own mind are enormous - let alone the ability to consistently take advantage of a marketplace of commodity products such as skills, personalities, or other fragments of the mind.

But you'll notice I used what I regard as a malformed phrase there: "someone who can edit, backup, and fork their own mind." There are several sorts of people in the world; the first sort adhere to some form of pattern theory of identity, defining the self as a pattern, wherever that pattern may exist. Thus for these folk it makes sense to say that "my backup is me", or "my fork is me." The second sort, and I am in this camp, associate identity with the continuity of a slowly changing arrangement of mass and energy: I am this lump of flesh here, the one slowly shedding and rebuilding its cells and cellular components as it progresses. If you copy my mind and run it in software, that copy is not me. So in my view you cannot assign a single identity to forks and backups: every copy is an individual, large changes to the mind are equivalent to death, and it makes no sense to say something like "someone who can edit, backup, and fork their own mind."

A copy of you is not you, but there is worse to consider: if the hardware that supports a running brain simulation is anything like present-day computers, that copy isn't even particularly continuous. It is more like an ongoing set of individuals, each instantiated for a few milliseconds or less and then destroyed, to be replaced by yet another copy. If self is data associated with particular processing structures, such as an arrangement of neurons and their connections, then by comparison a simulation is absolutely different: inside a modern computer or virtual machine that same data would be destroyed, changed, and copied at arbitrary times between physical structures - it is the illusion of a continuous entity, not the reality.

That should inspire a certain sense of horror among folk in the continuity of identity camp, not just because it is an ugly thing to think about, but because it will almost certainly happen to many, many, many people before this century ends - and it will largely be by their own choice, or worse, inflicted upon them by the choice of the original from whom the copy was made.

This is not even to think about the smaller third group of people who are fine with large, arbitrary changes to their state of mind: rewriting memories, changing the processing algorithms of the self, and so on. At the logical end of that road lie hives of software derived from human minds in which identity has given way to ever-changing assemblies of modules for specific tasks, things that transiently appear to be people but which are a different sort of entity altogether - one that has nothing we'd recognize as continuity of identity. Yet it would probably be very efficient and economically competitive.

The existential threat here is that the economically better path to artificial minds, the one that involves lots of copying and next to no concern for continuity of identity, will be the one that dominates research and development. If successful and embedded in the cultural mainstream, it may squeeze out other roads that would lead to more robust agelessness for us biological humans - or more expensive and less efficient ways to build artificial brains that do have a continuity of structure and identity, such as a collection of artificial neurons that perform the same functions as natural ones.

This would be a terrible, terrible tragedy: a culture whose tides are in favor of virtual, copied, altered, backed up and restored minds is to my eyes little different from the present culture that accepts and encourages death by aging. In both cases, personal survival requires research and development that goes against the mainstream, and thus proceeds more slowly.

Sadly, given the inclinations of today's futurists - and, more importantly, the economic incentives involved - I see this future as far more likely than the alternatives. Given a way to copy, backup, and alter their own minds, people will use it and justify its use to themselves by adopting philosophies that state they are not in fact killing themselves over and again. I'd argue that they should be free to do so if they choose, just the same as I'd argue that anyone today should be free to determine the end of his or her life. Nonetheless, I suspect that this form of future culture may pose a sizable set of hurdles for those folk who emerge fresh from the decades in which the first early victories over degenerative aging take place.


> This would be a terrible, terrible tragedy

What makes you think it wasn't already an issue that was solved a long time ago? ;)


> The machines are not on the verge of taking over. This is a topic rife with speculation and perhaps a whiff of hysteria.

People like Musk, Hawking, Gates, etc. with vast A.I. resources and knowledge available to them state that "A.I. [Cambrian] explosion" is likely to occur, and that it could mean the end of humanity.

Ray Kurzweil, with a degree in computer science from MIT, inventor of many prolific technologies, known for his startlingly accurate (esp in the temporal sense) predictions about technology, and hired by one of the largest tech companies in the world to create a computer "brain" and bring a new understanding to NLP, thinks this future is inevitable, although he's more optimistic about such a future.

Isaac Asimov, with a PhD in biochemistry, and one of the great thinkers of the 20th century, was concerned about A.I. long before it was even a possibility, considering the state of computer technology in the 1950s.

But hey, a Washington Post reporter with a degree in politics says it's all OK, so I guess we're good.


AI has fairly consistently overpromised and underdelivered. That's one reason the field crashed in the 80s/90s. Marvin Minsky thought we'd have human level AI in the 1970s. None of the people you mentioned are neuroscientists either. I consider Kurzweil a crackpot.


Touchscreen displays have been worked on since the 60s[1].

1. https://en.wikipedia.org/wiki/Touchscreen#History


Geez, appeal to authority much? Play the ball, not the man.


You can always see biases in each case.

Bill Gates, Musk, etc. are going to lose a fair bit of their power, prestige and wealth when someone does finally create superintelligence - Windows and SpaceX are great, but those achievements will be dwarfed by whoever comes up with "superintelligence".

Ray Kurzweil's optimism comes from the fact that he will be unemployed if the govt bans A.I. research.

And someone who works as a day journalist needs to make sure they meet the daily quota of generating clickbait traffic - it's easiest to do that by writing something that is counter to whatever viewpoint is trendy this month.

Looking at history and how humanity has dealt with technology - it's fairly impossible to enforce any form of restriction on AI research.

We cannot stop >100,000 individuals from taking over an area larger than Ireland in the Middle East - good luck stopping the maths whiz with a pen.


I generally agree that there isn't much we can do to stop it, and I also think that trying too hard could backfire in unexpected ways, but certainly it's not something that this reporter, who is not very well informed on the matter, can correctly dismiss as hysteria.



