Hacker News
What the Deep Blue match tells us about AI (backchannel.com)
33 points by steven on May 23, 2017 | 9 comments


It does strike me that neither Kasparov nor Lee Sedol insisted on seeing any play history of their AI opponents.

Not that it would have changed anything, but it's kind of silly to go into a situation where one side has the other's entire history while the other side has nothing. Not exactly fair.


It sounded to me like Kasparov did ask to see match history:

Going into the match, Kasparov was frustrated that IBM had not shared printouts of Deep Blue’s practice games. He felt at a disadvantage because in a contest with any human, he would have a long history of match performance and would be able to tailor a strategy against that person’s tendencies and weaknesses.


He lost during the period when there was a split between FIDE and the PCA. I've always wondered if there was some leverage there, in that IBM could have just gone to the PCA champ for the match instead. As reigning champ, he should have been able to dictate terms to IBM, not the other way around. The PCA would have jumped at the idea of IBM spending a ton of money while claiming theirs was the true champion.

My personal view is that computers should have had to play candidate matches as well to get to a championship match. Well, should have; it's a moot point now that a desktop can beat GMs.

Last I looked, computers didn't evaluate their own opening book. They are basically programmed not to play certain things. Computers should only be allowed opening books built by computers. I'm really curious whether novelties would develop.


"My personal view is that computers should have to play candidate matches as well to get to a championship match"

The important point is that this was not a championship match; it was just a regular exhibition match. The world champions play a lot of players in such tournaments and matches outside of the championship circuit. I doubt a computer will ever be allowed in the championship circuit.


Yes, I saw that he asked but didn't insist, as in "I'm not participating if the playing field isn't level".

It appears both were somewhat overly confident, so neither really pushed for this.


That's part of the point. Being able to review a lifetime's worth of history in a short period of time is one of the things at which AI is better than humans.


My point was that neither AI group shared any play history at all, even when Kasparov asked specifically for it. If the AI is so good, what do they have to hide?

Lee Sedol won a game too, so perhaps if he had had a similar amount of play history for his opponent as it had for him, he could have discovered and exploited its weaknesses.


It is a fascinating reminder of the power of AI - not working independently, but in conjunction with people and their motivations.

Our most common 'AI concern' is that AI on its own 'will take over humanity' in some form, i.e. the risk of it 'getting loose' and not being programmed to consider the well-being of humans.

The risk that we never seem to discuss is the human component, and the flaws of humanity. AI can also become a trained attack dog that will be used to further and amplify all of the flaws of humanity.

We complain about dark patterns in software; what happens when you have Deep Blue-style AI that someone can turn on and point at a target, working full time to manipulate them? All of the abusive relationships, gaslighting, harassment - what happens when it's all being done on someone's behalf by an AI, 100x better than any person could do it?

The article discusses how IBM essentially used psychological warfare to get Kasparov to doubt himself. We won't be able to out-think an AI - what does it look like when one is doing this to us non-stop?

There are already growing concerns that human minds didn't evolve to handle a lot of the impacts of things like social media (Twitter, Instagram, Facebook), and about the impact it's had on anxiety, depression, and social well-being. What's next when, instead of scripters DDoSing a target, they launch a few bots to subtly yet effectively drive them insane?

What's your phobia? Spiders? How about a spider a day on every electronic screen you view? How about micro-targeted ads containing spiders just for you?

You're in high school 15+ years from now - you don't really fit in, and you're having a rough day. How about a drone the size of a fly that a bully's AI has programmed to follow you around and anonymously broadcast every crappy thing that happens to you? It taps on your window at night while you're sleeping just to wake you up. Multiple times a day it whispers to you to commit suicide.

We in technology are always overly optimistic, thinking that simply creating the new technology will make the world better. Unfortunately, it is often people with less noble motivations who quickly realize the potential impact of new tech and use it as a power multiplier.

Society is most at risk when technology outstrips our ability to understand its implications and abuses. There has been a lot of discussion of the consequences of tech w.r.t. the recent US election.

Re-read the article. In something as simple as a board game, look how much psychology played into Kasparov's defeat. Even in his own words, he never got over game two. What's more, so much of human competition and interaction boils down to a theory of mind of your opponent. Kasparov's problem wasn't really the move itself in game two; it's that he had no idea what he was up against anymore. It brought him to paranoia, demanding source-code printouts and growing suspicious of Russian bodyguards.

It can play possum, it can drag discomfort on, it suffers no fatigue, and you don't need general AI for any of this. It would be unwise to try to avoid tech and AI progress - it has the potential for great good. But I think we're still misreading the risks out there. I seriously think the larger threat is putting the equivalent power of a nuke in everyone's hands. It's inevitable, but we're not ready for the fallout.


Yup. And we won't be. The porn industry already uses such systems to keep porn addicts on their sites all day, just clicking away like trained chimps.



