
Key quote:

“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”

-- Pedro Domingos, in The Master Algorithm (2015)




I think this is rooted in the fact that humans don't really do intelligence all that well themselves. Most of us raise kids in a manner that assumes children aren't independent thinking beings, and we have a lot of social "rules" that fail to take into account actual independent thought. Humans then bring these big blind spots to AI work.

Until we overcome such issues in humans, they probably are not solvable in AI.


Children aren't independent from their parents, pretty much by definition. They do have individual thoughts, but if those thoughts are concerned with their dependence, are they really independent?


I have raised two children. They are now in their twenties. From the get-go, I dealt with them as beings who did things for a reason, and that reason was generally assumed to be about dealing with their needs, not "doing" something to me. Many parents expect kids to "behave," and that definition of "behaving" is rooted very much in what adults see and think about the child, not what the child is experiencing. This is inherently problematic.

Children may be dependent in many ways on their parents, but once they are outside the womb, if the parent dies, the child does not automatically die. They are a separate being. They have separate experiences. Their reasons for doing things come from their first person experiences.

Then parents very often try to impose third person motivations -- people-pleasing expectations -- that frequently interfere with the child pursuing its own needs.

We need to get better at dealing with kids as separate entities if we want to have any hope of dealing with machines functioning independently.

Your remark just reinforces my opinion that people do this badly. You think dependence is a given and I am not even sure how to go forward with this conversation because of this stated assumption.

Thank you for replying.


>You think dependence is a given and I am not even sure how to go forward with this conversation

Well, even you are evidently dependent. At least, if telling me what you think is to some degree intended to solicit a response from me, then your conversation depends on my answer. Humans are social, which is pretty much by definition not independent. Sure, this is splitting hairs over the meaning of dependence, but I didn't get what that has to do with AI, anyway. Surely the AI depends on its design, while the design constraints, the laws of nature if you will, are a given.


>but I didn't get what that's to do with AI, anyway.

Humans get a lot of feedback other than explicit algorithms as to how to act or behave or what to do. A lot of that is social feedback and a lot of the expectations are about what other people think, in essence.

If you want an individual AI system to be functional and "intelligent," we need to be able to write algorithms that work without that extra stuff. In order to write those algorithms effectively, we need to be able to think about this problem space differently than most people do.

Yes, conversation is inherently dependent on another party being involved. It isn't conversation if you just talk to yourself. Conversation has the capacity to add value.


If you're going to go down that path then your entire argument is moot because the definition of "intelligence" is so unclear.


I may have been unclear about how I am defining this, but it is not unclear to me and I don't see how "going down that path" automatically makes it unclear.

Humans are pretty bad about imposing third party points of view on behavior and reasoning. Until we get better at understanding reasoning from a first person point of view for humans, we are going to have trouble figuring out how to write effective algorithms for AI.

Is that any clearer? If not, what is unclear?

Thanks.


I guess you are contradicting yourself when you say

>humans don't really do intelligence all that well themselves

>[(the definition of) intelligence] is not unclear to me

because knowing the definition of something and knowing the thing itself are largely the same. So you say you know intelligence, yet humans in general don't do it well.


In your previous reply you used the word "intelligence" in a manner which had assumptions in it (this is, after all, how humans communicate). "AI" uses the same word with an overlapping but different set of assumptions.


Not that your reply answers my actual question, but I would be interested in knowing what you believe my assumptions were and how these differ from those used in AI.

Thanks.


I try to avoid making assumptions, including about what assumptions lay behind your two uses of the word "intelligence." If you used them coherently, that's wonderful, but their definitions are not distinct outside your head. Their general (i.e. dictionary/scientific) definitions are not absolute.

I'm an intelligent person. I know lots of intelligent people who are stupid and I know lots of stupid people who are intelligent. And I'm one of them, and I don't even know which one.


Well, that's a rather weaselly non-answer, but it sounds to me like you think I am calling people "stupid," and that isn't what I am doing. I do know something about the background of intelligence testing and whatnot for humans. That definition of intelligence is inherently problematic.

Again: My point is that people frame things far too often from a third party point of view. This inherently causes problems in decision-making. Sometimes, humans can kind of muddle through anyway, in spite of that default standard. But AI is much less likely to muddle through anyway when coded that way.

If you (or anyone) would like to engage that point, awesome! Otherwise, I think I am done here.


It's an entertaining quote, I laughed, but I think everyone who works in AI already knows this. It's not a blind spot.


Right, try saying that the next time we have a flash crash that really kills the markets.

There should be an addition to the statement, though: "Unfortunately, humans en masse are even stupider."


Flash crashes don't seem to kill markets. Too many people realize it's an opportunity, and quickly buy.

Also worth remembering: the stock market is not the economy.


Whether the people that pay us know this, though? Less clear.


That it's not a blind spot among "everyone who works in AI" does not make it any less a blind spot.


Calling people stupid isn't cool; it's unproductive overall, and there is no place for it no matter how educated someone is.

Society is ill right now and maybe people are hoping artificial intelligence will take them to a better place?


He was talking about computers, not people.


I downvoted myself :)

Begin the human replacement procedure!



