
No more so than trying to control a supersonic aircraft when we can't even control pigeons.



I know nothing about physics. If I came across some magic algorithm that occasionally poops out a plane that works 90 percent of the time, would you book a flight in it?

Sure, we can improve our understanding of how NNs work, but that isn't enough. How are humans supposed to fully understand and control something that is, by definition, smarter than they are? I think it's inevitable that at some point that smart thing will behave in ways humans don't expect.


> I know nothing about physics. If I came across some magic algorithm that occasionally poops out a plane that works 90 percent of the time, would you book a flight in it?

With this metaphor you seem to be saying we should, if possible, learn how to control AI? Preferably before anyone endangers their lives due to it? :)

> I think it's inevitable that at some point that smart thing will behave in ways humans don't expect.

Naturally.

The goal, at least for those most worried about this, is to make that surprise be not a… oh, I've just remembered a good quote:

""" the kind of problem "most civilizations would encounter just once, and which they tended to encounter rather in the same way a sentence encountered a full stop." """ - https://en.wikipedia.org/wiki/Excession#Outside_Context_Prob...

Not that.


Excession is literally the next book on my reading list, so I won't click on that yet :)

> With this metaphor you seem to be saying we should, if possible, learn how to control AI? Preferably before anyone endangers their lives due to it?

Yes, but that's a big if, and it's something you could never be sure of. You could spend decades thinking alignment is a solved problem only to be outsmarted by something smarter than you in the end. If we end up conjuring a greater intelligence, there will be a constant risk of a catastrophic event, just like the risk of nuclear armageddon that exists today.


Enjoy! No spoilers from me :)

I agree it's a big "if". For me, simply reducing the risk below that of the status quo is sufficient to count as a win.

I don't know the current chance of us wiping ourselves out in any given year, but I wouldn't be surprised if it's 1% with current technology; on the basis of that entirely arbitrary round number, an AI taking over that's got a 63% chance of killing us all in any given century is no worse than the status quo.
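(For the curious: treating that 1% as an independent chance each year, the odds of surviving a century are 0.99^100 ≈ 37%, i.e. roughly a 63% chance of not making it. A quick sketch, with numbers that are purely illustrative:)

    # Purely illustrative numbers from the paragraph above:
    # assume a 1% independent chance of self-annihilation per year.
    p_per_year = 0.01
    years = 100

    # Probability of at least one catastrophe within the century:
    p_century = 1 - (1 - p_per_year) ** years
    print(f"{p_century:.1%}")  # -> 63.4%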


Correct: pigeons are much more complicated and unpredictable than supersonic aircraft, and the way they fly is far more complex.


I can shoot down a pigeon that’s overhead pretty easily, but not so with an overhead supersonic jet.


If that's your standard of "control", then we can definitely "control" human intelligence.



