
I find it unhelpful that several different ideas are covered under the blanket term of AI safety.

- There is AI safety in terms of producing "safe", politically correct outputs, no porn, etc.

- There is AI safety in terms of mass society/election manipulation.

- There is AI safety in terms of strong AI that kills us all, à la Eliezer Yudkowsky.

I feel like these three have very little in common, and lumping them under one term makes every debate less efficient.




To add to this, the people writing these articles are not stupid; they know the words they're using have more than one meaning. Either they choose not to clarify on purpose, or they're incompetent, and I don't want to believe the latter.

"AI safety" imo should be the blanket term for "any harm done to society by the technology", and it's the responsibility of anyone using the term to clarify the usage.

If someone is trying to tell you to support/decry something because of "AI safety", they're trying to use the vagueness to take advantage of you.

If someone is trying to tell you that "AI safety people are dumb", they're trying to use the most extreme examples to change your opinion of the moderate cases, and are also trying to take advantage of you.


> they're trying to use the vagueness to take advantage of you

Not always. I see "AI safety" used without further specification all the time, even here on Hacker News. I doubt those users are trying to take advantage of anyone.

In that case it's more that the author considers one of those meanings so much more important/relevant that they think it's obvious which one they mean. But often it isn't – everyone fills in the meaning they care about most.


In a comment it takes too much effort to specify every word you use, so comments rely much more on context. If a discussion is about models getting neutered, then "AI safety" probably means that, and so on.

But when you write a full article, the context is the article itself, so you have to specify what you mean there; defining the term once in an article is relatively little work compared to doing it in every comment.


The third effort is sometimes referred to as AI not-kill-everyoneism, a tacky and unwieldy term, but one unlikely to be co-opted or to lead to the kind of unproductive discussion around the OP article.

It is pretty sad to see people lump together the efforts to understand and control the technology better with companies doing their usual profit maximization.


Another problem is that some of these concerns might dress themselves up as the others, so being skeptical by default might be prudent.


This is such an important comment. I feel like so much of our discourse as a society suffers terribly from overloaded and oversimplified terminology. It's impossible to have a useful discussion when we aren't even in sync about what we're discussing.


Also, AI safety in terms of protecting corporate profits (regulatory capture).


There is also the aspect of wanting only one cultural or ethnic group to have strong AI. It's unsafe for those other guys to have AI, since they are Bad™.


Well yeah, but that one is dishonest. It probably disguises itself as one of the ones I mentioned.



