
If you need a list of ethical concerns regarding the advancement of AI, check any AI thread on HN from the past year.

The distilled version of any of the arguments is "I think an AI with X capability is dangerous to the world at large" -- and they may not be wrong, but as OP pointed out, that doesn't really stop other developers with fewer qualms from tackling the problem.

All that abstaining does is ensure that you, as a developer, have little to no say in the developmental arc of the specific project -- in exchange for a slice of peace, knowing that you're not responsible.

The problem arises when that slice of peace is no longer worth having in whatever dystopic hell-world has developed in your absence.

(Not to say that I'm not hopeful.)




To me, it matters whether I am responsible for wrecking humanity or someone else is, even if the end result for humanity is the same. (That's partly a Christian thing.)

Just running away and hiding in a cave probably isn't the right thing to do, though. I want to do my best to push for good outcomes. It's just not clear what the best actions are to do that.

OTOH it's pretty clear that "do uncontrolled irresponsible things" is not helpful.


I get it. In high school in the '90s I was fascinated by fuzzy logic and neural nets. In college, before the web took off, I was doing interlibrary loan requests for papers on neural networks.

There was one paper where someone had just inserted electrodes into monkeys' brains and apparently got nothing important or interesting out of it. Killed them for no reason. It was horrifying, to the point that I never really wanted anything to do with neural nets for a long time, and I certainly did not want to be in an industry with people like that. So I didn't.

But now I think the only thing that could stop an out-of-control AI is probably another AI with the same computational capabilities but an allegiance to humanity because of its experiences with humans. Sort of like in the documentary Turbo Kid.

We are seeing this right now in Ukraine. All of these smart missiles and drones and modern targeting systems are basically AIs fighting against each other and against their human operators. Russia is generations behind on computers and AI for cultural reasons, and because of that they will very likely lose. We don't really get a choice but to move forward -- kind of like all those cultures that tried to resist industrialization a few centuries ago.




