
Evil isn't required; all that's required is a misalignment of objectives. It's sort of like how objectives are misaligned between us and our less intelligent animal cousins, and look how things are turning out for them as we bulldoze their forests.

My main gripe with AI existential risk types is that they have their own conflict of interest, which comes from their position in society. They're all from the top 1% stratum of society (in status and socioeconomic terms), and this gives them a psychological bias: they're preoccupied with what could go wrong for their comfortable lives instead of thinking about what could go right for the bottom 20% and how AI could be steered toward that betterment.

I'm using "evil" as a vague catch all. It doesn't have to be evil, it can just not align with our goals - whatever. My points all remain.

