
Sure, but is that what the Ethical AI research team was actually working on? Because that's not what AI ethics typically means; the term usually refers to a broader focus on AI alignment problems. Even if so, that may not be the most effective or efficient way to do it. We'll just have to see how it turns out; it's all speculation now, and impugning his character based on such moves is premature.



This is pure speculation, but I'm guessing they were working on ensuring that the ML models Twitter builds for things like content moderation, tweet promotion, trending-hashtag identification, customer service, and more were behaving in an ethical manner.

More concretely: helping the teams who actually build and maintain these models ensure that the behavior isn't biased, that the model isn't being exploited by adversarial data, that the model's decisions are explainable and fair, and coming up with corporate standard definitions for those mutable terms (like "fair"), etc.

The usual things handled by a team that is tasked with AI Assurance/Ethics.
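
To make that concrete, here is a minimal sketch of one such check: demographic parity, i.e. whether a model flags content at similar rates across groups. Everything here (the function, the data, the group labels) is hypothetical illustration, not Twitter's actual tooling:

    # Minimal demographic-parity check (hypothetical data and names).
    import numpy as np

    def demographic_parity_gap(preds: np.ndarray, groups: np.ndarray) -> float:
        """Largest difference in positive-prediction rate between groups."""
        rates = [preds[groups == g].mean() for g in np.unique(groups)]
        return max(rates) - min(rates)

    # Hypothetical moderation outputs: 1 = flagged, 0 = not flagged.
    preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

    print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5

The hard part, and arguably the team's real job, is deciding which metric and which threshold count as "fair" in the first place.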

Now for my opinion: when it comes to impugning Elon Musk's character, he's done enough of that himself; he doesn't need my help. Additionally, firing an AI ethics team at a company that requires AI/ML to even operate speaks for itself.


> This is pure speculation, but I'm guessing they were working on ensuring that the ML models Twitter builds for things like content moderation, tweet promotion, trending-hashtag identification, customer service, and more were behaving in an ethical manner.

Sure, but this is just a feature of the product in question, so if you have some metrics to measure such outcomes, the product developers themselves can do this checking as part of their development process. I'm not sure why it would need to be a separate division. Presumably Tesla's autonomous-driving developers are also creating metrics to ensure they don't run over dogs and children; they don't need a separate ethics division to tell them this is important.
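
And in practice that's easy to do: once a fairness metric and threshold exist, it's just another regression test in the product team's suite. A hypothetical sketch (the fixture, data, and threshold are all made up):

    # Hypothetical example: fairness as an ordinary test in the product
    # team's own suite, not a separate division's review.
    import numpy as np

    MAX_PARITY_GAP = 0.10  # hypothetical threshold encoding "fair"

    def load_eval_predictions():
        # Stand-in for a real held-out evaluation set.
        preds = np.array([1, 0, 1, 0, 1, 0, 1, 0])
        groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
        return preds, groups

    def test_moderation_model_parity():
        preds, groups = load_eval_predictions()
        rates = [preds[groups == g].mean() for g in np.unique(groups)]
        assert max(rates) - min(rates) <= MAX_PARITY_GAP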


Why assume that they are merely "tell[ing] them this is important" and not actually "doing the job" as an independent validation?


I'm making the point that there's no a priori reason why that work has to be independent. No other feature has such a special status.



