AI (let's be real, we're actually talking about ML here) doesn't need to pose an existential risk to harm people in the real world. And given that tweets are displayed and promoted by ML algorithms, the ethics of those algorithms matter to Twitter's userbase.
Sure, but is that what the Ethical AI research team was actually working on? Because that's not what AI ethics typically means; the term usually refers to a broader focus on AI alignment problems. Even if so, that may not be the most effective or efficient way to do it. We'll just have to see how it turns out; it's all speculation now, and impugning his character based on such moves is premature.
This is pure speculation, but I'm guessing they were working on ensuring that the ML models built by Twitter to do things like content moderation, tweet promotion, identifying trending hashtags, customer service help, and more were behaving in an ethical manner.
More concretely: helping the teams who actually build and maintain these models ensure that the model's behavior isn't biased, that the model isn't being exploited by adversarial inputs, that its decisions are explainable and fair, and coming up with corporate-standard definitions for those various malleable terms (like "fair"), etc.
The usual things handled by a team that is tasked with AI Assurance/Ethics.
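To make that concrete, here's a minimal sketch of one metric such a team might standardize across products, a demographic parity gap. The function name and toy data are my own illustrative assumptions, not anything from Twitter's actual tooling:

```python
# Hypothetical sketch of a fairness metric an AI ethics team might
# standardize. Names and data are illustrative, not Twitter's stack.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between groups.

    predictions: 0/1 model outputs (e.g., "promote this tweet?")
    groups: the group label associated with each prediction
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# A promotion model that boosts group "a" three times as often as "b":
preds  = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
groups = ["a"] * 5 + ["b"] * 5
print(demographic_parity_gap(preds, groups))  # 0.6 - 0.2 = 0.4
```

The hard part of such a team's job isn't writing the ten-line function; it's deciding which metric counts as "fair" and getting every product team to agree on it.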
Now for my opinion: when it comes to impugning Elon Musk's character, he's done enough of that himself. He doesn't need my help. Additionally, firing an AI ethics team at a company that requires AI (ML) to even operate speaks for itself.
> This is pure speculation, but I'm guessing they were working on ensuring that the ML models built by Twitter to do things like content moderation, tweet promotion, identifying trending hashtags, customer service help, and more were behaving in an ethical manner.
Sure, but this is just a feature of the product in question, so if you have some metrics to measure such outcomes, then the product developers themselves can do this checking as part of their development process. I'm not sure why this would need to be a separate division. Presumably Tesla's autonomous driving developers are also creating metrics to ensure they don't run over dogs and children; they don't need a separate ethics division to tell them this is important.
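The "product developers check it themselves" workflow is easy to picture: a fairness assertion in the team's own test suite, gating releases like any other regression test. A toy sketch, with a stand-in classifier and a threshold I invented for illustration:

```python
# Hypothetical sketch: a product team gating a release on a fairness
# metric in their own test suite. The toy classifier, eval set, and
# 0.25 threshold are all invented for illustration.
def toy_moderation_model(text):
    # Stand-in for a real classifier: flag anything mentioning "spam".
    return 1 if "spam" in text else 0

def test_moderation_flag_rate_parity():
    eval_set = [
        ("buy spam now", "a"), ("hello world", "a"),
        ("spam offer", "b"), ("nice weather", "b"),
    ]
    # Per-group rate of tweets flagged by the model.
    rates = {}
    for group in {g for _, g in eval_set}:
        texts = [t for t, g in eval_set if g == group]
        rates[group] = sum(map(toy_moderation_model, texts)) / len(texts)
    gap = max(rates.values()) - min(rates.values())
    assert gap < 0.25, f"flag-rate gap {gap:.2f} exceeds release threshold"
```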