I was curious about the second Tweet on that page, which states:
>"Yep, the team is gone. The team that was researching and pushing for algorithmic transparency and algorithmic choice. The team that was studying algorithmic amplification. The team that was inventing and building ethical AI tooling and methodologies. All that is gone."
Does anyone know if this team published any of their research on algorithmic transparency and algorithmic amplification? Could anyone say more about their work and what literature or tooling, if any, might be available to the public?
I interviewed for this team and the interview experience was so, so bad. The manager was extremely rude, and I was treated poorly in general.
I don't like Tesla, but Elon having trouble with this team in particular is no surprise.
The ethical AI folks who wrote the Fairlearn library at Microsoft have done more for AI ethics and fairness than every single one of Twitter's AI ethics and responsibility folks combined. (Note: fairseq is Meta's sequence-modeling toolkit; the Microsoft fairness library is Fairlearn.)