Unpopular opinion:
I don't believe that is the case. AI has always been in the realm of math and computer science, and the researchers who were fired were researching only the ethics of it. A lot of researchers are apolitical and only there for the challenge or the pay.
If anything, I think what happened deters investment in AI ethics research. For Google, it has only produced bad PR without adding any value to their influence or market value in the AI community.
Edit: Also, Google's strategy of giving away a production-level AI library like TensorFlow has basically locked up their status in the AI community for years to come.
The article mentions the impact on the broader research community too. Samy Bengio is extremely well known, not for ethics. He's one of the original authors of Torch.
They won't have trouble recruiting low and mid level AI engineers, simply because the pool is large. But the pool of highly experienced engineers and leaders in the space is much smaller, and even a moderate reduction in interest there could have huge long-term implications for the company--especially since those high-level contributors have greater financial freedom to follow their principles.
You live in la la land if you think they will have a problem hiring AI leaders. If you pay them they will come. Even the couple of researchers who got fired realized they made a mistake. Bengio left but he was already there for 15 years and is a multi-millionaire. He probably would've left anyway sooner or later.
I think you misunderstand how much leading AI researchers will be learning from complexity science and networks, and how much they'll understand [informational] diversity itself as critical to all networks (including their own community of practice).
My assumption is that there is a dovetail between understanding the role of randomness and diversity in neural networks and understanding its role in the social computation of society. Those who don't see that relationship and don't put it into practice in their own value systems will not be "at the top", because they are partially blinded.
Just my hot take. Might be wrong. But I think there's a bias that favours these value systems amongst top-tier researchers, unlike in traditional hard comp-eng fields.
I know it's an unpopular opinion, but I have the feeling that most of these "Ethics in AI" movements are either pure marketing or an easy way in for free riders to the hyped field of AI, even though they have little to contribute beyond shallow papers or promoting their political agenda.
"Ethics in AI" have deep implications for improving our AI systems. For example, bias in training data has an outcome of the performance of a system. Ethics in AI movements looks deeper into this process and does improve performance in AI systems.
Also, because AI deeply interacts with human systems today at a large scale (think of the YouTube recommendation algorithm), it will also have a large effect on society.
So I'm not sure what "political" agenda there is here, beyond acknowledging the obvious.
If attributes such as age or gender are strong features for certain dependent variables that improve the predictive performance of a model, then it must be allowed to use these variables. For example, it is well known that colorectal cancer is more prevalent in men over the age of 50, so a statistical model used to allocate free colorectal exams would favor older men. Is that discriminatory? Likewise, for any given variable, one will find certain features that favor men or women, young or old; finding these dependencies is precisely the point behind statistical models.
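A minimal sketch of that point on synthetic data (the coefficients and rates below are made up, not from any real epidemiological model): when the condition genuinely depends on age and sex, a model asked to allocate exams recovers exactly those dependencies, and dropping the features would only make the allocation worse.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
age = rng.uniform(20, 80, n)
is_male = rng.integers(0, 2, n)

# Assumed (hypothetical) ground truth: risk rises with age and is higher for men.
logit = -8.0 + 0.08 * age + 0.7 * is_male
has_condition = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([age, is_male]), has_condition)

# The learned coefficients recover the age/sex dependence; a model forbidden
# from using them would simply allocate the free exams less accurately.
print(dict(zip(["age", "is_male"], model.coef_[0].round(3))))
```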
As you know, most AI is curve fitting or representation learning. So the question is: is the statistical distribution your model is learning a static one, bound by natural laws like physics or biology, or is it a human system with a changing distribution, where the machine itself can cause effects and impacts? Your example is of the former, while an ML model predicting "success at a job" or "creditworthiness" falls in the latter category. The latter category is harder in every way and has ethical concerns, because it necessarily can change (or, more often, keep in place) man-made social systems.
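A toy sketch of that feedback loop, with entirely hypothetical numbers and a "model" that is nothing more than an empirical repayment rate: once the system's own decisions determine which outcomes ever get observed, the distribution it learns from is no longer the static kind you get from physics or biology.

```python
import numpy as np

rng = np.random.default_rng(1)

# Both groups repay at the same true rate; only the observed history differs.
true_repay_rate = {"group_a": 0.80, "group_b": 0.80}
observed = {"group_a": [1, 0, 1, 1, 1, 1, 1, 1],  # decent early sample
            "group_b": [1, 0]}                    # small, unlucky early sample

for year in range(5):
    # The "model" is just the empirical repayment rate per group.
    score = {g: float(np.mean(obs)) for g, obs in observed.items()}
    print(year, {g: round(s, 2) for g, s in score.items()})
    for g, obs in observed.items():
        if score[g] >= 0.75:  # loans only granted above a cutoff...
            outcomes = rng.random(100) < true_repay_rate[g]
            obs.extend(int(o) for o in outcomes)
        # ...so a group rejected on a noisy early estimate generates no new
        # labels, and the wrong estimate is never corrected: the model has
        # reshaped the very distribution it is learning from.
```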
On the other hand, if a model fails on certain populations because not enough training data covering them was included, since they're historically seen as a less important subgroup, then you've simply encoded your societal biases in your model. Understanding that difference and pointing out problem spots like that is a great job for an ethical-AI researcher.
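A minimal sketch of that failure mode on synthetic data (the group sizes and the reversed feature/label relationship are invented for illustration): the aggregate metric looks healthy, and only evaluating each subgroup separately, the kind of check this sort of research pushes for, exposes the problem.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n, flip):
    X = rng.normal(size=(n, 2))
    # The label depends on feature 0, with the relationship reversed for the
    # underrepresented group.
    y = ((X[:, 0] > 0) != flip).astype(int)
    return X, y

# Training set: the second group is only ~2% of the data.
Xa, ya = make_group(9_800, flip=False)
Xb, yb = make_group(200, flip=True)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate with the same skew overall, but also per group.
Xa_t, ya_t = make_group(9_800, flip=False)
Xb_t, yb_t = make_group(200, flip=True)
print("overall accuracy :", model.score(np.vstack([Xa_t, Xb_t]),
                                        np.concatenate([ya_t, yb_t])))  # looks fine
print("majority accuracy:", model.score(Xa_t, ya_t))                    # near 1.0
print("minority accuracy:", model.score(Xb_t, yb_t))                    # near 0.0
```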
Google's AI ethics board in 2019, for example, was killed for explicitly political reasons. Many AI ethics commentators, including two of the board members, indicated that Kay Coles James should not be permitted to participate in AI ethics discussions because of her politics.
I think that typical CS education is really narrow. Developing technology without even considering its ethical consequences does not absolve you of moral responsibility for those consequences. There are hundreds of years of thought on this subject, but unfortunately that thought is generally found in the "arts", so willful ignorance of it seems to be a badge of honour in some CS circles.
In my opinion companies create such departments to stave off government regulation, by pretending to already do all they can about "Ethics in AI" without government intervention.
If those people were hired for that, they failed spectacularly in their jobs.
In a wider sense, the whole "Ethics in whatever" thing seems to just be a power grab.
For example, my country has ethics groups debating coronavirus measures. I live in a country that is theoretically a democracy. If "ethics experts" get to make decisions that control our lives, why bother with democracy?
Aren't most (all?) modern democracies some flavour of representative democracy?
We elect politicians who appoint bureaucrats who hire staff to run departments, who'll then take advice from committees and expert advisers.
It reads to me like you mean to imply direct democracy, which I suspect would be some orders of magnitude more terrifying than the incompetent fools and puppets presently steering the ships.
It's true that in theory it should be possible to elect another government that chooses not to delegate to the "Ethics Experts". I am not claiming we live in a totalitarian regime because the government decided to consult ethics experts.
It just seems that in this case, ethics are exactly what the people should be voting on - the value system of the society they want. So delegating to an ethics committee is at best propaganda and making excuses for not doing what the people want.
Only a few of the items in Gebru's "controversial" paper are even worth discussing - do we really need an Ethics department to calculate the carbon footprint of training a model, or point out that a model based on scraping the internet won't necessarily align with trendy shifts in vocabulary?
I really never understood this whole Gebru situation and it sounds like there is bad behavior on both sides.
The bad behavior by Google is pretty well documented online.
Yet no one is talking about the fact that they hired Gebru to make these changes. All it seems they asked of her was not to publicly slander the company that was paying her to do exactly what she wanted. Then Google claims she quit, which should be provable, and she claims she was fired, which should also be provable. Yet I've seen no proof of either.
> Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google
So she quit then, and this is a non-issue. It rarely goes in anyone's favor when they give an ultimatum to the company that employs them, threatening to quit otherwise. The very act of doing so has basically proven she is incapable of working with the company or doing the job she was hired to do.
I'm no fan of Google so whatever knocks them down is fine with me.
I'm not sure what you are not clear on here. If someone says "I quit/resign", then they quit, period; they were not fired. The two-week notice period is just to prevent unemployment fraud.
So if you say "I will find another job if the company doesn't take action", like... pay me more, agree to union terms, stop my manager from harassing me, or stop what I think is an unethical abuse of the public... then I am de facto quitting?
In that case, how can an employee negotiate with a company at all? The power to refuse to work is literally the only power the employee has in the negotiation.
Isn't it standard for quitting senior employees to be paid to sit at home during their notice period? It seems like a bad idea for someone leaving to retain their systems access for 2 weeks... That's Google's call really, so unless her last pay cheque was short two weeks' money, she quit and "worked" her notice from home.
Gebru: I have concerns that I'd like addressed that are important enough to me that I would quit over them, we can talk about it in more depth when I get back from my previously scheduled vacation.
Google: Don't bother coming back, you're no longer an employee.
Just like a threat to fire someone isn't actually firing someone, a threat to quit isn't actually quitting.
Her demands were quite specific and she did list a date.
I'm happy enough to say it's not clear cut quitting. But what's the difference really? She didn't want to stay, they didn't want her to stay, beyond that it's like a couple asking who dumped who.
I probably should have said "leaving employees" in my comment above...
Fact is, any company will only ever be as good as its executives. People are people, so there's no reason to believe in magic fairy tales. From the outside this looks like manager panic, on both sides.
I don't agree that setting a date makes it a firing. If someone is looking to leave soon, there may be cause not to involve them in confidential company plans. It's a risk to take with little benefit.
The problem here is that “criticizing ethical biases in a company’s core AI technology” is pretty much what an “ethical AI” group is supposed to do, even if it conflicts with the company’s profit-making imperative. Once Google started drawing lines around what technologies it was politically-correct to criticize (or, perhaps, “slander”), that group simply became part of PR and marketing. And that’s fine! But academic journals should treat that as a conflict of interest when reviewing or editing papers from that group, and academic “ethical AI” folks are probably not going to want to work there, because it’s definitely not a disinterested actor at that point. (The “quit vs fired” issue basically comes down to an interpretation of whether Google could “short circuit” an ultimatum and go right to “accepting an implicitly-offered resignation.” That’s … a fine point of labor law, and probably one not easily solved short of litigation.)
Let’s say a company hired safety inspectors, and the inspectors found legitimate safety issues which would be expensive to fix. If the company fired the inspectors would you really be so pithy about for-profit enterprise? Likewise if a software company hired software security specialists who discovered severe bugs in their flagship product. Or if Bell Labs fired a physicist who discovered a flaw in transistor design.
It’s really no different with ethics. Gebru was hired by Google to study issues of ethics in AI. That was her literal job description. She was not hired to put a positive spin on Google’s business. She was hired as an objective researcher.
I seriously doubt you would actually defend the idea that (say) private-sector scientists or mathematicians should be expected to toe the company line even if they have a legitimate scientific objection: this attitude would be a disaster for the company in the long term, even if “ignore all bad news from the nerds” means they might make more profit in the short term.
I think there's still a portion of people who believe (or at least want to believe) Google's old "don't be evil" mantra. You're right in the general sense, but for a time it seemed like Google might buck the trend a little.
You cannot make changes while cutting off your own hands. The fact that Google took the step of trying to change its AI practices by hiring her in the first place was a huge risk for them. Then all they asked was that she not ruin the business, and she went for the nuclear option and ran to the press about it.
As I see it at this point, her inability to navigate the politics of the situation showed she was incapable of doing the job anyway. She should have been highlighting her presence and recruiting people that believed in the cause. Instead she upended the table and left the second there was some friction between her and business interests.
If the bad behavior of Gebru is not available online, what makes you think there was bad behavior on both sides? Why even make that judgement without information?
Screaming from the mountaintops is not the act of a good, solid person who is trying to make meaningful change. Not to mention they asked her to take her name off the paper (bad), but if she was interested in change rather than her own personal interests, it was not a huge sacrifice. She had already been given a position at one of the largest companies in the world for AI to try to make meaningful change. Instead she chose to make a spectacle.
Gebru has not been forthcoming with information about what exactly she did in this situation behind the scenes. She has privacy and HR laws protecting her in ways Google does not. It is up to her to release the information and she has not.
The word I've heard on the street is that her paper passed the normal internal peer-review process at Google (which is open with respect to the reviewers' identities) and was then hit with a second review process, created just for her and her paper, with anonymous reviewers and an ultimatum that the paper simply wouldn't be allowed to be published based on its content, even with edits.
I'd be pissed in those circumstances too, seeing it as an affront to the academic process in general, and be looking at the obvious reasons why they chose me and my paper to apply these previously unheard of constraints to.
The only valid point of contention here is the demand that her name be removed from the paper.
My game-theory-ish take on the demands.
Gebru's two possibilities:
1. The demands were reasonable, so she herself should have no issue sharing them. Doing so would vindicate her position against Google, prove her and her theories correct, and score a huge victory for her AI research.
2. Even she knows her demands were ridiculous enough that she is not comfortable sharing them.
Option one has not happened, so that leaves option two. Google really has no good outcome in sharing them; it could be considered all manner of HR violations, so they likely never will without her permission.
> The only valid point of contention here is the demanding removal of her name on the paper.
My understanding is that the review required a full retraction of the paper so that it wouldn't be published, not just removing her name.
And your game theory view leaves out that she's in the middle of a lawsuit now, and the first thing her lawyer would have said is to stop talking publicly.
This has nothing to do with either AI or ethics. Gebru was fired for throwing ultimatums and demanding to "doxx" colleagues who criticized her paper [1], and for blasting group emails calling for sabotage [2].
Mitchell was fired for downloading thousands of internal files using a script and sending them to external accounts. From Google's response: "we confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees" [3]
An underdiscussed second-order consequence of bad press is that it does deter a subset of potential hires from joining the company. However, it's near-impossible to measure this effect, and in the case of FAANGs, there's more than enough supply of candidates (and enough compensation offered) to overcome a recruiting decline due to bad press.
I myself would not join Google (or Facebook)'s data science or AI divisions without assurance that those companies are taking bad DS/AI PR into account and making impactful changes to their pipeline.
What was their research about? When I hear "ethical AI", things like differential privacy come to mind. Is it things like that, or am I way off?
Google has lost all of its allure. I once dreamed of working there. Now I think their employees drink the Kool-Aid hard, and they are generally inept. Working at Google isn't really impressive.
While those are cool open source projects, Google's actual products are the things that suck. Take GCP: they don't really dogfood ANYTHING. They still run half their stuff in Borg! AWS dogfoods almost all of their services. The polish and integrations speak for themselves.
And look at their other product lines: Android, communication apps, etc. They are generally a mess outside their absolute core offerings (Gmail, Drive, and Maps).
How is it that the "best engineers" in the world can't make good products?
Maybe I can rephrase the title in a more objective way:
Google has lost some shine among people who wish to find bias and make an inflated political situation out of any act that happens to touch gender/race/class -- whether through the research itself, or the "meta" issues about whether someone was advanced/demoted/fired/hired for discussions about such topics. The company is decreasing its internal cultural desire to swing back and forth with every political movement and attract difficult attention that makes working there less productive.
For those researchers who aren't crusading to make an example out of a tech company, and who don't care to bring college-level activism and risk into their employment, it remains a fine place to work.
That doesn't mean the knock-on effects won't be huge. The people most outraged about the Gebru situation are the ones running our cultural institutions, specifically academia and the news/media industries. They naturally have the loudest voices in the room, meaning this will harm Google even if I believe they're in the right for a change.
They're probably increasing their reputation with an entirely different set of AI researchers who would have otherwise stayed away from Google for fear of being forced into a particular politically driven agenda that they themselves may well not support. If even Hollywood has its share of closet conservatives, how many more must a purely mathematical discipline have?
It's clear that people who make these comments don't understand AI, ML, and the deep subjects the field touches. There are deep questions about what learning and intelligence ARE that these folks are trying to answer. Otherwise ML is just curve fitting, and if you are satisfied with that, that's fine, but I'm not satisfied with curve fitting.
People who criticize the kind of work AI researchers do in regards to ethics have a shallow knowledge of AI, ML, and the field in general. But that's just my opinion.