Years from now we will look back on today as the watershed moment when AI went from a technology capable of empowering humanity to being another chain forged by big investors to enslave us for the profits of very few people.
The investors (Microsoft and the Saudis) stepped in and gave a clear message: this technology has to be developed and used only in ways that will be profitable for them.
No, that day was when OpenAI decided to betray humanity and go closed source under the faux premise of safety. OpenAI served its purpose and can crash into the ground for all I care.
Open source (read: truly open-source models, not falsely advertised source-available ones) will march on and take their place.
Amazing that you don't see this as a complete win for workers: the workers chose profit over non-profit. This is the ultimate collective bargaining win. Labor chose Microsoft over the unaccountable ethics major and the movie star's girlfriend.
Lol. The middle-class whip crackers chose enslavement to the future AI, hastening the replacement of the working poor's livelihoods (and at this point "working poor" covers software engineers, doctors, and artists), and you're saying this is a win for labor? Hahahaha. This is a win for the slave owners, and for the "free" folk who report to the slave owners. This is the South rising: "We want our slave labor and we'll fight for our share of it."
Years from now AI will have lost the limelight to some other trend, and this episode will be just another coup in humanity's hundred-thousand-year history.
Thinking that the most important technical development in recent history would bypass the economic system that underpins modern society is about as optimistic/naive as it gets, IMO. It's noble and worth trying, but it assumes MASSIVE industry-wide and globe-wide buy-in. It's not just OpenAI's board's decision to make.
Without full buy-in, they are not going to be able to control it for long once ideas filter into society and researchers filter into other industries and companies. At most it creates a model of behaviour for others to (optionally) follow, and delays things until a better-funded competitor takes the chains and offers (a) the best researchers millions of dollars a year in salary, (b) the most capital to organize and run operations, and (c) the strongest focus on getting it into real people's hands via productization, which generates feedback loops that inform real-world R&D (not just hand-wavy AGI hopes and dreams).
Not to mention the bold assumption that any of this leads to (real) AGI that plausibly threatens us in the near term rather than in, say, another 50 years; we really have no idea.
It's just as plausible, maybe more so, that all the handwringing over commercializing vs. not commercializing early versions of LLMs is just a tiny, insignificant speedbump in the grand scale of things, with little impact on the development of AGI.
Hold on... we went from talking about disruptive technologies (where a startup had a chance to create or take a market) to sustaining technologies (where only leaders can push the state of the art). Mobile was disruptive; AI (really, LLMs) is sustaining (just look at the capex spend from the big clouds). This is old-school competition with some ideological BS thrown in for good measure: sure, go ahead and accelerate humanity; you just need a few dozen datacenters to do so.
I am holding out hope that a breakthrough will create a disruptive LLM/AI tech, but until then...
Microsoft is a publicly traded company. The average "investor" in a publicly traded company, through all the funds and managers, is a Midwestern school teacher.