No, if you revolutionize both the practice and philosophy of computing and advance mankind to the next stage of its own intellectual evolution, you get to do whatever the fuck you want.
I get the common cynical response to new tech, and the reasons for it.
We wish we lived in a world where change was reliably positive for our lives. Changes are often sold that way, but rarely are.
But when something new introduces dramatic capabilities that nothing before it could match (as with every chatbot before LLMs), it is as clear an objective technological advance as has ever happened.
--
Not every technical advance reliably or immediately makes society better.
But whether or when technology improves the human condition is far more a function of human choices than of the bare technology. Outcomes depend strongly on who has a technology, when they have it, and how they use it, and on what the realistic (not wished-for) outcome of not having or using it would be.
For instance, even something as corrosive as social media, as it is today, could have existed in strongly constructive forms instead, if society had treated private surveillance, unpermissioned collation of data across third parties, and the weaponizing of dossiers (via personalized manipulation of media, amplified ad impact, and addiction-type responses) as ALL being violations of human rights to privacy and freedom from coercion or manipulation. And worth legally banning.
Ergo, if we want tech to more reliably improve lives, we need to ban obviously perverse human/corporate behaviors and conflicts of interest.
(Not just shade tech, which, despite being a pervasive response, doesn't seem to improve anything.)
Well, wait, if somebody writes a computer program that answers 5 of 6 IMO questions/proofs correctly, and you don't consider it an "advance," what would qualify?
Either both AI teams cheated, in which case there's nothing to worry about, or they didn't, in which case you've set a pretty high bar. Where is that bar, exactly? What exactly does it take to justify blowing off copyright law in the larger interest of progress? (I have my own answers to that question, including equitable access to the resulting models regardless of how impressive their performance might be, but am curious to hear yours.)
The technology is capable in a way that nothing before it was. We haven't yet begun to see the impacts of that. I don't think it will be good for humanity.
Social networks as they exist today represent technology that didn't exist decades ago. I wouldn't call it an "advancement" though. I think social media is terrible for humans in aggregate.
I notice you've motte-and-baileyed from "revolutionize both the practice and philosophy of computing and advance mankind to the next stage of its own intellectual evolution" to simply "is considered an 'advance'".
You may have meant to reply to someone else. recursive is the one who questioned whether an advance had really been made, and I just asked for clarification (which they provided).
I'm pretty bullish on ML progress in general, but I'm finding it harder every day to disagree with recursive's take on social media.
Except that the jury’s (at best) still out on whether the influence of LLMs and similar tech on knowledge workers is actually a net good, since it might stunt our ability to think critically and solve problems while confidently spewing hallucinations at random, all while model alignment is unregulated, haphazard, and (again, at best) more of an art than a science.
Well, if it's no big deal, you and the other copyright maximalists who have popped out of the woodwork lately have nothing to worry about, at least in the long run. Right?
It's not about copyright _maximalism,_ it's about having _literally any regard for copyright_ and enforcing the law in a proportionate way regardless of who's breaking the laws.
Everyone I know has stories about their ISP sending nastygrams threatening legal action over torrenting, but now that corporations (whose US legal personhood seems to matter only when it benefits them) are doing the same thing as part of developing a commercial product they expect to charge people for, that's fine?
And in any case, my argument had nothing to do with copyright (though I do hate the hypocrisy of the situation), and whether or not it's "nothing to worry about" in the long run, it seems like it'll cause a lot of harm before the benefits are felt in society at large. Whatever purported benefits actually come of this, we'll have to deal with:
- Even more mass layoffs that use LLMs as justification (not just in software, either). These are people's livelihoods; we're coming off of several nearly-consecutive "once-in-a-generation" financial crises, a growing affordability crisis in much of the developed world, and stagnating wages. Many people will be hit very hard by layoffs.
- A seniority crisis as companies increasingly try to replace entry-level jobs with LLMs, meaning that people in a crucial learning stage of their careers will have to either trade much of the learning curve for their domain for the learning curve of using LLMs (which is dubiously a good thing) or face unemployment, leaving industries to deal with the aging-out of their talent pools
- We've already been heading towards something of an information apocalypse, but now it seems more real than ever, and the industry's response seems to broadly be "let's make the lying machines lie even more convincingly"
- The financial viability of these products seems... questionable right now, at best, and given that the people running the show are opening up data centres in some of the most expensive energy markets around (and in the US's case, one that uniquely disincentivizes the development of affordable clean energy), I'm not sure that anyone's really interested in a path to financial sustainability for this tech
- The environmental impact of these projects is getting to be significant. It's not as bad as Bitcoin mining yet, AFAIK, but if we keep on, it'll get there.
- Recent reports show that the LLM industry is starting to take up a significant slice of the US economy, and that's never a good sign for an industry that seems to be backed by so much speculation rather than real-world profitability. This is how market crashes happen.
Seems fair.