> The thing that I appreciate most about the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting-edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in.
I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma: some use it because it makes them faster, and then others do the same just to keep up. But I think everyone would be FAR better off without AI.
Keeping AI free for everyone is akin to keeping an addictive drug free for everyone so that it can be sold in larger quantities later.
One can argue that some technology is beneficial. A mosquito net made of plastic immediately improves one's comfort out in the woods. But AI doesn't really offer any immediate TRUE improvement of life, only a bit more convenience in a world already saturated in it. It's past the point of diminishing returns for true life improvement, and I think everyone deep down knows that but is seduced by its nearly-magical quality, because we are instinctually driven to seek out advantages and new information.
"I would argue that there are very few benefits of AI, if any at all."
OK, if you're going to say things like this I'm going to insist you clarify which subset of "AI" you mean.
Presumably you're OK with the last few decades of machine learning algorithms for things like spam detection, search relevance etc.
I'll assume your problem is with the last few years of "generative AI" - a loose term for models that output text and images instead of purely being used for classification.
Are predictive text keyboards on a phone OK (tiny LLMs)? How about translation engines like Google Translate?
Vision LLMs to help with wildlife camera trap analysis? How about helping people with visual impairments navigate the world?
I suspect your problem isn't with "AI", it's with the way specific AI systems are being built and applied. I think we can have much more constructive conversations if we move beyond blanket labeling "AI" as the problem.
1. Here is the subset: any algorithm that is learning-based, trained on a large data set, and modifies or generates content.
2. I would argue that translation engines have their positives and negatives, but many of the effects are negative, because they lead to translators losing their jobs and to a general loss of the magical qualities of language learning.
3. Predictive text: I think people should not be presented with possible next words and should think of them on their own, because that way they will be more thoughtful in their writing and less automatic. Also, with a higher barrier to writing something, they will probably write less, and what they do write will be of greater significance.
4. I am against all LLMs, including wildlife camera trap analysis. There is an overabundance of hiding behind research when we really already know the problem fairly well. It's a fringe piece of conservation research anyway.
5. Visual impairments: one can always appeal to helping the disabled and impaired, but I think the tradeoff is not worth the technological enslavement.
6. My problem is categorically with AI, not with how it is applied, PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors to make the net effect always negative. It's human nature.
I wish your parent comment didn't get downvoted, because this is an important conversation point.
"PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors"
I think this is vibes based on bad headlines and no actual numbers (and tbf, founders/CEOs talking outta their a**). In my real-life experience the advantages of specifically generative AI far outweigh the disadvantages, by like a really large margin. I say this as someone academically trained on well-modeled dynamical systems (the opposite of machine learning). My team just lost. Badly.
Case in point: I work with language localization teams that have fully adopted LLM-based translation services (our DeepL.com bills are huge), but we've only hired more translators and are processing more translations faster. It's just... not working out like we were told in the headlines. Doomsday radiologist predictions [1], same thing.
> I think this (esp the sufficient number of bad actors) is vibes based on bad headlines and no actual numbers. In my real-life experience the advantages of specifically generative AI far outweigh the disadvantages, by like a really large margin.
We define bad actors in different ways. I also include people like tech workers and CEOs who build systems that take away large numbers of jobs. I already know people whose jobs were eroded by AI.
In the real world, lots of people hate AI generated content. The advantages you speak of are only to those who are technically minded enough to gain greater material advantages from it, and we don't need the rich getting richer. The world doesn't need a bunch of techies getting richer from AI at the expense of people like translators, graphic designers, etc, losing their jobs.
And while you may have hired more translators, that is only temporary. Other places have fired them, and you will too once the machine becomes good enough. There will be a small bump of positive effects in the short term but the long term will be primarily bad, and it already is for many.
I think we'll have to wait and see here, because all the layoffs can be easily attributed to leadership making crappy over-hiring decisions during COVID and now not being able to admit to that, instead giving hand-wavy answers of "I'm firing people because of AI" to drive different headline narratives (see: founders/CEOs talking outta their a**).
It may also be the narrative fed to actual employees, saying "You're losing your job because AI" is an easy way to direct anger away from your bad business decisions. If a business is shrinking, it's shrinking, AI was inconsequential. If a business is growing AI can only help. Whether it's growing or shrinking doesn't depend on AI, it depends on the market and leadership decision-making.
You and I both know none of this generative AI is good enough unsupervised (realistically, it needs deep human edits). But these are still massive productivity boosts, which have always been huge economic boosts to the middle class.
Do I wish this tech could also be applied to real middle-class shortages (housing, supply-chain etc.), sure. And I think it will come.
Just to add one final point: I included modification as well as generation of content, since I also want to exclude technologies that simply improve upon existing content in some way that is very close to generative but may not be considered so. For example: audio improvement like echo removal and ML noise removal, which I have already shown to interpolate.
I think AI classification is probably okay, but of course, as with all technologies, we should be cautious about how we use it, since it can also be used in facial recognition, which in turn can be used to create a stronger police state.
> I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma situation where some use it to become more efficient only because it makes them faster and then others do the same to keep up. But I think everyone would be FAR better off without AI.
Personally, my life has significantly improved in meaningful ways with AI. Apart from the obvious work benefits (I'm shipping code ~10x faster than pre-AI), LLMs act as my personal nutritionist, trainer, therapist, research assistant, executive assistant (triaging email, doing SEO-related work, researching purchases, etc.), and a much better/faster way to search for and synthesize information than my old method of using Google.
The benefits I've gotten are much more than conveniences and the only argument I can find that anyone else is worse off because of these benefits is that I don't hire junior developers anymore (at max I was working with 3 for a contracting job). At the same time, though, all of them are also using LLMs in similar ways for similar benefits (and working on their own projects) so I'd argue they're net much better off.
A few programmers being better off does not make an entire society better off. In fact, I'd argue that you shipping code 10x faster just means in the long run that consumerism is being accelerated at a similar rate because that is what most code is used for, eventually.
I spent much of my career working on open source software that helped other engineers ship code 10x faster. Should I feel bad about the impact my work there had on accelerating consumerism?
I don't know if you should feel bad or not, but even I know that I have a role to play in consumerism that I wish I didn't.
That doesn't necessitate feeling bad because the reaction to feel good or bad about something is a side effect of the sort of religious "good and evil" mentality that probably came about due to Christianity or something. But *regardless*, one should at least understand that because our world has reached a sufficient critical mass of complexity, even the things we do that we think are benign or helpful can have negative side effects.
I never claim that we should feel bad about that, but we should understand it and attempt to mitigate it nonetheless. And, where no mitigation is possible, we should also advocate for a better societal structure that will eventually, in years or decades, result in fewer deleterious side effects.
The TV show The Good Place actually dug into this quite a bit. One of the key themes explored in the show was the idea that there is no ethical consumption under capitalism, because eventually the things you consume can be tied back to some grossly unethical situation somewhere in the world.
i don't really understand this thought process. all technology has its advantages and drawbacks, and we are currently going through the hype and growing-pains process.
you could just as well argue that the internet, phones, tv, cars all adhere to the exact same prisoner's dilemma you talk about. you could just as well use AI to rubber-duck or ease your mental load rather than treat it like some rat race to efficiency.
True, but it is meaningful to understand whether the quantity (advantages minus drawbacks) decreases over time, which I believe it does.
And we should indeed apply the logic to other inventions: some are more worth using than others, whereas in today's society we just use all of them due to the mechanisms of the prisoner's dilemma. The Amish, on the other hand, deliberate on whether to use certain technologies, which is a far better approach.
Rather myopic and crude take, in my opinion, because if I bring out a net, it doesn't change the woods for others. If I introduce AI into society, it does change society for others, even those who don't want to use the tool. You really have no conception of subtlety or logic.
If someone says driving at 200mph is unsafe, then your argument is like saying "driving at any speed is unsafe". Fact is, you need to consider the magnitude and speed of the technology's power and movement, which you seem incapable of doing.
Nobody decides, but that doesn't mean we shouldn't discuss and figure out if there is an optimal point.
Edit: And I think you might dislike automobiles if you were one of the people living right next to a tyre factory in Brazil, which outputs an extremely disgusting rubber smell on an almost daily basis. Especially if you bought your house before the factory was built, and you don't drive much.
But you probably live in North America and don't give a darn about that.
I think this is pretty much how many Amish communities function. As for me, I prefer making decisions on how to use technology in my own life on my own.
Of course that makes sense. But with SOME technologies, I would prefer not to use them yet still sort of have to, because some of them become REQUIRED. For example: phones. I would prefer not to have a telephone at all, as I hate them with a passion, but I still want a bank account, and that's difficult because my bank requires 2FA and it's very hard to opt out of it.
So, while I agree in principle that it's nice to make decisions on one's own, I think it would also be nice to have the choice to avoid certain technologies that become difficult to avoid due to their entrenchment.