> this whole ageing conversation has been dominated by the Boomers, often blamed for the world’s problems or pitted against younger generations. A tad unfair.
I'm in my mid-forties and until recently, it seemed logical & righteous to place blame at the boomers' feet. Today it seems lazy and naive, ultimately serving to further atomize/neuter the next generation. I expect my comeuppance is imminent. Oh well.
Not on principle (unless I was applying to work in HR). Recruiters are not the company ambassadors you seem to imagine, but cogs in a small machine that you're not likely to encounter after onboarding.
Not necessarily. Imagine a health insurance provider even partially automating their claim (dis)approval process - it could be both lucrative and devastating.
A similar issue arises with health insurance. Using AI to evaluate claims is a huge efficiency play, and you don't have much ability to fight it if something goes wrong. And even if you can, these decisions can be life or death in the short term, and human intervention usually takes time.
I'm not entirely sure what you're getting at re: hype.
While there is undoubtedly a lot of hype around these tools right now, that hype is based on a pretty major leap in technology that has fundamentally altered the landscape going forward. There are some great use cases that legitimize some of the hype.
As far as concrete examples, see the sibling comment with the anecdote regarding health insurance denial. There are also portions of the tech industry focused on rolling these tools out in business environments. They're publicly reporting their earnings, and discussing the role AI is playing in major business deals.
Look at players like Salesforce, ServiceNow, Atlassian, etc. They're all rapidly rolling out various AI capabilities into their existing customer bases. They have giant sales forces all actively pushing these capabilities. They also sell to governments. Hype or not, it adds up to real world outcomes.
Public statements by Musk about his intention to use AI also come to mind, and he's repeatedly shown a willingness to break things in the pursuit of his goals.
Worse for the consumer or the provider? If the LLM is going to fundamentally do a "worse" job no matter what the incentive (maximising profit, maximising claims, whatever it may be), we will end up with the "more efficient" system in charge.
The counterpoint to this (which I guess is the tenet of the original comment?) is that hype can overshadow good judgement for a short period of time?
The current admin is unlikely to get the DOJ to do anything, so that leaves you trying to sue them yourselves, and they can afford more expensive lawyers and to gum up the proceedings until you go bankrupt or give up.
If you haven't already signed away your right to sue with an arbitration clause.
Class action. Someone sues on behalf of all customers of the insurance company, arguing the insurance policy is fraudulent: the company is executing policies based on AI, or rolling the dice. The insurance company then owes all customers the balance paid on the policy, plus all costs associated with attempting to use the fraudulent policy.
A gun is not smart or magical, but it is nevertheless a powerful tool that can be scary depending on who is holding it. Accordingly, I worry less about my occupation and more about the moral character of those wielding it. Further, I worry about "smart" people who have not been acquainted with the dark side of human nature facilitating bad actors.
Statistically speaking though if you leave guns lying around everywhere or introduce guns as a service for only $10/month and tell everyone that guns will solve all your problems, then you’re going to end up with fuckwits with guns.
If you hold a gun and point it at me, I'd be more scared of that than of you holding AI and pointing it at me.
AI right now is not powerful and not scary.
But follow the trendlines. AI is improving. From 2010 to now, the pace has been relentless, with multiple milestones passed constantly. AI right now is a precursor to something not so dumb and not so "not scary" in the future.
My issue with this line of thinking is not that it's wrong, but that it's being manipulated by Silicon Valley.
OpenAI is not arguing that AI is harmless; they are agreeing it's dangerous. They are using that to promote their product and hype it up as world-changing. But more worryingly, they're advocating for regulations, presumably the sort that would make it more difficult for competition to come in.
I think we can talk about the potential dangers of AI. But that should include a discussion of how best to deal with them, and a consciousness of how fear of AI might be manipulated by Silicon Valley.
Especially when that fear involves misrepresentation, e.g. AI being presented to the public as self-directed artificial consciousness rather than algorithms that mimic certain reasoning capabilities.
I think the acknowledgement of danger by various companies is definitely a marketing tactic to a degree, and it's important to see the actions of those companies for what they are.
But then there's whatever danger actually exists regardless of the business maneuvering.
I'm not saying this is what you're doing, but I've been in numerous discussions where someone will point to this maneuvering and then conclude that virtually all danger is manufactured/nonexistent and only exists for marketing purposes.
> eg. AI being presented to the public as self directed artificial consciousness rather than algorithms that mimic certain reasoning capabilities
I think the fact that these tools can be presented in that way and some people will believe it points to some of the real dangers.
A gun’s purpose isn’t to be smart. The gun equivalent of this post would be “Why guns still aren’t very good at killing” and that would be a serious problem for guns if that were true.
When a toddler can pull the trigger and kill someone, you may argue guns are pretty good at killing. Key point being, people don't have to be good at guns to be good at killing with guns. Pulling the trigger is accessible to anyone.
How often does that actually happen? Only when a gun owner was irresponsible, leaving a loaded gun in a place accessible to the toddler.
Similarly, AI can easily sound smart when directed to do so. It typically doesn't actually take action unless authorized by a person. We're entering a time where people may soon be willing to give that permission on a more permanent basis, which I would argue is still the fault of the person making that decision.
Whether you choose to have AI identify illegal immigrants, or you simply decide all immigrants are illegal, the decision is made by you, the human, not by a machine.
Not the OP, but my best guess is it's an alignment problem, just like a gun killing something the owner did not intend. The power of AI to make decisions that are out of alignment with society's needs is the "something, something." As in the health insurance examples above, it can be efficient at denying healthcare claims, and the lack of good validation can obfuscate alignment with bad incentives.
I guess it depends on what you see as the purpose of AI. If the purpose is to be smart, it’s not doing very well. (Yet?) If the purpose is to deflect responsibility, it’s working great.
Nothing stifles my creativity so quickly as intrusive thoughts of marketability. I can follow the muse or the outcome, not both. I wonder how much of a role this played in the displacement of painting here, and whether writing may be subject to a similar fate.
It's intuitive that the pursuit of mastery demands sacrifice (in family life and beyond).
Unfortunately philosophical mastery would seem to require full participation in life (to fully appreciate the human experience), making this level of sacrifice a potentially self-defeating proposition.
I'd say it applies in a lot of other career paths too. Creating something of quality requires holistic life experience; otherwise it leads to shallow, self-absorbed outcomes. Maybe that's why enshittification is all the rage nowadays, and not just pure, shortsighted, fast-profit-asap greed.
The problem as applied to philosophy is particularly concerning to me, since those errors become the framework for thought. But yeah, it's otherwise not terribly different from the architect who doesn't code, the professor who hasn't spent time in industry, the philanthropist who lives in a bunker, etc.
Yes. Philosophy should be read by people who want to deconstruct it and then argue with it. Unfortunately, laymen love to take it at face value and try to apply it as-is.
I guess sort of, as marketing. Like when people buy fancy brands hoping to recreate what was shown in the ads.
The idea is sound. The static seed is presumably there so the results are repeatable, but the approach works just as well with true randomness. (Assuming you were permitted to do it this way, which you wouldn't be.)
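To illustrate the seed point (a minimal Python sketch; the draw-style framing and function names are my own assumptions, not from the parent comment): a fixed seed makes the selection verifiable by anyone who re-runs it, while swapping in OS entropy changes only the repeatability, not the logic.

```python
import random

# Repeatable draw: the static seed means anyone can re-run the
# selection and verify the exact same outcome.
def pick_entries(entries, n, seed=42):
    rng = random.Random(seed)       # fixed seed -> deterministic PRNG
    return rng.sample(entries, n)

# Same logic with true randomness: OS entropy instead of a seed,
# so the draw is no longer reproducible, but nothing else changes.
def pick_entries_unseeded(entries, n):
    rng = random.SystemRandom()     # reads from os.urandom
    return rng.sample(entries, n)

print(pick_entries(list(range(100)), 5))  # always the same 5 numbers
```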