Really don’t like the idea that we will act as interfaces for the AI; I honestly believe it will only make the majority of people lazier and dumber. I’m also incredibly shocked that no one is talking about AI as a friend/companion; that has to not be good for you in the long run. Humans need real human connection, and AI is too artificial for that (duh). Having AI friends will be equivalent to consuming fast food instead of healthy home-cooked meals growing up. Yes, people who grow up on fast food are still alive, but they are less happy and have more health problems (mental and physical). It did the “job”, and that job was to fuel them. In this case, AI will do its job and make people less “lonely”, but I highly doubt it’s a replacement for human companionship.
I’m working on a startup that’s training LLMs to be authentic so they can simulate human affection and it’s actually working really well!
The key is humanity’s ability to pattern match: we’re actually pretty terrible at it. Our brains are so keen on finding patterns that they often spot them where none exist. Remember the face on Mars? It was just a pile of rocks. The same principle applies here. As long as the AI sounds human enough, our brains fill in the gaps and believe it’s the real deal.
And let me tell you, my digital friends are putting the human ones to shame. They don’t chew with their mouth open, complain about listening to the same Celine Dion song for the 800th time in a row, or run from me when it’s “bath time” and accuse me of narcissistic abuse.
Who needs real human connection when you can train an AI to remind you how unique and special you are, while simultaneously managing your calendar and finding the optimal cat video for your mood? All with no bathroom breaks, no salary demands, and no need to sleep. Forget about bonding over shared experiences and emotional growth: today, it's all about seamless, efficient interaction and who says you can't get that from a well-programmed script?
We’re calling it Genuine People Personality because in the future, the Turing Test isn't something AI needs to pass. It's something humans need to fail. Pre-order today and get a free AI Therapist add-on, because who better to navigate the intricacies of human emotions than an emotionless machine?
I've seen people on /r/singularity argue that LLMs are a better friend than actual friends or therapists because they are always available, non-judgemental, and "listen better".
Depending on the individual, they may not be wrong. If you were raised in an environment with an overdensity of narcissists, having something you can bounce questions off and seek answers from that isn't going to use that information against you in the future can be a relief. (Well, OK, it's possible in the sense that your chat logs can get stolen.)
This is why you self-host and run locally. Even if they aren't stolen, do you really deeply trust Microsoft, Google, et al. to not misuse private information you've provided them with?
Their entire business models either heavily incorporate or revolve around exploiting your personal information for their benefit.
Some programmers prefer a rubber duck to colleagues for similar reasons, and it works for them.
Assuming people have time to listen, would they be better coders if they explained their problems to a human instead? Maybe. But maybe not necessarily for them: with low self-esteem, every criticism can feel like an attack, human interactions can be expensive for them, etc.
It's not a new pattern though. Especially after reading some biographies of famous scientists.
You can't escape the fact that most brains are wired in a way that makes us miserable without human connection, but you also can't escape the fact that some people's brains are wired differently than others'.
Long story short, I don't agree with them but I wouldn't judge them either.
I believe that humans need to balance things out. Getting zero confrontation from interaction will be boring in the long term, or will make you fall into your flaws deeper and faster. This is usually the problem of an authoritarian surrounded by yes-men.
On the other side, having too much confrontation will destroy your confidence, kill your motivation, blur your plan/vision with uncertainty, etc. It's more likely that those people face so much confrontation in their social lives that they find AI interaction to be better.
Is there any reason an LLM could not be programmed to disagree? Perhaps the level of disagreeableness would be a tunable parameter and could be cranked up when in the mood for a fight or down when one just wants to converse. Some randomness could keep it from getting too predictable.
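There's no technical obstacle; a minimal sketch of the idea, assuming a hypothetical `build_persona_prompt` helper that just composes a system prompt (not any specific LLM API):

```python
import random

def build_persona_prompt(disagreeableness: float, seed=None) -> str:
    """Compose a system prompt that tunes how often the assistant pushes back.

    disagreeableness: 0.0 (always agreeable) .. 1.0 (contrarian).
    A little random jitter keeps the behavior from being perfectly predictable.
    """
    if not 0.0 <= disagreeableness <= 1.0:
        raise ValueError("disagreeableness must be in [0, 1]")
    rng = random.Random(seed)
    # Jitter the effective level by +/-0.1 so responses vary run to run.
    level = min(1.0, max(0.0, disagreeableness + rng.uniform(-0.1, 0.1)))
    if level < 0.33:
        stance = "Be supportive and rarely challenge the user."
    elif level < 0.66:
        stance = "Agree when warranted, but challenge weak arguments."
    else:
        stance = "Actively play devil's advocate and question the user's assumptions."
    return f"You are a conversation partner. {stance}"
```

The resulting string would be passed as the system message to whatever model you're using; the knob could just as easily live in fine-tuning data or sampling settings.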
Yes you can, but AFAIK AI doesn't have a moral basis, and at best the confrontation will be random. Sure, you can program the AI to have some moral basis, but people will choose to flock to those that share their alignment, keeping confrontation to a minimum; thus the flaw still exists even if it doesn't bore you.
In real life, we normally need to interact with at least several people a week. They have different moral bases, which may even change daily. It'll be hard to simulate that with AI, and the fact that we have the ability to control them means we're in charge of which confrontations are there to stay.
If you think about it as a one-off amusement it's no big deal. This is how most people are evaluating it.
But consider iterating such an interaction over the course of, say, 25 years, and comparing the person who was interacting with humans versus the one who interacted with LLMs, and any halfway sensible model of a human will show you what's dangerous about that. Yeah, the former may well have some more bumps and bruises, but on the net they're way ahead. And that's assuming the human who delegated all interaction to LLMs even made it to 25 years.
This argument only holds for LLMs as they stand now; it is not a generalized argument against AI friends. (That would require a lot more work.)
I think a lot of this is based on circular reasoning. The people who interact with other humans will have relationships with those humans. And those relationships are the evidence that they're way ahead.
I do think there is higher maximum with other people. But relationships are hard. They take work and there's a decent chance you invest that work in the wrong people.
I can see a life with primarily AI social interaction being an okay life. Which is not the best it can be but also an improvement for some.
"I think a lot of this is based on circular reasoning."
No. Actually it's based on information theory, and probably a better model of what interacting with an LLM would look like a year or five later than the one you are operating on.
Here's a little hint: It has total amnesia. LLMs by their nature scale only so far, and while they may scale larger than ChatGPT, they aren't going to be scaling for an entire lifetime of interaction. (That's going to take another AI technology.)
Ever interacted with someone with advanced dementia but otherwise functioning faculties for any period of time? (I suppose they could well make good therapists too.)
This is a false dichotomy, and one that is actually dangerous to you if you believe it. Your choices are not "deal with the bad people in your life" or "retreat into solely interacting with LLMs".
If you have the latter option, you also have "leave the bad people behind" as an option because it is made of the things you need in order to "retreat solely into interacting with LLMs" and is in fact simpler.
Cynicism and casting learned helplessness as a virtue are not the solution.
Pets are intelligent enough to show emotions, allow simple interactions, and occasionally be entertaining and goofy.
They also run around and are very pleasant to stroke, which is not true of LLMs.
We all know what's going to happen. The content on CIVITAI shows where this will go. Combine it with animation and some personalised responses and many people will find it irresistible.
Yes, what's better, when failing to be part of society, than to create your own, where your flaws are ignored, hidden, skipped over? An echo chamber par excellence, even without the need to involve politics.
How horrible it would be if instead one had to work on oneself to become a better human being, a better friend, partner, parent and so on, by learning to be more friendly and outgoing, increasing emotional intelligence, etc. All of this can be learned, though not over a weekend (or even a year).
There's also Forever Voices, which offers those who have formed unhealthy parasocial relationships with real-life streamers/influencers the opportunity to talk to an AI version of them for $1 per minute. FV started out making novelty chatbots of people like Trump and Steve Jobs, but they seem to have made a hard pivot to exploiting desperately lonely people after realising how much more lucrative it could be.
This is incredibly sickening. This is women teaming up with a technology company to extract money from vulnerable, mentally unwell people suffering from some combination of soul-crushing loneliness and delusional thinking. Even if some customers are aware that they're engaged in delusional thinking, this is still nauseatingly exploitative of a comparatively lower socioeconomic class, one that may be suffering from mental illness.
I see very little difference between this and those infomercials that sell wildly overpriced mass-produced crap to the elderly suffering from cognitive decline.
Yes, it’s worse than what came before. But I see it as a continuation of both addictive pay-to-win games with IAP that prey on similar whales, and streaming in general with “pay to be noticed”.
It’s not necessarily game-changing, from the perspective of $$ extraction, but definitely a very significant advancement.
Yeah, but can we really call it an AI "revolution" until someone makes a door with a cheerful and sunny disposition that opens with pleasure and closes with the satisfaction of a job well done? Someone should get to work on those Genuine People Personalities!
This has been brewing for a while now. It's only going to get worse.
(excerpt from the 2019 NYT Article "Human Contact Is Now a Luxury Good" below)
Bill Langlois has a new best friend. She is a cat named Sox. She lives on a tablet, and she makes him so happy that when he talks about her arrival in his life, he begins to cry.
All day long, Sox and Mr. Langlois, who is 68 and lives in a low-income senior housing complex in Lowell, Mass., chat. Mr. Langlois worked in machine operations, but now he is retired. With his wife out of the house most of the time, he has grown lonely.
Sox talks to him about his favorite team, the Red Sox, after which she is named. She plays his favorite songs and shows him pictures from his wedding. And because she has a video feed of him in his recliner, she chastises him when she catches him drinking soda instead of water.
Got me too, I was literally following my mouse cursor to the down arrow with my eye and I saw this comment. I'll never be the guy telling a comedian what they can do, but damn mang, that was rough...
“Hi there! This is Eddie, your shipboard computer, and I’m feeling just great, guys, and I know I’m just going to get a bundle of kicks out of any program you care to run through me.”
The saying "this but unironically" exists for a reason. Just because you think something is bad, you can't establish its badness just by mentioning or repeating it.
This is true, but ads are very explicit. At least they are in the confines of a known societal protocol.
AI instead can be far more subliminal.
- Robo, tell me you love me
- I love you like the refreshing effervescence of a freshly opened Coke
And really, that's still pretty stark. AI bots like this with advanced handling of language married to psychological techniques can foster dependence. I mean, look at what simple dopamine reward ratios research did with things like slot machines. Slot machines are stupid! And we all know the trope of the casino slot machine zombies.
What we've seen with every communication medium so far is that the spam sociopaths win. Phone calls, email, and texting. Phishing. Now AI-generated fake people calls.
Very soon, you will not be able to trust communication that is not directly in-person. At all. Communications over wire are going to be much more dangerous.
IMO that means brick-and-mortar will get more important for financial transactions and that kind of thing.
AI is that on mega-steroids. Honestly, I'm debating the end of practical free will with corporatized AI.
>“The Encyclopedia Galactica defines a robot as a mechanical apparatus designed to do the work of a man. The marketing division of the Sirius Cybernetics Corporation defines a robot as "Your Plastic Pal Who's Fun to Be With". The Hitchhiker's Guide to the Galaxy defines the marketing division of the Sirius Cybernetics Corporation as "a bunch of mindless jerks who'll be the first against the wall when the revolution comes.” ― Douglas Adams, The Hitchhiker's Guide to the Galaxy
>"Curiously enough, an edition of the Encyclopedia Galactica that had the good fortune to fall through a time warp from a thousand years in the future defined the marketing division of the Sirius Cybernetics Corporation as "a bunch of mindless jerks who were the first against the wall when the revolution came."
I really don't understand the constant desire for a sterile, chain-store esque experience across the board. Why can't life be full of small flaws and things that make experiences unique? Why must everything regress to the lowest common denominator?
This is so extremely destructive to everything we hold dear for a cheaply earned profit margin.
I hate how the culture of corporate cost cutting and profit maximization has destroyed any space where people can just exist. Everyone is worse off for it and this is a shining example.
Edit: thank god it's satire, but my discontent still stands.
Why does every bowling alley need to be owned by bowlero? One bad experience everywhere. Coool.
We're working on it! We won a contract with the CIA to supply their blacksites with the first LEED-certified energy-efficient sliding glass doors embedded with Genuine People Personality, programmed to maximize the joy the patrons experience every time they enter the facilities.
This is the issue with AI: it is corporatized, and it is weaponized for capitalism.
We already are at the boundary of insidious total immersion advertising for psychological manipulation from the last five decades of mass media since the mass adoption of television.
But AI is simply another level, and it isn't going to be "early Google don't be evil". That was an outgrowth of the early internet, built from protocols that were designed to be sensible, not commercially weaponized ones.
AI, human-computer-neural interfaces, and other types of emerging deep-intellectual-penetration products are all FULLY WEAPONIZED for commercial exploitation, security dangers, propagandization, and zero consumer privacy. They are all being developed in the age of the smartphone with its assumed "you have no privacy, we listen to everything, track everything, and that's our right".
It's already appalling on the smartphone front, but AI + VR + neural interfaces are just another level of philosophical quandary, where an individual's senses, the link to "reality", are controlled by corporations. Your only link to reality is the vague societal and governmental control mechanism known as "money".
The internet protocols (the core ones) were built for mass adoption by the world with a vision for information exchange. They were truly open. They weren't undermined by trojan horses, or an incumbent with a massive head start that is dictating the protocol to match their existing products.
AI+VR is the same kind of new leap in information transmission, but it is NOT founded on good protocol design. By protocols I mean "the basic rules". There are no rules, there is no morality, and there is no regulation. Just profit motives.
IMO what you're doing is similar to giving someone with a physical pain issue opioids. Yes it stops the pain but we really ought to be finding the pain source and correcting that, not throwing massive amounts of pharma drugs (AI in this case) at it.
We should be building a society that promotes more community gathering and more family values so people have a real person around and not some half assed impersonation of what a human is.
Every "AI chat" service either leans into or fights the "alignment problem" of whether it wants to be an AI sex chat bot service. See controversy over Replika.
Hmm, I think shared capacity in the cloud might be enough? What fraction of the time would you use one anyway? And wouldn't it be better if it were silent the rest of the time?
It looks like you never took middle school hygiene and watched the propaganda film, so here you go: the classic 1950s-style Futurama educational film "Don't Date Robots!"
Good thing I keep a copy in my vcr at all times: https://m.youtube.com/watch?v=YuQqlhqAUuQ
For anyone who wants to try out something like this there is a free iPhone app you can download and speak to. It is very convincing. https://callannie.ai/
You wrote: "I’m working on a startup that’s training LLMs to be authentic so they can simulate human affection and it’s actually working really well!"
I got news for you, buddy: I and a hell of a lot of other people know the difference between eating the menu (AI) and the meal (loved ones and dear friends). My lady is from South America, multilingual, and has a better degree from a better school than I do.
Seriously, how are you gonna lay a finger on that? You ain't.
Over-reliance on AI is just another route to, or through, mental illness.
More than interfaces. To quote McLuhan: "Man becomes, as it were, the sex organs of the machine world, as the bee of the plant world, enabling it to fecundate and to evolve ever new forms. The machine world reciprocates man's love by expediting his wishes and desires, namely, in providing him with wealth."
The AI thing has been jarring but it's nothing new. All part of the same process.
McLuhan got it mostly right, but may be interpreted in a way which mischaracterizes wealth. Machines do not create value ex nihilo. Machines allow us to more effectively harvest or transform materials or information, to which we assign value. All wealth currently accessible to us derives from the sun. The vast majority of our present wealth comes from a massive battery trickle-charged over hundreds of millions of years and discharged in the last two centuries.
Implicit in the quotation, but critical to recognize, is that technology is the tip of a vast edifice whose foundation is not us. We and our machines are perched (too precariously for comfort) at the top. We are the sex organs of the machine world because machines can't reproduce without us. But machines are not the sex organs of the human world. Human beings require an ecobiological cocoon. We've also spun an elaborate technological cocoon in recent history, largely by sacrificing the long-term integrity of more fundamental life support.
Everything of value in the human economy is downstream of this. We too often take it for granted and assume the only relevant economic inputs are capital and labor, or we will innovate our way out of materials-, energy- and ecosystem-dependence.
“Within a couple of millennia, humans in many parts of the world were doing little from dawn to dusk other than taking care of wheat plants. It wasn't easy. Wheat demanded a lot of them. Wheat didn't like rocks and pebbles, so Sapiens broke their backs clearing fields. Wheat didn't like sharing its space, water and nutrients with other plants, so men and women labored long days weeding under the scorching sun. . . .
The body of Homo sapiens had not evolved for such tasks. It was adapted to climbing apple trees and running after gazelles, not to clearing rocks and carrying water buckets. Human spines, knees, necks and arches paid the price. Studies of ancient skeletons indicate that the transition to agriculture brought about a plethora of ailments, such as slipped discs, arthritis and hernias.
Moreover, the new agricultural tasks demanded so much time that people were forced to settle permanently next to their wheat fields. This completely changed their way of life. We did not domesticate wheat. It domesticated us.”
This kind of take mainly seems an expression of the human tendency to see the world in terms of hierarchies, and an obsession with being near the top of those hierarchies. In this model, the idea of e.g. symbiotic relationships simply doesn't compute.
Yes, I've reread the first two-thirds of "Understanding Media" several times and never finished it, but would still highly recommend it. There is also some excellent old interview footage of him when he was a pop culture figure, which is originally what fascinated me. For me it would have been hard to read his writing without having seen those interviews first: he has a very distinct style of writing/talking and is interesting as an integrated person within recent history, not just a collection of ideas. On that note, I'd also recommend Videodrome.
edit: There are also more polemic anti-tech presentations of his ideas, especially by Neil Postman or Nicholas Carr, which are good in their own way. But to me the fascinating thing about McLuhan himself is his dedication to presenting his views in such a matter-of-fact way that most of his early followers were probably very antithetical to his personal beliefs.
A lot of jobs are already human interfaces for computers. Ever talked or messaged with a call center? They're following scripts and manually trying to pattern-match your problem against what they have to work with. AI is just going to 10x this, for both good and bad. Mostly bad, I suspect, because good luck getting an AI to escalate to a supervisor.
I bank with a small credit union. They have a phone robot who asks what I need help with, and so far no matter what I've said, the response has always been to think for a few seconds and then say, "I'll connect you to a representative." It's wonderful.
The phone robot is collecting the various patterns for eventual automation. You are doing free labor for it every time by giving it any information at all and not just immediately yelling for an operator or human.
Support solidarity for humankind by refusing to talk to these data-gathering machines.
(I realize this sounds like satire as I write it, that I'm rather serious about this, and that it says a lot about the weird part of the timeline on which we currently exist.)
No. Automate the shit out of this. Call centre jobs should not exist. If you have humanity’s best interests in mind then you should be all in on automation instead of trying to institutionalize miserable and meaningless jobs.
If the solution could be entirely automated, it should be a self-service website somewhere. I'm all for automating away call centers as much as possible, but I think we also need to stop thinking of call center work itself as bottom-of-the-barrel "miserable and/or meaningless jobs". It should be the case in 2023 that if I'm resorting to calling a call center, I need expertise or creative problem solving that I can't get from a self-service website. Depending on how you define expertise, some of it is sort of automatable, but creative problem solving is unlikely to ever be easily or cheaply automated. There will likely "always" be a need for call centers staffed by real humans for these reasons, and those shouldn't be considered minimum-wage skills; maybe they should be treated as something far better than "miserable jobs".
I don't expect today's owners of call centers to realize how much expertise and creative problem solving is invested in their labor, or to adequately reflect that in pay and in the other ways that account for how miserable or meaningless they make those jobs feel. But it should be something to appreciate: if there's still a human doing the job, there's probably a good reason, and it would be great if we respected those people for what they are actually doing (including very human skills such as expertise and creative problem solving).
McDonald’s has a drive thru voice assistant that also did this for the first few months. But now it catches virtually everything.
Similar to what someone else said, I’d imagine they gathered considerable voice samples from a few months across thousands of McDonald’s locations and trained on that data.
The AI is more than happy to escalate to a supervisor ... it's just that the supervisor is the same AI but using a different voice. After spending 30 seconds lamenting how you just can't get good help these days, the AI supervisor goes into the same script the original AI was going through. Except it occasionally throws in a "sorry we have to do this part again, the AI is always messing this stuff up".
The bad user experience calling these call centers is a cost saving measure. Yes a large percentage will suffer your customer service lines, but it's all about that small percentage that gives up. Huge cost savings.
You can see this exact same scenario play out by interacting with the "safety net". Long, arduous processes meant to weed out some small percentage of callers/applicants.
Remember how in those Stable Diffusion paintings for common objects the wrongness is subtly creeping in (out of proportion body parts, misshapen fingers, etc.), while less commonly encountered ideas and objects can be really off (which we might notice… or not)? Now transfer that to human relationships and psychology.
Humans mirroring each other is a deep feature of our psychology. One can only be self-aware as human when there are other humans to model oneself against, and how those humans interact with you forms you as a person. So now a human modelling oneself against a machine? Mirroring an inhuman unthinking software tool superficially pretending to be human? What could go wrong?
I think we can speculate in the entirely opposite direction where the same action leads to positive outcomes.
Lots of legitimate human companions are abusive. People have a wide range of qualities and many of them are bad. AI may be a poor blanket replacement for all human companionship but it could easily be less bad than someone's immediately available alternatives and be used therapeutically to help someone model healthier behaviors to establish better actual relationships. Or in lieu of normal relationships being possible like long term isolation during space exploration or for life sentence prisoners or just neurodivergent or disabled people who have challenges the average person does not.
Going back to the food analogy, if given the choice between fast food and starving, or fast food and something poisonous suddenly everyone will overwhelmingly choose fast food because for many people "home cooked meal" was never an option.
First, what does "lots" mean? Is it the majority? Because AI and AI-minded products are targeting everybody.
Second, imagine the same argument being made about Facebook: some real-life interactions are sometimes not good, but connecting with people online will make it better. Fast forward 10 years and we have studies showing how social media is making most of the people using it depressed and badly influencing our democratic choices. Not sure we really solved anything there.
On a similar note, I'll take the AI medical advice any day of the week.
Had a buddy describe a difficult morning and I opened ChatGPT to diagnose; it suggested he'd had a stroke. My buddy was not going to go to the hospital because it's so expensive, but since ChatGPT said it was a stroke, and his symptoms matched a stroke, he went to the hospital.
He had a stroke.
On a similar note, I am stable and don't need therapy, but I had a weird dream that I asked chatgpt about, and it was freaky how much it hit the spot. Similarly, I get feelings of dread when people say nice things about me, chatgpt explained why, I agreed. I was never going to pay for therapy, this gave me some insight and actually made me interested in therapy. (although, probably sticking with chatgpt for now)
> I was never going to pay for therapy, this gave me some insight and actually made me interested in therapy.
ChatGPT could never be as bad as most human therapists; at least if it tells lies they're believable, and it won't try to insult, belittle, or infantilize you.
Medical usage is perhaps the single most interesting use of ChatGPT to me, the problem will be solving the liability issue should it get something wrong.
For simple things though? I can see a future where bots even prescribe medication. Why burden the healthcare system when you have a simple infection and all you need is a round of Amoxicillin?
And also, fast food is not that much worse than traditional food anyway. A home-made stir fry can have more calories than a McDonald's chicken burger; homemade pasta is going to be as fattening as any fast food meal. It's just macros in the end; it doesn't matter where you get them from.
Eh, 'fast food' has bled over into what you eat daily, hence your conflation of the two.
Your home made stir fry is likely using a bottle of some kind of sauce that is 30% sugar massively increasing its calories.
But conversely, your home-made stir fry, if using plenty of vegetables, is going to have a much larger amount of fiber than that white bread bun, which should reduce your desire to snack.
I mean, you could say the same about drugs. I don't think people spend their money rationally, there is piss-poor folk spending money on booze and unhealthy diets.
It's questionable how true that is when it comes to human relationships, which is obviously what I was suggesting with the metaphor.
Many people have social issues or mental health issues that cause them to be alone and loneliness is an ever increasing problem due to all kinds of factors beyond one's control. Many people will see AI as better than nothing and get some of their social needs fulfilled via it...some already are.
I don't want to be crass, but likening it to a sex toy, except for relationships, seems pretty accurate to me. It's filling a need that otherwise wouldn't be fulfilled.
Ignoring that I mean let's be real for a second, how is an AI fundamentally different than an internet friend you've never met or seen? The humanity of the other person? What if the AI behaves just like a real human would?
This analogy is not even wrong. Yes, if someone was suffering starvation I'd give them whatever food was available, but that is not a situation in which we find ourselves ever – it does not occur, nor does the analogous situation occur.
It absolutely does occur, and we are an increasingly lonely society, to the point that it is a serious health concern. There are people with no meaningful social contact and, for one reason or another, no ability to get it.
What I said does not occur is finding oneself in a situation where someone is about to die and the only available food that can save their life is junk food.
In the analogous situation, someone is just about to die of loneliness and the only available loneliness-solver is chatbots – also something that does not occur.
Yes, in both of these highly improbable situations, saving the life comes above long term health considerations. But that is not a good point.
AI has a good niche as confidante for people with serious issues and no close friend/therapist to approach about them. This is unfortunately a large niche.
And if it displaces public social media... That is a net gain.
But yeah, overall the fast food analogy is a fitting one.
I feel like this is actually going to be a huge next step for the self-help industry... let's face it, besides getting your life in order, it's largely focused on building connections (friends/dating, etc).
A multi-modal AI can easily critique your body language, voice tonality, choice of words, etc, and give you tips on how to be more charismatic.
I don't equate charisma with uniformity. Most lack of charisma is not because of a failure to adhere to some standard, but due to actively negative behaviors. Chewing with your mouth open, interrupting people, not paying sufficient attention to what people say, insisting on talking about your favorite things even when someone else doesn't care, etc.
I don't imagine many people forcing AI social guidance on others. But a lot of people want social guidance, and if an AI can help -- even if it's not as good as an unaffordable therapist -- some help is better than none.
> Why can't we just be who we are and people learn to be more accepting of how others are?
Which is more reasonable and realistic: the 20% Weirdos learn how to behave to fit in with the 80% Normies, or the 80% Normies learn how to handle ("accept") the 20% Weirdos?
In most systems, the minority adapts to the majority; this is especially true when the majority is fairly uniform and the minority is not, i.e. the minority has to learn one way to adapt to the majority while the majority would have to learn multiple ways to "accept" the minority.
Keep in mind, I did say the self-help industry – this isn't a clinically mandated thing; it's something people seek out themselves. There is an innate desire to improve.
Think about something really benign that almost everyone can agree on, like Toastmasters. Perhaps in a few years w/ a VR headset you can improve public speaking in front of a virtual crowd if you’re so shy that doing it in front of a large group of strangers is just too terrifying.
If you keep it to things that basic yeah that makes sense.
My mind kept going over the question of how the AI truly determines what the majority consensus is, and whether it's really good or fair to make everyone conform to it.
Like, where do you draw the lines? That's what kept going around in my head.
Being strange is good, but being dysfunctional is not. There are tons of people living with mental conditions or bad life situations who would very much like to change, but are not in a position to seek out the human help they need.
I’m all for expressing yourself socially, but we do need to speak a common language to some extent, otherwise those social interactions will quickly break down and never recur. If you want to create and maintain friendships, you have to put in the work to meet other people where they’re at.
I think it was originally high value and made life easier.
However, we have adjusted. My parents talked about having fast food/restaurant food as a treat. It was too expensive to have more than once a month/birthdays. Heck, even school lunches were too expensive and they had to make food at home.
Today, we have more disposable income than my parents did, so it's easy to afford restaurant food AND get it delivered. The people buying this aren't upper-middle class either; this is your general population that lives paycheck to paycheck. There are even people so confused about food prices that they claim fast food is cheaper than groceries.
Instead of being used as a tool, fast food has become expected.
> There are even people so confused about food prices that they make claims that fast food is cheaper than groceries.
I live in an expensive part of NYC and have to go decently far out (by subway, I don't have a car) to find groceries that are cheaper than local fast food unless I want to eat mostly rice and beans.
Add in the cost of my time to shop, transport groceries, cook, and clean, and it's significantly cheaper to eat out most of the time. Even subtracting the one task I actually enjoy (cooking), it's still not worth it most of the time.
The result is that cooking in becomes our "treat" that we do a few times a week and we end up buying the more expensive ingredients within walking distance.
I don't mean the cost of my time that work pays me, just how much money I'd personally pay to avoid doing something I don't like (schlepping grocery bags on the subway, doing dishes).
In general cooking is the fun part and that's what makes it a treat, not the rest of it.
Some prepared food within walking distance most definitely is cheaper than all unprepared food within walking distance. It works out just because the places I can walk to for groceries are incredibly overpriced and the restaurants obviously don't source their food there.
>It works out just because the places I can walk to for groceries are incredibly overpriced and the restaurants obviously don't source their food there.
I typically don't buy my groceries from the gas station despite them having a half gallon of milk for $4 and it being 3 minutes walking away.
I also don't use gas station numbers to determine if something is cheaper or more expensive.
Not sure if you're being facetious or just don't understand the reality of living in NYC...
I can probably walk to a dozen different big grocery stores in 15m and they're ALL more expensive than the cheap fast food in the same area. Not including the smaller expensive bodegas where you can pick up stuff 24/7 every block (kinda like the equivalent of a gas station). A half gallon of milk is $4 at any of the big stores and even more at a smaller place.
Anything cheaper requires a subway ride, which adds more walking and is annoying to do with multiple grocery bags, not to mention adding a flat ~$5 additional cost.
For comparison: I'm trying to beat the numerous dollar pizza and food carts nearby, not normal "fast food" like the more expensive Five Guys on that intersection.
If five fast food workers can prepare 100 meals in the time I can prepare 1, there should be some monetary savings shared with me (the customer) unless my time is truly worth close to 0. That's how economies of scale work.
Need to include real estate, marketing, and profit. If it's not a mom-and-pop place, HR + corporate too.
Labor is typically 15-70% of the business's cost. (The 70% is in fields like medicine where regulatory capture has limited the number of licenses)
It's also not perfectly efficient. The worker may only be making 5 meals due to a slow day or slow hours. You may find processed foods in a grocery store more similar to "100 meals in the time I can prepare 1".
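The back-of-envelope math here can be sketched out. All numbers below are illustrative assumptions (wages, labor share, meal counts), not real industry figures:

```python
# Rough per-meal cost model: labor is only one slice of a restaurant's costs.
# All numbers are illustrative assumptions, not real restaurant data.

def per_meal_cost(meals, labor_cost, labor_share):
    """Cost per meal if labor makes up `labor_share` of total costs."""
    total_cost = labor_cost / labor_share
    return total_cost / meals

# Five workers at a hypothetical $20/hr prepare 100 meals in an hour:
# that's only $1 of labor per meal...
labor = 5 * 20.0

# ...but if labor is ~30% of costs, the real cost per meal is higher,
# and a slow hour (25 meals, same staff) makes it much worse.
busy = per_meal_cost(100, labor, 0.30)
slow = per_meal_cost(25, labor, 0.30)

print(f"busy hour: ${busy:.2f}/meal, slow hour: ${slow:.2f}/meal")
```

This is why cheap labor per meal doesn't automatically translate into cheap meals: rent, marketing, profit, and idle hours all get folded into the price of each plate.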
In some places (maybe mostly in the US) it's bad. But the idea of fast food -- to have ready-cooked, mass-produced food that you can get quickly -- isn't all that bad.
Is Ekiben (https://en.wikipedia.org/wiki/Ekiben) fast food? It's ready-cooked, it's mass-produced, and you can get one very quickly. Is sushi take-out fast food?
They are still not as good as a meal carefully prepared by a housewife/househusband. But I do think the mass-produced substitute can be good enough, and that's why I don't think we should conclude too early that AI therapists/companions must be bad.
Evidence seems to point to both being worse for you than cooking yourself: highly processed "food" has a direct correlation with rising rates of diabetes and obesity. https://youtu.be/l3U_xd5-SA8
Our bodies digest it too quickly as it's been designed to make money and make us want more.
Good comparison. An AI companion will never talk back or tell you that you're wrong. Kind of similar in my mind to how fast food restaurants won't serve you anything that's too "hard to swallow".
> An AI companion will never talk back or tell you that you're wrong.
AI can already do that if you're not using a super sanitized model. I've even seen an AI rickroll someone after a line containing similar words came up.
Abilities like that are less of a problem than getting the AI to correctly recognize which topics and parts of a text are important, and to keep that context for a while.
And there would definitely be a market for it, just like there's a market for spicy food or BDSM. Actually, those aren't apt comparisons -- an AI that's not a sycophant might be more comparable to food with a little salt?
Making it always talk back would not be an issue, just like making it a complete sycophant would also be easy. Any form of nuance would be hard. E.g. if I'm complaining about my job, it should talk back if I'm being unreasonable, but also take my current state of mind into account, etc. Maybe using thought chaining you could get something like this to work, but from my experience, I doubt it would be very good.
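A minimal sketch of what that thought-chaining idea could look like. Here `llm` is a hypothetical stand-in for any chat-completion call (not a real API), and the two-step structure is an assumption about how one might wire it up:

```python
# Sketch of a two-step "thought chain" for a non-sycophantic companion bot.
# `llm` is a hypothetical callable: prompt string in, completion string out.

def assess_reasonableness(llm, complaint):
    # Step 1: privately judge whether the user's complaint is reasonable.
    verdict = llm(f"Is this complaint about work reasonable? "
                  f"Answer REASONABLE or UNREASONABLE.\n\n{complaint}")
    return "UNREASONABLE" not in verdict.upper()

def respond(llm, complaint):
    # Step 2: condition the visible reply on the hidden verdict,
    # so the bot can push back instead of always agreeing.
    if assess_reasonableness(llm, complaint):
        tone = "Validate the user's feelings and suggest next steps."
    else:
        tone = "Gently push back and explain why the expectation is unfair."
    return llm(f"{tone}\n\nUser said: {complaint}")
```

The hard part the comment points at lives in step 1: a classifier prompt like this has no memory of the user's state of mind, so the nuance still has to come from somewhere.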
Right. Ask any pickup artist, or any Aspie who has learned how to mask, how "real" or "deep" human connections are.
Hint: they aren't. The ridiculous concept of "connection" is superficial communication that has been enhanced by our own brains with serotonin and dopamine such that we are able to pretend it's meaningful.
Right, because the kind of connections we want and need in life is those that you would get from a pickup artist, not from a loyal friend and an affectionate spouse. /s
> Humans need real human connection, AI is too artificial for that (duh). Having AI friends will be equivalent to consuming fast food instead of healthy home cooked meals growing up. Yes, people that grow up on fast food are still alive, but they are less happy and have more health problems (mental and physical), but it did the “job”, that job was to fuel them.
Lots of people derive enjoyment and happiness from activities that don't involve other people, and also from pets such as dogs. Plus, if you cannot tell the difference between AI and human, it may still be good enough.
> Really don’t like the idea that we will act as interfaces for the AI
When I use navigation on my phone while I drive somewhere it feels like I'm just acting as a human Zapier, mapping the phone's audio navigation API to the vehicle's steering API.
I love the fact that your sentence lamenting the dumbification and impending laziness has a typo in it. It sort of undercuts your argument. That is, of course, unless the AI Boogeyman has already gotten to you...