Study: Consumers Actively Turned Off by AI (futurism.com)
182 points by 12_throw_away 3 months ago | 192 comments



Just an anecdote, but I had a small mobile app that charged for premium features, and (surprisingly to me) many people happily paid. Although the app had very positive reviews, it wasn't growing fast enough to become a meaningful income.

So I told myself: why don't I add AI stuff to it, like an AI assistant as the primary way of interacting? Everyone loves AI, it's the future!

Then retention tanked, along with install numbers, and no purchases were made after that. People didn't even bother leaving bad reviews.

Maybe we are in this strange situation where the people who make the products are so hyped about this new tech but the consumers are really hating it.

Like the artificial sweeteners maybe?

"0 calories and the same taste with the sugar? Why would anybody ever use sugar again, right? Lets short the sugar cane and corn fields and put all our money into artificial sweeteners production equipment and chemicals"


Honestly, everyone I know in real life bemoans AI hype. The only real love it gets is in memes (things like voice swapping, singing politicians, and so on). I'm surprised people on HN have no exposure to such people.


How many people in your life "hate AI" but love that feature in Google Photos that lets you search your photos by a person's name?

People don't generally like or dislike the combustion engine. They like the ability to get from place to place faster.


Probably when people write "hate AI" here, they mean they hate the often-useless text chatbots. Google Photos face recognition was there before the new hype and probably isn't primarily labeled "AI" by the general public.


People also like privacy, and most of the time "AI" means "your stuff is going to our servers".

I don't need or want that from most applications, especially a photo viewer.


> most of the time "AI" means "your stuff is going to our servers".

It almost always does, but only because it is offered that way; it is not a hard technical requirement. Nothing prevents offering a standalone version that runs locally for customers whose hardware is powerful enough.
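
As a rough sketch of what "local" could look like with llama-cpp-python (the model file name is just a placeholder, and whether a given feature fits in a small local model is its own question):

    # pip install llama-cpp-python -- runs fully offline on the user's machine
    from llama_cpp import Llama

    llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)
    out = llm("Summarize this note in one sentence: ...", max_tokens=128)
    print(out["choices"][0]["text"])

Nothing leaves the device; the trade-off is that the customer needs the RAM and CPU/GPU to run it.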


Almost everyone I know in real life doesn't care about that kind of privacy. They happily upload anything to anywhere on the internet. The people who do care tend to fall into one of two categories: the more tech-inclined (programmers), or the same kind of person who doesn't use online banking because they're afraid of losing their money.


Yes, but it is. It’s a well-known problem.

https://en.wikipedia.org/wiki/AI_effect

Again, you’ve gotta remember that originally, playing chess was considered an unfathomable expression of machine thinking. Now it’s a problem trivially solved in CS112 class.

It’s a lot less magic when you see behind the curtain.


It was never AI as people use the term today. Back in 2014, 2015, 2016, or whenever, it was just facial recognition. In fact there's nothing smart about it: it detects the face, and then you put a name to the face. It does nothing else.


I'm old enough to remember when the ability for machines to recognise faces was in the same category as flying cars: tech demos in the news, but you couldn't use them in any practical sense.


A constant theme in AI is that as soon as something is achieved, it becomes "no longer AI", just tech. That's kind of my point. Nobody cares what you call it or how it works, they care about the actual value they get out of the feature.

Also, back in 2014, it wasn't "just" facial recognition. It was the dawn of the new age of Deep Learning, bringing capabilities that had never existed before.


> love that feature in Google Photos that lets you search your photos by a person's name?

Is that even considered AI by the hype/marketing machine? It's been around longer than these new language models.


AI is more than these new language models.


Sure, but what car maker is yelling "Combustion engine in here, it's SO AMAZING!" - and would you trust one who is?


A lot of the ones with amazing combustion engines are a bit pricey. The Koenigsegg Gemera engine is cool.


Jeep Grand Cherokee Trackhawk has a combustion engine, it's SO AMAZING!


I bemoan AI too, but it's been disheartening watching the hype spread everywhere. It feels like everyone is building an AI startup.

Maybe I'm naive, but it doesn't feel like we're done building the boring stuff yet.


We're not done building the boring stuff or solving the hard problems either. One is, well, boring and the other is...hard. Easy enough to proxy out prompts to OpenAI for your next funding round, though.


https://news.ycombinator.com/item?id=41119245 - as I said last time, it's become a cheapness signal.


I'm a bit wary that something calling itself AI may be like Siri or some bad customer service bot.


No, this is actually exactly it, 100%. It's not a strange situation; it's just simple irrational exuberance for AI. Consumers do actually hate it. They don't want to be around it, and they don't want it anywhere.


> Maybe we are in this strange situation where the people who make the products are so hyped about this new tech but the consumers are really hating it.

This isn't that strange. There have been a number of things that people have tried to push on consumers over the last few decades that just haven't landed. Notable examples: 3D TVs, metaverses.


There’s the issue that many products seem to consider AI to be a feature in itself, and market it as such.

People don’t care about it.

They just want to do whatever it was they wanted to do. It doesn't matter whether the tool uses hardcoded empirical heuristics, a hand-crafted statistical model, AI, or leprechauns.


In my experience, the people in my friend group are wary of AI because of its potential to eliminate jobs. I think the generally negative connotation comes from this "replacement of human effort" interpretation: if something can be done cheaper and easier with AI, it will naturally supplant the effort a human could have made.

Since non-techies are often the people being displaced in this scenario, it doesn't sit well with them when AI shows up in a product they like.

(For what it's worth, I mostly agree with this sentiment.)


The current wave of AI stuff is likely not indicative of how AI will be used in consumer-facing businesses in the future.

At the moment, it’s there for all to see, be it as a chatbot or whatever - but the real applications will be under the hood, not directly interacting with the consumer, doing everything from market microsegmentation to tailored recommendations to diagnostics, credit scoring, criminal profiling, fraud detection, and all of the things that currently rely on sub-optimal human-crafted algorithms or even hand-cranking.

So all in all, what consumers think of AI is almost neither here nor there, as they are not the primary market - business and government are.


> The current wave of AI stuff is likely not indicative of how AI will be used in consumer-facing businesses in the future.

wut. the first real deployments we've seen of LLMs are chatbots, esp. those aimed at consumers. hence all of the "sell me a Chevy for $1" memes.

> At the moment, it’s there for all to see, be it as a chatbot or whatever - but the real applications will be under the hood, not directly interacting with the consumer, doing everything from market microsegmentation to tailored recommendations to diagnostics, credit scoring, criminal profiling, fraud detection, and all of the things that currently rely on sub-optimal human-crafted algorithms or even hand-cranking.

That's already a thing, and is what FB and GOOG (or banks like Ant Financial, etc.) have been making money off of for years.


In other words, its job will be perpetual-bias-perpetuator! I cannot fucking wait.


Could be. One of the annoying things is that when the AI has a bias people agree with, they say "algorithms can't be biased!", yet when the AI has one they do disagree with "it's so unfair they've made it woke/racist [delete as appropriate]".

Reality is difficult, AI is difficult, statistics is difficult, everything is difficult. Accidental bias is almost inevitable no matter what, and that's one of the things GDPR is there for, to make sure errors can be fixed.


The EU AI act that went into force today also spells out some limitations.

I think it would have been fun to require companies using AI to prove/show exactly why it makes a determination or decision, upon request, if it's being used to make decisions where we say that such biases should not be perpetuated.

That this is burdensome or not yet possible with current LLM limitations is a problem to solve (or "tough shit") for the companies creating or using this technology; a necessary reckoning that they should have been forced to face from the outset.


>Like the artificial sweeteners maybe?

>"0 calories and the same taste with the sugar? Why would anybody ever use sugar again, right? Lets short the sugar cane and corn fields and put all our money into artificial sweeteners production equipment and chemicals"

No one ever thought this because low calorie sweeteners taste different than refined sugar. Even cane sugar tastes different than high fructose corn syrup. Also, they are all chemicals. Everything is. Stevia is even extracted from a plant, just like cane sugar or corn syrup.


It's a hypothetical example of a product that's supposed to replace something we already use widely, but that has an issue like being too expensive, rare, or hard to make.

I don't know much about sweeteners; for me it's this thing you can use instead of sugar if you're concerned about calories.


It is incredibly obvious what is colloquially meant by “chemicals”. There’s no such thing as a fish. Blah blah blah. Don’t be tone-deaf in the name of scoring an extra point online.


It's obvious, but worthless. It's an entirely subjective metric, easily gamed by marketing, and correlates poorly with things that actually matter.


Other than to baselessly denigrate, it is not obvious to me what is meant by “chemicals”.


A working description might be some combination of:

- Synthesized or highly processed versions of, or replacements for, a normally natural or well-established substance, often to cut cost. E.g. high fructose corn syrup, artificial flavors, rayon, chrome-tanned leather, etc.

- Synthesized or highly processed substances that promise to provide the advantages of a natural or well-established substance without any of the drawbacks. E.g. sweeteners without calories, waterproofing sprays that promise breathability, anti-stick coatings that don't require the care of cast iron, replacing metal with lightweight plastics, etc.

For me it tends to come down to two rules of thumb that my life experience has given me:

- Highly processed products (particularly food) tend to be unhealthy.

- There is no free lunch. If some new material promises to eliminate the drawbacks of an existing material, then it probably has different drawbacks that you don't know about yet.


> There is no free lunch

This is just a heuristic, and honestly barely that. It's a belief.

Often there really is a free lunch, and that's just progress. Sometimes I think people like to pretend otherwise because it's comforting to deny yourself convenience.

Aspartame really is zero calories. And it really is safe. Yes, much safer than sugar. No, it doesn't magically make you gain weight. No, it doesn't cause cancer (feeding rats 500 coke bottles worth of aspartame doesn't count!). Yes, it's less carcinogenic than red meat.

From a health standpoint it's better in every single way. Of course taste is another thing all together, and subjective. Point being, yes things CAN just be better.


But low-calorie sweeteners provide a rationalization for overconsumption instead of changing behaviors. To me, every single one tastes terrible. But worse, they send a confusing signal to your gut, because the taste of sweetness naturally corresponds to easy calories; when those don't arrive, you don't get satiated. This dovetails with the rationalization, significantly contributing to the obesity epidemic. Free lunch what?


Are you sure your claim is true? I drink a 12oz low calorie sweetener soda maybe once a week, and use low calorie sweetened chocolate chips (Bake Believe, uses erythritol) for things like cookies or pancakes (also eaten only once in a while).

I eat no more than I otherwise would, and I avoid the spike in blood glucose.

I see a lot of negative claims around low calorie sweeteners, but no proof.

However, I see irrefutable proof of negative effects of sucrose and glucose and fructose most times I am in public, and in the diabetes and obesity statistics.

And I have also read credible reports about the sugar industry corrupting research to avoid being in the crosshairs. As a result, I would not put it past them to try to vilify low-calorie sweeteners either.

https://jamanetwork.com/journals/jamainternalmedicine/fullar...

Obviously, excess low-calorie sweetener is bad, just like anything else in excess. But the question is: is a low-calorie sweetener soda nutritionally better than a sugar soda, or low-calorie sweetener candy/popsicles/etc. better than the sugar alternative?

So far, I would say the answer is an obvious yes, given what we know about the effects of excess simple carbohydrate consumption.


Claiming that zero-calorie sweeteners, which by definition cannot contribute to weight gain, are "contributing to obesity" is a BOLD claim. I'm talking science-shattering, hundreds-of-studies-disproving bold.

Listen, you can't just make things up. Just because you feel as though something is too good to be true doesn't mean it actually is.

This is like those people who think Ozempic makes your eyes melt and stuff.

The problem is that in capitalistic society, "pain is gain" has been ingrained since you left the womb. People often have trouble conceptualizing that you can get good results without suffering. People actually ENJOY the suffering because they think it directly translates into good stuff for them and their life.

It's not a healthy, or rational, way to operate. Ensuring your own misery out of a desperate need to prove yourself and your work ethic is just self-destructive behavior. Sorry to be the one to tell you.


Heuristics exist because they are useful. It's not possible for an individual to research every new chemical that comes along. Even if they do, they don't have a good way of knowing if the research they are reading is reliable. So they need some rule for dealing with new chemicals.


1. Aspartame is not new

2. Aspartame is safe and there's not a scientist on Earth who claims otherwise.

3. While we may not know NEW stuff, we do know old stuff. Sugar quite literally kills people. An alternative could, AND DOES, save lives.


The people posting about chemicals-in-the-food and stuff like that in my area definitely don't see high fructose corn syrup as a chemical - "it comes from corn and corn is grown in fields" is all I ever got out of that discussion - but definitely do see stevia as one.

Maybe the definition is more of a regional thing. Or maybe it's just more BS. In any case I don't think it's remotely consistent or concrete.


Me neither but I bought mushrooms just the other night and I kid you not the cellophane-wrapped styrofoam packet was emblazoned with a "chemical free" sticker so clearly it's gotta mean something.


What is colloquially meant by 'chemicals'? I usually take it to mean 'a substance with a name that I don't understand and that I think isn't natural and that I want to imply is not good for you' and there is no definition beyond that. It is perfectly reasonable to point out that this is irrational.


> Maybe we are in this strange situation where the people who make the products are so hyped about this new tech but the consumers are really hating it.

I think everyone is tired of conversing with a chatbot over text by now. But there are ways of integrating AI without making the primary interactions a pain. I'm also a bit puzzled by why people thought it was a good idea in the first place.

Using a chat box for primary interactions? Nope. But using that same AI to quickly summarize reviews for a product into a digestible format that I can easily glance at, or ignore? That last one actually saves me time and doesn't harm my user experience, so why not?
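
For what it's worth, that kind of review digest is a thin wrapper around any hosted model. A minimal sketch with the OpenAI Python client (model choice and prompt wording are just examples):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize_reviews(reviews: list[str]) -> str:
        # Condense raw review text into a short, glanceable digest
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Summarize these product reviews as three short bullet points."},
                {"role": "user", "content": "\n".join(reviews)},
            ],
        )
        return resp.choices[0].message.content

The AI stays out of the interaction loop: the user sees a digest, not a chatbox.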


Most LLM-based chat user interfaces are just so terrible.

They seem interesting on a surface level, because it feels like they should be able to help with whatever issue you have, and the demos make it look as if they do, but in reality, they almost never do anything. They just pretend to be a human who has the ability to actually do something.

When Microsoft showed Windows Copilot, what they showed made it feel like you could do things like tell it to "create a new user account with the name John and the password 285uoa29tu and put the picture of a dandelion as the profile picture," but you can't. It can't really do anything, other than give you (often misleading) advice on how you can do things yourself.

People have learned that. They have learned that these chat LLMs are just a facade used to waste their time, a simulacrum not of a human being, but of the outward appearance of a human being. It's another hurdle companies put in place for people to jump across before they can talk to an actual person who has the actual ability to do something for them.

So when people see "AI", they don't see "helpful tool," they see "an obstacle I have to get rid of to actually get anything done."

Who would want to pay for that?


I agree, IMHO people don't like to interact with AI, but they love it when the tedious work is done by AI.

The interaction part is cool at first, until your curiosity about the tech itself vanishes. The current AI tech has some very useful use cases and it's here to stay, but it's not replacing human interaction or the human-computer interface, as I previously believed it would.

The AI Pin, the Rabbit, and who knows who else failed in attempting to replace human interaction or screen UI with speech or text.

Chatbots look deceptively capable, but they are completely useless when it comes to holding authority and trust.

A real human will generally provide you with information that's accurate to the best of their knowledge, and when they fail at that (sometimes in bad faith) it's considered a big deal. It can lead to anything from anger, to never trusting that person again (and ignoring them when possible), and in some cases to imprisonment.

AI lying is too cheap to warrant treating it like human interaction. It's a pure waste of time for anything consequential.


Maybe because chatbots in general, not just AI chatbots, are a terrible idea in the first place? And even in the cases where the idea might be somewhat OK (instead of wading through tons of support documents or FAQs, just ask a question and get redirected to the answer), it's usually implemented terribly.

Honestly, what's the point of even considering implementing it if the only interaction will be "I want to talk to a human"?


> But use that same AI to quickly summarize reviews for a product into a digestible format that I can easy glance at, or ignore?

You want to give them more plausible deniability in lying to you about reviews?


> Maybe we are in this strange situation where the people who make the products are so hyped about this new tech but the consumers are really hating it.

I’m surprised to learn that so many product makers are hyped about AI. I just assumed that everyone is doing it because they’re being forced to by the marketing department.

Of course there are some good uses of it, mostly on the pattern recognition side; but the generative side seems best confined to its own silos, not incorporated into every product.


My tech friends mostly find it interesting. On top of that, some of them find it a little threatening; others (like me) are somewhat concerned about the ethics of hoovering up a significant proportion of human creativity and then charging people for the output of models based on it.

My non-tech friends, particularly the more creative ones hate AI and will actively avoid it. They see a way for rich tech bros to put real people out of jobs. They see ahead of them an internet drowning in AI shit (on an internet already drowning in non-AI shit), further enshittifying their day to day interactions. And they see yet another ludicrous hype cycle.

My challenge to AI-using folks is the same as it was to blockchain people - make a product with compelling, amazing features. Don't mention AI, just sell it on how awesome it is and how amazing the capabilities are. Use AI/ML under the covers to achieve that awesomeness without trying to ride the hype wave. Then you'll have achieved something.

The difference is, of course, that I imagine that AI will (and to some extent already does) deliver on that...


I travel often in Europe and see these AI assistants on many websites and apps now. Two things generally happen when I try to use them. First, nothing actionable is possible. They can't actually do anything: no trip changes, refunds, connections, or baggage tracing. It takes a significant amount of time to get to the response, "Sorry, I can't help with that, please call xxx during business hours or visit our website at xxx." Second, they invariably end up as marketing funnels, with upsells offered in place of solutions. I see that as the main source of anger from others in airports: they try to deal with something and end up in marketing loops.

When I see AI assistance as a travel feature I assume it is not only going to be useless, but actively disruptive to my experience.


“Yet another layer I have to fight through before I can speak to someone who may actually be able to do something useful”

Yup.


> they found that products described as using AI were consistently less popular.

Branding the mistakes LLMs make as "hallucinations" was sort of brilliant in my opinion; it was a good way to disguise the fact that LLMs are mostly just really lucky. In my anecdotal experience it hasn't worked out too well though, so maybe it wasn't that brilliant?

Anyway, part of how AI has impacted our business (solar energy + investment banking) has been through things like Microsoft Teams supposedly being capable of transcribing meetings with AI. Now, we're an international organisation where English is at best the second language for people, and usually the third, so this probably has an impact, but it's so bad. I know I'm not personally the clearest speaker, especially if I'm bored, but the transcripts the AI makes of me are so hilariously bad that they often become the foundation for the weekly Friday meme in IT. Which may be innocent enough, but it hasn't been very confidence-building for the higher-ups who thought they wouldn't have to have someone write a summary for their meetings, and who, in typical top-brass style, didn't read the transcripts until there was some contract issue.

This, along with how often AI "hallucinates", has meant that our top decision makers have decided to stop any AI within the Microsoft platform. Well, everything except the thing that makes PowerPoint presentations pretty. So our operations staff has had to roll back Copilot for every non-IT employee. I don't necessarily agree with this myself; I use GPT quite a lot, and while GitHub Copilot might "just" be fancy auto-complete, it has still increased my productivity quite a lot, as well as lowered the mental load of dealing with most snippets. That isn't how the rest of the organisation sees it, though: they see the mistakes AI makes in areas where no mistakes are allowed, and they consider it untrustworthy. The whole Microsoft debacle (I used ChatGPT to tell me how to write "debaclable") where they wanted to screenshot everything all the time really sunk trust with our decision makers.


>but the transcripts the AI makes of me are so hilariously bad

My experience with AI transcripts is different: I use auto-generated (AI) captions on YouTube for every video, and while there are certainly some mistakes, especially with names and specialized words, in general they're highly understandable. So much so that I miss the auto-generated captions when they're not available.

Even when human-made captions are available, on occasion I have to swap to the auto-generated AI captions, because believe it or not, in some cases the AI-generated captions are actually better, with fewer mistakes! I find that rather impressive on the AI's side.


> but the transcripts the AI makes of me are so hilariously bad

They improve significantly with microphone quality. I have a professional voice recording setup I use with Teams and the transcripts are usually around 90% accurate. A tool like AWS transcribe tends to get 95% on my voice work.


Important clarification: consumers are turned off by AI marketing material.

I'd be interested to see real studies of consumer satisfaction with AI features in existing products. My gut feeling is that people don't like (visible) AI in things they use, but that's biased by my reading online reports about the failings of these features. I wouldn't be too surprised if it turns out people mostly like them.


I am certainly turned off by “AI” customer support. The ones I’ve encountered are actively terrible, and I’d rather click through a tree of first-level fix-it-yourself advice than get walked through it by a stunningly poor AI.

But sure, it would be spiffy if Siri was more capable, as long as it didn’t become susceptible to injection attacks from my photo album in the process…


When The Weather Network first included an AI chat function, one of their sample questions was about stargazing.

I asked the exact same question to see what kind of response it would give, and it said that it couldn't answer because weather does not affect star gazing. Really?

I took a screencap, because this was such an epic example of why companies should actually test things before giving public access to them.


Can you actually test a component that is, by design, a black box? ("By design" of the current models, that is. Yes, the systems could be remade traceable. But then it would also come out how much pirated material the LLM was trained on, and hoo boy would the IP lawyers show up. That's the con: "if it's a black box, you can't prove whence the corpus, nyah nyah!")


I brought this up in a previous HN thread, it's what I call "probabilistic UX".

https://news.ycombinator.com/item?id=39954719

We don't have much experience building automated systems that are non-deterministic. Normally, in computer engineering, if we couldn't predict the results of an operation, we'd consider that operation buggy, broken or at best, flaky. Building consumer user interfaces for black box magic is a whole new ballgame.


"It sorta kinda works, most of the time, when it doesn't, Retry Reboot Reinstall^W^W, and if it's still broken, you're SOL - and thats by-design, we're not fixing the product." The word you're looking for is enshittification.


> We don't have much experience building automated systems that are non-deterministic

Our brains evolved to turn chaos into patterns that help us survive. Determinism is good from that point of view and nondeterministic behaviour is discarded as useless. Nature in general operates in a deterministic way, a banana tree doesn't randomly produce kittens. AI and LLMs in particular are nothing but misinformation and IP appropriation systems. No wonder people reject them.


I bet that if you included an expensive verification step by GPT-4, a lot of errors would go away.


It always circles back to "follow the money":

- the system could theoretically trace the inputs, but then the IP lawyers would eat the company alive; so it pretends the corpus isn't pirated, so it can stay cheap

- the system could verify and re-check, but that would require a massive compute increase; so it spouts bullshit, so it can stay cheap

- indeed a lot of answers around here go with "let's waste money by throwing it at the system until a human likes the result" - not cheap, but profitable for the provider


Our "testing" involves sending it 50 prompts and seeing if it bullshits too much and revising the prompt until it bullshits to acceptable levels.

I suspect that's how the model makers test it as well: just fling crap at the wall and see which wall lets the most crap slide off (but never all of it).
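
A sketch of that loop as an actual harness; every name here is a stand-in (ask_model is whatever endpoint you call, looks_wrong is your domain-specific sanity check):

    def bullshit_rate(prompts, ask_model, looks_wrong) -> float:
        # Fraction of eval prompts whose answers fail the sanity check
        failures = sum(1 for p in prompts if looks_wrong(p, ask_model(p)))
        return failures / len(prompts)

    # Revise the system prompt until this dips below your tolerance, e.g.:
    # assert bullshit_rate(eval_prompts, ask_model, looks_wrong) < 0.05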


This is called Fuzzing. And it is a major component of software testing. Especially in infosec, but I digress. The reasoning behind it is exactly the same.


Good call. A case that is actively in court right now is based on the point that you can sorta remake "Johnny B. Goode" with Suno.

However, to do so, you have to already break copyright law by feeding it copyrighted lyrics.

I did their experiment, but instead of "Go, Johnny, Go, Go Go!" I put in lyrics like "Run, Forrest, Run, Run, Run" (a Gump reference) and described the music style accordingly.

It came out exactly like you'd expect. And the case demonstrates a mirage: if you steal the song's actual lyrics and describe the style, you'll get something akin to what the artists might have written... except there's then no way to prove the model has even heard the original song, because you gave it the lyrics and the style, and the rest is bound to be similar as a result.


Every time I hear that something contains AI, it sounds like the feature's outcome will be uncertain, especially with more experience. If your feature were good, you wouldn't have to mention AI to show it's awesome.

More often than not, AI is used as an excuse for a feature to be bad.


I tend to agree. If you asked about a feature that uses some kind of ML under the hood to deliver it, you'd get different results. I think companies should urgently pull back on mentioning AI at all if they want any credibility at this point.


Yup. "do you like that you can search for a name in your photo album" will get different answers then asking about AI.


That was never previously marketed as 'AI', though (I suspect because when CV started seeing this sort of application, the memory of the _previous_ AI winter was still fairly fresh, so everyone carefully avoided the term). In marketing, 'AI' tends to mean _generative_ AI these days.


That is a problem with Facebook, where people can tag you in images even if you don't notice or approve it (you can opt out, but that's the opposite of how it should work).

Funny to see that come up again, but in a different context.


There's lots of different implementations. For example Immich (https://immich.app/) is fully self-hosted and does local recognition. It doesn't have to be an online service stealing your data.


Okay then at what point do the users become reliable narrators? The AI cheerleaders can take comfort in that AI is in the hands of the average person now—but they also have to contend with any negative reactions as well.

AI isn’t some behind the scenes technology (any more). And companies with their “AI” gimmick often highlight it with a star or something. People can chat with these things as easily as doing a Google search.


I'd buy that people like "AI" features writ large; automatic image classification on your phone, say. However, I'm not convinced anyone much likes the existing applications of LLMs (mostly terrible obstructive chatbots, and blogspam).


> I'd be interested to see real studies into consumer satisfaction with AI features in existing products

mmm... I guess....

I mean, a feature that is not 'obviously AI' is just a feature; even if you normalized it against 'normal' features with no AI in them, surely the deviation from random noise would be negligible?

In a double-blind test, you would have three cohorts: control, placebo, and treatment (i.e. AI).

Since the placebo and treatment groups would receive the same feature with or without AI, you'd be looking at absolutely nothing meaningful in the data you collect.

> I wouldn't be too surprised if it turns out people mostly like them.

How can users POSSIBLY have a positive / negative / ANY KIND of meaningful response to a feature based on the unknown backend implementation?

If you put a chatGPT interface on your product and don't put an AI label on it, I guess most people, not being completely stupid, will not meaningfully distinguish between it and the equivalent feature with "Artificial Intelligence" painted on the side. An AI chat bot is an AI chat bot. You're not fooling anyone by calling it an 'intelligent assistant' instead of 'chatGPT'. :P

I mean, I'm just guessing, you could study it. ...but, I guess it's probably not worth bothering.

It's probably more likely the takeaway here is: make your product amazing; if it uses AI, hide the fact that it uses AI if you can.


> would receive the same feature with or without AI,

I'm not sure how you'd do that, given many features wouldn't exist without AI. We don't know how to implement them that way. The only existing choices are "no feature" and "AI feature".


They're turned off by the lingo showing up everywhere, regardless of its relevance (or lack thereof). It's similar to when the Internet first started becoming commercial and people were revolted by terms like "information superhighway" being thrown around so excessively.

Eventually everybody grew into it, and we can't imagine life without the Internet. AI is having that same moment right now. The breathless hype bombardment will eventually give way to normalcy just the same.


I think this is true, but it's worth expanding, because it's not just lingo. Personally, I am sick of seeing AI generated images in blogs, advertising campaigns, and in social media posts. There's a weird uncanny valley to them that always puts me off. I think subconsciously it makes me devalue whatever the image is attached to. I might end up being negatively biased toward a blog author, or refuse to engage with a company that is using AI based imagery.

Of course, it's likely that we'll get past that point, where AI generated images aren't in the uncanny valley, but until then it's off-putting.

Recently I had an email from someone praising my open source work and it was clearly AI generated, which felt completely insincere. And so, misuse of AI in situations where it makes the human being that interacts with it balk isn't good either.

As you say the language is annoying: "AI powered" or similar language is just meaningless to most people. People are not turned off by features in software that 'just work' because there's some AI running quietly behind the scenes. They're put off by bold claims that don't actually materialise and the human/AI interaction-boundary that can sometimes lead to more effort on the part of the human.

AI is a technique used by software engineers, we shouldn't need to talk about it at all (except to each other and maybe VCs). We don't market our apps as "Relational database powered", in time we won't mention AI at all.


To me the problem is not just being in the uncanny valley: even when they are realistic they are often way too mediocre! Not just the visuals but the content as well. It's always stuff that nobody would bother drawing or staging to take a photograph. And I associate it with laziness, low effort, low quality, throwaway content.

Same for AI generated text: it's super easy to see when something is AI generated and follows a pattern.

Those patterns get quite tiring to read after a while. On hotel websites, for example, there's often AI text that is obviously converted from existing tabular data. I would prefer proper UX for the data instead of the text. With text it's harder to scan for specific information or to compare between pages.


> even when they are realistic they are often way too mediocre!

100% agree and wish I’d also made that point in my post! It’s like it’s taken the average of human creativity, so everything is average. The unrelenting blandness is definitely offensive.


You describe it like an oxymoron: extremely mediocre. Because it is based on a really big average.

Having said that, this is exactly why training an AI on AI output results in utter crap. It has been known since before this wave of AI, so it's funny to see it arise in this industry.


There are easy examples that prove both sides. Depending on your use, the current state of AI does some amazing things.

It also is being pigeonholed into areas where it is not yet ready. There are a LOT of examples out there on that.

A hobby of mine is using AI to find bizarre ways to code things that aren't the normal way of thinking of the problem. Obviously it isn't practical, but it is fun. I most recently got ChatGPT to generate Pi from a random length string of 5's like this:

~~~

    from math import sin, radians

    def fives(repeat: int) -> float:
        repeated = 1 / int('5' * repeat)
        value = sin(radians(repeated)) * (10 ** (repeat + 2))
        return value

~~~

Yes, it uses a repeating string of 5's to generate pi. Call it with "fives(18)" as an example. This is a cleaned up version of what ChatGPT gave me, but all the same, it works. A string of 5's can give you progressively more accurate estimations of pi - more 5's, more accurate pi.

A bizarre question to ask an AI, but playing like this is how discoveries are eventually made, since it is capable of coming up with working answers.

(Edit: corrected markdown copy-pasta weirdness. You can copy/paste but have to remove the extra indents HN adds. It's normal python, you know how to do it.)


This is interesting, but that's because you started with an interesting question, and probably also because you know how to code.

The problem with AI blogspam (and AI image spam) is that most of it is not even interesting to begin with: the desired result and the prompts are already terrible, so even if it were custom drawn or written by a professional, it would be crap. With AI, because of the lower barrier to entry, people don't care about spending a few cents building garbage. This is why I doubt higher-quality AI will help. :/


Spam (and porn) industries are highly opportunistic, in that they exploit every new technology, often before more productive uses are found for them. Unfortunately this is just one of many that gets treated the same way.

The professionals using AI (and even then, a minority of them) actually spend the time to ensure they're getting quality results. Even AI requires some massaging to get what you actually want. It's a tool. Not a miracle. A carpenter doesn't just bang a nail once and expect things to hold together.

I'm not a particularly good coder. I'm just interested in the oddities that can be shown through code, and am just (barely) savvy enough to see it through. I only had a vague idea how you could get pi from a string of 5's, but it wasn't very hard to get ChatGPT to code it for me once I described it.

The same can be done with a randomish length string of 9's, but I'll leave that for others to discover.


> The problem with AI blogspam

If someone can't be bothered to write it, why would someone be bothered to read it?


This. Let an AI read it; I won't bother.


But...

This code just outputs almost-zero. It clearly just sets `repeated` to a number very close to zero, takes the sin of it - which will also be close to zero - and then multiplies that by a small number.

I don't think this is how discoveries are going to be made.

What am I missing here?


It outputs a very close estimation of pi every time. The slash-* is from how HN interprets the code (poorly). Here's a corrected one using a margin:

    from math import sin, radians

    def fives(repeat: int) -> float:
        repeated = 1 / int('5' * repeat)
        value = sin(radians(repeated)) * (10 ** (repeat + 2))
        return value
    fives(18)

Cut/paste that into your Python REPL, and make sure the lines are intact without the extra indents HN adds. Change the 18 to any int you want (except it overflows at 308... easy to fix, but not worth bothering).

It might be close to zero... but it's much more like 3.14... when you run it.


Yes, that does fix it.

But using `sin` and `radians` means that your equation to generate pi already has pi in it.


Sure. But make the connection with the 5s. It isn't just copying that pi; it gets there by a very indirect route.

You can do the same with 9s, for the same reason you might be picking up on if you don't trivialize that there's another pi in the mix. You can't escape it when you're talking about circular things, regardless.

Every proof or non-coincidental estimate of pi (i.e., not 22/7) out there has pi embedded in the proof somehow. Dismissing it is like saying the Pythagorean theorem is bollocks because the angles also add up to the same number of degrees. Even though you're describing it in lengths, the degrees don't disappear. This code is very much like that, just not triangular.


The thing is that in this particular case it does in fact just copy the pi. You're essentially calculating radians(x)*180/x, which by definition is equal to pi. The radians function is doing all the heavy lifting, unlike say using the Taylor series or some other approximation approach.


It is doing this: sin(555555555555555555), where the 5's are some number x of 5s that increasingly converges on pi, and it calculates the correct decimal placement.

It's not (quite) as trivial as just copying the pi.


Nope, that is not what it does. First, sin(x) ~= x for small x. Second, 1000.../555... = 1.8. Neither of these has anything to do with pi. With these in mind, the program can be simplified as

  sin(radians(repeated)) * (10 ** (repeat + 2)) = radians(repeated) * 100 * 1.8 / repeated = repeated * pi / 180 * 180 / repeated = pi
The main and only reason you get pi in the end is that the radians function is defined as radians(x) = pi * x / 180, which requires knowing the value of pi to begin with.

So this program is basically an equivalent of

  multiply_the_argument_by_pi(555) / 555
except with a few layers of completely unnecessary math on top of it to obscure the magic trick.


If that was the case, any number other than 5 would also work. Try that.


Any number works if you adjust the multiplier to preserve the 1000.../555... = 1.8 property. For example: https://www.online-python.com/Um4qevAzo2
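
Here's a sketch of that generalization for any repeated digit d; the multiplier 2 * d * 10**(repeat + 1) reduces to the original 10**(repeat + 2) when d = 5 (I haven't compared this against the linked snippet, and repdigit_pi is just an illustrative name):

    from math import sin, radians

    def repdigit_pi(digit: int, repeat: int) -> float:
        # 1 / ddd...d ~= (9/d) * 10**-repeat, and sin(radians(x)) ~= x * pi / 180,
        # so multiplying by 2 * d * 10**(repeat + 1) cancels everything except pi
        repeated = 1 / int(str(digit) * repeat)
        return sin(radians(repeated)) * (2 * digit * 10 ** (repeat + 1))

    repdigit_pi(7, 18)  # ~= 3.141592653589793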


Ah, that's a missing link I hadn't thought of.

This was a result of converting degrees to radians. There is another way as well.

    from math import sin, acos

    def nines(repeat: int) -> float:
        repeated = 2 / int('9' * repeat)
        value = sin(repeated) * acos(repeated) * (10 ** repeat)
        return value

    nines(18)
I'm sure this can be edited to accept other values in the same way.

Took a while to convince me, but you got there.


Hint: it works just as well if you replace the acos argument with 0

    value = sin(repeated) * acos(0) * (10 ** repeat)
(keep in mind that acos(0) = pi/2)


Lately I’ve been seeing Reddit ads with clearly AI generated marketing images and it just makes me assume the company is an unserious fly-by-night operation and/or a one person company.

The most embarrassing recurring ad I’ve seen lately is for some kind of device management IT solution with AI generated MacBooks where the Apple logo is completely mangled. No idea how you could overlook that or notice it but somehow think it doesn’t cheapen your company’s image.


> it just makes me assume the company is an unserious fly-by-night operation and/or a one person company

Exactly this. Any serious company would manage their brand seriously. Even if you're a one person bootstrapped startup, you should take your brand seriously and get some artwork commissioned. It isn't that expensive and makes a massive difference to how you're perceived.

Using AI generated images says to me (rightly or wrongly) that you take shortcuts. So, if you take shortcuts with your brand, what's your product like?


Whilst watching the Olympics I've seen an eToro ad a few times that is very clearly AI generated; it wasn't exactly a confidence builder for a financial platform.


Absolutely, AI's impact is clear-cut.

When it works seamlessly, it fades into the background, letting non-engineers enjoy our creations. But when it falters, it distracts and adds no value.

Our focus should be on delivering flawless creations that provide real value, regardless of the technology behind them.


The generations are so basic that they make the brand look cheaper.

You pretty much have to be at the cutting edge of generative AI to generate images, etc., and even that may have a shelf life.


This is absolutely true.

Suno is an AI music creator. Regardless of music style, I can immediately identify it. There are some very obvious (to me, anyway) musical choices it makes with both notes and their delivery.

Rick Beato (YouTube music personality, for lack of a better description) pointed out that his kids can tell, but he can't. I think, at least with Suno, the moment the sound clicks, you can't not hear it when it's there, whether the song is pop, metal, country, classical... it has that specific sound that can't be ignored once you notice it.


Try out Udio. I was absolutely blown away by it.

It's clear that it has been trained on everyone's real output, ever, and their "You can't request the styles of real artists" is just a figleaf.

But the output is impressive.


The idea that you can insert words like "Doors" and hope it uses bits from The Doors without doing so knowingly is pretty much the opposite of a figleaf.

People often put things like that in their prompts, and confirmation bias tells them it worked when they run it a few times and get something that sounds like The Doors, even though the meaningless bit of prompt you gave it was simply ignored. The similarity is because of your words and description of the style. Period. Writing "Yngwie" in the prompt will get you arpeggios, but they sound nothing like him. Just as another example. Because the prompt is giving the AI genre cues only, and nothing so specific as actual Yngwie arpeggios, scales and ladder riffs.


Not convinced, sorry.

The “figleaf” is the service saying “Oh we can’t use artists in prompts, here, let me use a prompt that looks suspiciously like a set of tags from a specific music site instead” nod wink. And then the underlying model quite often fills in a well known voice and style.

And that’s if the service spots the artist was in there and substitutes anyway, which it doesn’t always.

So regardless, prompts may or may not be giving detailed Yngwie arpeggios to the model, but the model is clearly trained on them and can reproduce eerily similar music and voices to any given artist when given the right triggers.

It’s (IMHO) clearly plagiarism on a massive scale.

Great fun to play with though.

(To be crystal clear, my contention is not “they are doing some dodgy prompt stuff to make their model produce known artists”, it is that their model is very good at spitting out massively ripped-off voices and styles. It is their attempts to cover this with “we don’t allow you to specify an artist!” that are the figleaf over the nature of the model)


Udio seems to pull it off more often.

The other day I had someone mention to me that their copywriting prompts were still looking basic... and right away I knew their process was basic or dated.


Udio is trippy in its own way like this.

Sometimes it sounds like every singer's voice that fits a genre can be heard at once, unless you specify one.


I disagree, partially. Yes, we are tired of the hype, but we are also tired of the focus on generative AI. I don't want AI to write stories, make pictures or be my girlfriend. Those are things best left to humans. I want AI to optimize my home or business's HVAC, help with database administration, tell me what I can cook with the random stuff in my fridge, maybe even help with project management. We're getting there, but the spotlight needs to turn from the "creative" tasks to the boring administrative tasks we don't want to do.

We're not sick of AI. We're sick of it being used for the wrong things.


Sure, but the companies have a different plan. The creative and skilled tasks are also where you have to hire the most expensive employees to do the job, and so cutting them is a dream for most investors. From a pure business standpoint the things "best left to humans" are the ones where the human is the cheapest option for the same result. Nothing more, nothing less.


You are correct. My comment could also be read as "we are sick of companies putting profits over people with this disastrously cancerous 'growth at all costs' mindset." The AI boom is just the latest boil on our collective behinds, with that in mind.


I 100% agree with you; it shows the limits of the system we've built, and something might need to change.


Your desires are exactly where I've been working. I've been considering calling the company "Boring AI: productivity tools for real work" or something like that. I've been integrating LLMs into the internal structure of applications, so you can actually tell a spreadsheet bot "hey, I need this DNA table used to color a graph using this address table of people in this area" and a spreadsheet with graph is generated in under a minute. Likewise, I've integrated a collection of 'bot attorneys that can provide serious legal advice, and is in use at an immigration law firm.


This sounds great. If you have a development blog or GitHub, etc., I'd love to follow along.

Also, a random idea for the company name: EnnuAI. A play on "ennui", promoting something like "hey, we do the tedious and boring stuff so you don't have to!"

Not much of a marketer myself, but thought I'd throw it your way.


I don't have a dev blog; it takes too much time to keep up to date, and my GitHub repos are private. But I discuss what I'm doing in a few places: https://blog.blakesenftner.com/blog/13 https://www.linkedin.com/pulse/my-cat-really-concerned-ai-bl...


When all you have is a hammer, everything looks like a nail. This definitely applies with the current state of AI.


> Eventually everybody grew into it, and we can't imagine life without the Internet.

Because a lot of us are addicts.


An essential tool for communication, education, work, entertainment, and commerce... The Internet or AI am I talking about?


Aah, if AI is gonna be that, it hasn't reached that point yet.


In 1994, you would have said that about the Internet. And it was only starting to look important when Y2K came about. Then everybody got it.


Ok.. I’m sure I will feel embarrassed in five years when AI is that important for saying that it wasn’t essential in 2024 (but could eventually be essential).


Guilty as charged.


...as soon as the hype wave recedes and vendors stop sticking the current-hot label onto everything. "MOUSE WITH AI" (*with a button that opens a browser window, how very innovative. But wait, it's a browser window pointed at an AI assistant!!!)


Guitar players know this as well. A lot of pedals and DAW (digital audio workstation) plugins electronically mimic old analog hardware. Everyone claims the software version is crap, not realizing just how much of their favourite music is using it.

I saw an interview with Tony Iommi (Black Sabbath) and right behind him was a rack with a Pod Pro amp simulator. I recognized it because I have one, and indeed, you can easily dial in his tone and typical amp sounds. But I'm left wondering how many Sabbath fans noticed that he's using digital modeling these days, or if they can even tell when he started doing that.

People don't like change. It takes a lot of marketing hype to create motion for change to happen. And most people won't even notice it happen. Just like with Sabbath albums.


Poorly used generative AI is obvious to spot, though.


The Logitech app has a chatbot, for some god-forsaken reason.


That might have some use cases. I may not be the target audience, but I could see it. What I was trying to say is that "a mouse button that opens up a hardcoded window!!!!" was all the hype in 1998; all it takes now is to retitle the window, and it's all brand new. Three decades of PROGRESS :D


The Internet was usable from early on and became friendlier to less technical users as time went by. AI-generated bullshit output is of no use whatsoever, and customers are turned off by being told it's the future when this shit doesn't work. Remember Messenger chatbots? Where are they now? The same fate awaits AI.


People were turned off by the text-only Gopher and related protocols. It improved, and we now have the modern web.

Messenger-style chatbots abound today. They're everywhere. As are Twitter and YouTube bots, the grown-up versions of the IRC bots of days gone by.


AI is being pushed everywhere, and I couldn't hate it more. I don't want an AI assistant. I don't want to "talk" to a computer. I don't want a company diving mindlessly into the next trend just because they're afraid of being left behind. As others have said, it's a sign that a company doesn't really know what it's doing. I hear executives brag that they put all their emails through an AI assistant. This just tells me two things: they're apparently bad at articulating themselves, and no one is actually reading their emails.


"If you couldn't be bothered to write it - why should I be bothered to read it?"

- Some Internet Wag.

That's the thing, isn't it. AI output is so low-effort that it may as well not exist at all. Just send me the prompt instead, if you're going to do that.


https://i.imgur.com/k1dIVRr.jpeg

When an entire culture is built on constant pointless pretense, is it really surprising that there is considerable effort put into optimizing it?


I don't think it's disillusionment with the tech. Consumers barely touch the tech. It's disillusionment with the marketing.

I think this goes beyond just consumers. If you listen in on a random company's strategic planning... any project with "AI" in its title is likely to be bullschtick.

"AI enabled" is our 2024 "now with electrolytes."


I wonder if the same sentiment applies to products using .ai TLDs.

The thing about AI is that it reminds me of what Steve Jobs said about speeds and feeds.

People care about "1000 songs in your pocket," not "30GB HDD." AI seems to be the "30GB HDD," and people don't always relate AI to how it's going to help them.


We keep saying "Any sufficiently advanced technology is indistinguishable from magic" while forgetting that technology only becomes good when we don't think about the technology anymore. It works like magic.

How many users know the difference between MFM and RLL hard drives? Now keep progressing the technology from there: people only know and care that drives got bigger and faster. Do users care whether the file system is self-defragging or not?


Every time I see a .ai TLD I just assume it's owned by a grifter.

It saves time.

*shrug*


AI becoming associated in consumers' minds with cheap, useless garbage is the best thing that could have happened.

AI is the shovelware Wii game of the 2020s.

I wonder what this will do to the YC S24 batch... https://www.ycombinator.com/companies?batch=S24


Wow, basically every single company there incorporates AI into their pitch or even name somehow...

I mean I respect the grift I guess, but I'm not surprised in the least that people are quickly growing tired of it all. Some of the ideas I'm seeing on that list are just straight up idiotic, but since it leverages AI somehow it's suddenly a good idea, I guess...


> AI is the shovelware Wii game of the 2020s

Man, I wish shovelware games had been a temporary problem we grew out of. The Switch store is still riddled with them, and the PSN store has some obvious grift as well. All of the "Jumping <Food>" SKUs that are just achievement fodder, for example.


LLMs have solved the problem of blithering at scale. Unfortunately, this mostly benefits advertisers.


The use case for AI is spam.


It hasn't even gotten as uncanny as it could be yet.

Somebody must be fast-tracking straight to enshittification.


This does not surprise me; this is a new technology that people are wary of. People are far more aware of marketing techniques these days, and know that when a device is advertised as having trendy feature X, it probably doesn't have anything to do with the meaningful aspects of that feature.

Consider how many atomic-themed products there were in the 60's. We can be thankful that the only way most of these were actually atomic is that they were made of atoms. The naivety of those days has gone. Information (true or not) travels so quickly now that everyone is a bit jaded. Many are outright nihilistic.

It's worth noting that people's opinion of AI in a product is distinct from the actual AI itself.

I'm not in a position to find the reference right now, but I remember reading about a study which showed that when people were given artworks to judge, they felt the ones they were told were by AI were inferior. This was independent of whether each piece was actually by an AI or a human.


The fact that every C-suite in America simultaneously decided "We need AI in our product. Look, just jam it in somehow, you'll make it fit" is, I'm sure, unrelated.

AI is a keyword for investors currently, that's all. Like all the companies that previously sprouted a blockchain unrelated to their core product.


https://www.ycombinator.com/companies?batch=S24

Someone else linked it above, but after seeing this, yeah no wonder people are getting tired of it. Basically every single company in the batch mentions AI.


I've even seen AI being used as an argument for destroying more wilderness in order to build power plants – so we can "stay ahead of the AI revolution with green power", just like at some point blockchain was.


I really like something like GitHub Copilot and Copilot Chat, because they help with boilerplate and simple functions, as well as lower the barrier for doing some exploratory work and iterating. Something like Phind is even better in those cases where you care about looking into the actual sources that are returned, as opposed to just testing the output for your needs.

I also like general purpose chat/writing/instruct models and they're a nice curiosity, even the image generation ones are nice for either some placeholder assets, texture work, or some sometimes goofy art. HuggingFace has some nice models and the stuff that people come up with on Civitai is sometimes cool! Video generation is really jank for now, I wonder where we'll be in 20 years.

Overall, I'm positive about "AI", but maybe that's because I seek it out and use it as a tool, as opposed to some platform shoving it down my throat in the form of a customer service bot that just throws noise at me while having no actual power to do anything. There are use cases that are pleasant, also stuff like summarizing, text prediction, writing improvements, but there are also those that nobody asked for.


It means unreliable. I saw an ad for a cool scheduling app. Once they mentioned it's "AI powered" (why??), I started worrying about it randomly missing events.


There is value way beyond the immediate service given in a human to human interaction; we're social animals after all and tend to derive a form of pleasure out of "being with others". In the same sense that granny went praying the rosaries Saturday evening before mass, because that also got her the coffee and cake with her friends ahead of that, and the chats afterwards. Just like the person at the till chatting with the cashier, the person talking to the surgery's receptionist (and not just about booking in the doc), or the talk with the pharmacist about how and when best to take the prescription medicine (and since we meet monthly for that anyway ... give me the gossip and the events in town as well) - this used to happen because, quite frankly, "efficient" communication and "efficient" human to human interaction is not what we seek. It'd be rude.

And that's where the whole chatbot thing, AI or no, falls a little flat. No matter how smarmy or even actually helpful that chat thing is, it doesn't shake my hand, won't do me favours, probably won't take me to the shops after work, and won't ask me whether I need help or why I look sad. Nor compliment me on my eyes.

I don't know; at least in me, the herd animal roars loudly enough that I feel unvalued whenever I am forced to interact non-physically. The distance can be felt. And to me, that is not a "protective distance" but an "excluding" one. Makes me more lonely.


Exactly. Beautifully written. Relatedly, a friend is working on an AI interviewer and when I tried her prototype, it felt extremely dehumanizing. "But it's efficient."


As a term, “AI” has become synonymous with “new”. And “new” has become synonymous with “good”. We are in an age where age is a liability. If it’s not new it’s old, and old isn’t good. And it’s all sad because it’s simply not true. But marketing teams wide and far are pushing this narrative. And I think companies are encouraging their marketing teams to do it because it’s an easy way to dress up a half-baked product, hiding it behind a buzzword.

But I don’t think it’s a signal to consumers as much as a signal to investors and competitors. Since when does the consumer care about which technology was used to build a product? Has a consumer ever said “wow this product is so good it must have been built with Rust!” The bottom line is it shouldn’t matter and it doesn’t. People need features, not technology, even though many of them confuse the two.


Once you look past the initial reaction of "oh cool a computer can do this!?" You quickly start running into the limitations. And if you're in a product team building some AI feature (aka calling an OpenAI API 90% of the time), you realize even more how laughably useless it is in actuality.

For example, Notion. They have an AI feature, and it's hilariously useless. It can barely even summarize things, let alone write the rest of the document for you. Yet they push it constantly onto you with no way of disabling the crap.

What I have noticed is that the marketing people are in love with it, because it lets them generate the useless drivel they spam people with more easily than before. I suspect they never used their brains much, but now they don't even have to at all!

Coincidentally, scammers also love it for similar reasons...


Same here. I know a number of sales and marketing people who just _won't_ shut up about ChatGPT, trying to get me to use it more in the way it's useful to them. I know for a fact that there's nothing in it for them promoting it to me; they're honestly just in love.

They do use it to write drivel. They write promotional posts, mass emails and LinkedIn requests, briefs for their conferences and events... The typing speed of one of these guys was about two words per minute; ChatGPT probably really does save him hours. I noticed he mostly uses voice input. When I say something like "I want to think about what points to make," he gets confused; he just asks ChatGPT what points he could make. He doesn't even think about that part anymore.

I can certainly see how those kinds of people will push product teams to integrate assistant features and happily promote them. It really does seem to be life changing for them.

Personally, I can't relate though.


Think about trying to get customer service from any company now, we all end up talking to AI bots that don't actually help at all. No wonder the public perception is bad, our experience of AI is often based on these automated services which actually make our lives more frustrating.


A few days ago I sat in on a weekly MS meeting where they spruik their products.

AI/Copilot/ChatGPT was mentioned 100 times in the 34 minutes I was in it. (I was tallying, because I'm sick of it.)

And that's not including text on the presentation, just the times the words were said.


Did you tell the CEO? Tell the prick.


One thing I (and probably many others) notice about AI integration is that it places incorrect operation on the user's plate. If the user gets a wrong answer, well, just remember that AI tends to do that. People are trying to find ways to make a truly intuitive AI interface that isn't just a text box you chat with. With adequate error recognition and correction, integrating AI shouldn't be something the user notices other than the time it takes to process. That processing is another issue: integrating AI adds a time lag to those operations, which can depress user sentiment in another way.


It's quite simple. Consumers realize, either consciously or not, that saying "AI" means the added AI will make the product:

1. cheaper

2. worse

It's invariably used as a cost-cutting measure that is quick, cheap, and worse. Companies use AI art because they are too cheap to hire a real artist that would make better art. Companies use AI chatbots because they are too cheap to hire real customer service agents who could actually help people.

If a company slapped a label on a bag of chips that said "Now with fewer chips and less flavor!", I'm sure that would turn off consumers as well.


This is not at all surprising to me. In general, companies underestimate the intelligence of the consumer and are pretty bad at imagining being a consumer looking at their company. We are all being inundated with AI hype, and it's almost all telling companies "you can save money by getting AI to do mediocre-to-outright-shitty versions of this work instead of hiring people".

As a consumer, why would I choose a company optimizing for their margins at the expense of my experience? Touting AI has become a warning sign that the company is doing this.


Not surprising at all, there is a big trend of using AI for cutting costs while producing stuff with minimal quality. Once the initial hype is over, the consumers will hate how repetitive it is.


The cost is currently offloaded to the AI providers. OpenAI is apparently burning cash like crazy, in that their revenue is far lower than their expenses. For now, during its formative years, anyway.

Once the initial hype is over, we will just take it for granted that the software we're using is doing something with AI. The costs will have to either come down or be passed on to the consumer, though.

Right now, it's a free-for-all, and worth taking advantage of, if you can.


It's this simple: generative AI is poisoning culture.

Everyone knows this -- even the people pushing it. We know what it is doing to art, what it is doing to just being able to find a book on amazon, what it is doing to reviews, to website searches, to authenticity everywhere.

Splitting hairs about whether people think it's the terminology or the functionality is absurd. Stop making generative AI products that are the cultural equivalent of breaking into the community pool solely to piss in it.

People don't really want this. They have an instinct that a lot of AI products are little more than grift (trained by their experiences with "Web 3.0"). And this study is showing that.


"Everyone knows this", "People don't really want this".

No they don't. Phrasing like this is typical for people who want to persuade others that their opinions are somehow widely shared. It's a cheap rhetorical trick. Populists use this all the time. That doesn't make it any more true.

Some people might want that to be true (including you apparently). But most people haven't got a clue when they are using AI or aren't using AI and can't articulate what it is and are perpetually confused about anything technical.

You are probably right about everyone getting a bit worn out by everybody blabbing about the topic. That's because most of those people don't make any sense. And of course there are a lot of products that are pretty bad that advertise their supposed AI qualities. So understandably, people are a bit put off by that.

If you read a lot of stuff that uses phrasing like you just used, you are likely suffering from confirmation bias. That's a thing that causes people to genuinely believe they are right about something because they unconsciously avoid consuming information that disagrees with their views and seek out peers and sources of information that confirm them. Everything they see and hear confirms them in their biases. A lot of media sources actually prey on this by feeding lots of (AI generated, of course) content that deliberately manipulates groups like this.

All this study really shows is that people are tired of vague AI features that never really deliver on their promises. But they happily read a lot of stuff on twitter, linkedin, youtube, etc. blissfully unaware that they are consuming a lot of generated content that is fed to them by an AI that decides what they should consume next. All those supposed AI haters are hopelessly addicted to things like Tik Tok, Instagram, and what not.

The trick with selling AI is to not talk about it. People love magic. Just don't tell them how it works.


> But most people haven't got a clue when they are using AI or aren't using AI and can't articulate what it is and are perpetually confused about anything technical.

But you said...

> Phrasing like this is typical for people who want to persuade others that their opinions are somehow widely shared. It's a cheap rhetorical trick. Populists use this all the time. That doesn't make it any more true.

Is it only a problem when someone you disagree with does it, then? "Most people haven't got a clue" is also something populists say, I think you will find.

I tend not to write off whole swathes of people as non-technical. There's no such simple thing as a non-technical person, and people can like or dislike stuff without having a deep understanding of how it works. Generative AI is pretty widely distrusted and reviled; we know the damage it is going to do to trust and culture. It is poisoning the well of AI development. Studies like this shouldn't be a surprise.


The only thing that is going to stop generative AI is companies using it not making as much money as they expected. It's probably going to take a crash and another AI winter. Maybe I'm too blackpilled on AI already but I absolutely believe companies will just keep throwing money at it until it's literally existentially impossible to do so, just because the promise of the capitalist version of free energy (delusional as it may be) is just too great to resist.

The degree to which our culture is already largely controlled and manufactured by corporations and sold to us without our input or consent is a separate conversation we should have, collectively, because AI is just pissing harder into an already pissed in pool in that regard.


> because AI is just pissing harder into an already pissed in pool in that regard.

It is, and I agree.

But I suspect you agree with me that this is not how people feel.

If you see someone lying on the street, bloodied, and you don't help, expect to be judged by that person about as harshly as they judge the person who beat them up.

Corporate culture has never come for art and writing as dismissively and callously as generative AI does.

This isn't pop stars and TV. This is coming to the artist who has self-taught oil paintings, the photographer who has built their own artistic camera, the pencil sketch artist who has learned photorealistic sketching and saying "hey, I know your work is almost impossible to share with people already and sharing it is the thing that makes you feel good... but I'm going to make that worse by drowning it in autogenerated shit. Oh and hey, you know how you developed a style and sold a couple of things? I've trained my system so that your customers can just generate work in your style without even giving you money, so that should make you happy, right? Happy customers are good."

This is not a mere extension of corporate culture. This is a callous, systematically careless, dismissive kick in the face to people on the ground. And we all feel it, either on our own behalf as creative people, or for those artists we admire.


I'd love to see more products use some machine learning in ways that fit their current UI paradigms.

For example:

* You enter an ambiguous search term into a search engine. It shows you some results, but it also shows you some buttons to filter by meaning. For example if you've entered "universal", it could give you an option to filter out all pages where "universal" appears as the name of the film studio, without a hack like excluding "universal films", but actually deciding based on context

* If you've touched up a few photos the same way, an image processing program could give you a suggestion for a touch-up for the next photo, along with options to tweak it

* In an email program, if you've moved a few emails to a folder, it could suggest a selection of other emails to move to the same folder, or maybe suggest to you a mail rule that would do it for you in future (a rough sketch of this one follows after the list).

... and please, provide an option to switch it off.
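
To make the mail-rule idea concrete, here is a minimal sketch in Python. Everything in it (the folder names, the sample subjects, the word-count scoring) is invented for illustration; a real feature would want a sturdier model, but the shape is the point:

    # Toy "suggest a folder" model: score a new email against the folders
    # the user has already filed mail into, using per-folder word counts.
    # Folder names and subjects below are made up for the example.
    from collections import Counter
    import math

    filed = {
        "receipts": ["Your order has shipped", "Invoice #1234", "Payment received"],
        "newsletters": ["Weekly digest", "This week in tech", "Your Monday briefing"],
    }

    def tokens(subject):
        return subject.lower().split()

    # Crude naive-Bayes-style model built from what the user already did.
    counts = {folder: Counter(t for s in subjects for t in tokens(s))
              for folder, subjects in filed.items()}

    def suggest(subject):
        def score(c):
            total = sum(c.values())
            # Add-one smoothing so unseen words don't zero a folder out.
            return sum(math.log((c[t] + 1) / (total + 1)) for t in tokens(subject))
        return max(counts, key=lambda f: score(counts[f]))

    print(suggest("Invoice for your recent order"))  # -> receipts

The feature then surfaces as a quiet, dismissable suggestion derived from what the user already did, not a chat box.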


Logitech's *mouse driver* now has AI in it[0].

[0] https://www.logitech.com/en-us/software/logi-options-plus.ht...


For me personally, when I see "AI" suddenly popping up on something I am subscribed to, 8/10 times there is some sort of chatbot suddenly added; nothing groundbreaking, nothing cool (that didn't exist before), just a random chatbot!

Secondly, I am slightly burnt out from looking at all the "blockchain", "distributed", "crypto" stuff for like the last decade, and now suddenly the same products (and products from the same people) have swapped "crypto" or "blockchain" for "AI". Needless to say, my mind is so jaded from all those scammy blockchain peddlers that when I see an "AI" (e.g. TodoAI, like wtf? I just want to add some text and dates, check those off when I'm done, or just get a notification if I forgot...), I just feel like someone is trying to scam me.

Also, I noticed that some products suddenly hiked the price after adding "AI" to their feature set, which by itself wouldn't feel scammy (because inflation happened), but if I see some crappification of the product post-AI, it leaves such a bad taste that the next time I see anything with "AI" as a feature, I assume it is also crap. I know, don't judge a book by its cover, and children must not be judged by the deeds/sins of their fathers, but what can I say.

Also, most of the products are clearly proxying stuff to ChatGPT with a system prompt; you can actually sense it if you have used the OpenAI APIs. Which causes me extra pain, because c'mon, you are selling me a proxy with a prompt and charging me like $5.99/month!
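
For anyone who hasn't used the APIs, the "proxy with a prompt" really can be this small. A hedged sketch, using the OpenAI Python SDK's chat-completions interface; the model name, prompt, and "TodoAI" framing are placeholders, not any specific product:

    # More or less the entire "AI feature" of many of these products.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = "You are TodoAI, a friendly assistant for a todo app. ..."

    def ai_feature(user_message: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
        )
        return resp.choices[0].message.content

Wrap that in a billing page and you have the $5.99/month product.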


Anecdotal evidence but my reaction to seeing a Google product release mentioning anything related "Gemini AI" makes me subconsciously assume that they made the product worse by including a totally unrelated and barely working feature that nobody asked for. The only reason for most of these features is, I assume, that there's a requirement now for all PMs to ship something with "Gemini AI" in its title or they won't get a promotion ¯\_(ツ)_/¯


The good thing is that, going by Google's track record, the feature will live only a few years.


AI just means “annoying chatbots and low quality content” to most people


I think it's the association with "AI-generated slop content".

People associate the use of AI in an existing area as being a poor-quality facsimile of the real thing, whatever the "real thing" is. That, or an unnecessary addition causing annoyance (aka Clippy)

On the other hand, for genuinely new use-cases where AI is central and beneficial, I'd be surprised if there was a negative reaction. It is "new shiny thing" vs "cheap plastic imitation".

Reminds me of The Graduate "One word. Plastics."


There are no genuinely new use cases for AI


High speed of generating content is new.


SEO farms have existed previously, and as far as I can tell, their unwilling consumers weren't really asking for more at higher bandwidth but with even less accuracy.


For one predefined type of content maybe, but not on the kind of quality and versatility that appeared in the last year or two. You couldn't just randomly say "hey Google, code me up a PowerShell script that does x, y and z" and have a back-and-forth discussion. This is completely new.


But the content is _bad_. Nobody, or virtually nobody, wants an increased supply of bad content. There's already far more than enough of that.


I think, in some ways, it is just annoying for people to see it 'everywhere.' Some see it as a buzzword without substantial value.


AI is not being marketed to consumers. AI is being marketed to investors.


I'm building a flashcard language learning app, and the Meta ad with the phrase "AI-powered" performs better than all other variations, so it depends.


My first association with flash card apps is that it's extremely tedious to create the cards for what you're trying to memorize, so if AI can help with that, that may be worth it.


As far as I can tell, AI is only useful for:

1. Making really, really, REALLY, shitty "art"

2. Writing nonsensical and boring short stories or bland written-by-committee memos that make you sound like a soulless AI

3. Creating summaries that are pathetic compared to the first paragraph of any wikipedia article on literally any topic (also, they are often 100% wrong)

4. Acting as an "assistant" that is actually a useless (worse than useless, really, being both useless and time-wasting) wall between you and a human who can actually do something

5. Looking at images and telling me if there's a cat or a fruit in them

6. Being a worse chatbot than ELIZA was 50 years ago

7. Writing code that, if it's anything more complex than something you could copy and paste from Stack Overflow and have just work, takes more time to fix than it would have taken to write yourself

But it is very good at bombarding users with an infinite stream of garbage content that is cheap and effortless to create so it will eventually devour everything.


In just a few months, every website that had a customer support chat plugin seems to have become 'AI Powered'. And all they seem to do is go on in circular drivel.


I work for a company providing those kinds of solutions and yeah, the mandate from above is that we need to build an AI bot that will replace real human CS agents. The "vision" from our CEO is that he wants to replace 95% of all human interactions with the bot, to which the dev team in unison exclaimed "what the fuck?".

It's literally just calling the OpenAI APIs with some overly complex system prompt. It is laughably useless and constantly lies, but some of our bigger (read: soul-less corpo) customers love it because they can cut their CS teams even further than they already have.

From what I've noticed, it's mostly marketers, spammers/scammers, and the C-level and their investor buddies who are excited about AI, and even the not-so-cynical part of me can't see it as anything other than a way for them to fire people en masse and make more money by automating jobs away, regardless of whether the quality is horrendous or not.

I think we're already reaching a point where regular people are kind of sick of all the AI shit, though. Who knows, maybe people will start abandoning anything that smells of AI en masse? It's nice to dream, anyway.


Many consumers have a working bullshit detector. And they are starting to understand that "AI" is usually a meaningless buzzword. And people don't like being bullshitted.

And even for those who think there is real meaning behind it: do you really think people want "intelligence" everywhere, artificial or not? People don't want their toaster to be intelligent, they just want it to toast. So what is an AI-powered toaster? A toaster you have to argue with regarding how you want your bread toasted? And in many cases that's what happens when you replace a push button with an "AI assistant", so people are not wrong about it either.


Yeah I’d rather not be part of the experiment thank you. If something takes intelligence to do I want to talk to a human that can help do it.

AI isn’t going to magically help solve problems with technology. Frankly I’m tired of technology. I miss people.


[flagged]


The analogy between AI and Adult Diapers is not too far off.


AI is still far from profitable, except for scams.



