
It really sounds like AI is a bubble if people ask how they can cram AI into their workflow rather than coming up with real problems they want to automate.

I suspect that a very large number of applications that could've been "normal" programs are going to be AI'd with no clear pros but many cons.




That's an unnecessarily dismissive take.

Let's assume for a moment that you're a non-technical 55-70yo manager or owner of a business -- perhaps someone like your mom, dad, uncle, or grandparents.

What they're really asking is: "There's a whole lot of lip service right now about big leaps in AI capabilities. I've tried ChatGPT, and it's pretty cool. I have no idea what the limitations are, or how practical any given application is. You are technically inclined. Do you think any part of our business could benefit?"


I keep bringing this up, but this launch HN post about Roundtable: https://news.ycombinator.com/item?id=36865625

still shocks me. It's a company that uses AI to produce survey results. I'll let you read their pitch/description and decide for yourself, but I think it's very fair to say that this is a service for fabricating survey results to validate whatever idea you had beforehand. But even side-stepping that, they claim to have overcome bias in their datasets, yet refused to elaborate on 1) how they did that and 2) how they could prove they did.

As long as this community, which is far more technically sophisticated than the general public, isn't laughing companies like that out of the room, we're in serious trouble.

There was also this thread https://news.ycombinator.com/item?id=37259753 about an individual's project to provide an AI therapist; while people here and there did mention the cons of having a program provide medical treatment, the overall sentiment wasn't at all negative.

I'm not even some AI luddite: I use and greatly benefit from some AI tools. But just like crypto, AI isn't the be-all and end-all technology. The difference is that where crypto is primarily a financial risk to people duped into using dubious-at-best, scams-at-worst products, AI will cause real, concrete harm, e.g., https://www.euronews.com/next/2023/03/31/man-ends-his-life-a...


> an individual's project to provide an AI therapist and while people here and there did mention the cons of having a program provide medical treatment, the overall sentiment wasn't at all negative.

Sorta. You're forgetting to compare it to an alternative. Compared to a licensed therapist that you can find, schedule, travel to and afford... for some people an imperfect something is better than the nothing they have now.


This is really the only part of the post where I too started pondering.

Wholeheartedly agree with the rest and am thankful for the link compilation.

IMHO, the thing with therapy for mental health problems is that it's mostly in such a sorry state that it's an insult to GP's point to call it a medical treatment. I only make that bold claim because of my own experience. As always, YMMV.

Doesn't make the AI thing less creepy, but I'd still carefully consider that there's some real potential here.


For me, the idea of robots being necessary to provide needed therapy that otherwise would be unavailable is enough to induce existential despair. I really can't imagine it as a viable long term treatment. Much of the point of therapy is just having a human advocate who is invested in your wellbeing. AI therapy sends the message that human problems are not worthy of human attention.


I don't disagree. That said, we still live in a world that has always known hunger and war, and those are even lower on Maslow's hierarchy. So my point is simply that we are where we are, and anything that helps is worth doing rather than not doing just because the more ideal alternative isn't happening.


> AI therapy sends the message that human problems are not worthy of human attention

Scratch 'AI therapy', put in 'capitalism', and you have the answer to the problems we have.

Therapy is a job that people do to pay the bills after the 10th person they've seen that day.


>> Therapy is a job that people do to pay the bills after the 10th person they've seen that day.

You're contradicting yourself. Either therapists actually care, or they wouldn't do their job without capitalist incentives. Which is it?


You seem to deal only in black-and-white situations; reality rarely presents clean-cut scenarios like that.

For example, I deal directly with customers quite often. The amount of "I give a fuck" is much higher at 8 AM than it is at 4 PM. Finding people who can give out a continuous amount of caring 40 hours a week is difficult. Then you tend to have some percentage of employees who don't give a fuck at all and are just there for a paycheck.

It turns out that if you take away that paycheck you'll find that 99% of your therapists and your engineers will go do something else. :)


Okay, so how would scrapping "capitalism" improve the availability of therapy?


First, humans make errors too. Second, there are a lot of "generic" workflows that AI could enhance:

- routing emails
- handling voice mail routing
- commenting code (easy enough to add something that strips the comments and verifies the code is the same; see the sketch below)
- taking meeting notes
- going to Google and pulling down notes from 10 websites
- writing a user guide for a module
- any kind of corporate-ese emails, memos, etc.
- generating test data

I wouldn't let this stuff run without human supervision for the most part, but this list of table-stakes stuff seems to work pretty well and be worth the time to ask.
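
For the comment-verification item above, here's a minimal Python sketch of my own (not any existing tool): comments never survive parsing, so comparing the ASTs of the original and the AI-commented file catches any change to the actual code.

  import ast

  def code_unchanged(original: str, commented: str) -> bool:
      # Comments never appear in Python's AST, so if the two parse
      # trees match, only comments/whitespace were changed.
      return ast.dump(ast.parse(original)) == ast.dump(ast.parse(commented))

  original  = "def add(a, b):\n    return a + b\n"
  commented = "def add(a, b):\n    # Sum the two inputs.\n    return a + b\n"
  tampered  = "def add(a, b):\n    return a - b  # Sum the two inputs.\n"

  assert code_unchanged(original, commented)      # comments only: accept
  assert not code_unchanged(original, tampered)   # logic changed: reject

One caveat: added docstrings would be flagged too, since those do live in the AST.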


I'm not saying that AI has no utility; I'm saying that the hype is going to push for building AI'd applications that didn't need AI.

A few of the examples you give don't need ML at all (such as routing emails, voice mail, generating test data, or pulling notes from 10 websites).


Have fun pulling notes from websites without ML. NLP just plain didn't work before ML. The thing is that with ML you can work with unstructured data, e.g. routing email without a complicated system the sender has to understand, or hundreds of brittle, hand-written rules that in the end won't fire anyway because somebody spelled a word wrong.
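
For a sense of scale, LLM-based routing can be a single classification prompt. A sketch in Python, assuming the OpenAI SDK; the department names are invented and the model name may need updating:

  from openai import OpenAI  # assumes the OpenAI Python SDK; any chat API works

  client = OpenAI()  # reads OPENAI_API_KEY from the environment
  DEPARTMENTS = ["billing", "support", "sales", "other"]  # invented routing targets

  def route_email(subject: str, body: str) -> str:
      # One classification prompt replaces hundreds of brittle keyword
      # rules and tolerates typos and free-form phrasing.
      prompt = (
          f"Route this email to exactly one of {DEPARTMENTS}. "
          "Reply with the department name only.\n\n"
          f"Subject: {subject}\n\n{body}"
      )
      reply = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[{"role": "user", "content": prompt}],
      ).choices[0].message.content.strip().lower()
      return reply if reply in DEPARTMENTS else "other"  # fall back on anything unexpected

  print(route_email("invoixe problem", "I was charged twice last month"))  # "billing", typo and all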

I think ML/AI is just not ready for a lot of applications because it's not good enough yet, but our models are still improving every month. It's impossible to know what it will look like in 5 years.


My 2 cents: this is like the early days of PCs or the internet. It's obvious to all that there is immense value, but it's unclear where that value is going to stick. Compaq ended up going bust after figuring out how to get around IBM BIOS licensing.

In hindsight, the advent of the PC, mobile, and the internet was economically transformative. However, PC vendors generally fared poorly, mobile was dominated by a small number of behemoths, and the internet is dominated by giant consumer tech firms and a dizzying array of B2B firms.

It's pretty tough to pick out the winners today. I thought consumer plays were fools' errands up until the phi-1.5/WebLLM papers. Now it's looking like we'll have GPT-3.5-like behavior in the browser, on common consumer hardware, by year's end.


Yup. Well said.

As for the cons, a nice way to put it would be that adding a black box into the workflow makes everything after it undefined behavior. And there's no shortage of C-suite guys with itchy trigger fingers.
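
The usual mitigation is to treat the black box's output as untrusted input: validate it against a schema before anything downstream acts on it. A minimal sketch, with an invented schema:

  import json

  ALLOWED_ACTIONS = {"refund", "escalate", "close"}  # invented action set

  def validate_llm_output(raw: str) -> dict:
      # Parse and check a model's JSON reply; raise instead of letting
      # malformed output become "undefined behavior" downstream.
      data = json.loads(raw)  # raises ValueError on non-JSON output
      if not isinstance(data.get("customer_id"), int):
          raise ValueError("bad or missing customer_id")
      if data.get("action") not in ALLOWED_ACTIONS:
          raise ValueError(f"unknown action: {data.get('action')}")
      return data

  validate_llm_output('{"customer_id": 42, "action": "refund"}')   # passes
  # validate_llm_output('{"action": "format_disk"}')               # raises ValueError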


It's been that way for years, but with data science and machine learning.



