
Indeed — however, it’s interesting that, unlike the internet, computers, or smartphones, the older generation, like the younger, immediately found a use for GPT. This is reflected in the latest Mary Meeker report, where it’s apparent that the /organic/ growth of AI use is unparalleled in the history of technology [1]. In my experience with my own parents’ use, GPT is the first time the older generation has found an intuitive interface to digital computers.

I’m still stunned to wander into threads like this where all the same talking points of AI being “pushed” on people are parroted. Marcus et al can keep screaming into their echo chamber and it won’t change a thing.

[1] https://www.bondcap.com/report/pdf/Trends_Artificial_Intelli...



> I’m still stunned to wander into threads like this where all the same talking points of AI being “pushed” on people are parroted.

Where else would AI haters find an echo chamber that proves their point?


It's wild -- I've never seen a split in the Hacker News audience as persistent as this one. The skeptics read one set of AI articles, everyone else the other; a similar comment will be praised in one thread and down-voted to oblivion in another.


IMO the split is between people who understand the heuristic nature of AI and people who don't, and thus think of it as an all-knowing, all-solving oracle. Your elderly parents having nice conversations with ChatGPT is nice as long as it doesn't make big, life-changing decisions for them, which already happens today.

You have to know the tool's limits and use cases.


I can’t see that proposed division as anything but a straw man. You would be hard-pressed to find anyone who genuinely thinks of LLMs as an “all-knowing, all-solving oracle”, and yet, even in specialist fields, their utility is certainly more than a mere “heuristic”, which of course isn’t to say they don’t have limits. See only Terence Tao’s reports on his ongoing experiments.

Do you genuinely think it’s worse that someone makes a decision, whether good or bad, after consulting with GPT versus making it in solitude? I spoke with a handyman the other day who, unprompted, told me he was building a side business and found GPT a great aid — of course they might make some terrible decisions together, but it’s unimaginable to me that increasing agency isn’t a good thing. The interesting question at this stage isn’t just about “elderly parents having nice conversations”, but about computers actually becoming useful for the general population through an intuitive natural-language interface. I think that’s a pretty sober assessment of where we’re at today, not hyperbole. Even as an experienced engineer and researcher myself, LLMs continue to transform how I interact with computers.


> Do you genuinely think it’s worse that someone makes a decision, whether good or bad, after consulting with GPT versus making it in solitude?

Depending on the decision, yes. An LLM might confidently hallucinate incorrect information and misinform, which is worse than simply not knowing.


Yup. Exactly this. As soon as enough people get screwed by the ~80% accuracy rate, the whole facade will crumble. Unless AI companies manage to bring accuracy up by 20 points in the next year, by either limiting scope or finding new methods, it will crumble. That kind of accuracy gain isn't happening with LLMs alone (i.e., foundation models).


Charitably, I don’t understand what those like you mean by the “whole facade” and why you use these old machine learning metrics like “accuracy rate” to assess what’s going on. Facade implies that the unprecedented and still exponential organic uptake of GPT (again see the actual data I linked earlier from Mary Meeker) is just a hype-generated fad, rather than people finding it actually useful to whatever end. Indeed, the main issue with the “facade” argument is that it’s actually what dominates the media (Marcus et al) much more than any hyperbolic pro-AI “hype.”

This “80-20” framing, moreover, implies we’re just trying to asymptotically optimize a classification model or some information retrieval system… If you’ve worked with LLMs daily on hard problems (non-trivial programming and scholarly research, for example), the progress over even just the last year is phenomenal — and even with the presently existing models I find most problems arise from failures of context management and the integration of LLMs with IR systems.


Time will tell.


My team has measurably gotten our LLM feature to ~94% accuracy in widespread, reliable tests. Seems fairly solid, speaking as an SWE and not a DS or ML engineer, though.
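For readers wondering what an "accuracy" number like this means in practice: a common approach is a small eval harness that runs the feature over a labeled test set and reports the fraction of matches. This is a generic, minimal sketch, not the parent's actual setup — `call_llm_feature` and the test cases are hypothetical, stubbed here so the harness runs self-contained:

```python
# Minimal eval-harness sketch for measuring an LLM feature's accuracy.
# `call_llm_feature` is a hypothetical stand-in for a real model call;
# it is stubbed with canned answers so this example runs offline.

def call_llm_feature(prompt: str) -> str:
    canned = {"2+2": "4", "capital of France": "Paris", "3*3": "9"}
    return canned.get(prompt, "unknown")

def accuracy(test_cases: list[tuple[str, str]]) -> float:
    """Fraction of cases where the feature's output matches the label."""
    hits = sum(1 for prompt, expected in test_cases
               if call_llm_feature(prompt).strip() == expected)
    return hits / len(test_cases)

cases = [("2+2", "4"), ("capital of France", "Paris"),
         ("3*3", "9"), ("capital of Spain", "Madrid")]
print(f"accuracy: {accuracy(cases):.0%}")  # the stub answers 3 of 4 cases
```

Real evals tend to replace the exact-match comparison with a fuzzier check (normalized strings, a rubric, or an LLM judge), which is part of why headline accuracy numbers are hard to compare across teams.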


Yeah, I've had similar results. Even with GPT-o1, I find almost all errors at this point come from the web search functionality and the model taking X random source as an authority. It's interesting that I find my human intelligence in the process is most useful for hand-collecting the sources and data to analyze -- and, of course, for directing the process across multiple LLM queries.


I think there are two problems:

1. AI is a genuine threat to lots of white-collar jobs, and people instinctively deny this reality. Note that very few articles here are "I found a nice use case for AI"; most are "I found a use case where AI doesn't work (yet)". Does that sound like tech enthusiasts? Or rather like people terrified of the tech?

2. Current AI is advanced enough to have us ask deeper questions about consciousness and intelligence. Some answers might be very uncomfortable and threaten the social contract, hence the denial.


On the second point, it’s worth noting how many of the most vocal and well-positioned critics of LLMs (Marcus and Pinker in particular) represent the side of the connectionism debate that is still academically dominant but now known to be losing. The 90s anthology Talking Nets is phenomenal for seeing how institutionally marginalized figures like Hinton were until very recently.

Off-topic, but I couldn’t find your contact info and just saw your now closed polyglot submission from last year. Look into technical sales/solution architecture roles at high growth US startups expanding into the EU. Often these companies hire one or two non-technical native speakers per EU country/region, but only have a handful of SAs from a hub office so language skills are of much more use. Given your interest in the topic, check out OpenAI and Anthropic in particular.


Thanks for the advice. Currently I have a €100k job where I sit and do nothing. I'm wondering whether I should coast while it lasts or find something more meaningful.


The timeless dilemma! I don't know how old you are, but just don't let the experience break your internal motivation, as it can be hard to recover -- and remember, however much €100k is and however much you save, it's still many years at that salary to a genuine retirement. I don't know what equity packages are like these days at OpenAI/Anthropic, but especially if you're interested in the topic and have strong beliefs about how AI should play out in the world, it's worth considering rolling the dice on a 2-4 year sprint. I imagine SA-type positions at either of those companies are some of the most interesting roles, especially working with legacy industries/government [1], since you'd get to see firsthand how and where it's most effective (to say nothing of honing your language skills). Good luck regardless!

[1] https://openai.com/careers/solutions-architect-public-sector... for example - listed salary is 2x your current in the US, not sure what the salary is like in the EU.


I think of the two camps like this: one group sees a lot of value in llms. They post about how they use them, what their techniques and workflows look like, the vast array of new technologies springing up around them. And then there’s the other camp. Reading the article, scratching their heads, and not understanding what this could realistically do to benefit them. It’s unprecedented in intensity perhaps, but it’s not unlike the Rails and Haskell camps we had here about a dozen years ago.



