
I think you’re on to something. I find the sentiment around LLMs (which are still at the early adoption stage) to be unnecessarily hostile, beyond the normal HN skepticism.

But it can be simultaneously true that LLMs add a lot of value on some tasks, less on others, and less for some people. It’s a bit tautological, but to benefit from LLMs, you have to be in a context where you stand to most benefit from LLMs. These are people who need to generate ideas, who are expert enough to spot consequential mistakes, and who know when to use LLMs and when not to. They have to be in a domain where the occasional generated mistake costs less than the new ideas gained, so they still come out ahead. It’s a bit paradoxical.

LLMs are good for: (1) bite-sized chunks of code; (2) ideating; (3) writing one-off code in tedious syntax that I don’t really care to learn (like making complex plots in seaborn or matplotlib); (4) adding docstrings and documentation to code; (5) figuring out console error messages, with suggestions as to causes (I’ve debugged a ton of errors this way, and arrived at the answer faster than wading through Stack Overflow); (6) figuring out what algorithm to use in a particular situation; etc.
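To make (3) concrete, here’s a minimal sketch of the kind of one-off plotting code I mean. It uses seaborn’s bundled example dataset; nothing here is from a real project:

    # One-off plotting boilerplate of the kind I'd rather ask an LLM for
    # than memorize. Uses seaborn's built-in "tips" example dataset.
    import matplotlib.pyplot as plt
    import seaborn as sns

    tips = sns.load_dataset("tips")

    # Faceted scatter plot with a regression line per smoking status.
    g = sns.lmplot(data=tips, x="total_bill", y="tip",
                   col="smoker", hue="smoker", height=4)
    g.set_axis_labels("Total bill ($)", "Tip ($)")
    plt.tight_layout()
    plt.show()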

They’re not yet good at: (1) understanding complex codebases in their entirety (this is one of the overpromises; even Aider Chat’s docs tell you not to ingest the whole codebase); (2) any kind of fully automated task that needs to be 100% deterministic and correct (they’re assistants); (3) getting math reasoning 100% correct (but they can still open up new avenues for exploration that you’ve never even thought about).

It takes practice to know what LLMs are good at and what they’re not. If the initial stance is negativity rather than a growth mindset, then that practice never comes.

But it’s ok. The rest of us will keep on using LLMs and move on.




I've been sold AI as if it can do anything. It's being actively sold as a superintelligent, independent human that never needs breaks.

And it just isn't that thing. Or, rather, it is superintelligent but lacks any wisdom at all, which renders it useless for how it's being sold to me.

>which are still at the early adoption stage

I've said this in other places here. LLMs simply aren't at the early adoption stage anymore. They're being packaged into practically every SaaS product you can buy. They're a main selling point for things like website builders and other direct-to-business software platforms.


Why not ignore the hype, and just quietly use what works?

I don’t use anything other than ChatGPT 4o and Claude Sonnet 3.5v2. That’s it. I’ve derived great value from just these two.

I even get wisdom from them too. I use them to analyze news, geopolitics, arguments around power structures, urban planning issues, and the pros and cons of privatization, and Claude especially is able to give me the lay of the land, which I am usually able to follow up on. This use case is more of the “better Google” variety than task completion, and it does pretty well for the most part. Unlike ChatGPT, Claude will even push back when I make factually incorrect assertions. It will say “Let me correct you on that…”, which I appreciate.

As long as I keep my critical thinking hat on, I am able to make good use of the lines of inquiry that they produce.

Same caveat applies even to human-produced content. I read the NYTimes and I know that it’s wrong a lot, so I have to trust but verify.


I agree with you, but that's simply not how these things are being sold and marketed. We're being told we do not have to verify. The AI knows all. It's undetectable. It's smarter and faster than you.

And it's just not.

We made a scavenger hunt full of puzzles and riddles for our neighbor's kids to find their Christmas gifts from us (we don't have kids at home anymore, so they fill that niche and are glad to because we go ballistic at Christmas and birthdays). The youngest of the group is the tech kid.

He thought he had us beat when he realized he could use ChatGPT to solve the riddles and ciphers. It recognized the Caesar cipher with a letter shift of negative 3, but then made up a random phrase, with words of the same lengths, as the solution. So the process was right, but the outcome was just outlandishly incorrect. It wasted about a half hour of his day...
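For reference, once you've identified the shift, the decode is purely mechanical, which makes the confabulated answer all the stranger. A minimal Python sketch (the ciphertext below is made up, not the actual riddle):

    # Decode a Caesar cipher with a known shift; a shift of -3 means each
    # letter was moved back three, so decoding moves it forward three.
    def caesar_decode(text: str, shift: int = -3) -> str:
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                out.append(chr((ord(ch) - base - shift) % 26 + base))
            else:
                out.append(ch)  # leave spaces and punctuation alone
        return ''.join(out)

    # Made-up example ciphertext, not the actual riddle:
    print(caesar_decode("illh rkabo qeb mloze"))  # -> look under the porch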

Now apply that to complex systems, or just a large database, hell, even just a spreadsheet. You check the process, and it's correct. You don't know the outcome, so you can't verify it unless you do the work yourself. So what's the point?

For context, I absolutely use LLMs for things that I know roughly but don't want to spend the time to do. They're useful for that.

They're simply not useful for how they're being marketed, which is to solve problems you don't already know how to solve.


>We're being told we do not have to verify. The AI knows all.

Where are you being told all of these things? I haven't heard anything like it.


An example that might be of interest to readers: I gave it two logs, one failing and one successful, and asked it to troubleshoot. It turned out a loosely pinned dependency (a Docker image) had updated in the failing run. It was an error mode I was familiar with and could have solved on my own, but the LLM saved me time. They are reliable at sifting through text.
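The manual version of that sifting is basically a diff; a minimal Python sketch of what I'd otherwise do by hand (the log file names are hypothetical):

    # Diff two CI logs to surface what changed between runs; a silently
    # updated Docker image tag shows up in exactly this kind of output.
    # The file names below are hypothetical.
    import difflib

    with open("build_passing.log") as f:
        passing = f.readlines()
    with open("build_failing.log") as f:
        failing = f.readlines()

    # unified_diff emits only the differing lines plus a little context.
    for line in difflib.unified_diff(passing, failing,
                                     fromfile="passing", tofile="failing", n=1):
        print(line, end="")

The LLM's advantage is that it also tells you which of the differing lines matters.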


Hostility and a few swift kicks are in order when the butt scratchers start saying their stochastic parrot machine is intelligent and a superman.



