
To me, the problem has always seemed to be that people who use ChatGPT and the like default to "I don't care" mode and copy-paste blindly.

Personally, I think this is the root cause of most sloppy AI code. If you look at the generated code and don't think "I would've come up with that", then the code is probably wrong.



It's one thing to see senior engineers succumb to brainrot, seemingly forgetting overnight how to do basic programming, to the point where if ChatGPT is down they suddenly have no idea how to work. It's another to have an entire generation of junior engineers who never learned programming in the first place, because they got through uni via prompting, somehow got a job via prompting, and are now being fired en masse for obvious reasons, creating a huge void in the job market and leaving very disappointed seniors (those who haven't succumbed to brainrot just yet).

I'm not sure how to feel about any of this. On the one hand it clearly shows, yet again, how gullible people are. I wonder whether the market value of senior engineers (those who can actually solve novel problems) will go up as a result, or whether the market will be saturated with so much AI-enabled waste that it drags the entire field's salaries down. I feel bad for the end consumer, who has to tolerate lower- and lower-quality products year over year as the general practice of software engineering seemingly burns to the ground and becomes a Chinese sweatshop churning out counterfeit sneakers.


This past year or so I've had ChatGPT open a lot. It's been super useful as I explore a "new" [1] field in a lot more depth.

Interestingly, though, I don't get it to write code. It's no good at the language I write in, so it's useless there.

As a "tutor" though it's been really useful. I'm asking a lot of (probably simple) questions, and the answers are "right enough". Occasionally I'm not sure why something is failing, and it's usually helpful there too.

So, less brain-rot and more "helpful senior who helps me along".

[1] The work I'm doing is related to SQL, which I've used here and there before, but not to the depth or degree I am now. I don't need it to write SQL, but rather to answer more general questions: comparing SQL databases, discussing efficiency, and so on.
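To give a flavour, here's the kind of efficiency question I end up asking about, as a minimal runnable sketch using Python's built-in sqlite3 (the table and index names are invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

    # Does this query actually use the index, or scan the whole table?
    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = ?", (42,)
    ).fetchall()
    for row in plan:
        print(row)  # e.g. (..., 'SEARCH orders USING INDEX idx_orders_customer ...')

Interpreting that plan output is exactly the kind of general question where the "tutor" answers shine.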


But that there is the difference. You are using it to put knowledge in your brain, whereas most people I see skip that part entirely and act as a sort of clipboard-like vessel that takes information from ChatGPT and puts it into a code editor, with no intermediate thought or analysis. That means the brain stores none of it, because it never actually thought about any of it.

And if there is any thinking involved, it's not in trying to figure out the code, but in trying to figure out why the AI can't figure out the code. It's a subtle difference, but it results in a huge change in quality.


I use them as "research assistants" when Google or the documentation fails me. But I always treat them as a less reliable Stack Overflow: there's no guarantee what I'm reading is correct, and it rarely includes caveats like "doesn't work before version X" or "must have Y flag enabled."

I've particularly enjoyed converting terse documentation into a .md file, feeding it into the LLM's context window, and then using the LLM to "query" the underlying document.
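The pattern, as a rough sketch with the OpenAI Python client (the model name, the file name, and the prompt framing are all my own placeholder choices, not anything the API requires):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("terse_docs.md") as f:  # hypothetical file
        doc = f.read()

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works
        messages=[
            {"role": "system",
             "content": "Answer only from the document below. "
                        "Say so if the answer isn't in it.\n\n" + doc},
            {"role": "user", "content": "Which flags does the import command accept?"},
        ],
    )
    print(resp.choices[0].message.content)

The system prompt pinning answers to the document is what makes it feel like "querying" rather than free-form chat.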

Where it's always fallen short for me is code generation, and frankly that feature just doesn't interest me.


Agreed; I'm seeing the same people who don't unit test or define types using AI to program, and... even though AI (Copilot) can write 90% of the happy path and set up the mocks, it's still too much effort for developers (and exec teams) who don't care.
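For concreteness, here's the sort of happy-path-plus-mock boilerplate I mean, in plain Python/unittest; fetch_user and its client argument are invented stand-ins for whatever the real code looks like:

    import unittest
    from unittest.mock import Mock

    def fetch_user(client, user_id):
        resp = client.get(f"/users/{user_id}")
        return resp["name"]

    class TestFetchUser(unittest.TestCase):
        def test_happy_path(self):
            client = Mock()
            client.get.return_value = {"name": "Ada"}
            self.assertEqual(fetch_user(client, 1), "Ada")
            client.get.assert_called_once_with("/users/1")

    if __name__ == "__main__":
        unittest.main()

This is exactly the mechanical part a tool can generate, and the part some teams still skip.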



