Except lots of engineers now sling AI-generated slop over the wall and expect everyone else to catch the issues. Before, generating lots of realistic code was time-consuming, so this didn't happen nearly as much.



Those engineers are doing their job badly and should be told to do better.


Is the time to develop production code being reduced if they stop slinging code over the wall and only use AI code as inspiration?

Is the time to develop production code reduced if the AI-generated code needs work and the senior engineer can't get a clear answer from the junior as to why design decisions were made? Is the junior actually learning anything except how to vibe their way out of it with ever more complex prompts?

Is any of this actually improving productivity? I'd love to hear from experts in the industry here, as my workplace is really pushing hard on AI everything, but we're not a software company.


Empirically yes. Putting aside AI code review for a second, AI IDE adoption alone increases the rate of new PRs being merged by 40-80%. This is at larger, more sophisticated software teams that are ostensibly at least maintaining code quality.


What do you think is the mechanism driving this improvement in PR merge rate?

Based on my experience, it's like having a good-quality answer to a question, tailored to your codebase, with commentary, and provided instantly. Similar to what we wanted from an old Google/Stack Overflow search but never quite achieved.


I would definitely expect more PRs being merged if you skip the learning and review steps. How is this surprising to anyone?

The more interesting discussion is about the long-term consequences and whether this is a viable path forward.


Agreed. To put it another way, a few years ago you could copy/paste from a similar code example you found online or elsewhere in the same repository, tweak it a bit, then commit it.

Still bad. AI just makes it faster to make new bad code.

edit: to be clearer, the problem in both the copy/paste and AI examples is the lack of thought or review.


I hesitate to be this pessimistic. My current position: AI-generated code introduces new types of bugs at a high rate, so we need new ways to prevent them.


That's the "outer loop" problem. So many companies are focused on the "inner loop" right now: code generation. But the other side is the whole test, review, merge aspect. Now that we have this increase in (sometimes mediocre) AI-generated code, how do we ensure that it's:

* Good quality
* Safe
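
Part of the answer is probably mechanical: an automated gate that every PR, AI-assisted or not, has to pass before merge. A minimal sketch in Python (the specific tools, paths, and commands below are placeholders; substitute whatever your stack actually uses):

    #!/usr/bin/env python3
    # Hypothetical pre-merge gate for the "outer loop": run the same
    # checks on every PR regardless of whether a human or an AI wrote it.
    import subprocess
    import sys

    # Each entry is (label, command). These are examples, not requirements.
    CHECKS = [
        ("unit tests", ["pytest", "--quiet"]),
        ("lint", ["ruff", "check", "."]),
        ("security scan", ["bandit", "-q", "-r", "src"]),
    ]

    def main() -> int:
        failed = []
        for label, cmd in CHECKS:
            print(f"running {label}: {' '.join(cmd)}")
            if subprocess.run(cmd).returncode != 0:
                failed.append(label)
        if failed:
            print("blocking merge; failed:", ", ".join(failed))
            return 1
        print("all checks passed")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

That catches "does it work and is it safe" mechanically; human review still has to catch "is it good".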


I agree 100%

One piece of nuance - I have a feeling that the boundary between inner and outer loop will blur with AI. Can't articulate exactly how, I'm afraid.


> how do we ensure that it's:

> * Good quality
> * Safe

insert Invincible “That’s the neat part... ” meme


If you copied and pasted from a similar code example, tweaked it, tested that it definitely works, and ensured that it's covered by a test (such that if the feature stops working in the future the test would fail), I don't mind that the code started with copy and paste. Same for LLM-generated snippets.
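
To make the "covered by a test" part concrete, a tiny made-up example (the function and tests are hypothetical, just to illustrate the shape):

    # slugify.py -- snippet that started life as a copy/paste or an LLM suggestion
    import re

    def slugify(title: str) -> str:
        """Lowercase the title, drop punctuation, join words with hyphens."""
        words = re.findall(r"[a-z0-9]+", title.lower())
        return "-".join(words)

    # test_slugify.py -- the regression test: if slugify() ever stops
    # working, this fails in CI no matter where the code came from.
    def test_slugify_basic():
        assert slugify("Hello, World!") == "hello-world"

    def test_slugify_strips_punctuation_and_whitespace():
        assert slugify("  AI--generated   code? ") == "ai-generated-code"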


How is this bad?

It's not. You're still responsible for that code. If anything copy & pasting & tweaking from the same repository is really good because it ensures uniformity.


Sorry, I should have been clearer - copy/paste is of course fine, as long as you review (or get others to review). It's the lack of (human) thought going into the process that matters.


I'm currently debating whether that's something I should be doing, and whether to put more into getting the gen AI to iterate quickly over the beta to make improvements and keep the velocity up.


In my case it's management pushing this shit on us. I argued as much as I could to not have AI anywhere near generating PRs, but there's not much I can do other than voice my frustrations at the billionth hallucinated PR where the code does one thing and the commit message describes the opposite.


Where are you seeing this? Are there lots of engineers in your workplace that do this? If so, why isn’t there cultural pressure from the rest of the team not to do that?



