
The bottlenecks today are:

* understanding the problem

* modelling a solution that is consistent with the existing modelling/architecture of the software and moves modelling and architecture in the right direction

* verifying that the implementation of the solution is not introducing accidental complexity

These are the things LLMs can't do well yet. That's where contributions will be most appreciated. Producing code won't be it; maintainers have their own LLM subscriptions.



I still think there is value in external contributors solving problems using LLMs, assuming they do the research and know what they are doing. Getting a well-written and tested solution from an LLM is not as easy as writing a good prompt; it's a much longer, iterative process.


> assuming they do the research and know what they are doing.

This is the assumption that has almost always failed, and thus has led to the banning of AI code altogether in a lot of projects.

Some months back I would have agreed with you without any "but", but it really does help, even if it only takes over "typing code".

Once you understand the problem deeply enough to know exactly what to ask for without ambiguity, the AI will produce the code that solves your problem a heck of a lot quicker than you would. And the time you don't spend on figuring out language syntax, you can instead spend on tweaking the code at a higher architectural level. Spend time where you, as a human, are better than the AI.


I don't know, I've had good experiences getting LLMs to understand and follow architecture and style guidelines. It may depend on how modular your codebase already is, because that by itself would focus/minimize any changes.



