
I disagree about search. While an LLM can give you an answer faster, good docs (e.g. the MDN article in the CSS example) will:

- be way more reliable

- probably be up to date on how you should solve it with the latest/recommended approach

- put you in a place where you can search for adjacent tech

LLMs with search have potential, but I'd like current tools to be oriented more toward the source material than toward AI paraphrasing.



One of my tricks is to paste the docs right into the context so the model can’t fuck it up.

Though I still wonder if that means I’m only tricking myself into thinking the LLM is increasing my productivity.
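For what it's worth, a minimal sketch of the "paste the docs into the context" trick with the OpenAI Node client (the model name, prompt wording, and the docsText/question variables are assumptions for illustration, not a specific tool's API):

    import OpenAI from "openai";

    const client = new OpenAI();

    // Prepend the relevant documentation so the model answers from it
    // instead of from (possibly stale) memorized knowledge.
    async function askWithDocs(docsText: string, question: string) {
      const response = await client.chat.completions.create({
        model: "gpt-4o", // assumed model name
        messages: [
          {
            role: "system",
            content: "Answer using only the documentation below.\n\n" + docsText,
          },
          { role: "user", content: question },
        ],
      });
      return response.choices[0].message.content;
    }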


I like this approach. Read the docs, figure out what you want, get the LLM to do the grunt work with all the relevant context, and review.


I have found LLMs to be 95% useful on documented software, for everything from Uniswap smart contracts to Cordova plugins to setting up Mac or Linux administrative tools.

The problem for a regular person is that you have to copy-paste from the chat. That is "the last mile". For terminal commands that's fine, but for programming you need a tool to automate it.

Something like refactoring a function given the entire context, with it happening in the editor and you seeing a diff right away. The rest of the explanatory text should go next to the diff in a separate display.

I bet someone can make a VSCode extension that chats with an LLM and does exactly this. The LLM is told to provide all the sections labeled clearly (code, explanation) and the editor makes the diff.
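A rough sketch of what that could look like inside a VS Code extension, assuming the LLM is prompted to label its sections with "### CODE" and "### EXPLANATION" markers (the markers and function names here are illustrative assumptions, not an existing extension's API):

    import * as vscode from "vscode";

    // Split an LLM reply into the proposed code and the explanatory text,
    // assuming the model was told to label its sections with these markers.
    function parseSections(reply: string): { code: string; explanation: string } {
      const code = reply.match(/### CODE\n([\s\S]*?)(?=\n### |$)/)?.[1] ?? "";
      const explanation =
        reply.match(/### EXPLANATION\n([\s\S]*?)(?=\n### |$)/)?.[1] ?? "";
      return { code: code.trim(), explanation: explanation.trim() };
    }

    async function showLlmDiff(reply: string): Promise<void> {
      const editor = vscode.window.activeTextEditor;
      if (!editor) return;
      const { code, explanation } = parseSections(reply);

      // Put the proposed code in an untitled document and diff it
      // against the currently open file using the built-in diff viewer.
      const proposed = await vscode.workspace.openTextDocument({
        content: code,
        language: editor.document.languageId,
      });
      await vscode.commands.executeCommand(
        "vscode.diff",
        editor.document.uri,
        proposed.uri,
        "LLM suggestion"
      );

      // Surface the explanation alongside the diff.
      vscode.window.showInformationMessage(explanation);
    }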

Having said all that, good libraries that abstract away differences are far superior to writing code with an LLM. The only code that needs to be written is the interface and the wiring between the libraries.



