Hacker News

It's best to tell them how you want the code written.



At that point isn't it starting to become easier to just write the code yourself? If I somehow have to formulate how I want a problem solved, then I've already done all the hard work myself. Having the LLM just do the typing of the code means that now not only did I have to solve the problem, I also get to do a code review.


Yes, the fallacy here is that AI will replace engineers any time soon. For the foreseeable future, prompts will need to be written and curated by people who already know how to do the work; they will just end up describing it in increasingly complex detail and then running tests against the result. That doesn't sound like a future with many benefits for anyone.


Personally I found it quite fun to give a specification and have ChatGPT write Python code that implements it: https://chatgpt.com/share/6777debc-eaa4-8011-81c5-35645ae433... . Or the additional polygon edge smoothing code: https://chatgpt.com/share/6773d634-de88-8011-acf8-e61b6b913f...

Sure, the green screen code didn't work exactly as I wished, but it made use of OpenCV functions I was not aware of and it was quite easy to make the required fixes.
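As an aside, a minimal sketch of the kind of chroma-key mask such green-screen code typically builds (pure NumPy here for illustration; the linked chats used OpenCV helpers like cv2.inRange, and the thresholds below are made up, not tuned):

```python
import numpy as np

def green_screen_mask(img, g_min=100, dominance=40):
    """Return a boolean mask of 'green screen' background pixels.

    A pixel counts as background when its green channel is bright
    (>= g_min) and clearly dominates both red and blue (by at least
    `dominance`). Thresholds are illustrative, not tuned.

    img: (H, W, 3) uint8 array in RGB order.
    """
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (g >= g_min) & (g - r >= dominance) & (g - b >= dominance)

# Tiny synthetic frame: left pixel pure green, right pixel grey.
frame = np.array([[[0, 255, 0], [128, 128, 128]]], dtype=np.uint8)
mask = green_screen_mask(frame)
# mask[0, 0] is the green pixel (background), mask[0, 1] the grey one.
```

In OpenCV proper you would usually do the same thing in HSV space with cv2.cvtColor plus cv2.inRange, which handles lighting variation better than raw RGB differences.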

In my mind it is exactly the opposite: yes, I've already done the hard work of formulating how I want the problem solved, so why not have the computer do the busywork of writing the code down?


There's no clear threshold with a universal answer. Sometimes prompting will be easier, sometimes writing things yourself. In practice you'll have to add some debugging time to both sides. You can also be opportunistic: you're going to write a commit anyway, right? A good commit message will be close to the prompt, so why not start with that and then decide whether to write the code yourself?

> I also get to do a code review.

Don't you review your own code after some checkpoint too?


Why leave the commit message for the human to write? Have the LLM draft it, then add any relevant details it missed.


Because the commit message is pure signal. You can reformat it or pad it with useless info, but otherwise generating it requires writing it. Generating it from the code is a waste, because you're trying to distil that same signal back out of messy code.


Spend your cognitive energy thinking about the higher-level architecture, test cases, and performance concerns rather than the minutiae, and you'll find that you can get more work done with less overall mental load.

This reduction in cognitive load is the real force multiplier.


Admittedly, some people are using AI out of curiosity rather than because they get a tangible benefit from it.

But aside from those situations, do you not think that the developers using AI - many of whom are experienced and respected - must be getting value? Or do you think they are deluded?


What if I want to discover a new better way to write code?


You can ask it for alternative methods and even to document their pros and cons.



