
"(I personally don't mind coding boilerplate stuff, especially since I can learn how the framework works that way)"

^ Isn't that what folks used to say about programming in assembler? How much time do I want to spend learning frameworks (beyond what I already know) vs. how productive do I want to be?




I fully sympathise with this analogy and I think I have used it before myself. But there is a tremendous difference in practice. A compiler doesn’t produce randomly different code each time you run it, while an LLM, no matter how good, will. At which point, if something breaks, you have to take the reins.


Note that OpenAI added the "seed" parameter to get deterministic results in its latest release.
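For reference, a minimal sketch of passing it with the openai Python SDK (the model name and seed value are just placeholders; OpenAI documents seed as best-effort reproducibility tied to system_fingerprint, not a hard guarantee):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4-1106-preview",   # placeholder model name
        messages=[{"role": "user", "content": "Reverse a string in Python."}],
        seed=12345,                   # fixed seed for reproducible sampling
        temperature=0,
    )
    print(response.choices[0].message.content)
    # Determinism is only expected while response.system_fingerprint
    # stays the same between calls.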


That doesn’t help with the issue I put forward. Even if the seed is identical, the output doesn’t vary predictably with the input: a tiny change in the input could result in no change in the output, a small change in the output, a medium change, or a totally different output.


This is assuming they'll keep providing access to the model you were using in perpetuity.


Probably, which is exactly my problem.



