100% agree. There is a bigger point too: People assume LLM capabilities are like FLOPs or something, as if they are a single number.

In reality, building products is an exploration of a complex state space of _human_ needs and possible solutions. This complexity doesn't go away. The hard part of being an engineer is not writing JavaScript. It is building a solution that addresses the essential complexity of a problem without adding accidental complexity.

The reason this is relevant is that the same holds for LLMs. Just like human engineers, they are tripped up into adding accidental complexity when they don't understand the problem well enough, and then they fail to solve the real problem. So saying "just wait, LLMs will do that in the future" is not much different from saying "just wait, some smarter human engineer might come along and solve that problem better than you". Possibly true, possibly false. And certainly not helpful.

If you work on a problem over time, you'll sometimes do much better than a smarter person who has the wrong tools or doesn't understand the problem. And that's no different for LLMs.
