I don't know how useful this is, but my immediate reaction to the animation on the front page was "that's literally worse than the alternative".

Because the example given was "change the color of a component".

Now, it's obviously fairly impressive that a machine can go from plain text to identifying a React component and editing it... but the process literally doesn't save me any time.

"Can you change the current colour of headercomponent.tsx to <some color> and increase the size vertical to 15% of vh" is a longer to type sentence then the time it would take to just open the file and do that.

Moreover, the example is in a very "standard" format. What happens if I'm not using styled-components? What happens if that color is set from a function (see the sketch below)? In fact, none of the examples shown seem game-changing in any way (e.g. the Confluence example is also something a basic script, a workflow, or anything else could do, and is still essentially "two mouse clicks" rather than writing out a longer English sentence and then, I would guess, waiting substantially more time for inference to run).
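
By "set from a function" I mean something like this (a made-up sketch), where there is no literal color value in the file to find and replace:

    // color computed at runtime -- no hard-coded value for the tool to swap out
    const Header = styled.header<{ variant: string }>`
      color: ${(props) => themeColor(props.variant)};
    `;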



On the one hand, this isn’t a great example for you because you already knew how to do that. There’s probably no good way to automate trivial changes that you can make off the top of your head, and have it be faster than just doing it yourself.

I’ve found LLMs most useful for doing things with unfamiliar tooling, where you know what you want to achieve but not exactly how to do it.

On the other hand, it’s an okay test case because you can easily verify the results.


I agree that the process doesn't save any of our time. However, aren't examples supposed to be simple?

Take the Aider example: https://github.com/Aider-AI/aider It's asked to add a param and typing to a function. Would that save us more time? I don't think so, but it's a good peek at what it can do.
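
Something on the order of this (an illustrative TypeScript sketch with made-up names, not the actual Aider demo):

    // before (untyped, implicit any)
    function applyDiscount(price, discount) {
      return price - price * discount;
    }

    // after "add types and an optional cap param" (hypothetical request)
    function applyDiscount(price: number, discount: number, cap?: number): number {
      const discounted = price - price * discount;
      // never discount by more than the cap, if one is given
      return cap === undefined ? discounted : Math.max(discounted, price - cap);
    }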

Just like any other hello-world example, I suppose.


Examples are supposed to be simple when they illustrate a process we already know works.

With AI the challenge is that we need to convince the reader that the tool will work. So that calls for a different kind of example.


If you don't know how to implement it, how can you be sure the LLM will do it correctly?

If the task is not simple, then break it into simple tasks. Then each of them is as easy as a color change.


Not how it works. That it works.


No, how it works.


Yeah, the fact that composing the first prompt would take me longer than just doing the thing is my biggest blocker to using any of these tools on a regular basis.


Which is also assuming it gets it right on the first prompt, rather than 15 minutes of prompt hacking later, giving up and doing it the old-fashioned way anyway.

The risk of wasted time is higher than the proposed benefit for most of my current use cases. I don't do heaps of glue code; it's mostly business logic and one-off fixes, so I have not found LLMs to be useful day to day at work.

Where it has been useful is when I need to do a task with tech I don't use often. I usually know exactly what I want to do but don't know the myriad arcane details. A great example would be needing to write a complex MongoDB query when I don't normally use Mongo.
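
The kind of thing I mean, with made-up collection and field names (Node driver syntax):

    // top ten customers by completed-order spend -- hypothetical schema
    const topCustomers = await db
      .collection("orders")
      .aggregate([
        { $match: { status: "complete" } },
        { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
        { $sort: { total: -1 } },
        { $limit: 10 },
      ])
      .toArray();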


Cursor + Sonnet has been great for scaffolding tests.

I'll stub out tests (just a name and `assert true`) and have it fill them in. It usually gets them wrong, but I can fix one and then have it update the rest to match.
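
Roughly like this (made-up test and helper names, jest/vitest style):

    // the stub I write
    test("formats currency", () => { expect(true).toBe(true); });

    // what it fills in (formatCurrency is one of our own helpers)
    test("formats currency", () => {
      expect(formatCurrency(1234.5, "USD")).toBe("$1,234.50");
    });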

Not perfect, but beats writing all the tests myself.



