Hacker News

This article is so sparse on details it's basically useless.

Does telling the AI to "just be correct" essentially work? I have no idea after this article, because there are no details at all about what changed, the type of prompts, etc.

> there are no details at all about what changed, the type of prompts, etc.

He gave you the exact text he added to his agents file. What else are you looking for?


This is absolutely infuriating for me to see: people keep posting shit like "do it right or I will kill a puppy" or "you can get bitcoin if you are right", but never any testing where they change one word here or there and compare what does and doesn't work versus the dumb shit they are saying.

The article is stating what inputs they used and the output they observed. They stated they saw more tokens used and more time spent before returning an answer. That seems like a data point you can test. Which is maybe not the zoom level or exact content you’re looking for, but I don’t feel your criticism sticks here.

> testing where they change one word here or there and compare

You can be that person. You can write that post. Nothing is stopping you.


The point is: a decent article would have included all of that.

You're missing the forest for the trees with your response


I don’t think you realize that you’re making demands on an author who doesn’t owe you anything. And what you’re asking for is quite difficult. I think it’s okay to frame it as a wish or a desire: “I wish more articles included …”

Or as frustration with a community that keeps upvoting things you consider “not a decent article.”

Or as an opportunity “Has anyone investigated a scientific framework for modifying these prompts…”

> a decent article would have included all of that.

My retort that you can write such an article suggests that it’s more difficult than you might realize, and the absence of such articles in our feeds suggests that what you’re asking for might be impossible (providing hard, empirical data about a soft, non-deterministic system). If you have an article that acts as a bright spot for how to write such a piece, sharing it would also be a helpful comment.


You're still missing the point I'm making

I'm not demanding anything; I'm just telling you my interpretation of the article.

You're acting like I'm personally calling out the author and not his work

I'm essentially giving him constructive feedback.


Nothing about these words of yours reads as constructive to me:

> This is absolutely infuriating for me to see: people keep posting shit like "do it right or I will kill a puppy" or "you can get bitcoin if you are right", but never any testing where they change one word here or there and compare what does and doesn't work versus the dumb shit they are saying.

It doesn't seem like your goal was to convey feedback in a way that was digestible, nor in a way that was satisfiable (my previous critique).


I haven't tried this, but I can say that LLMs are very good at picking up patterns. If you tell one that it's wrong a few times in a row, it will learn the pattern and go into "I am repeatedly wrong" mode, perhaps even making mistakes "on purpose" to continue the pattern.

Amusingly, humans are known to do exactly that. Including the "mistakes on purpose", even though they might not realize that they're doing it.

> Does telling the AI to "just be correct" essentially work?

This forces the LLM to use more "thinking" tokens, making it more likely to catch mistakes in its previous outputs. In most APIs, this can be configured manually, producing better results on complex problems at the cost of time.
