Hacker News

Ah, yes! The universal and uncheatable LLM! Surely nothing can go wrong.


Perfect is the enemy of good. Current LLM systems plus "traditional tools" for scanning can get you pretty far in detecting the low-hanging fruit. Hell, I bet even a semantic search with small embedding models could give you good insight into whether what's in the release notes matches what's in the code. Simply flag the release for a few hours' delay until a human can review it, or run additional checks.
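The heuristic above can be sketched roughly like this. Everything here is illustrative: a real pipeline would use a small embedding model, whereas this toy uses a bag-of-words cosine similarity as a stand-in, and the function names and threshold are made up.

```python
# Toy sketch of the "release notes match the code?" check described above.
# A bag-of-words cosine similarity stands in for a small embedding model;
# the 0.3 threshold is an arbitrary illustrative value.
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    # Stand-in for an embedding model: a lowercase bag of word tokens.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def flag_unmatched_claims(release_notes, code_changes, threshold=0.3):
    # Return the notes whose best match against any code change falls
    # below the threshold; those would be held for human review.
    flagged = []
    for note in release_notes:
        best = max((cosine(embed(note), embed(c)) for c in code_changes),
                   default=0.0)
        if best < threshold:
            flagged.append(note)
    return flagged


notes = ["Fixed crash when parsing empty config files",
         "Added dark mode to the settings page"]
changes = ["parser: fix crash on empty config files"]
print(flag_unmatched_claims(notes, changes))
```

With this toy data, only the dark-mode claim has no matching code change, so it alone gets flagged for delayed human review, which is exactly the "flag and delay" behaviour the comment proposes.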


I can't wait to read about your solution.


You don't need to be a chef to tell that the soup is too salty.


As I wrote: "not perfect". But better than anything else, or than nothing.


The Politician's Syllogism[0] is instructive.

[0] https://en.wikipedia.org/wiki/Politician's_syllogism


OK, are we on Reddit or Facebook now?

I thought we discussed problems and possible solutions here.

My fault.


I'm not sure why everyone is so hostile. Your idea has merit, along the lines of a heuristic that triggers a human review as a follow-up. I'd be surprised if this isn't exactly the direction things go, although I don't think the tools will be given away for free; rather, they'll be made part of the platform itself, or perhaps offered as an add-on service.


I don't think "we should use AI to solve this" is a solution proposal.



