
As you check more, and more varied, sources, you gain confidence in the result, even if you never get to 100%.
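A toy way to see the arithmetic, assuming each source is an independent check (the error rates below are made-up numbers, not measurements):

    # Toy model: probability that every one of N independent
    # sources is simultaneously wrong about the same claim.
    # Error rates are illustrative assumptions.
    error_rates = [0.30, 0.25, 0.20, 0.15]  # per-source chance of being wrong

    p_all_wrong = 1.0
    for e in error_rates:
        p_all_wrong *= e  # independence assumption

    confidence = 1.0 - p_all_wrong
    print(f"confidence after {len(error_rates)} sources: {confidence:.4%}")
    # -> 99.7750%

Each agreeing source multiplies down the chance they are all wrong, but the product never reaches zero, so confidence approaches 100% without ever hitting it.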


Ok, but what if we devise a system that does exactly that? Sounds suspiciously like an LLM to me. I’m as skeptical as anyone about LLMs as AGI, but they do sound like an aggregate of sources, so the only question is how trustworthy those sources are.


I think the issue stems from how LLMs today avoid IP infringement. Spitting out individual words or characters isn’t infringement until the words form a passage large enough to resemble an existing work.

Then there’s the other side, like Perplexity, which spits out sentences and cites the sources. They’re being sued because the infringement is obvious.

What is the path to a trustworthy LLM if you’re not allowed to repeat protected data without a legal hurdle?

AGI, while a cool idea, is irrelevant because that tech does not exist.


So wouldn't an aggregate, like an LLM, be the best tool here?


No, because an aggregate can be made of many sources. Maybe you restrict your sources to only those you trust, but not every source is correct all the time. So you’re still stuck needing to fact-check.
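Rough arithmetic on why trusted sources don’t eliminate fact-checking (the per-claim accuracy and claim count are assumptions for illustration):

    # Even a trusted source that is right 99% of the time
    # accumulates errors over many claims.
    p_correct = 0.99   # assumed per-claim accuracy of a trusted source
    n_claims = 200     # assumed number of claims you rely on

    p_no_errors = p_correct ** n_claims
    print(f"chance of zero errors across {n_claims} claims: {p_no_errors:.1%}")
    # -> about 13.4%: more often than not, at least one claim is wrong

Under those numbers, most of the time at least one claim slips through wrong, so some checking is always left over.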


Sure, but the weak link is Sam Altman and the like, not the tech.



