
As an addendum, to not (I hope) distract from the point about spam -- I don't at all object to using an LLM for inspiration, or editing, or summarisation. So long as there's no claim that the output contains more information than the input. Any statement of fact in the output should be present in the prompt or validated by a human before publishing, or it's suspect. And if it's published without disclosing the lack of validation, it's unethical.



> And if it's published without disclosing the lack of validation, it's unethical.

How is this different from people posting things that they didn't test themselves?

   try

   {code block}

   Hope this helps
Answers that are clearly not answers get edited to look pretty rather than downvoted and flagged for being wrong ( https://stackoverflow.com/a/76402243 ).

The problem isn't the LLM (though it does add scale) -- it's that incorrect information isn't removable or actionable on SO. If the person tried to answer the question, the answer stays up.


Who says it's not different?



