I think you're getting to the heart of the behind-the-scenes business decision: let AI responses proliferate on the platform -- heck, maybe even allow a bot to submit questions to various LLMs and post the responses as a normal user -- then make the volunteer community figure out whether each response was good or not, without letting the mods interfere in the process.



...the mods are part of the volunteer community, and their role is already largely about keeping the bad apples out rather than establishing whether a particular response is good or not. Where do you get the idea that the mods are incapable of this while "the volunteer community" is (or why do you think that's what SE thinks)?


I was implying that I think SE is doing this because it's the only thing I can think of that would explain what appears to be an insane change of policy. How else would you explain it? What else would be the goal except to test AI responses in the wild as a sort of meta-experiment? I'm just trying to connect the dots here.


