Hacker News new | past | comments | ask | show | jobs | submit login

I’m not the OP, and I genuinely don’t like how we’re slowly entering the “no text in internet is real” realm, but I’ll take a stab at your question.

If you made an LLM to pretend to have a specific personality (e.g. assume you are a religious person and you’re going to make a comment in this thread) rather than “generic catch-all LLM”, they can pretty much do that. Part of Reddit is just automated PR LLMs fighting each other, making comments and suggesting products or viewpoints, deciding on which comment to reply and etc. You just chain bunch of responses together with pre-determined questions like “given this complete thread, do you think it would look organic if we responded with a plug to a product to this comment?”.

It’s also not that hard to generate these type of “personalities”, since you can use a generic one to suggest you a new one that would be different from your other agents.

There are also Discord communities that share tips and tricks for making such automated interactions look more real.




These things might be able to produce comparable output but that wasn't my point. I agree that if we are comparing ourselves over the text that gets written then LLM's can achieve super intelligence. And writing text can indeed be simplified to token predicting.

My point was we are not just glorified token predicting machines. There is a lot going on behind what we write and whether we write it or not. Does the method matter vs just the output? I think/hope it does on some level.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: