It isn't so much about vetting participants as about moderating the discourse, which is hard to do right and usually falls to the few people willing to do it for free, which leads to poor quality (like reddit).
AI can be used to moderate discussions, however, if it is trained to remove low-effort content in an unbiased way.
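As a hedged sketch of how that might be wired up, assuming the effort judgment eventually comes from a model: the scoring function below is a deliberately crude heuristic stand-in (the names and threshold are made up), and flagged comments go to human review rather than straight to deletion.

    # Sketch: flag low-effort comments for review instead of auto-deleting.
    # effort_score is a crude heuristic stand-in; a real system would ask
    # an LLM to rate the comment against a published rubric.

    def effort_score(comment: str) -> float:
        words = comment.split()
        if not words:
            return 0.0
        unique_ratio = len(set(words)) / len(words)   # penalise repetition
        length_factor = min(len(words) / 50, 1.0)     # saturate at ~50 words
        return unique_ratio * length_factor

    def moderate(comments: list[str], threshold: float = 0.2):
        kept, flagged = [], []
        for c in comments:
            (kept if effort_score(c) >= threshold else flagged).append(c)
        return kept, flagged   # flagged goes to human moderators, not /dev/null

    kept, flagged = moderate(["lol", "first", "Manual moderation stops scaling "
                              "once the community outgrows its volunteers, so "
                              "triaging with a model seems worth testing."])
    print(kept, flagged)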
> It isn't so much about vetting participants as about moderating the discourse
Vetting, moderating ... none of them scale well when done manually. The solution I see is that everyone has their own AI filters, customised as they see fit. You can include or exclude specific people and topics, make it as diverse or narrow as you like, allow challenging opinions or not. One of the filters can be to detect AI bots. Don't make the world conform to you; be selective and just skip the bad parts.
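As a rough sketch of what such a personal filter policy could look like (everything here is hypothetical: the field names, the bot check, and the keyword topic match are all placeholders for where a local model would plug in):

    # Rough sketch of a per-user filter policy; every name here is made up.
    # The bot check and topic match are placeholders for a local model.

    from dataclasses import dataclass, field

    @dataclass
    class FilterPolicy:
        blocked_authors: set = field(default_factory=set)
        blocked_topics: set = field(default_factory=set)
        drop_suspected_bots: bool = True

    def looks_like_bot(author: str) -> bool:
        return author.endswith("_bot")        # placeholder heuristic

    def mentions_blocked_topic(text: str, topics: set) -> bool:
        lowered = text.lower()                # placeholder keyword match
        return any(topic in lowered for topic in topics)

    def allow(post: dict, policy: FilterPolicy) -> bool:
        if post["author"] in policy.blocked_authors:
            return False
        if policy.drop_suspected_bots and looks_like_bot(post["author"]):
            return False
        return not mentions_blocked_topic(post["text"], policy.blocked_topics)

    policy = FilterPolicy(blocked_topics={"politics"}, blocked_authors={"troll42"})
    feed = [
        {"author": "alice", "text": "New paper on protein folding"},
        {"author": "spam_bot", "text": "Buy now!!!"},
        {"author": "bob", "text": "My hot take on politics"},
    ]
    print([p["text"] for p in feed if allow(p, policy)])  # only alice survives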
I think people are going to trust their own AI tools that are running on their own computers more than they trust other people, and especially other AI. We already know we can't face the future onslaught of information with the old methods; we need help. User-controlled AI is going to be our first line of defence and our safe space, "a room of one's own" where there is no tracking and no thought policing.
With the advent of large language models we already have that, indirectly: the LM is a synthesis of everything, but we let it generate only conditioned on our intentions. All the viewpoints are in there for us to reach; it depends on us how we relate to them.
AI should be like a cell membrane separating outside from the inside. It should keep the bad stuff out, take the necessary nutrients and nurture the life within.
Then maybe we should reduce the scale of the megaphones. Current networks are just unpleasant because it's mass hysteria on a global level. Scale isn't everything.
I think future forum owners will be continuously fine-tuning their AI filters to fit their community standards.
I follow really interesting people for their thoughts on one topic (say, movies) whose tweets about other topics, like politics, I really dislike (not necessarily things I disagree with; usually I just dislike the tone).
Shouldn't be too hard to fine-tune an LLM to do this; definitely worth a try.
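It might not even need fine-tuning: a zero-shot prompt to whatever model is handy could get most of the way there. A hedged sketch, where classify() is a hard-coded stand-in for the actual model call so the snippet runs as-is:

    # FILTER_PROMPT and classify are illustrative; classify is hard-coded
    # so the sketch runs, and would be replaced by a real model call.

    FILTER_PROMPT = (
        "You are a feed filter. The user follows this account for posts "
        "about movies and wants to skip posts about politics or posts with "
        "a hostile tone. Answer with exactly KEEP or SKIP.\n\nPost: {post}"
    )

    def classify(post: str) -> str:
        # Stand-in for sending FILTER_PROMPT.format(post=post) to a model.
        return "SKIP" if "politic" in post.lower() else "KEEP"

    def keep_post(post: str) -> bool:
        return classify(post).strip() == "KEEP"

    posts = ["Loved the new Villeneuve film", "Politicians are all idiots"]
    print([p for p in posts if keep_post(p)])  # drops the politics rant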
I don't think a discussion between carefully vetted participants needs to be moderated. We reach for moderation too quickly these days. Of course, if someone goes crazy they should be banned, but in less extreme scenarios a well-chosen group should behave, just like it's not necessary to moderate a social gathering in real life.
What is the underlying purpose of an open forum? Is it the openness itself, or something that openness allows? I think it’s the latter: allowing people with common interests who wouldn’t otherwise connect in real life to interact meaningfully.
If that’s the case, 100% open might not be optimal if 99.9% open yields better results for the resulting community.
Different forums can have different purposes. Personally I prefer the public kind, which exposes me to different ideas. Common interests I prefer to discuss in real life.
The purpose of the internet hasn't been to be an open forum since sysadmins were blocking the talk.* hierarchy and kicking entire nodes off of Usenet for spam or abuse.