> I think "Right problem, wrong solution". The reason people mock bad ideas on Twitter is probably because Twitter has a character limit. Refuting bad ideas often takes longer than 280 characters.
It's not even the right problem.
The problem is not mocking people, disagreeing with them, or whatever (incidentally, the Twitter character limit doesn't preclude external links or tweet threads, which are extremely common, at least on science Twitter). It's that doing so through a retweet (or a boost) is a signal to followers, and that can send hordes harassing a single person: it's very easy for followers to click through the RT/boost and reply, and they won't see that thousands of people have already done exactly that.
And the person at the other end of the pipe has to deal with a flood of burning garbage.
And more generally, that kind of crap is easily weaponisable; Twitter provides few tools to deal with it, and they don't usually enforce their own rules. I don't know that Mastodon is any better.
Fundamentally, social platforms are only good so long as they've not reached their eternal September yet, or are actively prevented from doing so through active and unforgiving moderation — which can have its own issues.
> And more generally, that kind of crap is easily weaponisable; Twitter provides few tools to deal with it, and they don't usually enforce their own rules. I don't know that Mastodon is any better.
The key difference here is that, once we've noticed this lack of context (i.e. "has someone already made the point about $G_OUTGROUP_MEMBER_23049 or $G_OUTGROUP_234 that I'm going to make?"), we can actually change the code of Mastodon, or whatever fediverse server/client, to provide solutions to it (see, for example, Pleroma's "did you type 'open source' when you meant 'free software'?" prompt). Technical measures could be designed and tested on small instances, then scaled up as they are found to work or not.
With birdsite you just throw up your hands and say "well, the corporation doesn't think it's a priority. ¯\\_(ツ)_/¯ "
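The kind of client-side measure described above could be sketched as a pre-post hook that checks how many replies a status already has and warns the would-be replier before they pile on. A minimal sketch, assuming hypothetical names (`should_warn_before_reply`, `REPLY_FLOOD_THRESHOLD`, `compose_reply`) — this is not an actual Mastodon or Pleroma API, just an illustration of the idea:

```python
# Hypothetical client-side "dogpile check": before a user posts a reply,
# look at how many replies the target status already has and, past a
# threshold, show a confirmation prompt instead of posting silently.
# All names here are invented for illustration.

REPLY_FLOOD_THRESHOLD = 50  # instances could tune this per their norms


def should_warn_before_reply(reply_count: int,
                             threshold: int = REPLY_FLOOD_THRESHOLD) -> bool:
    """Return True if the client should interrupt the reply flow with a
    'thousands of people have already said this, post anyway?' prompt."""
    return reply_count >= threshold


def compose_reply(reply_count: int, text: str) -> str:
    """Simulate the compose flow: either post directly or surface a warning."""
    if should_warn_before_reply(reply_count):
        return (f"WARNING: this post already has {reply_count} replies. "
                f"Are you adding anything new? Draft held: {text!r}")
    return f"Posting: {text!r}"
```

Like Pleroma's wording prompt, this doesn't forbid anything; it just restores the context a retweet strips away, and small instances could experiment with the threshold before anything is standardised.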
Is this really such a big problem, especially for the average user, to make it a key feature of the platform?
Sure, occasionally an otherwise obscure person might find themselves on the receiving end of an unexpected shitstorm, but otherwise it generally affects more prominent users, who are well aware that anything mildly controversial can turn into lots of angry replies.
Is that even a real problem? Isn't it actually valuable information to know that people get really angry about this-and-that? Aren't researchers able to use this data to learn something about human behavior?
Seems to me like people want to blame technology for something that's really just amplified human nature. A rehashing of the old story of the "basically good" human corrupted by some foreign evil.
> Sure, occasionally an otherwise obscure person might find themselves on the receiving end of an unexpected shitstorm, but otherwise it generally affects more prominent users, who are well aware that anything mildly controversial can turn into lots of angry replies.
And that's supposed to be a good or even neutral thing… how?
> Is that even a real problem?
Very much so.
> Isn't it actually valuable information to know that people get really angry about this-and-that?
Not really, and that isn't even relevant to the issue of it actively harming people.
> Seems to me like people want to blame technology for something that's really amplified human nature. A rehashing the old story of the "basically good" human corrupted by some foreign evil.
You're the only one talking about a "basically good" human. The amplification of human nature is the very issue at hand: "human nature" did not evolve in this context, and while it may work at small scales, it demonstrably does not at the scales we're dealing with. As a result, it is ethically and functionally necessary for the tool to mitigate human nature, since the tool is what amplifies it to outright harmful levels.
It might be human nature and interesting from some perspective, but that doesn't mean it's good or fun to be a part of. It may be human nature for everyone in Thunderdome, where a sign says "only one man leaves alive", to choose to fight brutally to the death, but that doesn't mean we should continue or promote hanging out in Thunderdome over alternatives that encourage other parts of human nature instead.
> Sure, occasionally an otherwise obscure person might find themselves on the receiving end of an unexpected shitstorm, but otherwise it generally affects more prominent users, who are well aware that anything mildly controversial can turn into lots of angry replies.
People often join social networks to follow, interact with, or become prominent people, so it is a problem for them if all of those prominent people are continually sacrificed.