Do we really want corporations enforcing unconfirmed reports? If the legal system can't handle the situation, why should we expect a private corporation to?
Being arrested and convicted of a crime is a much higher bar than what is required to ban somebody.
I absolutely want private companies to curate their community of users. This is actively happening, and for some content and jurisdictions it is legally required to happen. If you get a strong signal that someone is a bad actor in your community you should remove them.
I agree, even a handful of reports in a short period could have been orchestrated as a payback.
However, surely you can agree that there is a reasonable line somewhere.
If, over the course of several months, multiple people with seemingly no connection to each other report the same problematic person, then is there ANY reason to not issue a ban?
I feel like "just issue a ban" trivializes the complexity of this: banning one account does basically nothing, because they can just create a new account. Multiple people often share the same name, so you can't ban everyone with that name, and it's trivial to take new pictures, etc.
That leaves, what, asking a private company to run facial recognition scans on all new users? Requiring them to present official government ID, à la the recent EU laws?
If your safety system is "we'll have to wait till this person rapes several women over the course of many months" then it is meaningless to begin with. And they can then create another profile in seconds on any of the dozens of other apps out there. So no one is safer.
The only reasonable line is to act on the first report (and every single one after it) and work closely with the police. But if the victim doesn't want to involve the police, then what can you even do?
Best I can do is act on it and eat a massive defamation lawsuit. If you think the $1 billion Alex Jones was ordered to pay for saying parents lied about their dead kids was a lot, imagine how big the award would be if a company accused, or insinuated, that a bunch of people were rapists based on unvetted reports.
I wonder about these kinds of crime sprees. Is the person wishing to be arrested or something?
A cardiologist's life is not usually falling apart, so I wonder why this sort of madness would be a thing. Are they thinking nobody would believe the women?
Ah, but "sugarpimpdorsey" says that "echo chambers like lobsters and cesspools of the deranged like BlueSky" ban him.
Is it your contention that Lobsters and Bluesky are run by the same or allied cliques? Perhaps it is more likely that someone who chooses that username repeatedly acts in ways that confirm to them that everyone else is a jerk?
>When a young woman in Denver met up with a smiling cardiologist she matched with on the dating app Hinge, she had no way of knowing that the company behind the app had already received reports from two other women who accused him of rape.
This is clearly worse than false positives. They have a big user database that law enforcement does not.
> Even after a police report, it took nearly two months for Matthews to be arrested — the only thing that got him off the apps. By then, at least 15 women would eventually report that Matthews had raped or drugged them. Nearly every one of them had met him on dating apps run by Match Group.
I could understand not banning users, or being too conservative in general, but Match Group bans lots of users without any communication. I know people who were banned without any stated reason, and you can see plenty of similar reports on Reddit. So they could probably just automate banning on even a single report.
I feel like "auto-ban on a single report" gets weaponized as soon as people figure it out, and just encourages people to get better at creating alt accounts to evade the bans?