> If they were to give out their evidence, then their evidence collection methods would become known and would no longer be effective.
I would like to dispute this. Of course there is a cat-and-mouse game between popular online services and fraudsters, but the argument "if we reveal the methods we use to spot them, they'll no longer be effective" is flawed. Sure, secrecy helps a little, but after some time many of these methods become public knowledge anyway.
I know if I like too many photos on Instagram, they will block me temporarily, and if I repeat it within a certain period, they can ban me for a few days, and so on. Having these thresholds and other rules spelled out would be helpful to users: they would know what to avoid, and if they misbehave, they can rightfully be punished. Dealing blows out of thin air is simply unfair.
It's also a rather unconvincing argument when there are so many blatant instances of service abusers getting away with it on platforms that can afford very talented employees. In short, whatever it is they're doing is already quite ineffective. In theory it could become a little less effective if we knew what they were doing, but it's also possible that they could be a lot more effective if they changed their approach and were transparent about it. A hierarchical reputation system (vouching or invite-style) would solve many issues in many domains, for instance; its main downside shows up during hyper-growth phases, where you need onboarding to be as frictionless as possible. But for a big established company like eBay, I think requiring a new account to be vouched for by an existing account, which takes on some risk if the new one turns abusive, would be quite doable.
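Here's a minimal sketch of what I mean, assuming a simple numeric reputation score; the names, thresholds, and penalty are all made up, not anything eBay actually does:

```python
class Account:
    def __init__(self, name, voucher=None):
        self.name = name
        self.reputation = 1.0
        self.voucher = voucher  # the existing account that vouched for this one

def register(name, voucher):
    """New accounts must be vouched for by an account in good standing."""
    if voucher.reputation < 0.5:
        raise ValueError(f"{voucher.name} lacks the reputation to vouch")
    return Account(name, voucher=voucher)

def flag_abusive(account, penalty=0.4):
    """Zero out an abusive account and, to a lesser degree, ding its voucher."""
    account.reputation = 0.0
    if account.voucher is not None:
        account.voucher.reputation -= penalty  # vouching carries real risk

# A trusted account vouches for a newcomer who later turns abusive.
alice = Account("alice")
mallory = register("mallory", voucher=alice)
flag_abusive(mallory)
print(alice.reputation)  # 0.6 -- alice shares the cost of her bad vouch
```

The point is the incentive structure, not the exact numbers: an account that vouches carelessly burns its own standing, so abuse chains stay shallow.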
At least in your IG example the ban is finite. I don't want the law to be used so bluntly, but I'd really prefer that all bans be time-limited, even if only technically, where exponential scaling for repeat offenses pushes the duration past expected human lifespans.
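To make "technically time-limited" concrete, here's a toy version; the one-day base and the doubling factor are invented parameters:

```python
def ban_duration_days(offense_count, base_days=1, factor=2):
    """Each repeat offense multiplies the ban duration: always finite,
    but it quickly exceeds any human lifespan."""
    return base_days * factor ** (offense_count - 1)

for n in (1, 3, 10, 25):
    print(n, ban_duration_days(n))
# offense 25 -> 16,777,216 days, roughly 45,000 years: finite on paper only.
```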
> I know if I like too many photos on Instagram, they will block me temporarily, and if I repeat it within a certain period, they can ban me for a few days, and so on. Having these thresholds and other rules spelled out would be helpful to users
It would be far more helpful to spammers, who could then set all their bots to send threshold - 1 likes and invitations, than to the average user, who rarely considers liking enough stuff to trigger the limit (and who can take the hint and ease off if they do get a warning). Plus, in practice it's probably not just a simple threshold but a function weighted by timing, topics, and relatedness of accounts, completely unintelligible to the average person (but potentially informative to more advanced spambot developers).
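For example, a hypothetical scoring function along those lines; the features and weights here are invented, not anything Instagram has disclosed:

```python
import math, time

def like_score(events, now=None):
    """Score a burst of likes: recent, same-topic likes on closely related
    accounts count far more than scattered organic ones."""
    now = now or time.time()
    score = 0.0
    for ts, topic_overlap, account_affinity in events:
        recency = math.exp(-(now - ts) / 3600)  # decays over roughly an hour
        score += recency * (1 + topic_overlap + account_affinity)
    return score

now = time.time()
burst = [(now - i, 0.8, 0.9) for i in range(50)]          # bot-like burst
spread = [(now - i * 1800, 0.1, 0.0) for i in range(50)]  # organic, spread out
print(round(like_score(burst, now)), round(like_score(spread, now)))  # ~134 vs ~3
```

Publishing "the threshold" for something like this wouldn't mean much to an ordinary user, but it hands spambot authors a loss function to optimize against.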
Do you not think these limits are being tested and shared already? I ran into a temporary IG ban when unfollowing a number of people. When I searched for answers, the limits were being discussed everywhere.
Before bug bounty programs, this was the reason given for not disclosing security issues. All it did was keep the issues underground and unfixed, allowing security bugs to persist forever.