IANAL either, but my guess is that itch.io has precisely 0 plausible legal recourse here.

The strongest case would be something along the lines of breach of contract via the domain registrar, but your standard internet contract has a term in it that amounts to "we get the right to fuck you", which presumably applies here, so no breach of contract actually exists. That also kills every claim that depends on a breach of contract, so tortious interference is also dead.

Fraud will fail because, at the very least, itch.io itself isn't the party being defrauded. Business disparagement, and anything else along the lines of defamation, is going to fail because you need something like actual malice--specific knowledge of falsity--and that's essentially impossible to prove without somebody admitting they knew all along that it was false.

Tortious interference is dead for several reasons. First, you need an underlying breach of contract, which, as detailed above, probably doesn't exist. Next, you need specific knowledge of the contract being broken. Finally, you need intent: it's not "I did something that caused the contract to be broken", it's "I did something to cause the contract to be broken." Outside of somebody jumping up and down shouting "I'm tortiously interfering with your contracts," it's basically impossible to prove tortious interference.

> you need something like actual malice--specific knowledge of falsity

IANAL, but as I understand it, the definition of actual malice also includes "reckless disregard for the truth". I'm sure a good lawyer could argue that not having human lawyers review, investigate, and confirm computer-generated abuse reports before sending them to outsiders constitutes a reckless disregard for the truth.


A lawyer might argue that, but it's not going to be a compelling argument. Recklessness is generally a conscious disregard of the consequences; as applied to defamation-like claims, it's generally read as "you specifically voiced doubts about the truth". Absent internal complaints along the lines of "the accuracy of these things is total shit", failing to vet automated abuse reports is at best negligence (and I'm dubious even of that: given the nonbinding nature of abuse reports, it's not clear there is a duty of candor in them that one could be negligent of).

Knowing that you'll absolutely generate false positives and not providing a way to automatically fix them would be a deliberate action, not recklessness or negligence. I wouldn't be surprised to find code that pings staff based on account size.

It seems like providers could remove themselves from the situation by just giving their clients a "fair use!" response button?
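Purely as a sketch of what that might look like (hypothetical names and thresholds, not any real provider's code), assuming each report carries an account size and a dispute flag:

    # Hypothetical abuse-report triage: escalate big accounts to staff,
    # and let a "fair use!" dispute pause any automated forwarding.
    from dataclasses import dataclass

    STAFF_REVIEW_FOLLOWER_THRESHOLD = 100_000  # assumed cutoff for a "big" account

    @dataclass
    class AbuseReport:
        target_account: str
        follower_count: int
        claim: str
        disputed_as_fair_use: bool = False

    def triage(report: AbuseReport) -> str:
        """Decide what happens to an automated abuse report."""
        if report.disputed_as_fair_use:
            # The "fair use!" button: a dispute forces human review
            # instead of an automated takedown.
            return "hold_for_human_review"
        if report.follower_count >= STAFF_REVIEW_FOLLOWER_THRESHOLD:
            # Large accounts get pinged to staff rather than auto-forwarded.
            return "notify_staff"
        return "auto_forward_to_registrar"

    if __name__ == "__main__":
        r = AbuseReport("example-creator", 250_000, "alleged trademark misuse")
        print(triage(r))                 # notify_staff
        r.disputed_as_fair_use = True
        print(triage(r))                 # hold_for_human_review

The point being: if the pipeline already branches on account size, adding a dispute branch that stops the automation is cheap, which is what makes shipping without one look deliberate rather than merely sloppy.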



