
Isn't this within the context of hacking a system, rather than making changes to a system you yourself own?


From: https://www.lemonde.fr/pixels/article/2025/02/07/le-parquet-...

Deepl translation of the relevant part:

> At the heart of this investigation lies a legal innovation. Mr. Bothorel's alert is largely based on a recent analysis published on February 6 by legal scholar and law professor Michel Séjean. In the specialist journal Dalloz, he argues that under French law, distorting the operation of a recommendation algorithm on a social network can be punishable by the same penalties as computer hacking. According to this analysis, manipulating a platform's algorithm without the users' knowledge would be punishable under Article 323-2 of the French Penal Code, which punishes “hindering or distorting the operation of an automated data processing system”.


Thanks for the context.

(Machine translation also)

> Obstructing or distorting the operation of an automated data processing system is punishable by five years' imprisonment and a fine of €150,000.

I would be really worried if that got applied to people working on systems they own. Take it down because of an issue? Obstruction. Make a change? Distortion.


No need to be worried about that. French law (like most others I presume?) is all about intent.

e.g. to be convicted of trespassing, it has to be proven you knew you were trespassing, or at least that you reasonably should have known.

So no, you wouldn't be convicted because you accidentally took down your own system, etc.

At the end of the day, regardless of whether the letter of the law allows it, what is clearly being investigated here is a supposed (and somewhat documented) intent to influence the French people through a distortion of the Twitter/X algorithm.

Until now, all the "social media" platforms have essentially been regulated like hosting services, under the assumption that they have a fairly neutral stance toward the content they host. Hence they're not directly held responsible for what they display.

But if it turns out their algorithms aren't so neutral, it raises the question of whether they should be regulated like legacy media, and hence held responsible for what they publish.


I think this is the plan here. I don't know yet whether I agree or disagree with it, but here's roughly what I think will happen if the investigation finds proof of active partisanship in the algorithm (or just of boosting Elon's tweets):

Criminal intent probably won't be established (or not with enough evidence), so this investigation won't result in a prosecution. However, depending on the findings, Twitter might have to be considered a publisher rather than a hosting platform, which would make Twitter liable for the user content it publishes.

Once Twitter is considered a publisher, all hell breaks loose for other algorithm-based social media companies.


> Once Twitter is considered a publisher, all hell breaks loose for other algorithm-based social media companies.

As it should. They have stretched the excuse of "just hosting" past the breaking point.


I don't have any particular legal expertise, but yeah, that's my understanding too.



