Why are these service providers being punished for what their users do? And why these service providers specifically? Google, Discord, Reddit, etc. all contain some amount of CSAM (and other illegal content), yet I don't see Pichai, Citron, or Huffman getting indicted for anything.
Hell, then there are the actual infrastructure providers too. This seems like a slippery slope with no defined boundaries, which the government can arbitrarily use to pin the blame on the people they don't like. Because ultimately, almost every platform with user-provided content will have some quantity of illegal material.
Dotcom got extradited (which was declared legal much later). Durov landed in a country that had an arrest warrant out for him.
I hope his situation isn't similar to Dotcom's, as Dotcom was shown to be complicit in the crimes he was being prosecuted for. Convicting the Megaupload people would've been a LOT harder if they hadn't been uploading and curating illegal content on their platform themselves.
As a service provider, you're not responsible for what your users post as long as you take appropriate action after being informed of illegal content. That's where they're trying to get Telegram, because Telegram is known to ignore law enforcement as much as possible (to the point of doing so illegally and getting fined for it).
> the operators of the messenger app Telegram have released user data to the Federal Criminal Police Office (BKA) in several cases. According to SPIEGEL information, this was data from suspects in the areas of child abuse and terrorism. In the case of violations of other criminal offenses, it is still difficult for German investigators to obtain information from Telegram, according to security circles.
> two popular chat services have accused each other of having undisclosed government ties. According to Signal president Meredith Whittaker, Telegram is not only “notoriously insecure” but also “routinely cooperates with governments behind the scenes.” Telegram founder Pavel Durov, on the other hand, claims that “the US government spent $3 million to build Signal’s encryption” and Signal’s current leaders are “activists used by the US state department for regime change abroad.”
I suspect most Tor exit nodes are controlled by the US government and/or its allied governments. It doesn't make much sense for anybody else to run an exit node because your IP gets banned by much of the internet and you get unwanted visits from law enforcement.
"What kinds of people operate tor exit nodes and why do they do it" is one of those questions that I know I'm not even supposed to admit being curious about, let alone ask, in the company of people who are most capable of accurately answering.
According to the more detailed news sources I can find about this, it seems he knew the French were looking for him. I don't know if he knew about the contents of the warrant, but it does seem he knew the authorities were planning to arrest him.
From what I can tell the warrant has been out for longer, but he was arrested when the airport police noticed his name was on a list. There's not a lot of information out there, with neither the French authorities nor Telegram providing any official statements to the media.
The Sud-Ouest article must have been updated, because the version currently online does not mention that at all. Quite the opposite: the article quotes an official who was surprised that Durov would come to Paris even though he knew he was under an arrest warrant in France, and another source says that he might have decided to come to France anyway because he believed he'd never be held accountable.
Really? We seal warrants in the US all the time - we don't want people we are trying to apprehend to always know ahead of time that we are trying to apprehend them.
You're somewhat mistaken. In the U.S., you aren't owed a warning that the cops are looking for you, especially if you're a flight risk. That was never part of it.
There are also valid reasons the other way, like being able to consult an attorney to challenge the warrant or prepare a defense before it gets executed, disrupting your life and leaving you unable to clear your name because you're being incarcerated without bail. It's hard to investigate the charges against you from a cell.
Or the ability of journalists to inform the public of what the government is doing in their name. If the government is investigating its critics, it has no right to keep that a secret.
That inconvenient Bill of Rights keeps us a step or two behind the rest of the anglosphere in the descent into tyranny, but only for so long. It just takes a handful of dishonest judges to claim some right actually means something entirely different.
> Because in your eyes it is so gradual the difference between it's happening slowly and not happening at all is imperceptible and impossible to prove.
It's extremely straightforward to prove. You look at the laws that have been passed and the court opinions issued in the last 30-60 years.
Fuck around and find out. If he legitimately ignored legal French documents forcing him to share information, as the French have declared, he's got got.
You don't set foot in a country with an extradition treaty, much less the country itself, when you're flouting their warrants for your company's data.
Despite having lots of treaties agreeing to extradition in principle, the UAE is somewhat notorious for never extraditing anybody anywhere in practice.
1) There was an order signed recently. He has not physically left NZ yet.
2) He's not convicted; he hasn't been in front of a judge for the charges against him.
> Convicting the Megaupload people would've been a LOT harder if they hadn't been uploading and curating illegal content on their platform themselves.
This is just a gimmick to bamboozle judges and the public. The ploy is to claim that someone is guilty of serious offense A because you proved they committed less serious offense B, even though the offenses have different elements and penalties.
They use the ploy because any large organization by definition has a lot of people in it, and copyright infringement is pretty common, so by the law of large numbers somebody in the company is probably doing it even if the company doesn't want them to. Then the prosecutors claim that the company as a whole is doing something wrong and has to be shut down. Which doesn't make any sense when another company is just going to provide the same perfectly legal service and the users are going to use it for the exact same thing.
Moreover, the obvious way for companies to prevent this -- indeed, the thing Megaupload's replacement started doing after the original was shut down -- is to encrypt everything so their employees have no access to it. Which I have no objection to, but if courts and prosecutors like to be able to issue a subpoena and actually get something back, they might want to reconsider turning the ability of a company to access data into a liability.
Watch his interview with Tucker Carlson and you’ll see. He doesn’t acquiesce to government requests for moderation control, censorship, and sharing private user data so they target him. He refuses to implement backdoors as well. In stark contrast to western social media companies.
When an authoritarian govt is calling for the release of someone who runs a "private" messenger, it suggests they have a back door. Otherwise they tend to oppose all private messaging.
No, there is no logical link between the two events. Russian govt can protest that for propaganda reasons: to make a point that Western governments are restricting freedom of speech.
They're hitting that Uno Reverse card. Tbf, the US does a LOT of the stuff that we openly criticize Russia and China for. I would hope people have enough insight to recognize that this is a bad thing across the board. The only people who get hurt and face consequences from this kind of thing are the citizens.
This is a key perspective people fail to take into account. We've been conditioned by movies, books etc to think everyone fits into these black and white "good and bad" categories.
Most western countries do horrific things we do not find acceptable, but when we do find out we hand wave it away because they're the "good guys".
They don't tend to care until large enough quantities of people start listening despite whatever filters (e.g. de-ranking social media posts) and countermeasures (e.g. cable news assets) are put in place before it gets to that point. Then they very likely have the ability to label it as misinformation and find a legal reason to prosecute under a number of broad categories: https://www.thefederalcriminalattorneys.com/false-informatio...
It came very close to this during Covid, and maybe once or twice since then.
You're free to say what you want, and everyone is free to ignore you if what you say doesn't jibe with "common sense".
No. What would be illogical is to assume that because Russia might be motivated to protest for the sake of propaganda, it is not also, or instead, motivated by not wanting to lose access to a hypothetical backdoor.
I don't completely buy that he was arrested because he didn't cooperate with authorities. Police forces around the world have a history of infiltrating criminal groups and gaining their trust; planting backdoors isn't the only way they can investigate people.
Also, this way they're loudly telling these people: "hurry! pick another platform!"
And then, he is also on Putin's wanted list; his arrest could one day turn him into a valuable bargaining chip.
Also now they have added "because people watch football matches illegally on Telegram". So they are going to throw everything and the kitchen sink at Durov, probably also national security issues, because anti-French political groups use Telegram in Africa.
It is still not a backdoor; sorry, you are completely mistaken.
They came - tried to come - in openly through the front door ("back door" means something completely different; just look it up and you will see) to catch criminals engaged in well-known and prominent criminal activity, but Telegram decided to protect the criminals instead. You can try to read whatever imaginative reasons into it, but the reason is right in front of your face, whether you like it or not! And however much people like Telegram because "it is so user friendly and pretty", that does not weigh against the serious crimes committed and aided there, not at all!
Also, it is still the investigative phase, but the suspicion is completely warranted.
I seriously do not understand people of low morals shielding those who help criminals. Do you really not know what you are doing, seriously, just because there is a (misleadingly presented) popular service involved? Really? The moral state of the social media user masses is very worrying.
Telegram publishes open-source clients that can run on open-source platforms. Signal does not offer any client that doesn't depend on proprietary code (either iOS or Google Play Services) and is aggressive about taking down third-party builds that remove that dependency. I'd say there's a lot more reason to assume Telegram is not wilfully backdoored than Signal (though I'd trust Wire or Matrix ahead of either of them).
We have no real way to check for backdoors in Signal either. Signal is not transparent about what code their servers are running, and you are not allowed to start your own server with a known version. They do not allow for independent distribution of reproducible builds on F-droid, or any other application store that does not identify you. They will take steps to lock out any independent implementations of the client from their servers. That the code for their client is released is good, but not good enough.
Huh, I was going to point out that the Signal server isn't Free Software either, since for a while it wasn't being published, but it seems they have gotten back into publishing it.
While it's great that they keep maintaining it, as the person mentioned further down the thread, it's hard to know what they are actually running, right? And it's not a lot of work to patch it or clone/branch as necessary before deploying. Oh well, I've already resigned myself to a part of my life being run by someone else by now.
Publishing server code provides no assurance of anything (although it is still nice, for other reasons) since nobody can know if what they (for any "they") run in production is the same as the public source.
Open client code and documented protocols are much more important. If you can compile your own client from the open source code and it works fine, then you can know for sure what you're sending to the server.
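To make that concrete, here's a minimal sketch of the check that reproducible builds enable (file names hypothetical): build the client yourself from the published source, then compare your artifact byte-for-byte against the one being distributed.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical artifacts: one compiled locally from the published
# source, one downloaded from the official distribution channel.
mine = sha256_of("client-built-locally.apk")
theirs = sha256_of("client-from-store.apk")
print("builds match" if mine == theirs else "MISMATCH: store binary differs from source build")
```

If the build is reproducible, matching digests tell you the distributed binary is exactly the published source; without reproducibility, the comparison proves nothing either way.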
If you bothered to look, you would find that both of the examples given are open-source servers. You might then deduce that you misunderstood the comment to which you replied.
You cannot audit the system/service logs for those servers, nor can you audit the hardware running those servers, nor the internet providers who can snoop on the traffic, etc. That's the argument behind "open source server", in case it wasn't clear.
This might be where the misunderstanding is. This software is indeed server software that anyone can run, and the global network consists of servers run by many independent entities, in many cases with full control of the hardware. One of these entities can be you, and it is completely possible to run from home.
The integrity of your conversation with someone would then depend on both your endpoints, clients, and the respective server.
Just like email, but for chat. There is no single gatekeeper who is allowed to use the network.
No misunderstanding at all. The argument is very clear.
> global network consists of servers run by many independent entities
This is not the case for all the popular chat apps, including Signal, which uses centralized servers that they run themselves. They clearly see little benefit in this distributed independent server model.
And even that doesn't mean the server is open source.
As I explained earlier if you cannot audit the physical server you are connected to, claiming it's open source is useless. FYI that's literally how the term open source was used in this context!
> The integrity of your conversation with someone would then depend on both your endpoints, clients, and the respective server.
Client-to-client verification simply works and eliminates the need to also "verify" the server, which, if compromised, introduces an even higher risk of contamination in the trust model (too many co-dependent functions are delegated to the server), not to mention collusion in establishing the integrity of yet another device that we need to trust.
Not sure what part of my comment amused you so much.
An IM platform server can be open sourced. Just like any kind of software.
It's just a matter of publishing your code and, preferably, making it possible to verify that the service your users are connecting to is built using the same published code.
How could you possibly verify what code they are running server-side?
Typically, the way it goes is that you implement e2ee such that even a fully compromised server cannot read the clients' messages, publish the client's source code, and build it yourself or use reproducible builds. That last part is where you can criticize Signal. Whether they publish the server code is mostly irrelevant unless you want to run a separate messenger infrastructure.
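As a minimal sketch of that property (using PyNaCl; names hypothetical): the server below only ever relays ciphertext and holds no key material, so whether its code is published changes nothing about confidentiality.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each client generates a keypair; private keys never leave the device.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")

# The server merely forwards opaque bytes; "auditing" it would add
# nothing, since it never sees keys or plaintext.
relayed_by_server = bytes(ciphertext)

# Bob decrypts with his private key and Alice's public key.
assert Box(bob_sk, alice_sk.public_key).decrypt(relayed_by_server) == b"meet at noon"
```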
> unless you want to run a separate messenger infrastructure.
Or if you S2S federate with the upstream server. Which is a core differentiator of XMPP and Matrix. Signal server(s) notably supported proper federation during their initial growth-phase but famously closed it off ("The ecosystem is moving").
Similar story as Google [Chat/Talk/Hangouts], which did federate over XMPP before they closed that down years ago.
Which government? There were a lot of mysterious deanons of protesters in Belarus in 2020. You know, the kind of deanon where armed people break down your door and you're going to be beaten and tortured for several days at the very least.
In practice it is very easy to deanon using social engineering.
It is enough to open a shared link to expose your IP. A lot of people would click something like "Belarusian protesters got deanonymized" or "10 ways to keep yourself safe" in a group chat. Just give it a catchy title. And the link is specially crafted to lead to the exposer's server.
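To illustrate how little machinery that takes (a generic sketch, not tied to any real campaign): every web server sees the visitor's IP on each request, so the link only has to be opened once.

```python
# A plain web server already "deanonymizes" to the IP level:
# every request carries the visitor's address.
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record who opened the link and what they requested.
        print(f"visit from {self.client_address[0]} for {self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"article not found")

HTTPServer(("0.0.0.0", 8080), LoggingHandler).serve_forever()
```

Unless the visitor routes through a VPN or Tor, the logged address is theirs.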
Who would watch an interview being held by a crazy person and take it at face value? Anyone with half a brain would avoid watching or listening to Tucker Carlson like the plague.
This distinction gets lost in these discussions all of the time. A company that makes an effort to comply with laws is in a completely different category than a company that makes the fact that they’ll look the other way one of their core selling points.
Years ago there was a case where someone built a business out of making hidden compartments in cars. He did an amazing job of making James Bond style hidden compartments that perfectly blended into the interior. He was later arrested because drug dealers used his hidden compartment business to help their drug trade.
There was an uproar about the fact that he wasn’t doing the drug crimes himself. He was only making hidden compartments which could be used for anything. How was he supposed to know that the hidden compartments were being used for illegal activities rather than keeping people’s valuables safe during a break-in?
Yet when the details of the case came out, IIRC, it was clear that he was leaning into the illegal trades and marketing his services to those people. He lost his plausible deniability after even a cursory look at how he was operating.
I don’t know what, if any, parts of that case apply to Pavel Durov. I do like to share it as an example of how intent matters and how one can become complicit in other crimes by operating in a manner where one of your selling points is that you’ll help anyone out even when their intent is to break the law. It’s also why smart corporate criminals will shut down and walk away when it becomes too obvious that they’re losing plausible deniability in a criminal enterprise.
What do you mean "look the other way?" Does the phone company "look the other way" when they don't listen in to your calls? Does the post office "look the other way" when they don't read your mail?
That guy who built the hidden compartments should absolutely not have gone to jail. The government needs to be put in check. This has gotten ridiculous.
If the police tell them illegal activity is happening and give them a warrant to wiretap and they are capable of doing so but refuse then yeah they’re looking the other way. That’s not even getting into things like PRISM.
If you know your services are going to be used to commit a crime, then yes, that makes you an accessory and basically all jurisdictions (I know basically nothing about French criminal law) can prosecute you for that. Crime is, y'know, illegal.
I'm appalled that you would argue in good faith that a tool for communicating in secret can be reasonably described as a service used to commit a crime.
Why aren't all gun manufacturers in jail then? They must know a percentage of their products are going to be used to commit crimes. A much larger percentage than those using Telegram to commit one.
> I'm appalled that you would argue in good faith that a tool for communicating in secret can be reasonably described as a service used to commit a crime.
The usual metaphor is child pornography, but let's pick something less outrageous: espionage. If a spy uses your messaging platform to share their secrets without being detected & prevented, that's using the service to commit a crime. Now, if you're making a profit from said service, that doesn't necessarily make you a criminal, but if you start saying "if spies used this platform, they'd never be stopped or even detected", that could get you into some serious trouble. If you send a sales team to the KGB to encourage them to use the platform, even more so.
Gun manufacturers have repeatedly been charged with crimes (some are currently in court). I'd argue that messaging platforms have, historically, been less likely to be charged with crimes.
The Second Amendment gives weapon makers some extra protection in the US, but they do have to be very careful about what they do and do not do in order to avoid going to jail.
> They must know a percentage of their products are going to be used to commit crimes. A much larger percentage than those using Telegram to commit one.
Do you have the stats on that? I don't, but I'm curious. While I don't doubt the vast majority of people using Telegram aren't committing a crime, I know that the vast majority of people using guns also aren't committing a crime.
> I'm appalled that you would argue in good faith that a tool for communicating in secret can be reasonably described as a service used to commit a crime.
That's because you're assuming facts not in evidence and painting the broadest possible argument. Obviously we don't know the details yet, but it's not unlikely that this situation was a bit more specific.
Consider:
F: "We want you to give us the chat logs of this terrorist"
T: "OK!"
F: "Now we need you to give us the logs from this CSAM ring"
T: "No! That's a violation of their free speech rights!"
You can't put your own moral compass in place of the law, basically. That final statement is very reasonably interpreted as obstruction or conspiracy, where a blanket refusal would not be.
You are right; the arrest might be legal and even morally justifiable.
However, I still argue that wanting to provide secret communication (which Telegram actually doesn't do) is not abetting crime or helping it more than any other product.
In fact, in my humble opinion, it's the opposite: Private communications are a countermeasure against the natural tendency of governments to become tyrannical, and thus maintaining one is an act of heroism.
> Private communications are a countermeasure against the natural tendency of governments to become tyrannical, and thus maintaining one is an act of heroism.
That's an easy enough statement in the abstract, but again it doesn't speak to the case of "Durov knowingly hid child porn consumers from law enforcement", which seems likely to be the actual crime. If you want to be the hero in your story, you need to not insert yourself into the plot.
The answer to this charade is that to "prove" that you're not doing anything wrong you need to secretly provide all data from anyone that the government doesn't like. Otherwise you go to jail.
If this were really true for banks, there would be a large number of bankers in jail. That number being close to zero, I guess the courts are very lax about charging bankers with crimes.
Banks are a terrible example for this thread's argument. Banking is essentially the end result of what happens when businesses kowtow to the invasive demands of the government, implementing ever-more invasive content policing and becoming de facto arms of the bureaucratic state.
A bank will drop you if they even think you might be doing something (demonstrably on paper) illegal. When opening an account, some of the very first questions a bank asks you are "where did you get this money" and "what do you do for work" - proactively making you responsible for committing to some type of story. All of the illegality you're trying to reference is happening under a backdrop of reams of paperwork that make it look like above board activity to compliance departments. Without that paperwork when shit does hit the fan, people working at the bank do tend to go to jail. But with that paperwork it's "nobody's fault" unless they manage to find a few bank employees to pin it on.
Needless to say, this type of prior restraint regime being applied to free-form communication would be an abject catastrophe.
Banks do a massive amount of tracking and flagging. Even putting a joke “for drugs” in a Venmo field can cause issues. Plus reporting large transactions. There was a massive post on HN yesterday about how often banks close startup accounts due to false positives.
> the real criminals continue doing their business everyday
Any source for that? Media loves to blame banks for everything, but when you go into the details it always seems pretty marginal (e.g. the HSBC Mexico stuff).
It cannot be marginal, because drug trafficking, just as an example, moves billions of dollars every year. They certainly have schemes, and someone in the banking system must be complicit in those schemes. Every time the officials uncover one of these schemes, banks are miraculously not charged with anything, and they don't even give back the profits of the illegal operation.
If you provide a service that is used for illegal behavior AND you know it’s being used that way AND you explicitly market your services to users behaving illegally AND the majority of your product is used for illegal deeds THEN you’re gonna have a bad time.
If one out of ten thousand people use your product for illegal deeds you’re fine. If it’s 9 out of 10 you probably aren’t.
> If one out of ten thousand people use your product for illegal deeds you’re fine.
This logic clearly makes the imprisonment of someone like the owner of Telegram difficult to justify, since 99.999% of messages on Telegram are completely legal.
If 10,000 people out of 10 million are doing illegal things and you know about it or you are going out of your way to turn a blind eye then you’re gonna have a bad time.
Keep in mind that as soon as you store user accounts you keep user data, which is perhaps a trivial form of eavesdropping, but clearly something law enforcement takes an interest in.
Try to deposit 10k to your bank account and then, when they call you and ask the obvious question, answer that you sold some meth or robbed someone. They will totally be fine with this answer, as they are just a platform for providing money services and well, you can always just pay for everything in cash.
And even then you don’t have to tell them it’s illegal. Just what you earned. Frankly they don’t care where it came from as long as you report and pay.
No, you have to specify where it came from. You don't have to say what crime you committed, but you'd list the income under "income from illegal activities".
Suppose you knit mittens and sell them for cash out of your garage. The IRS expects you to report and pay taxes on the income. How do they check that the sum you specified is correct?
Not sure how it works in the US. In Germany you are supposed to have a cash register or issue an invoice for each purchase, and sometimes (though really rarely, given the lack of personnel) they can randomly check whether your reported numbers make sense together.
It's not clear how that sort of thing would even help; it seems like just a trap for the unwary. If you're an honest person selling your mittens and paying your taxes without knowing you're supposed to have a cash register, you could get unlucky and get in trouble for innocuous behavior. If you're a drug dealer, then you get a cash register and ring up all your drug sales as mitten sales. Or, if someone wanted to report less income, they would have a cash register and then use it to ring up less than all of the sales. Whether or not you have the cash register can't distinguish these cases and is correspondingly pointless.
If you are directly aiding and abetting without any plausible attempt to minimize bad actors from using your services then absolutely.
For example, CP absolutely exists on platforms like FB or IG, but Meta will absolutely try to moderate it away to the best of their ability and cooperate with law enforcement when it is brought to their attention.
And like I have mentioned a couple times before, Telegram was only allowed to exist because the UAE allowed them to, and both the UAE and Russia gained ownership stakes in Telegram by 2021. Also, messaging apps can only legally operate in the UAE if they provide decryption keys to the UAE govt because all instant messaging apps are treated as VoIP under their Telco regulation laws.
> For example, CP absolutely exists on platforms like FB or IG, but Meta will absolutely try to moderate it away to the best of their ability
Is this true? After decades now of a cat and mouse game, it could be argued that they are simply incapable. As such, the "best of their ability" would be using methods that don't suit their commercial interests - e.g. verifying all users manually, requiring government ID, reviewing all posts and comments before they're posted, or shutting down completely.
I understand these methods are suicidal in capitalism, but they're much closer to the "best of their ability". Why do we accept some of the largest companies in the world shrugging their shoulders and saying "well we're trying in ways that don't impact our bottom line"?
If you are a criminal lawyer who is providing a defense, that is acceptable, because everyone is entitled to a fair trial and defense.
If you are a criminal lawyer who is directly abetting in criminal behavior (eg. a Saul Goodman type) you absolutely will lose your Bar License and open yourself up to criminal penalties.
If you are a criminal lawyer who is in a situation where your client wants you to abet their criminal behavior, then you are expected to drop the client and potentially notify law enforcement.
> If you are a criminal lawyer who is directly abetting in criminal behavior
Not a lawyer myself but I believe this is not a correct representation of the issue.
A lawyer abetting criminal behaviour is committing a crime, but the crime is not offering his services to criminals, which is completely legal.
When offering their services to criminals, law firms or individual lawyers in most cases are not required to report crimes they have been made aware of under attorney-client privilege, and are not required to try to minimize bad actors' use of their services.
In short: unless they are committing crimes themselves, criminal lawyers are not required to stay clear from criminals, actually, usually the opposite is true.
Are you talking about Brian Steel? He was held in contempt because he refused to name his source that informed him of some misconduct by the judge (ex parte communication with a witness). That's hardly relevant here, the client wasn't involved at all as far as anyone knows.
> any plausible attempt to minimize bad actors from using your service
I mentioned criminal lawyers because their job is literally to "offer their services to criminals or to people accused of being criminals", and they have no obligation whatsoever to minimize bad actors' use of their services; in fact, bad actors are usually their regular clientele, and they are free to attract as many criminals as they like in any legal way they like.
Helping a criminal commit a crime is an entirely different thing, and anyway it must be proved in court; it's not something that can be assumed on the basis of allegations (their clients are criminals, so they must be criminals too).
That's why in that famous TV drama Jessy Pinkam says "You dont want a criminal lawyer, you want a Criminal. Lawyer.".
The premise of this story is that Telegram offers a service which is very similar to safe deposit boxes: the bank is not supposed to know what you keep in there, hence it is not held responsible if the boxes are used for illegal activities.
In other words, most of the time people do not know and are not required to know if they are dealing with criminals; but even if they did, there are no legal reasons to avoid offering them your services other than to avoid problems and/or on moral grounds (which are perfectly understandable motives, but are still not a requirement to operate a business).
Take bars, diners, restaurants, gas stations or hospitals, are they supposed to deny their services?
And how exactly should they take action to minimize bad actors using their service?
If someone goes to a restaurant and talks about committing a crime, is the owner abetting the crime?
I guess probably not, unless it is proven beyond any reasonable doubt that he actually is.
It doesn't matter if it's true or false; it only matters what the justice system can prove.
> The premise of this story is that Telegram offers a service which is very similar to safe deposit boxes: the bank is not supposed to know what you keep in there, hence it is not held responsible if the boxes are used for illegal activities.
This is the issue. Web platforms DO NOT have that kind of legal protection - be it Telegram, Instagram, or Hacker News.
Safe Harbor from liability in return for Content Moderation is expected from all internet platforms as part of Section 230 (USA), Directive 2000/31/EC (EU), Defamation Act 2013 (UK), etc.
As part of that content moderation, it is EXPECTED that you crack down on CP, Illicit Drug Transactions, Threats of Violence, and other felonies.
Also, that is NOT how bank deposit boxes work. All banks are expected to KYC if they wish to transact in every major currency (Dollar, Euro, Pound, Yen, Yuan, Rupee, etc) and if they cannot, they are expected to close that account or be cut off from transacting in that country's currency.
> That's why in that famous TV drama Jessy Pinkam says "You dont want a criminal lawyer, you want a Criminal. Lawyer.".
First, it's Pinkman BIATCH not Pinkam.
And secondly, Jimmy McGill (aka Saul Goodman) was previously suspended by the NM Bar Association barely 5 years before Breaking Bad, and was then disbarred AND held criminally liable when SHTF towards the finale.
At least in the case of Section 230, distributors that do not moderate do not need it, because they do indeed have that kind of legal protection - see Cubby v. CompuServe for an example. Section 230 was created because a provider that did moderate tried to use this precedent in court and its applicability was rejected, and Congress decided that this state of affairs incentivized the wrong kind of behavior.
This is precisely why Republicans want to repeal it - if they succeed, it would effectively force Facebook etc to allow any content.
> This is the issue. Web platforms DO NOT have that kind of legal protection - be it Telegram, Instagram, or Hacker News.
e2e encryption cannot be broken though
> Safe Harbor from liability in return for Content Moderation is expected from all internet platforms as part of Section 230 (USA), Directive 2000/31/EC (EU), Defamation Act 2023 (UK), etc.
I have no sympathy for Durov and I don't care if they throw away the keys, but what about Mullvad then?
I guess that a service whose main feature is secrecy and anonymity should at least provide anonymity and secrecy.
> CP, Illicit Drug Transactions, Threats of Violence, and other felonies
You understand better than me that the request is absurd. All of this is theory; in practice nobody can actually do it for real. The vast majority of illicit clear-text content consists of honeypots created by agents of various agencies to threaten the platforms and force them to cooperate. Nothing's new here, but let's not pretend that this is about preventing crimes.
Also: the allegations against Telegram are that they do not cooperate, but we don't actually know whether they really crack down on CP or other illegal activities, because if they don't, the reasonable thing to do would be to shut down the platform. What does arresting the CEO accomplish? (Rhetorical question: they - I don't want to throw names around, but I think the usual suspects are involved - want access to and control of the content; closing the platform would only deny them access and would create an uproar among the population. Remember when Russia blocked Telegram?)
Also 2: AFAIK Telegram requires a phone number to create an account; it's the responsibility of the provider to KYC when selling a phone number, not Telegram's.
Also 3: safe deposit boxes are not necessarily linked to bank accounts. I pay for a safe deposit box in Switzerland but have no Swiss bank account.
So my guess is the EU wants some way to control the narrative in Telegram channels, where the vast majority of the news regarding the war in Ukraine spreads from the front to the rest of the continent.
> First, it's Pinkman BIATCH not Pinkam.
Sorry. I'm dyslexic and English is not my mother tongue, but the 4th language I've learned, when I was already a teenager.
> was previously suspended by the NM Bar Association
That was the point. TV dramas need good characters, and a criminal lawyer who's also a criminal is more interesting than a criminal lawyer who's just a plain boring lawyer who indulges in no criminal activity whatsoever.
> operating in a manner where one of your selling points is that you’ll help anyone out even when their intent is to break the law
Is that what happened here?
In my view, Durov is like the owner renting out his apartment and not caring what people do inside it, which is not illegal. Someone could go as far as to say that it is morally reprehensible, but it's not illegal in any way.
It would be different if Durov knew but did not report it.
Which, again, doesn't seem to be what happened here, and it must be proven in a court anyway; I believe everyone in our western legal systems still has the right to the presumption of innocence.
Telegram not spying on its users is the same thing as Mullvad not spying on its users and not saving the logs. I consider it a feature not a bug, for sure not complicity in any crime whatsoever.
As far as I can see, CP is probably the fastest way to get a channel and its related account wiped on Telegram, in a very short time. As a Telegram group manager, I often see automated purges of CP-related ads/content, or managers automatically locked out until they clean up the channel/group. Saying Telegram isn't managing its CP problem is just absurd. I really feel like they just created the reason for another purpose.
Read the founder's exit letter. WhatsApp is definitely not e2e encrypted for all features.
You leak basic metadata (who talked to who at what time).
You leak 100% of messages with "business account", which are another way to say "e2e you->meta and then meta relays the message e2e to N recipients handling that business account".
Then there are all the links and images, which are sent e2e you->Meta; Meta stores the image/link once, sends you back a hash, and you send that hash e2e to your contact.
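A sketch of that attachment flow as described above (all names hypothetical; this models the commenter's description, not confirmed internals): the blob reaches the server once, and only a short reference travels over the e2e channel.

```python
import hashlib

blob_store: dict[str, bytes] = {}  # stands in for the provider's server

def upload_once(blob: bytes) -> str:
    """Store a blob under its content hash; duplicates are stored once."""
    ref = hashlib.sha256(blob).hexdigest()
    blob_store.setdefault(ref, blob)
    return ref

# The sender uploads the attachment, then forwards only the reference
# over the e2e channel; the recipient fetches blob_store[ref].
ref = upload_once(b"...image bytes...")
# The server has seen the content once and can recognize the same
# reference whenever it is shared again, across any number of chats.
```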
There are so many leaks it's not even fun to poke fun at them.
And I pity anyone who is fool enough to think Meta products are e2e anything.
> with "business account", which are another way to say "e2e you->meta and then meta relays
Actually it's a nominated endpoint, and from there it's up to the business. It works out better for Meta, because they aren't liable for the content if something goes wrong (i.e. a secret is leaked, or PII gets out). Great for GDPR, because as they aren't acting as a processor of PII they are less likely to be taken to court.
WhatsApp has about the same level of practical "privacy" (encryption is a loaded word here) as iMessage. The difference is, there are many more easy ways to report nasty content in WhatsApp, which reported ~1 million cases of CSAM a year vs Apple's 267 (not 200k, just 267 - that's the whole of Apple: https://www.missingkids.org/content/dam/missingkids/pdfs/202...)
Getting the content of normal messages is pretty hard, getting the content of a link, much easier.
iMessage is not on the same playing field as WhatsApp and Signal. Apple has full control over key distribution, and virtually no one verifies that Apple isn't acting as a MitM. WhatsApp and other e2e encrypted messengers force you to handle securely linking multiple devices to your account and give you the option to verify that Meta isn't providing bogus public keys to break the e2e encryption.
For iMessage, Apple can just add a fake iDevice to your account and now iMessage will happily encrypt everything to that new key as well and there's zero practical visibility to the user. If it was a targeted attack and not blanket surveillance then there's no way the target is going to notice. You can open up the keychain app and check for yourself but unless you regularly do this and compare the keys between all your Apple products you can't be sure. I don't even know how to do that on iPhone.
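A sketch of that failure mode (PyNaCl; the directory is hypothetical): the sender dutifully encrypts one copy per key the directory returns, so a silently appended key is indistinguishable from a newly linked device.

```python
# pip install pynacl
from nacl.public import PrivateKey, SealedBox

# Devices the user actually owns.
iphone = PrivateKey.generate()
macbook = PrivateKey.generate()
# A key the directory operator quietly appended.
interloper = PrivateKey.generate()

def directory_lookup(account: str) -> list:
    # The client has no way to tell the third entry from a
    # legitimately linked new device.
    return [iphone.public_key, macbook.public_key, interloper.public_key]

# The sender encrypts one copy of the message per listed device key.
copies = [SealedBox(pk).encrypt(b"secret plans") for pk in directory_lookup("bob")]

# The appended key reads its copy just fine.
assert SealedBox(interloper).decrypt(copies[2]) == b"secret plans"
```

Transparent key directories or cross-device key verification are the usual countermeasures; the point above is that iMessage gives the user no practical way to perform that check.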
Never thought about using CSAM image hash alerts as a measure of platform data leaks (and popularity, as I doubt bots will be sharing them). That's very smart.
And it shows that FB eclipses everyone by an insane margin - it's scary!
About your point on business accounts: the documents I reviewed included dialog-tree bots managed by Meta. Not sure if not having that changes things... but in that case it was spelled out that Meta is the recipient.
It's more a UX/org thing. In iMessage, how do you report a problematic message? You can't easily do it.
In whatsapp, the report button is on the same menu that you use to reply/hide/pin/react.
Once you do that, it sends the offending message to Meta, unencrypted. To me, that seems like a reasonable choice. Even if you have "proper" e2ee, it would still allow rooting out nasty/illegal shit. Those reports are from real people, rather than automated CSAM hashing on encrypted messages (although I suspect there is some tracking before and after).
It's the same with Instagram/Facebook. The report button is right there. I don't agree with FB on many things, but on this one I think they've made the right choice.
Telegram is for the most part not end-to-end encrypted: one-to-one chats can be but aren't by default, and groups/channels are never E2EE. That means Telegram is privy to a large amount of the criminal activity happening on its platform but allegedly chooses to turn a blind eye to it, unlike Signal or WhatsApp, which can't see what their users are doing by design.
Not to say that deliberately making yourself blind to what's happening on your platform will always be a bulletproof way to avoid liability, but it's a much more defensible position than being able to see the illegal activity on your platform and not doing anything about it. Especially in the case of seriously serious crimes like CSAM, terrorism, etc.
End-to-end encrypted means that the server doesn’t have access to the keys. When server does have access, they could read messages to filter them or give law enforcement access.
If law enforcement asked them nicely for access I bet they wouldn't refuse. Why take responsibility for something if you can just offload it to law enforcement?
The issue is that law enforcement doesn't want that kind of access, because they have no manpower to go after the criminals. This would increase their caseload a hundredfold within a month. So they prefer to punish the entity that created this honeypot, so it goes away, and along with it the crime will go back underground where police can pretend it doesn't happen.
Telegram is basically punished for existing and not doing law enforcement job for them.
Maybe they didn't ask nicely. Or they asked for something else. There's literally zero drawback for a service provider to provide secret access to the raw data they hold to law enforcement. You'd be criminally dumb if you didn't do it. Literally criminally.
I bet that if they really asked, they pretty much asked Telegram to build them a one-click tool that would print court-ready documents about criminals on their platform, so that law enforcement can just click a button and yell "we got one!" to the judge.
> There's literally zero drawback for a service provider to provide secret access to the raw data they hold to law enforcement.
That's not true. For one thing, it is expensive. For another, there's a chance people will find out and you'll lose all your criminal customers... they might even seek retribution.
> I bet that if they really asked, they pretty much asked Telegram to build them a one-click tool that would print court-ready documents about criminals on their platform, so that law enforcement can just click a button and yell "we got one!" to the judge.
You seem to believe, without having looked at the publicly available facts of the matter, that the problem is law enforcement didn't say "pretty please". The fact of the matter is that they've refused proper law enforcement requests repeatedly; if anyone has been rude about it, it's been Durov.
The chats are encrypted, but the backup saved in the cloud isn't. So if someone gets access to your Google Drive they can read your WhatsApp chats. You can opt in to encrypting the backup, but it doesn't work well.
Meta seems to shy away from saying they don't look at the content in some fashion. E.g. they might scan it with some filters; they just don't send plaintext around.
Yes, WA messages are supposed to be e2e encrypted. Unless end-to-end encryption is prohibited by law in your jurisdiction, I don't see how that question is relevant in this context.
The receiving end shared your message with the administrators? E2e doesn't mean you aren't allowed to do what you want with the messages you receive; they are yours.
Nope, it didn't even arrive on their end, it prevented me from sending the message and said I wasn't allowed to send that. So they are pre screening your messages before you send them.
Isn't Meta only end-to-end encrypted in the loosest sense, in that it's encrypted on each hop? It's not end-to-end encrypted like Signal, i.e. Meta can snoop all day.
If a service provider can see plain text for a messaging app between the END users, that is NOT end-to-end encryption, by any valid definition. Service providers do not get to be one of the ends in E2EE, no matter what 2019 Zoom was claiming in their marketing. That's just lying.
What has E2EE got to do with it? If you catch someone who sent CP you can open their phone and read their messages. Then you can tell Meta which ones to delete and they can do it from the metadata alone.
I'm more disturbed by the fact that on HN we have 0 devs confirming or denying this thing about FB's internals wrt encryption. We know there are many devs who work there who are also HN users, but I've yet to see one of them chime in on this discussion.
I find it pretty ridiculous to assume that any dev would comment on the inner workings of their employers software in any way beyond what is publicly available anyway. I certainly wouldn't.
Why not? If I think my employer is doing something unethical, I certainly would. That would be the moral thing to do.
This tells me most of the people implementing this are either too scared of the consequences, or they think what they're implementing is ethical and/or the right thing to do. Again, both are scary thoughts we should be highly concerned about in a healthy society that talks about these things.
One other potential explanation: FB and these large behemoths have compartmentalized the implementations of these features so much that no one can speak authoritatively about its encryption.
You are talking about a company whose primary business idea it is to lock up as much of the world's information as possible behind their login.
The secondary business idea is to tie their users' logins to their real-world identities, to the point of repeatedly locking out users who live under threat and refuse to disclose their real names.
For Reddit, it is somewhat documented how some power-mods used to flood subreddits with child porn to get them taken down. It was seemingly done with the administration's blessing. Not sure if it's still going on, but some of these people are certainly still around, in the same positions.
That’s disgusting but certainly effective to take down something very quickly.
I was very disappointed to hear that UFO-related subreddits take down and block UFO sightings. What's the whole point of the sub if they censor the relevant content?
This is unrelated to main thread but since you brought up UFOs and censorship. Isn't it a disgrace what Wikipedia has done to the trove of "list of UFO sightings"?
Those listings were great and well documented up until about 2019 or so. They've been scrubbed heavily.
Yes it is. I don’t recall when and if I check out the list of UFO sightings on Wikipedia but I’m very aware of the problem.
In the English wiki it's a group, "Guerrilla Skepticism", which dominates the field of esoteric content and much more.
In Germany we have the same situation and very likely every language has the same issue.
The bigger picture is that the whole content of Wikipedia gets fed into the AIs, which then answer you with practically the same strongly moderated, censored, misleading content from Wikipedia.
The very disappointing thing is that nobody can do anything about the mods on Wikipedia; they dominate the place.
I've actually given up trying to post on Reddit for this reason. Whenever I've tried to join in on a discussion in some relevant subreddit (e.g. r/chess), my post has been autoremoved by a bot because my karma is too low or my account is "too new". Well, how can I get any karma if all my posts are deleted?
Even those who farm accounts know the simple answer to your question. You have to spend a little time being civil in other subreddits before you reveal the real you. Just takes a few weeks.
The comments I made were quite serious and civil. Not sure what you mean. They were autodeleted by a bot. I wasn't trolling or anything.
I'm not particularly interested in spending a lot of time posting on reddit. But very occasionally I'll come across a thread I can contribute meaningfully to and want to comment. Even if allowed I'd probably just make a couple comments a year or something. But I guess the site isn't set up for that, so fuck it.
Sounds like you glossed over the phrase “in other subreddits”, which is the secret sauce. The point of my phrasing was not to suggest that you aim to be uncivil, but to highlight that the above works even for those who do aim to. So, surely, it should work for you, too.
I can see how it's frustrating, but the communities you're trying to post in are essentially offloading their moderation burden onto the big popular subreddits with low requirements -- if you can prove you're capable of posting there without getting downvoted into oblivion, you're probably going to be less hassle for the smaller moderator teams.
That's silly. I gotta go shitpost in subreddits I have no interest in as some sort of bizarre rite of passage? I'd rather just not use the site at that point.
Actually, HN has a much better system. Comments from new accounts, like your throwaway, are dead by default, but any user can opt in to seeing dead posts, and any user with a small amount of karma can vouch those posts, reviving them. Like I just did to your post.
It's simpler: the US wants to control the narrative everywhere and in everything, just like in the 90s and 00s. Things like Telegram and TikTok, and to some extent RT, stand in the way of that.
But why don’t they arrest them for allowing it to happen? Phone calls should be actively moderated to block customers who speak about terrorist activity.
Because the telcos _cooperate_ with law enforcement.
It's not whether the platform is being used for illegal activity (all platforms are to some extent, as your facile comment shows). It's whether the operator of a platform actively avoids cooperating with LE to stop that activity once found.
I know. That’s obviously true, but I hate that it happens and it makes no sense to me why more people aren’t upset by it. What I’m trying to get at is that complying with rules that are stupid, ineffective, and unfair is not a good thing and anyone who thinks these goals are reasonable should apply them to equivalent services to realize they’re bad. Cooperation with law enforcement is morally neutral and not important.
The real goal is hurting anyone that’s not aligned with people in power regardless of who is getting helped or harmed. Everyone knows this but so many people in this thread are lying about it.
> anyone who thinks these goals are reasonable should apply them to equivalent services to realize they’re bad
AFAIK these goals _are_ applied to equivalent services. It's just that twitter, FB, Instagram, WhatsApp, and all the others _do_ put in the marginal amount of effort required to remove/prohibit illicit activity on their platform.
Free speech is one thing, refusing to take down CSAM or drug dealing operating in the open is always going to land you in hot water.
I don’t agree that internet platforms deserve to be in their own special category which is uniquely required to police bad content. The only reason it happens is because it’s not politically or technically feasible to do it when the message comes through another medium.
I think it’s wrong on social media for the exact same reason it’s wrong to arrest power companies if a guy staples printed CSAM to a utility pole. Same thing for monitoring private phone calls. We know that AI can detect people talking about terrorism on the phone and cameras can monitor paper ads and newsletters in public spaces, but nobody would advocate for making this a legal requirement because it’s insane. The fact that nobody cares is proof that the public does value privacy and free speech. Why are so many of them tricked into thinking the internet is an exception?
I want people to commit to their beliefs and either admit they want surveillance wherever it’s technically feasible or give up and recognize that internet surveillance is also wrong. No more of this “surveillance is good but legacy platforms are exempt” waffling. Very frustrating and only serves the interests of people who already have power
From what I've read the arrest wasn't related to lack of proactive moderation, but the lack of, or refusal to do, reactive moderation i.e. law enforcement say "there's CSAM being distributed on your platform here" and the owner shrugs
> for the exact same reason it’s wrong to arrest power companies if a guy staples printed CSAM to a utility pole
That seems like a bad analogy. A closer one would be that I rent the pole space to people who I am told by law enforcement are committing serious crime in the open, using the pole I am renting to them. Additionally, I am uniquely capable of a) removing the printouts b) passing on whatever information I have about those involved (maybe zero, but at least I say that). The issue is refusing both. I don't feel they are egregious requests.
(this is not a tacit approval of digital surveillance)
I don't think it's a crime not to report a crime, at least not where I live. But facilitating a crime, which is something you could accuse Telegram of, is.
CSAM is different - in the US, as well as France, the law designates the service provider as a mandatory reporter. If you find CSAM and don't report the user who posted it to the authorities (and Telegram has the phone numbers of its users), then you are breaking the law.
On top of that, if you can be shown to benefit from the crime (e.g. by knowingly taking payment for providing services to those that commit it), that presumably makes you more than just a bystander in most jurisdictions anyway.
It only applies to specific crimes, not all crimes, and there are exemptions where you don't have to report the crime in Germany. For example, family members don't have to report if they try to convince the other party not to do it. Priests and other religious figures don't have to do it. Lawyers, physicians, therapists etc. are also exempted.
It also applies only to upcoming, not-yet-committed crimes. Crimes that have already happened don't have to be reported.
Also, it has to be proven that you received the plan in a plausible manner.
That link you posted is 1) about very specific crimes (treason, murder, manslaughter, genocide etc.) and 2) it applies only when you hear about a crime that is being planned but which has not been committed yet (and can still be prevented).
You're technically right (I think). However, I believe if you witness a murder and know the murderer, and the police ask you "Do you know anything about X murder?", then you're legally required to tell the truth.
If someone says "I need a cab for after I rob a bank" and you give them a ride after waiting, then you're almost certainly an accessory. If they flag a random cab off the street, then not.
It doesn’t extend to police questioning, i also pointed out it’s a different thing when you are in a court.
For the police an innocent bystander can turn into a suspect real fast.
The English common law tradition has a crime called “misprision”. Misprision of treason is the felony of knowing someone has committed or is about to commit treason but failing to report it to the authorities.
It still exists in many jurisdictions, including the UK, the US (it is a federal crime under 18 U.S. Code § 2382, and also a state crime in most states), Australia, Canada, New Zealand and Ireland.
Related was the crime of “misprision of felony”, which was failure to report a felony (historically, treason was not classed as a felony, but rather as a separate, more serious category of crime). Most common law jurisdictions have abolished it, in large part due to the abolition of the felony-misdemeanour distinction. In the US (which retains that distinction), however, it remains a federal crime (18 U.S. Code § 4), although case law has apparently narrowed the offence to require active concealment rather than merely passive failure to report (which was its original historical meaning).
Many of the jurisdictions which have abolished misprision of felony still have laws making it a crime not to report certain categories of crime, such as terrorism or child sexual abuse.
If you're the witness to a murder and you're subpoenaed to court and refuse to testify, then you are committing contempt of court. There was a guy in Illinois who got 20 years (reduced to 6 on appeal) for refusing to testify in a murder case.
Contempt of court usually has no boundaries on the punishment, nor any jury trial. A judge can just order you jailed on the spot if you, say, fall asleep in his courtroom. Sheriffs in Illinois have the same unbridled power over jail detainees.
I think in actual practice you will rarely get contempt for refusing to testify, or for taking the fifth on questions that could only tenuously implicate yourself.
Usually, if you let the prosecutor know up front that you're not willing to cooperate, they will tend to save themselves the hassle of trying. It can go wrong if they subpoena a belligerent witness who then doesn't turn up on the day they're supposed to testify; now the jury is empaneled, and they start doing a dance where they demand the sheriff find the witness, but then the clock runs out on holding the jury and it's a mistrial all round.
Yes, "I don't recall" is the oft-heard phrase in the witness stand. I don't remember the specifics of that case and why the guy decided to martyr himself.
I don't think it's necessarily self-incrimination to report a crime you witnessed, though I think it depends on the time between when it occurred and when it was reported.
Depending on the jurisdiction, the crime, and the circumstances, an act of omission (like ignoring a murder) would be suspicious and may get you charged with aiding and abetting.
I have my dead creepy uncle's phone in my drawer right now, and can pull up soft-core child porn from his Instagram. His algorithm was even tuned to keep serving an endless supply of children dancing in lingerie, naked women breastfeeding children while the children play with their private parts, prostitutes of unknown age sharing their numbers on screen, and porn frames hidden in videos.
If we're doing US criminal law, failing to report crimes is a red herring here, right? I'd assume the accusation would turn on accomplice liability, on Durov both knowing about the crime and, in that knowing state of mind, doing something concrete to help it (like concealing it from inquiring authorities).
Obviously this is French criminal law, which is, well, wow.
YouTube ignored reports of CSAM links in the comments of "family videos" of children bathing for years, until a channel that made a large report on it went viral.
Who you are definitely determines how the law handles you. If you're Google execs, you don't have to worry about the courts of the peasantry.
IANAL and not that familiar with the legal situation, but if we assume that running a platform of this type requires you, by law, to moderate it, and he fails to do that, idk what we are talking about. Yes, he would clearly be breaking the law. Why would that not get prosecuted in the completely normal, boring way that I would hope all law-breaking will eventually be prosecuted?
If you are alleging that there are comparable, specific, and actual legal infringements on the part of Meta/Google that somehow go uninvestigated and unpunished, feel free to point to that.
Frankly, even with unencrypted chats, any law/precedent requiring that platform providers scale moderation linearly with the number of users (which is effectively what this is saying) sounds like really bad policy (and probably further prevents the EU from building actual competitors to American tech companies).
It was their decision to become something bigger than a simple messaging app by adding channels and group chats with tons of participants.
It was also their decision to understaff the content moderation team.
Sometimes the consequence is a legal action, like the one we're seeing right now. All this could have been easily avoided if they had E2EE or enough people to review reported content and remove it when necessary.
Telegram started 11 years ago. I know the term has been diluted for ages, but it still rubs me the wrong way to use the word startup for decade-old businesses.
A straightforward legal responsibility should be shirked because scaling moderation is hard? How many other difficult things do you propose moving outside the law?
That's not the case here though. Most of the communication on Telegram is not E2E encrypted.
Even E2EE messaging service providers have to cooperate in terms of providing communication metadata and responding to takedown requests. Ignoring law enforcement lands you in a lot of shit everywhere; in Russia you'll just be landing out of a window.
These laws have applied for decades in some shape or form in pretty much all countries, so it shouldn't come as a surprise.
Have you used Telegram before making this comment? It is moderated. Do you really think this is about the company, the platform, and not about politics? Well, you should think again.
It is much less aggressively moderated and censored than Facebook, and pleasant to use. Source: first-hand experience.
But I have no idea if it truly has more or less crime than other platforms. So we can't really tell if he's being messed with because he can't stand up for himself the way Microsoft or Musk can, or if it is truly a criminal problem.
I should have written >unmoderated<. No service would live two hours if it were actually unmoderated. But seemingly they only remove content that is directly a product of, or causing, physical harm.
As far as I've heard, they did that only under threat of getting kicked out of the Apple and Google app stores. Supposedly, the non-app-store versions don't have these blocks.
In other words, Apple and Google are the only authorities they recognize (see also [1]). I'm not surprised this doesn't sit well with many governments.
The real-deal channels are still accessible. I follow them every day. It's the only way of getting a clear picture of the situation in Ukraine. Both sides are heavily using it, including during combat operations.
One of those was @rtnews which is definitely state-sponsored propaganda and remains inaccessible to this day.
They cooperated to some degree, but I'll go out on a limb to say that the authorities wanted Telegram to be fully subservient to western government interests.
There were multiple Kremlin propaganda outlets you could read in the US 40 years ago, although it is true that (IIRC) there were restrictions on broadcast television.
>Eliminating child pornography and organised crime is a societal rather than 'government' interest.
Empirically speaking, governments have had absolutely zero success at this, but their attempts to do so have gotten them the kind of legal power over your life that organised crime could only dream about.
Are you implying that after the Italian mafia there were no more organised crime gangs in the US? There's a huge number of organised crime gangs nowadays; who do you think is distributing the drugs responsible for America's massive drug problem? https://en.wikipedia.org/wiki/List_of_gangs_in_the_United_St... . A policy isn't a success if it kills one crime group only for it to be replaced with more, and the overall drug consumption/distribution rate doesn't decrease. More people are using illicit drugs than ever before: https://www.ibanet.org/unodc-report-drug-use-increase
I think there is a societal interest in unsnoopable messaging.
There is other low-hanging fruit EU governments could pick to address crime; NL has basically become a narco-state and they are just sitting by and watching. Telegram is not the problem.
In this instance (RT being banned), it's Russia's quite candid strategy to undermine social cohesion in their enemies' societies, using disinformation. Margarita Simonyan and Vladislav Surkov have each bragged about its success. So yes, for social cohesion, when there's a malign external actor poisoning public discourse with the intention of splitting societies, a responsible government ought to tackle it.
Information warfare is a real thing, and if you're suggesting governments shouldn't react to it - on the basis that doing so would fall under 'the old enemy of the people argument' - then what you're actually contending is that governments should neglect national defence.
If we start throwing around terms like "social cohesion" to justify censorship in the West, how can we complain about China doing the same in the name of "social harmony"?
I think your subtle arguments are wasted on the EU's decision to stop the spread of misinformation and manipulation. It's that simple for them. Black and white. Us vs. them. Don't think too much, you are taken care of by your "representatives" ...
It’s also the government’s role to take measures against harmful actions. Personal rights end where they start to harm others, or harm society in general. They are not an absolute, and always have to be balanced against other concerns.
However, my GP comment was against the claim that “The state has no business judging the truth”. That claim as stated is absurd, because judging what is true is necessary for a state being able to function in the interest of its people. The commenter likely didn’t mean what they wrote.
One can argue what is harmful and what isn’t, and I certainly don’t agree with many things that are being over-moderated. But please discuss things on that level, and don’t absolutize “free speech”, or argue that authorities shouldn’t care about what is true or not.
> Personal rights end where they start to harm others, or harm society in general
This empty saying is used to justify basically any violation of civil liberty, because it is unprincipled and open-ended, so it can be used to respond to any action anyone can take.
> The commenter likely didn’t mean what they wrote
No, I meant what I wrote. The government has no business judging the truth. What is the Russian disinformation from earlier in this thread? For example, is it discussing the illegal 2014 coup in Ukraine that ousted a democratically elected government that was friendly to Russia? To EU overlords, discussing that event is “spreading disinformation” even though it is factually true and deserving of discussion. It’s a great example of political censorship being a problem.
> don’t absolutize “free speech”, or argue that authorities shouldn’t care about what is true or not.
Free speech should be absolutized in day to day discussion, even if there are very limited exceptions in the law. It’s when there is permission from society to limit speech that populations end up propagandized and suppressed by whoever has power over them. That’s what is happening here, where people are coming up with absurd mental gymnastics to justify France’s authoritarian actions.
> judging what is true is necessary for a state being able to function in the interest of its people
This sounds like support for Soviet or China style control of speech, and labeling of anything that power disagrees with as misinformation. Authorities shouldn’t care about what is true or not, because they are biased and corrupted by their agendas and ideologies and incentives. The free exchange of information is foundational to any free and democratic society. That’s what is necessary for a state to be able to function in the interest of its people.
At least Kim Dotcom's earnings and the main utility of his service were indeed based on pirated content. Telegram is a huge news/chat/etc. app, where the things they mention as "enabling" are totally marginal and incidental; it's more like arresting a property owner who owns half the city because some people sold drugs in a few of the apartments.
I believe both cases come down to how much effort the leaders put into identifying and purging the bad activities on their platforms.
One would hope that there is clear evidence to support a claim that they’re well aware what they’re profiting off and aren’t aggressively shutting it down.
To use Reddit as an example: in the early days it was the Wild West, and there were some absolutely legally gray subreddits. They eventually booted those, and more recently even seem to ban subreddits just because The Verge wrote an article about how people say bad things there.
> the warrant was issued because of his alleged failure to cooperate with the French authorities.
That would seem to be the key bit. Makes one wonder what level of cooperation is required to not be charged with a slew of the worst crimes imaginable. Is there a French law requiring that messaging providers give up encryption keys that he is known to be in violation of?
> Why are these service providers being punished for what their users do?
There is a legal distinction here between what happens on your platform despite your best efforts (what you might call "incidental" use) vs what your platform is designed specifically to do or enable.
Megaupload is a perfect example. It was used to pirate movies. Everyone knew it. The founders knew it. You can't really argue it's incidental or unintended or simply that small amount that gets past moderation.
Telegram, the authorities will argue, fails to moderate CSAM and other illegal activity to the point that it enables it and profits from it, which is legally indistinguishable from specifically designing your platform for it.
Many tech people fall into a binary mode of thinking because that's how tech usually works: either your code works or it doesn't. You see it in arguments about IP piracy being traced to a customer. Tech people will argue "you can't prove it's me". While technically true, that's not the legal standard.
Legal standards rely on tests. In the ISP case, authorities will look at what was pirated, whether it was found on your hard drive, whether the activity happened when you were home, and so on, to establish a balance of probabilities. Is it more likely that all this evidence adds up to your guilt, or that an increasingly unlikely set of circumstances explains it and you're innocent?
In the early days of Bitcoin I stayed away (to my detriment) because I could see the obvious use case of it being used for illegal stuff, which it is. The authorities don't currently care. Bitcoin, however, is the means that enables ransomware. When someone decides this is a national security issue, Bitcoin is in for a bad time.
Telegram had (for the French at least) risen to the point where they considered it a serious enough issue to warrant their attention, and the full force of the government may be brought to bear on it.
It seems there has been a misunderstanding; laws for service providers never exempted them from having to cooperate and provide data available to them when ordered.
Because these countries are hypocrites. Because of politics, and because these guys are from Russia and China. You can so obviously see there's discrimination against companies from those countries. Can you imagine France doing this to a US company?
Rhetorical question: for what reason should a country be anything other than a hypocrite when it comes to situations such as this? Nations prioritize their own self-interests and that of their allies, even if that makes them appear hypocritical from an outside, or indeed, even an inside perspective. But that doesn't mean there's no legitimacy to what they do.
That's why startups need to get Silicon Valley VC investment so that the VCs can lobby Washington on their behalf with <del>protection money</del> political donations and avoid this crap.
The difference is that this is not an isolated case on Telegram (you said it yourself: "some amount", which implies "limited"). At the same time, you can literally open up the app and with zero effort find everything they are accusing them of - drugs, terrorist organizations, public decapitations, you name it. They also provide the ability to search for people and groups around you, and I am literally seeing a channel where people are buying and selling drugs "800 meters away" from me, and another one for prostitution, which is also illegal in my country. Meanwhile, see their TOS[1]. They have not complied with any of the reports or requests from users (and governments, by the looks of it) to crack down on them. While 1:1 chats are theoretically private and encrypted (full disclosure, I do not trust Telegram or any of the people behind it), Telegram's security for public channels and groups is absolutely appalling and they are well aware of it - they just chose to look the other way and hope they'd get away with it. You could have given them the benefit of the doubt if those were isolated ("some") instances, sure. But just as in the case of Kim Dot-I-support-genocide-com, those are not isolated cases, and saying that they had no idea is an obvious lie.
Directive 2000/31/EC[2] states that providers are generally not liable for the content they host IF they do not have actual knowledge of illegal activity or content AND, upon obtaining such knowledge, they act to remove or disable access to that content (Telegram has been ignoring those requests). Service providers have no general obligation to monitor, but they need to provide notice-and-takedown mechanisms. Assuming that their statements are correct and they had no idea, they should be in the clear. Telegram provides a notice-and-takedown mechanism. But saying that there are channels with 500k+ subscribers filled with people celebrating a 4-year-old girl with a blown-off leg in Ukraine, and that no one reported them in the two and a half years since they were created, is indeed naive.
If I have to dig through third party clients in order to trust a system, then it's clearly a shit system. Signal > anything else, especially telegram, which can burn in hell for all I care.
I don't see the difference with Signal here. In both cases, the only reason why you know that they do E2EE properly is because you (or somebody else that you trust) has audited the client code and confirmed that it does indeed do E2EE.
Nor does it require a third-party client. In fact, in this regard, Telegram's official client is slightly better, because they have reproducible builds for iOS, while Signal, last I checked, does not (they do have them for Android).
Kim Dotcom ran basically a pirated game/book/music/movie site. Telegram (from what I have seen) is mostly hacking leaks, although rumored to have CSAM (for those not familiar with the acronym: child sexual abuse material, sometimes euphemized as "cheese pizza").
Of course you can find both somewhere in the walled planetary gardens of Googlotron and Facebook, sure. But they clamp down on it as hard as they can. They clamp down on anything marginally offensive, much less illegal. Have you tried the Facebook "report post" interface? There are 3,000 various types of offensiveness you may report. That's their bar, their standard, and it's 1,000 miles away from definitely-illegal content. If their censorship apparatus is so bold as to be wiping out vast swaths of totally valid free speech, anything illegal has no chance.
If the question is "what's a great place to go for piracy" - Megaupload, Pirate Bay, etc. - then any common answer to that question is a target. "Where do I go for data breaches" - BreachForums, Telegram, etc. Don't get worked up; all those places were destroyed by the feds and no longer exist.
> Why are these service providers being punished for what their users do
> [...]
> maybe I'm just being naive?
In this case, the comment does strike me as naive.
Back in the 1990s, the tech community convinced itself (myself included) that Napster had zero ethical responsibility for the mass piracy it enabled. In reality, law in a society is supposed to serve that society. The tech community talked itself into believing that the only valid arguments were those for Napster. In hindsight, it's less cut-and-dried.
I have never believed E2EE to be viable, in the real world, without a back-door. It makes several horrendous kinds of crime too difficult to combat. It also has some upsides, but including a back-door, in practice, won't erase the upsides for most users.
It is naive to think people (and government) will ignore E2EE; a feature that facilitates child porn, human trafficking, organized crime, murder-for-hire, foreign spying, etc etc. The decision about whether the good attributes justify the bad ones is too impactful on society to defer to chat app CEOs.
This should be obvious to everyone here, but it's pretty much inevitable that if a backdoor exists, criminals will eventually find their way through it. Not to mention the "legitimate" access by corrupt and oppressive governments that can put people in mortal danger for daring to disagree.
No doubt that is true, and presumably Cory Doctorow has written some article making that seem like the only concern. The alternative makes it difficult to enforce all kinds of laws, though.
You can go ahead and encrypt messages yourself, without explicit E2E support on the platform. In fact, choosing your own secure channel for communicating the key would probably be more secure than anything in-band.
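For instance, a minimal sketch using the stock gpg CLI (assuming it's installed; message.txt is a made-up filename), with the passphrase shared over whatever out-of-band channel you trust:

# encrypt locally before the platform ever sees the file
gpg --symmetric --cipher-algo AES256 message.txt    # writes message.txt.gpg
# the recipient decrypts with the passphrase shared out-of-band
gpg --output message.txt --decrypt message.txt.gpg

The platform only ever carries the ciphertext, so its own transport encryption (or lack thereof) stops mattering for confidentiality.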
I doubt that will upset the public the way Signal and Telegram eventually will. Most people, including criminals, struggle with tech. If they want E2EE badly enough, and use one of the big messaging GUI apps they can succeed. If they can only do it via less user-friendly software, they'll need help or to do research, and likely will leave a trail behind them. That is more useful to law enforcement than if they simply had downloaded one of the most popular App Store apps. It's hard for a news story about a CLI utility to gain traction.
Historically speaking, a great deal more crime was impossible to combat in practice simply because no state could afford a police apparatus extensive enough to monitor everything. Coincidentally, this also extended to things like political dissent.
Now that automated mass surveillance actually makes it possible for the states to keep tabs on just about everything, E2EE, if anything, merely rebalances the scales (although even that is overselling it - in practice, with modern surveillance tools, the scales are still much more heavily tilted in favor of those surveilling).
To what extent people really want to embrace the panopticon is not so clear-cut. It is certainly something heavily pushed from above, and in many societies that does seem to be reflected in public opinion (e.g. UK) - but not in all, so I do not think it can be reasonably assumed to be the default.
That's how most law works. I have to give up my right to murder someone in order to enjoy a society where it's illegal for everyone.
If you believe privacy not inspectable by law enforcement is wrong, the prerequisite is saying that you're willing to have the law apply to you as well.
I believe that privacy not inspectable by law enforcement is a fundamental right. I'm willing to accept that it aids some crimes, but I'm also willing to change my mind if the latter becomes too much of a problem. That doesn't seem to be the case at all ATM.
Yes, that is my position. E2EE back-doors might not affect my communications or yours, but have serious and undesirable repercussions for some journalists and whistleblowers. The thing is, regular people aren't going to tolerate a sustained parade of news stories in which E2EE helps the world's worst people to evade justice.
This comment can itself be said to take for granted a naive view of what law is.
Law is a way to enforce a policy at massive scale, sure. But there is no guarantee that it enforces things that aim at the best equilibrium of everyone flourishing in society. And even when it does, laws are made by humans, so unless they result from a highly dynamic process that gathers feedback from those to whom they apply and strives to improve over time, there is almost no chance laws can meet such an ambitious goal.
What if Napster was a symptom, but not of ill behavior? The supposition that unconditionally sharing cultural heritage is basically the sane way to go can be backed by solid anthropological evidence spanning several hundred millennia.
What if information monopolies are the massive ethical atrocity, enforced by corrupted governments hijacked by various sociopaths whose chief goal is to parasitize as many resources as possible from societies?
Horrendous crimes - yes, there are many out there, often commissioned by governments who will shamelessly throw outrageous lies at their citizens to transform them into cannon fodder, among other atrocities.
Regarding fair remuneration of most artists out there, we would certainly be better served by a universal unconditional net revenue for everyone. The current fame lottery is about as fair a way to make a decent career as a national bingo.
You know, I agree with nearly all of these points. I even think there is something to your point about Napster 'being a symptom' but (as people love to say around here) it's 'orthogonal' to the original point I wanted to make.
Few things would please me more than to live under a system where arts and culture were freely available to all, and artists didn't have to starve in the process. It doesn't strike me as far-fetched either; it wouldn't take much to improve on the system we currently have.
But my original point was that, given the society we actually had when Napster came along, it was unreasonable for Napster unilaterally to decide for everyone else that existing laws and expectations no longer mattered.
> Horrendous crimes - yes, there are many out there, often commissioned by governments who will shamelessly throw outrageous lies at their citizens to transform them into cannon fodder, among other atrocities.
Yes, this happened, is happening and will happen.
I wonder however if the word "often" may perhaps be misleading or even completely wrong.
If you pick one random victim of a horrendous crime today in a Western society - feel free to pick the minority most hated by that society - what is the likelihood that the crime was commissioned by the government? It's more likely domestic violence, trafficking, etc. done by fellow community members.
Sure, there are examples of governments shooting down civilian planes or sinking ferries and covering it up. And it's perfectly sensible to be outraged when that happens. But jumping to the conclusion that "the government" just does those things as a matter of routine doesn't sound right to me. I don't buy it. It smells of conspiratorial thinking and requires extraordinary proof.
> Why are these service providers being punished for what their users do?
I think this is simplified. Certainly yes, if "all" Telegram was doing was operating a neutral/unmoderated anonymized chat service, then it's hard to see criminal culpability for the reasons you list.
But as people are pointing out, that doesn't seem to be technically correct. Telegram isn't completely anonymous, does have access to important customer data, and is widely suspected of complying with third party requests for that data for law enforcement and regulatory reasons.
So... IF they are doing that, and they're doing it in a non-neutral/non-anonymized way, then they're very plausibly subject to prosecution. Say you get a report of terrorist activity and provide data on the terrorists, then a month later get notified that your service is being used to distribute CSAM, and you refuse to cooperate; it's not that far a reach to label you an accessory to the crime.
I’m not a fan of this arrest and I don’t believe service providers have a duty to contravene their security promises so as to monitor their users.
But it seems pretty obvious that governments find the monitoring that Google / Reddit / etc do acceptable, and do not find operation of unmonitorable services acceptable.
VPNs don't pose an obstacle to monitoring any specific activity, and as many VPN-using criminals have found, even their ability to stop law enforcement from identifying you is limited. So they've been less of an issue. Having said that, I would note that Mullvad was forced to remove port forwarding in response to law enforcement interest, and I don't think it would be too surprising (or too dystopian) if in the future "connection laundering" were a crime just like money laundering.
There are several jurisdictions in the world where the government has the power to force a provider to keep logs and actively lie about it. We simply have no way to know if Mullvad or any other logless provider is actually logless, because they can be legally forced to lie about it.
As an aside, warrant canaries have never actually been tested in court, and the common consensus is that they wouldn't fly in reality if they were ever contested.
Because some things like terrorism and child sex abuse are harms to society as a whole, and even private individuals have an obligation to help combat them. Durov has a service where by design it's hard to filter out that kind of activity, and he's effectively (if not explicitly) helping protect that activity.
No because HP printers *do* print tracking marks to allow law enforcement to match a printout to a printer if they find abuse material that's been printed.
I find it amazing that this is used as an example of a good thing.
A few decades ago, one of the factoids about the USSR was that they required all typewriters to be registered with the state, with a sample page produced for every unit manufactured so that the state could track their use (this last bit is unlikely to be true, but was widely believed). That was supposed to be a case in point on why free societies are better, not an example to follow.
I was pointing out that the GP's strawman was really immaterial, because HP don't get sued/arrested: they comply with law enforcement (at the expense of user privacy).
Since I know better than to use a printer in the commission of a crime, it doesn't really affect me, but I'm aware that the majority of users consider it a privacy violation.
> So is France going to arrest the owners of HP, because their printers can't filter out CSAM?
A more comparable example: is France going to arrest someone who maintains a printer in an office and knows an employee is printing CSAM but does nothing about it?
I hope they would, this is the boat Telegram is in.
If you think one person printing out CSAM on a printer one page at a time is the same as running a service that facilitates tens of thousands of people to trade CSAM, there's nothing I can say to explain it to you.
I strongly suspect there's more to it than just running a chat system used by criminals. If that were the issue then tons of services would be under indictment.
We'll have to wait and see, but I suspect some kind of more direct participation or explicit provable look-the-other-way at CSAM etc.
Let’s just say I encrypt illegal content prior to uploading it to Platform A, and share the decryption key separately via Platform B. Maybe I even refer Platform A users to a private forum on Platform B to obtain the keys. Are both platforms now on the wrong side of the law?
This is a big problem. Why are we talking about "cooperation"? What does it mean? A judge doesn't ask you to cooperate; he seizes your servers. Ah, it's not a court. It's the police? The state? Then it's not a free country, sir.
In principle, the police can't tell if a song infringes copyright, or if a message spreads hate (I'm trying to sound American here), or if a picture is really "cheese pizza" or just a strange artistic depiction of youth. Not because the police don't know about music or TCP/IP, or don't care about art or reading. Everyone knows they care. But because it is a legal problem.
In my country - let's call it a republic, at least it was one a long time ago - even if the state can own all your bases because you don't pay taxes, the police can only hold you for six hours while they call the prosecutor to check that the 200 grams of white powder is what it is. They knew. They had already stolen the rest. If the forensics aren't quite sure what you're bringing them - the stuff that makes the stuff what the stuff is - you're free to go.
The prosecutor can make a case and send the 100 grams of white powder the same day, claiming that the stuff is the same stuff that other similar stuff is made of. He expresses his strong conviction. The judge then sends an arrest warrant to the police. You've been arrested, you have no money, the judge imposes some restrictions on you: you can't leave the country, you can't contact certain people, you have to go to court every week to sign a book. The investigation is open.
You have access to all the documents. Nothing happens without your approval and control.
This is how it works if SIA (yes, the singer) is not involved. If it is, you will be dead for a week and no one will ever find your body.
Aware of what? Government says a file is illegal? Sounds like a censorship regime to me.
Not if the key is provided to the platform operators to confirm the contents. Otherwise yes anyone could claim any encrypted file contains illicit material and people would game that system.
For what it's worth, the key may not need to be manually shared with the provider, as referrers often leak where people learned about the file, and that source location may also contain the key or password. All it takes is one person using a web interface or addon that leaks such information. Some addons break the Referrer-Policy header, and many website operators don't even set the header [1] in the first place. Example header testing [2]. Please test the sites you visit and kindly ask the website operators to address any missing headers.
# nginx example
referrer-policy "strict-origin" always;
That is often the case, but I would still suggest setting the referrer policy, should the file be enticing enough for people to register an account (assuming forum ranks and further actions are not part of the picture).
I'm not sure where this myth originated—perhaps from Kim Dotcom's Twitter account? I clearly remember the Megaupload case. They knew they were hosting pirated content, didn't delete it after requests[1], and shared money with the people who uploaded it because that was their business model.
Google, Discord, and Reddit all take swift and decisive action against CSAM. I have never organically encountered CSAM on Google, and have only encountered it on Discord and Reddit because of deliberate bad actors. Outside of Reddit's early mistakes with /r/jailbait, none of these services end up being preferred by pedophile communities, because they can expect to be shut down quickly if they rally there.
Telegram has become a hotspot for CSAM to the point that it is pretty much inevitable that you're going to encounter someone peddling it just by browsing other channels.
I think the real difference is the intent. If your platform makes it extremely easy to do illegal things, and you choose not to put in the controls to stop it, then I think it is fair that the government should step in.
Kim Dotcom is still harassed because he is very vocal against the US and what is happening in Ukraine. https://x.com/KimDotcom
The US narrative on Ukraine and Israel is getting weaker. Thorns like Kim Dotcom, who has a big following, and Telegram, the only social platform that gives access to the Russian side of events, can break the US narrative.
It is ironic that the US screams Russia did a war crime in Bucha but Israel on Gaza is fine.
True. The best sources for info about the war are on Telegram, both Ukrainian and Russian ones. Some channels have millions of users and provide daily map updates, information about enemy positions, and even information about locations where equipment is stored in EU countries.
It had better not be the Kim Dotcom situation; that would mean Durov encouraged the illegal use of Telegram, like Megaupload rewarded file uploads that generated heavy download traffic.
If that were the case, he would be at least an accomplice, if not the initiator, of the criminal activities.
Otherwise it would just be an abuse of his service by criminals.
>> Why are these service providers being punished for what their users do?
Are we 100% certain that this is only about Telegram? I want to see the allegations, not the vague charges, before pontificating about ISP liability. These charges might be more straightforward.
Dotcom is being prosecuted for knowingly and deliberately directing and encouraging the unlawful behavior of his users, and it's a criminal prosecution rather than a civil case because he's accused of building a (lucrative) business off the effort. You don't have to agree with the case or believe the DOJ has made it adequately (it's early to say, given the extradition drama), but it's not reasonable to say that Dotcom is being prosecuted "for what his users did", any more than it would be reasonable to say that a mafia kingpin was being prosecuted for what their street crews did at their behest.
(I have no idea what's going on with Durov, or how French and/or EU law works, except to say that legal analysis on HN tends sharply towards US norms, and people should remember that a lot of basic US legal norms, like the rules of evidence and against self-incrimination, do not generally apply in Europe.)
> Why are these service providers being punished for what their users do? Specifically, these service providers? Because Google, Discord, Reddit, etc. all contain some amount of CSAM (and other illegal content), yet I don't see Pichai, Citron, or Huffman getting indicted for anything.
WORSE, you get banned for reporting CSAM to Discord, and I guarantee that if you report it to the proper authorities (the FBI), Discord tells them to bug off and get a warrant. Can we please be consistent? If we're going to hold these companies liable for anything, let's be much more consistent. Worse yet, Discord doesn't even have end-to-end encryption, and the number of child abuse scandals on that platform is insane. People build up communities where the admins (users, not Discord employees) have perceived power, and users (children) want to partake in such things. It's essentially the Roblox issue all over again: devs taking advantage of easily impressionable minors.
Yep. At this point, it's clear to me that Discord is acting with malice. On top of banning people for reporting abuse on their platform, which is by itself insanity, they changed their report system [0] so it's no longer possible to report servers/channels/users at all, only specific messages, with no way to report messages in bulk.
They had a scandal where they allowed the furry equivalent of child porn, and quietly banned that type of porn from the platform later on. I assume due to legal requirements.
Edit:
I think the lack of bulk reporting is a pain too. They used to ask for more context. One time I reported a literal Nazi admin (swastika posting, racial slurs, and what have you), but the post was "months old" and they essentially told me to "go screw myself"; they basically asked why I was in the server.
We've banned this account for breaking the site guidelines and ignoring our requests to stop.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.