As an ISP: This is a very boilerplate subpoena. Whether or not the specific FBI agent knows or cares what Signal is, I'm about 99% certain it's just the result of a copy/paste from a template.
I actually believe that law enforcement has the legal right to subpoena information, with a judge's consent, while investigating criminal activity. This is exactly the solution to that problem. These platforms should want to know as little about you as possible.
Yes, although the way around this for law enforcement is to pressure Apple and Google to remove Signal from the App Store/Play respectively (to protect children!) and work on operating system level bypasses of Signal. I fear this scenario.
I wonder how far they could go in compelling Signal to push a change that let more info leak for a specific user. I know there have been somewhat similar cases where companies were compelled to add new functionality, logs, etc, to capture info for a specific user.
I'm surprised the FBI hasn't tried to get a custom keyboard into the Play Store yet, or asked Google to add a key logger to the stock one. Sure, the legality is blurry at the moment, but it's just a matter of changing some laws and then that becomes legal.
I’m curious whether there’s a big overlap between people that the FBI is interested in and people who care about grammar. That’s not a flippant response: I’m genuinely curious if that is part of a shift towards white collar crimes (which could be welcome) and more generally criminal psychology.
I’ve been watching _The Wire_ and Idris Elba’s character sounds like a good overlap, but he feels very unusual (in fiction)--and definitely too careful about OpSec to use that kind of service.
More generally, there’s a lot of anecdotal evidence linking problematic behaviour to low-level spelling details: one correlation that is apparently better documented now is between insurance fraud and whether you properly capitalise the name of your employer on a registration form.
It is perfectly possible to be a Grammarly user and not upload everything to it, but if I am running a script or a draft of a blog post through it, what harm is there?
According to Google, Gboard uses Federated Learning to train a model on user data on the local device, so sensitive data is not sent to the server. Only the gradients are sent and aggregated on the server. https://research.google/pubs/pub47586/
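A toy sketch of what that federated averaging idea looks like (my own illustration, not Google's actual Gboard pipeline): each device fits a one-parameter model `y = w*x` on data that never leaves it, and the server only ever sees and averages gradients.

```python
def local_gradient(w: float, data) -> float:
    # d/dw of mean squared error for the model y = w * x,
    # computed entirely on-device from the device's own data.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def server_aggregate(gradients) -> float:
    # The server sees per-device gradients, never the raw (x, y) pairs.
    return sum(gradients) / len(gradients)

w = 0.0
device_data = [[(1.0, 2.0)], [(2.0, 4.0)]]   # private to each device (true w = 2)
gradients = [local_gradient(w, d) for d in device_data]
w -= 0.1 * server_aggregate(gradients)       # one round of federated SGD
print(w)  # 1.0 -- moving toward the true slope of 2
```

The real system adds secure aggregation and differential-privacy noise on top, so even individual gradients are hard to inspect.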
Google has been pretty adamant for years that they don't use or retain your Gboard data, unless you're typing it into search or some Google product that gathers it there. Prediction is supposedly done in-app.
Every time I see similar questions, I just wonder why companies such as Signal do not put in place a mechanism to veto changes across countries.
If they had developers in, say, France, who would need to accept changes made somewhere else then a single country could not force such secret modifications.
Of course an app is actually a binary, so there must be a way to verify that code + compilation = the published binary.
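That is the goal of reproducible builds: compiling the published source should yield a byte-identical artifact, so anyone can compare digests. A minimal sketch (the file names are hypothetical):

```python
import hashlib

def file_digest(path: str) -> str:
    # SHA-256 of a file, streamed so a large APK doesn't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(locally_built: str, published: str) -> bool:
    # With a reproducible build, the two digests must agree exactly.
    return file_digest(locally_built) == file_digest(published)
```

Signal's Android client documents a verification process roughly along these lines, though in practice the comparison also has to ignore store-applied signing metadata.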
Sideloading on android is quite simple. "download apk" -> "launch apk file" -> "alert gives you a shortcut to settings to allow installing apk from [source]" -> toggle the only switch on that screen -> "launch apk file" now installs it.
You press the only non-"give up" button at each stage and you're done.
Remember that Fortnite succeeded in convincing people to do this by the millions. It's not hard.
Kids hooked on a game vs adults reading a scary message for an app are psychologically very different. Even if Fortnite retained millions, how many users did they lose?
>Kids hooked on a game vs adults reading a scary message for an app are psychologically very different.
Somewhat. And sometimes. And when the high-security-site for installing a high-security-tool is asking you to do a thing, it's quite a bit psychologically different than e.g. a reddit post saying "install this apk and approve all prompts: http://www.5z8.info/hack-outlook_w2f7vj_dogfights"
If there was a heinous criminal and they were using Signal, I wonder if law enforcement could get a warrant to push an update to Signal (or even OS push services) through the Play Store/App Store, just for that user. Wouldn't that work? It's the same legality and hoops to jump through as a wiretap, it seems to me, and it solves law enforcement's problem.
The more I think about it, the more I am convinced they could do this, but they would rather not because it's much harder, since it requires a special warrant with high evidence thresholds for each user. This reinforces the claim that what they really want is an ability to get any messaging data without a warrant, but they don't want to say so.
This is exactly why I don't put 100% trust in these apps that claim e2e encryption. It's trivial to update the app and destroy the contract if the company is willing and cooperating. You need hardware-based security to make it truly impossible, like the case of unlocking an iPhone, although I honestly don't know how hardware-based security could be leveraged for chat. Maybe in a similar way to how DRM works, where the CPU never has access to the data and something else is drawing the pixels on the screen?
I fully believe that by the end of the decade it will be illegal in some Western countries to go online with a device that doesn't provide a measured boot attestation that it is running a government-approved operating system.
I'm not sure why I was downvoted, but I believe this too.
Same goes for music and video - we won't be legally allowed to listen or watch on non-DRM devices, as the speakers and displays will be rented from corporations, not owned.
We have almost arrived at the Rental Economy, where we don't own anything and are at the behest of corporations. It's modern serfdom.
You might have already seen this but in case you haven't, there's a famous talk about this called "The Coming War on General Computation" (https://www.youtube.com/watch?v=HUEvRyemKSg), and much of it has already come true.
That seems to be a recent thing (over the last decade or two) with the US. The Supreme Court of India recently made these observations when the government refused to share certain information with it under the bogey of "national security":
> "... In a democratic country governed by the rule of law, indiscriminate spying on individuals cannot be allowed except with sufficient statutory safeguards, by following the procedure established by law under the Constitution ...
> We had made it clear to the learned Solicitor General on many occasions that we would not push the Respondent-Union of India to provide any information that may affect the national security concerns of the country. However, despite the repeated assurances and opportunities given, ultimately the Respondent-Union of India has placed on record what they call a "limited affidavit", which does not shed any light on their stand or provide any clarity as to the facts of the matter at hand.
> However, this does not mean that the State gets a free pass every time the spectre of "national security" is raised. National security cannot be the bugbear that the judiciary shies away from, by virtue of its mere mentioning. Although this Court should be circumspect in encroaching the domain of national security, no omnibus prohibition can be called for against judicial review.
> The Respondent-Union of India must necessarily plead and prove the facts which indicate that the information sought must be kept secret as their divulgence would affect national security concerns. They must justify the stand that they take before a Court. The mere invocation of national security by the State does not render the Court a mute spectator"
> ... We are not interested in knowing matters related to security or defence. We are only concerned to know whether Govt has used any method other than admissible under law ..."
With very limited use, even secret subpoenas can be a good thing, for example in a counterintelligence situation where you don't want to tip your hand to a foreign intelligence service.
...or to your own citizens, who you may be looking at as a domestic threat.
Let's be clear
There is no reason to assume that this type of thing is constrained to "just the type" the government can have their arm forced into admitting to.
It’s a very small thing, but ~I suspect that~ “Unix timestamp” means they actually send the epoch number, without even bothering to do the conversion to a Gregorian date format.
PS: I didn’t notice the attachment. Yup, that’s _exactly_ what they send. Curious whether the suspect could recognise their creation date and know that they are being investigated that way--and whether that exposes Signal.
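For illustration, converting such an epoch value back to a human-readable date is trivial (the millisecond value below is made up, not the one from the attachment):

```python
from datetime import datetime, timezone

unix_ms = 1623300000000  # hypothetical "account created" value in milliseconds

created = datetime.fromtimestamp(unix_ms / 1000, tz=timezone.utc)
print(created.isoformat())  # 2021-06-10T04:40:00+00:00
```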
If neuralink/related tech ever gets to the point of mind-to-mind communication, without a doubt our law enforcement will claim they have the right to subpoena a person's thoughts. We may be setting a precedent now that is more important than we could ever think (pun intended).
That kind of technology would make the issue more obvious, but I already operate under the understanding that people's personal computing devices are effectively extensions of their minds.
I could be wrong, but I was under the impression that the way end-to-end encryption works (like what Signal claims, I thought) is that it's physically impossible for them to decrypt (hand over decrypted data, aka your messages, to a court of law) because the public/private keys are impossible to crack and also not known by Signal.
It sounds like this isn't the case whatsoever.
I don't really understand modern chat apps that talk about encryption. By no means am I a pro on the subject so I apologize in advance but... if you really don't want ANYBODY EVER snooping on your data network wise (unless they are holding one of the devices and reading the screen after it has been unlocked via passcode/biometric, etc.), can't you just tell your friend a key and exchange it offline and then communicate freely with no middleman? Or even, with a middleman... that is just transporting your data and doesn't know your agreed upon shared secret or keys.
How could a subpoena ever work against this kind of data?
The problem lies not in encryption or key exchange itself. It’s in the fact that to make/use a solution that doesn’t suck in real life, you need a budget comparable to the entire crime you commit. As a developer, you may [help] create a few keys and set up the encrypted scene for sending emails or even insecure chat messages, but your gang will fail even at groceries. All these apps are selling turnkey security in app stores, not something brand new.
> can't you just tell your friend a key and exchange it offline and then communicate freely with no middleman
Yes, keywords are pubkey, fingerprint, diffie hellman. It’s easy to use, just run:
Now just send .enc files over the wire. It is trivial to decrypt at their side, even my grandma can do that. She usually leaves raw files in her downloads folder though, but it’s easy to remove them via local crontab job.
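For illustration only, here's the shape of that scheme with a key agreed offline -- a toy, unauthenticated cipher built from SHA-256, not something to actually use (a vetted tool like gpg or age belongs here in real life):

```python
import hashlib, os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Expand the shared secret into a pseudorandom stream (counter mode).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    stream = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

shared = b"agreed on a park bench, never sent over any network"
blob = encrypt(shared, b"meet at noon")  # all any middleman ever sees
print(decrypt(shared, blob))             # b'meet at noon'
```

A subpoena against the middleman can only yield the opaque blobs plus delivery metadata -- which is essentially the position Signal has engineered itself into.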
It takes extra effort to design a system with this little amount of data. Note that we only have Signal's word for some of this; they could in fact log every single time that you login, which would make the amount of data sent to the FBI much larger (and could be of importance to the case, for example, if the defendant had a dedicated Signal account for the crime that they only logged into at certain times).
Then there's IPs. If you log IPs along with when someone connects, then an IP can often be tracked to a WiFi router, which then pins your location.
Most E2EE communication protocols will see (and thus potentially log) the time and destination of every message you send. If two people have been accused of conspiring to commit a crime, this could be material in forming the case. They may also store your contact list, but a sufficiently long list of messages sent will practically determine your contact list anyways.
Even just the time of messages could be important; if someone interviewed claimed to be in the shower at a certain time, but there were logs of a message being sent at that time, that's probably enough for an obstruction of justice charge to stick.
> they could in fact log every single time that you login, which would make the amount of data sent to the FBI much larger
I guess you mean each time you connected to their server to retrieve messages? There is no "login" step; Signal doesn't have a log-in process. Presumably a typical Signal client calls home several times per day.
The record they provided says connection date, and despite being supplied in milliseconds since the epoch, the value appears to indeed represent a whole day, not a specific time. So you're correct: Signal could be lying and actually storing the exact moment you last connected, for whatever that's worth, and we could not prove they don't have that info.
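That's easy to check mechanically: a connection timestamp truncated to a date must be an exact multiple of a day's worth of milliseconds. (The values below are illustrative, not taken from the subpoena response.)

```python
MS_PER_DAY = 24 * 60 * 60 * 1000  # 86,400,000

def is_whole_day(unix_ms: int) -> bool:
    # True iff the millisecond timestamp lands exactly on a UTC midnight,
    # i.e. the time-of-day component was truncated away.
    return unix_ms % MS_PER_DAY == 0

print(is_whole_day(1623283200000))  # True  -- midnight-aligned, date only
print(is_whole_day(1623300000000))  # False -- carries a time of day
```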
> Most E2EE communication protocols will see (and thus potentially log) the time and destination of every message you send
Signal doesn't know who sent messages among close friends, and optionally not anybody who sent messages to a sensitive account.
Let's take the example of a message I sent to my friend Chris. I know Chris, I've sent messages to him previously and he knows me, so I'm in his contacts list. As a result by default my Signal client keeps some "Sealed Sender" tokens for Chris. When I send a message to Chris, my client uses a token it learned from a previous conversation with Chris, but Signal doesn't know who I am, just that I have a valid token for sending messages to Chris. So it stores the message, and gives it to Chris. Chris's client can determine that this is a message from me, tialaramex, but the Signal service never knew who sent the message.
If one day Chris hates me and blocks me from sending messages, his client invalidates all Sealed Sender tokens, and begins issuing new ones to his other friends so they can continue to contact him.
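A loose sketch of that token flow (my simplification for illustration; Signal's real Sealed Sender uses server-issued sender certificates, and the names here are made up):

```python
import hashlib, hmac, os

class Server:
    # The server can check "is this token currently valid for this mailbox?"
    # but never learns which contact the token was issued to.
    def __init__(self):
        self.valid_tokens = set()
        self.mailbox = []

    def deliver(self, token: bytes, ciphertext: bytes) -> bool:
        if token in self.valid_tokens:
            self.mailbox.append(ciphertext)
            return True
        return False

class Recipient:
    def __init__(self, server: Server):
        self.server = server
        self.secret = os.urandom(32)

    def issue_token(self, contact: str) -> bytes:
        # Handed to the contact inside an existing E2EE conversation;
        # the server stores the token but not who holds it.
        token = hmac.new(self.secret, contact.encode(), hashlib.sha256).digest()
        self.server.valid_tokens.add(token)
        return token

    def block_everyone(self):
        # Rotating the secret invalidates every outstanding token;
        # new ones get issued to the friends who remain.
        self.server.valid_tokens.clear()
        self.secret = os.urandom(32)

server = Server()
chris = Recipient(server)
my_token = chris.issue_token("tialaramex")
print(server.deliver(my_token, b"<sealed sender envelope>"))  # True
chris.block_everyone()
print(server.deliver(my_token, b"<sealed sender envelope>"))  # False
```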
> Signal doesn't know who sent messages among close friends, and optionally not anybody who sent messages to a sensitive account.
I am aware of this, and I had split my comment into "extra things Signal could log" and "Extra things most other E2EE services could log" but somehow lost the divider between the two during editing.
I think you misread the post above. They're saying that the govt should be allowed to issue subpoenas, and nothing more. They shouldn't be allowed to mandate backdoors, or hack suspect's machines, etc. And citizens should be free to use cryptography to control their information.
Also, "Impossible" is not the right term. "Extraordinarily expensive" is a better one. And yes, anyone can share public keys with each other offline and have end-to-end encrypted communication without help from a service. But advertising companies and the govt are not incentivized to make that practice convenient, and people typically do what is most convenient.
This doesn't seem that different from subpoenaing a McDonald's for their security footage, and getting an answer back like "sorry, we've already deleted the footage from that day".
Exactly, but I fear that government is a step away from asking for ID for any online service. The median age in politics is pretty high, and when it comes to security, experience from the last 20 years suggests they are also easily scared.
While you might forgive someone working for an interior ministry, this is an automatic reflex towards fearful stupidity. Benefits don't play a role here; it is exclusively about any potential security gain, no matter how small.
Sadly Signal is based on phone numbers, which I dislike, but other messengers should take note here.
Am I the only one sort of bothered by the fact they shared that specific information with the world? It may not seem like much, but that was user data.
It's potentially useful to the target of the subpoena, who knows when they created their Signal account, and might thus now know that they're under investigation.
The snark is publishing it in the blog post, not blacked out. As a side effect the account owner may or may not be warned by this. Not sure if it's legal to do so in the US.
It not only shows all the records that they have, but proves that the data isn't meaningful for de-anonymizing someone. If they had to redact it, we would wonder why, and how that information would be useful.
What I don't understand about the whole Signal E2EE model is that while your messages themselves may be encrypted, they are still sending push notifications over Apple's servers, which have to go through APNS. Often the entire message contents can be contained in the push notification.
Does anybody know if Apple's notifications are E2EE? I doubt that gov't doesn't have access to the push notifications...
I believe they are encrypted (and decrypted on device by the Signal app).
They recently had to do some rewriting of the code for iOS15 - they share some comments about that here: https://community.signalusers.org/t/beta-feedback-for-the-up...
Hope it helps
I'm actually surprised they didn't use a notification extension before. They're surprisingly great as an API - I used it to dynamically render preview line chart images for a finance app I worked on a few years ago. Just send over the limited line data, render the image, and you're good to go.
I would (naively) assume that the notification service sends opaque (encrypted) blobs that are processed (decrypted) by the app before display to the user.
I don't know how Signal works but it is possible to send a silent encrypted push notification that the app can decrypt and show as a local notification.
I'm not too familiar with this, but my understanding is that the push notification just wakes up the Signal app, then the Signal app gets the encrypted message (either from Signal's servers or the push notification payload, I'm not sure) and decrypts it client-side and provides the notification text.
Even if the push notifications themselves are encrypted, isn't there still the question of whether Apple store the (App x Notification x User/phone number) graph?
This applies to every single app, and is quite irrelevant, as you already trust Apple by using their closed-source device. If they want your data, they will surely get it.
Unless you only contact Signal users who have verified and compiled the client themselves, you put the same kind of trust in Signal, which specifies what data is logged (phone numbers are stored hashed for discovery by other users).
The same may or may not be true for Apple (I have no idea) but claiming it is irrelevant as an answer to a question about whether an _Apple_ technology is encrypted, is mind boggling to me.
I mean, there is always a root for the trust. In this case, the root is Apple. Signal is one leaf below. Compromising root compromises everything.
In this case, root being the device and OS, it has unrestricted access to everything you do. The data you see on your screen is processed by a CPU developed by Apple and controlled by a kernel coded by Apple. They can access everything they want.
> Often the entire message contents can be contained in the push notification.
Good grief, why would you do that? Just send a notification that data is ready, and then when the app wakes, go get the remainder of the data from Signal's servers.
I'm guessing here, but wouldn't they just push the e2ee message through APNS? Then decrypt client side. Or does Apple require plaintext messages for push notifications (that seems bad if they do)?
When you craft a push notification server-side, it contains the payload in plaintext. Now, that is probably encrypted in Apple-land, but my point is that the gov't has probably sunk its teeth into Apple already. So, yeah, Signal's encryption may be open source and proven, but I doubt Apple's is free of backdoors.
I mean, Apple themselves tell devs not to send sensitive data in the actual notification:
> [...] never include sensitive data or data that can be retrieved by other means in your payload. Instead, use notifications to alert the user to new information or as a signal that your app has data waiting for it.
Not sure if Signal is doing this, but they could send a notification with title "New message" and encrypted payload. The payload can be processed by a client-side notification extension which decrypts the payload and chooses what notification text the user will see.
Not true. The server can send encrypted (not plaintext) payload. The client can then use a NotificationExtension to decrypt the payload as it comes in.
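A structural sketch of that flow (hypothetical payload shape, and a toy XOR "cipher" standing in for the app's real E2EE decryption -- not Apple's or Signal's actual APIs):

```python
import base64

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real decryption; XOR is its own inverse,
    # so the same function "encrypts" and "decrypts".
    stretched = key * (len(data) // len(key) + 1)
    return bytes(a ^ b for a, b in zip(data, stretched))

device_key = b"k3y"  # in reality: a session key held only by the client

# What the server hands to APNs -- no plaintext message content:
push_payload = {
    "title": "New message",
    "blob": base64.b64encode(xor_cipher(b"Chris: Lunch at 1?", device_key)).decode(),
}

def notification_extension(payload: dict, key: bytes) -> dict:
    # Runs on-device before the banner is shown, rewriting the text
    # the user actually sees.
    text = xor_cipher(base64.b64decode(payload["blob"]), key).decode()
    sender, body = text.split(": ", 1)
    return {"title": sender, "body": body}

print(notification_extension(push_payload, device_key))
# {'title': 'Chris', 'body': 'Lunch at 1?'}
```

So Apple's servers only ever relay the generic title and an opaque blob; the readable banner is assembled locally.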
I was wondering about the same thing. I think that Signal just sends a message to APNS (and Google’s equivalent) that you have something to look at, like a new message or whatever. That makes the app wake up and go to Signal's servers for the actual content, and the app creates the actual notifications on your device.
>Because everything in Signal is end-to-end encrypted by default, the broad set of personal information that is typically easy to retrieve in other apps simply doesn’t exist on Signal’s servers.
The E2EE in Signal only protects the actual content of messages. In the case where Signal takes an assertive action, and the users are not paying any attention to their "safety numbers" (probably the most common case) they could in theory get message content with a MITM attack.
With a less assertive action (simply saving the data), Signal could get access to things like contacts and phone numbers.
Tutanota and Protonmail have both been forced in the past to take assertive actions to retain data as a result of legal warrants. Does American law even allow such warrants? If not then perhaps the USA is underrated as a place to base privacy oriented services.
A judge can sign an order commanding a witness or party to preserve documentation and evidence, under penalty of contempt of court. However, there is still a great deal of uncertainty as to what actions the subject of the subpoena must take in order to preserve that evidence. It's pretty clear that you have to disable automated destruction mechanisms, you can't disable any recording functions you may already have, and you can't go and shred relevant papers in your possession; but whether a court can order you to write code or take other burdensome steps in order to record certain electronic records that you didn't record before to assist an ongoing investigation is still a very open question.
Out of curiosity, do you know if you're within your rights to say "this will cost 'x' amount, we cannot afford it", or to say that if this is requested you would prefer to dissolve the company?
Basically can the UK government compel you under the threat of criminal prosecution?
I don't know of any instance when someone has threatened to close the company, in the UK (but then again, these notices aren't public knowledge, so just because you don't hear something doesn't mean it doesn't happen).
Noncompliance with a data retention notice lands you in court, but it's a civil matter, not criminal - though if you then ignore an order of the court, you could be facing some jail time and/or a fine for contempt of court. https://www.legislation.gov.uk/ukpga/2016/25/section/95/
Regarding compliance costs, there's a vague provision in the IPA for contribution to your expenditure: https://www.legislation.gov.uk/ukpga/2016/25/section/249
A little more detail, although not much, can be found in the accompanying Communications Data Code of Practice
Sealed sender only means Signal doesn't know who sent a particular message. They have to know who the recipient is so they can deliver it. Like forging the "From:" address on an email. Except in the Signal case the IP address/port of the sender is unique to the user and if the recipient responds then the link between the users is made.
The private contact discovery depends on an Intel SGX hardware enclave on their server. Which is good in this case as it implies more work to bypass it but where is the ultimate trust here? Intel? Did Signal ever get this working?
In general Signal can just see what IP address/port picks up a particular user's pre-keys if they want to know who is talking to who.
> The E2EE in Signal only protects the actual content of messages.
> [By] (simply saving the data) Signal could get access to things like contacts and phone numbers.
And the linked blog posts show that for a few years now they've been working on limiting their own access to this kind of data --- i.e. it's not as simple as just saving it (unlike e.g. WhatsApp, which is able to do that and is most likely doing it 100% of the time to build social graphs).
Now of course all of those things are likely vulnerable to some attacks, but that's another discussion, and it doesn't change the fact that Signal doesn't have immediate easy access in the way you claimed in your original comment. The fact is that there are some barriers, and they'd have to put some effort into either disabling these protections or exploiting flaws in them to get at the data.
>...it doesn't change the fact that Signal doesn't have immediate easy access in the way you claimed in your original comment.
I did not at all mean to imply that. The assertive action that Signal would have to take might involve actual work. The question is if they could be forced to do that work by the authorities of the country they operate from. I doubt that the amount of work would really factor into the legal stuff.
Signal of course might have already done the work as part of some cooperation with a national signals organization, or simply because someone felt bored and contrary, but they could not admit that if they want to preserve the value of the information gathered.
The E2EE encryption part is in the end the only provable aspect of Signal Messenger. The leakage of meta information is inherent when one entity controls all the infrastructure.
Even worse - American laws allow the US government agencies to actually access the servers directly (or even add other servers or routers) in the data centre of the service provider, and the service provider is legally obliged to not tell anyone about it!
Out of interest, is there a place where this practice doesn't exist (genuinely don't know), where service providers are allowed to tell targets they are being subject to a warrant?
While I applaud Signal's response I expect this entire event (subpoena and response) will be provided as one of the exhibits to congress by the Department of Justice to justify their request that it be unlawful to provide such services. The DoJ will say, "See, here is this horrible crime we are investigating and because this company chose to make it impossible for law enforcement, with a warrant and a subpoena to get it, the criminal is going to go unpunished and that will be on you because you refused to mandate lawful access to communications."
The Congressional response should be, "Do you have no other way of investigating these criminals?" "Could you not put an officer out to surveille them?", "Have you not seen the misuse that law enforcement has engaged in, with such capabilities? From petty revenge to stalking lovers who rejected them. Will you consent to mandatory surveillance of all law enforcement officers that is recorded and stored in a civil controlled repository so that officer conduct may be reviewed at any time?"
I would go further and ask for mandatory 24/7 surveillance of elected officials for transparency and to combat corruption every time they bring up this bullshit.
> This is why there should not be a single entity that runs the network. Make it fully decentralized with economic incentives for node operators.
Just make the network work based on open protocols, where anyone can run a server that interoperates with others. (Matrix.)
For the non-developer end-user, they can use one of the existing servers, who in theory could charge their users.
Yes, it would be hard to maintain branding cohesion, and the product/service would lose to a centralized service. However, transitioning to a heterogeneous social/messaging landscape (from the current oligopolies) could make smaller services like that competitive.
IMO it’s not clear we should treat a legal/political issue with technical workarounds, it doesn’t seem sustainable.
The state can shut down anything in theory. In practice it is much harder in a liberal democracy to shut down a network where operators are numerous, interchangeable, and have low capital requirements.
I’ve never been offered drugs by anyone in my life and I couldn’t tell you if someone standing on a street-corner is someone simply wanting to cross the street when the lights change or an undercover cop… or an actual prostitute. What am I supposed to be looking for?
(Yes, I’m on-the-spectrum and don’t get invited to parties, so I’m not representative of everyone else’s experience, but I am being sincere in my anecdote)
Similar situation as yours, yet I got offered several times, so it's just a matter of learning how to spot them, not of going to parties, although that would make things faster (more people would help you ID the professionals).
Going downtown around 1am on a Sunday morning should get you the full package, at least in most European cities. City parks, albeit edgy and dangerous, are an almost sure bet.
Dealers usually ask you if all is fine and whether you need help with anything; prostitutes' clues are usually attitude and attire, plus they often work in groups. A good undercover cop will be unrecognisable, for obvious reasons (they have to copy criminal behaviour to be successful).
Providers have a harder time than their customers (longer jail time and higher fines), hence the secrecy, which makes it difficult to spot them.
When in doubt assume it's not a professional, if they are they will make it obvious.
The drug war has claimed millions of lives and drugs have never been as plentiful. There are no words in the dictionary strong enough to describe this kind of failure.
In reality most people aren’t going to (or rather, shouldn’t try to) seek out drug dealers and prostitutes on the street. For the former it’s more likely someone will “know a guy who knows a guy”, or maybe use the dark web. For the latter there are websites and classifieds, you just need to know the right terminology, etc (from what I’ve gathered, I have no personal experience...)
So yeah, it’s underground, but not that underground.
I've never paid for sex but prostitution is commonly known as "escort" services which are trivial to find and drugs are mostly sold by word of mouth wherein you socialize with the kind of folks who buy illicit substances and are introduced.
The trafficking of underage girls (and the trafficking of adults, for that matter) is a crime in and of itself, not because of what they are being trafficked to do, and it's a shockingly immoral misattribution of blame to say that the problem with trafficking is the crime of prostitution being "committed" by the victims of trafficking.
(Analogously, this is why law enforcement professionals and the like have been moving to the term "child sexual abuse material" instead of "child pornography": the problem with it is not that it's pornography, it's that it's sexual abuse.)
It is not only entirely possible to fight the crime of trafficking people and forcing them to do certain acts without criminalizing those acts, it is in fact significantly more effective. Imagine, for instance, if Abraham Lincoln had tried to eliminate slavery by declaring that picking cotton was illegal. Not only would it not have addressed the problem, it would have made the problem worse - slaves are in no position to refuse to do an act because it's illegal, and a fugitive slave would be liable for the crimes they committed as a slave.
I made that point to say that drugs and prostitution definitely have victims and are not victimless. I could bring up STDs, robberies and killings due to drugs as well. It's possible to fight trafficking without criminalizing those acts, but that's not likely to happen, nor would the moving target of decriminalizing the actions they're forced to do be a realistic goal.
I understood your point full well, and it is both wrong and immoral.
Prostitution does not have victims. Forcing someone into prostitution has a victim - the unwilling prostitute. This is a crime committed by the trafficker against the prostitute, not a crime committed by the prostitute against the client (or even vice versa).
Again, take the example I gave of slavery. The analogous argument to yours is to claim that cotton picking is not victimless. Obviously there were victims in the antebellum south. But to say that cotton picking is the crime is to miss the point entirely and to hurt the victims of the actual crime. Cotton picking is victimless, unless you think cotton feels pain. Forcing someone to pick cotton is a crime.
STDs are not a crime*. They are a potential downside of sexual activity of all kinds. We are comfortable with all sorts of activities having potential injurious downsides which are not criminal. You can get a concussion playing football, but that doesn't mean giving someone a concussion in the course of playing football (as opposed to because you specifically set out to attack them, of course) is a crime, and it certainly doesn't mean that playing football itself ought to be criminalized. If someone claimed that the only way to stop football concussions was to criminalize the game, we would find that position laughable or at least dangerously authoritarian.
Robberies and killings are crimes, and they can be prosecuted in their own right. Robberies and killings also happen in, say, banks; we don't say that this is a reason to make banking illegal.
In fact, it is because banking is legal that police and regulated armed guards can be at banks and stand by as legitimate, victimless banking transactions happen, and that a bank under attack can call the police knowing it will not get shut down. Going back to the example of the fugitive slave - a slave who knows he's been forced into illegal actions is unlikely to seek his freedom and is quite likely to seek the effective protection of his master, preventing him from accessing the liberty due to him in a just society. Similarly, people participating in the victimless crimes of the sex or drug industry generally have little recourse to the justice system. Robberies and killings (and mistreatment of prostitutes) happen because criminals know that their victims do not have the same ability to pursue justice as the general public does. Fully legalizing these industries is the only plausible first step to solving this problem.
In an artificial environment sure, you will have prostitution with no trafficking, but it is never that way in practice. Even in places with brothels you still see humans trafficked. It is irresponsible to assume they go away with legalization.
Concentration of money will cause unscrupulous people to gather.
Realistically, legalization doesn’t end these issues.
Is the drug policy really the issue here? Singapore is so against various consumables you get jailed for vaping (nicotine). You are assuming it's policy, but it's more likely drug culture.
Drug policy isn’t simply binary do X or Y, it’s a system that can work or fail. Singapore’s policy drastically reduced consumption, thus it was effective just as Sweden’s policy was. US drug policy on the other hand has failed to see similar results and thus it failed.
If you're killing yourself with drugs in private, it's on you. Despicable choice for sure, and it sucks if a bad environment drove you down this road - but that's on you: don't make me pay for your healthcare.
Trafficking underage girls is obviously a crime against another human being, it's not a victimless crime like drugs and should be punished.
Should this not also apply to unhealthy eating? How do you feel about:
If you're killing yourself with unhealthy eating in private, it's on you. Despicable choice for sure, and it sucks if a bad environment drove you down this road - but that's on you: don't make me pay for your healthcare.
I think it's a fairly well-understood causal inference problem to figure out how much effort (money) needs to be spent on other people for GDP growth from that one person to offset the money spent on them by society.
We just need someone to step up and collect the data.
Doesn't this already apply to eating? We don't forcefully commit unhealthy eaters into institutions where they are conditioned to avoid unhealthy foods.
Honestly I’m surprised it’s legal to run a Tor exit-node at all given the sheer volume of morally reprehensible things that go through it - regardless of its utility for journalistic freedom.
Tor was created for the US Navy to hide their intelligence operatives and wouldn't work for that purpose without the rest of that noise. It requires that everyone uses it for anyone to reasonably be considered hidden.
It's not exactly a secret and any discussion of Tor's history will immediately bring it up, but sure.
> For the onion router to work properly, the Navy needed to step back from running it. A cloaking system is not useful if all the cloaks say “Navy” on them. “If you have a system that’s only a Navy system, anything popping out of it is obviously from the Navy,” Syverson says. “You need to have a network that carries traffic for other people as well.” Tor Project was incorporated as a nonprofit in 2006 to manage operations. [0]
Ah - wow, dang. I believed the cover-story that Tor was a side-project of a humanitarian org in the US gov that wanted to provide people in firewalled countries a way to access the wider internet - of course, Tor does do that (just as a nice side-effect, and to be fair, the US does benefit and encourage democratic (...and neoliberal) ideals worldwide as its in their own interests to).
...it's kinda like how we were all told that the 1950s-1970s space-launches with monkeys and other animals in them were for bio-scientific experiments when they were really just cover stories for launching spy-sats and military satellites.
> just as a nice side-effect, and to be fair, the US does benefit and encourage democratic (...and neoliberal) ideals worldwide as its in their own interests to
To be clear, they encourage democratic ideals where/if it's in their own interests to. There are plenty of places where they encourage non-democratic ideals as well (or directly fund their own coups to put usually military leaders they like in power).
Probably most Tor exit nodes are run by agencies trying to snoop on traffic.
I wouldn't trust Tor to solely protect my identity, since there might be ways to attack it. But it can be a useful asset in a "pipeline" designed to protect one's identity.
> Probably most Tor exit nodes are run by agencies trying to snoop on traffic.
Well, that would explain why the nodes don't get shut-down when the local police come knocking...
Still, that must require some very awkward conversations when ethics committees or even departmental budgets are approved - not to mention inviting consternation from the general-public who really don't approve of their tax-money being used to knowingly facilitate dark-net activities, especially horrific child abuse, ugh. (I'm a utilitarian myself, but personally I'd pull the plug on it: surely there are better ways for embedded agents to exchange information? What about digital-steganography, and concealable satellite comms hardware?)
Aren't both XMPP and matrix networks decentralized? XMPP has been around since 1999 according to wikipedia. It is my understanding that providers could keep metadata/logs about accounts involved.
What do you mean by ownership of an account? You can migrate to other servers if you want. Or does that mean you lose your user ID, which is tied to a provider/domain?
Isn't registering an account necessary to avoid impersonation? How does Status ensure a stable user ID?
Everyone gets a random super long number for an ID. Your friends can assign your name when they add you as a contact (just like phone numbers). And then you can also get a globally recognizable name like ENS domains if you want to pay a little. It’s like a vanity plate on a car.
Problem is, messaging platforms in "the Internet" aren't interoperable. If I use service X, I usually can only talk to people that also uses X. So "the Internet" is mostly composed of silos.
Silos are the reason that platforms get centralized in the first place: people generally enjoy using the same X as their peers, so they can communicate in a single platform.
The issue is the internet doesn't provide the required service for individuals to find each other directly.
We need DNS for everyone, not just the rich...er those bothering to register a name.
Make it GUID based, and let people contact each other by GUID instead of phone number.
Preferably add privacy enhancing encryption too so that only those with your public key could decrypt your GUID DNS entry to then find your IPv6 address.
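The scheme above can be sketched in a few lines. This is a purely hypothetical illustration of the proposal, not any existing system: the `.id.example` zone is invented, and the XOR stream cipher stands in for a real authenticated encryption scheme (e.g. a NaCl box), which any serious design would require.

```python
import hashlib
import secrets
import uuid

# Hypothetical sketch: each user has a random GUID identity, publishes an
# encrypted record under a name derived from that GUID, and only contacts
# holding the right key can recover the IPv6 address inside.

def record_name(guid: uuid.UUID) -> str:
    # DNS-safe label derived from the GUID (assumption: a ".id.example"
    # zone exists for such records)
    return hashlib.sha256(guid.bytes).hexdigest()[:32] + ".id.example"

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy hash-based stream cipher, for illustration only -- a real system
    # would use an authenticated scheme; do not use this in production.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(plaintext, stream))

decrypt = encrypt  # an XOR stream cipher is its own inverse

guid = uuid.uuid4()
key = secrets.token_bytes(32)          # shared out-of-band with trusted contacts
address = b"2001:db8::1"               # the user's current IPv6 address

# "Publish" the encrypted record into the hypothetical zone:
zone = {record_name(guid): encrypt(key, address)}

# A contact who knows the GUID and the key can resolve the address;
# anyone else sees only an opaque blob under a meaningless name.
recovered = decrypt(key, zone[record_name(guid)])
assert recovered == address
```

The point of the sketch is that the registry only ever stores opaque names and opaque payloads, so the centralized lookup service learns nothing about who is reachable where.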
As far as I know, Matrix wants to do something similar to this. They want user IDs to be public keys. A user ID can be bound to a name, but can change the name over time and keep your connections.
I'd say the big problem at the moment is that people are not used to paying for service. Lots of stuff related to naming is very cheap. But if you require it to be free, then you will get bad incentives like trying to build a silo and fill that silo with ads.
as soon as you create a namespace like that, you've made the lever that will be utilized for centralized control. In reality, if you had no chance of getting somewhere without the centralized registry, you don't have the connection.
What I am proposing is an extension to DNS to allow individuals to find each other directly rather than having to go through middlemen like FB etc to simply message each other.
DNS already exists as a point of centralised control.
Service Providers already have to log IP vs user accounts.
I don't see how extending DNS for the masses could make things any worse.
As a sibling comment points out though, who pays to run it?
I think the general public doesn't have anything against being spied on. They are already convinced that it is for their own good and for catching the bad guys. They are sure they have nothing to fear.
This is definitely not true. Dianne Feinstein for instance has been instrumental in almost all of these efforts. As a senator from California, she could be replaced with someone nearly politically identical that didn't support government surveillance.
For lots of reasons. There are 100 Epsteins you don't know the names of still at large, and their main purpose is blackmail of top politicians and businessmen on behalf of whatever intel agency or mafia runs them. Trump's mentor Roy Cohn was one of these.
And blackmail is only one tool in the toolbelt. Bribery comes long before that. Anybody with too much integrity is quickly removed from office or worse.
I've tried to explain to some progressives why AOC and the squad have turned their backs on them with this, but for many, their Schoolhouse Rock propagandized brains refuse to admit it's as bad as it is.
It only gets worse the more we acquiesce to increases in censorship and surveillance (which, I like to remind people, is in the hands of the executive branch, further eroding the checks and balances system).
I'm not sure which was intended, but I think this is much more accurate as a cynical comment on human nature than some comment on "shadow government/deep state" type stuff.
As defines so much of society and what people claim is "human nature", there is no need for shadow governments or deep states when you have power structures and incentives. Those scale, conspiracies don't.
The thing is, your voice can also add to the din of noise that drowns out the signal. Not every vote adds signal.
Here the problem is when you go down the ballot and reach the judges, schoolboard, and other offices where most people have no idea who the candidates are and many just vote randomly.
In Arizona there was a campaign that unseated an incumbent schoolboard member with a rival candidate whose last name, if some letters were transposed, matched that of a famous local figure. The funny-last-name guy won.
So go ahead and vote, but please leave blank or skip over any of the candidates that you haven't researched. Don't vote randomly - some people are trying to have a real election.
I'm pretty sure many voters are voting based on colors. They researched which team they like the most and now they vote for that team each time. And likely true for more than just the USA.
I advocate joining a revolutionary political party instead. If you have a big enough constituency that can cause problems, they will eventually have to acquiesce to demands as opposed to simply controlling the choices or ignoring the results of a vote (which I would point out happens routinely, even in presidential elections such as the 2000 election theft or the attempted theft and coup of 2020).
Except Apple is making a direct attempt at solving the issue as it relates to CSAM (and easily expanded to other data) and facing a huge backlash. I wonder if there’s no solution because we’re (myself included) are just stubbornly unwilling to consider any solution that isn’t absolute privacy. I’m not willing to sacrifice my privacy to a nosy government, but willing to consider solutions that might allow the government to pursue its goals. Apple seems to think it’s possible that we can have the best of both worlds, even if they clearly haven’t figured it out just yet.
Apple is not a solution, it's a stop gap. They will still want a copy of the messages after it, and all your other data.
And the reason for the huge backlash is that this stop gap will actually make it easier for them to request more afterwards, because the infrastructure, the proof of concept, will already be there and running. And it will spread to other providers: "see, Apple does it, so clearly it's Signal that's protecting criminals; we should make them do the same thing Apple did with no issue."
Has Apple announced that they're making iCloud end-to-end encrypted? It seems like people see the on-device scanning as a road to an "obvious" next step, but I'm not sure that Apple has announced that that's the next step. They might scan your device locally, and mine everything in the cloud for advertising purposes. They haven't said anything to the contrary, and their current terms of service allows it.
I could be missing something, but I did a quick search and all I see is news about them scrapping their once-encrypted backups at the request of the FBI.
Agreed. The general sentiment I perceived from HN at the time was that almost nobody was willing to accept Apple's CSAM scanning, even though CSAM had been confronted as an issue before the internet was widely available. I perceived a lot less room for opinions in favor of sacrificing a limited amount of privacy for greater public good, or similar. After the media finished its reporting on the subject, it seemed like there wasn't much more discussion about it, and Apple now seems poised to go forward with releasing its implementation of the scanning anyway at some unknown future date.
The arguments about slippery slopes and potential surveillance weren't as interesting to me as the opposing argument: that a very high level of privacy (not even an absolute level) carries consequences for a specific segment of society by the intrinsic nature of what is kept private, and in the name of protecting that segment of society, the tradeoff is not worth it.
There is also the idea that data on a hard drive can be as damaging to human livelihood as physical contraband, to the point that the vast majority of the world's legal systems, not just those of the U.S., have decided that the data should not exist under any circumstances. CSAM is one of the few classes of digital data that compels the creation of scanning systems for such data on a scale that isn't driven by political ideology, propaganda or similar. It's difficult to imagine how Apple would be obliged and driven enough to implement such a system out in the open and in the name of the public good if the publicly announced reasoning was to scan any other class of data (assuming that Apple can be trusted, at least).
It is driven by a moral panic - that the panic in question is about a very real issue doesn't change its underlying nature. So whatever measures are put in place this time, they'll be normalized and deployed when the next one happens.
This is like having a camera in your house, that does image recognition, and once it thinks it saw something illegal, sends the "something is happening at XY street" to the police. Does it help the government solve crimes? Sure. Is it creepy and intrusive, and not something anyone sane would accept? Yes.
I don't know why we accept the same things when we're talking about digital privacy.
Or perhaps more likely, they'll go the lavabit/CALEA route, and order that their platform be modified to allow wiretapping, at which point Signal must choose between either complying with such requests, or going out of business.
If that happens, hopefully usage of p2p messaging apps like Briar or Status will gain more traction and usage.
Signal's userbase is different from Whatsapp's, they made a conscious decision to put up with some of the inconveniences for ultimate privacy. Once the canary is out they'll leave.
Bonus points for sending the Unix timestamps in millis ... I hope the lawyer who got this response has to hire a consultant to translate it for the court!
> The only information Signal maintains that is encompassed by the subpoena for any particular user account, identified through a phone number, is the time of account creation and the date of the account’s last connection to Signal servers. That is all. [2]
I am a bit skeptical about the precision, given the last 3 zeros in the 'last connection date'. It seems too unlikely that the connection happened at exactly that millisecond...
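The trailing zeros are consistent with the value being stored at second precision and merely serialized in milliseconds. A quick sketch (the timestamp below is made up for illustration, not taken from the actual subpoena response):

```python
from datetime import datetime, timezone

# Hypothetical millisecond timestamp of the shape discussed: an exact
# multiple of 1000 ms, i.e. the sub-second part is all zeros.
last_connection_ms = 1_624_919_400_000

# Convert millis -> seconds and render as a UTC datetime.
dt = datetime.fromtimestamp(last_connection_ms / 1000, tz=timezone.utc)
print(dt.isoformat())                    # 2021-06-28T22:30:00+00:00

# The giveaway: no remainder below one second, so the source value
# was almost certainly recorded at (at most) second granularity.
assert last_connection_ms % 1000 == 0
```

So the three zeros don't claim millisecond accuracy; they just mean the field's unit is finer than the data behind it.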
>Will you consent to mandatory surveillance of all law enforcement officers that is recorded and stored in a civil controlled repository so that officer conduct may be reviewed at any time?
I think this should be implemented regardless. But the danger is they can easily say "sure; your turn". Some in law enforcement may even genuinely want to do that for its own sake.
I think that's a totally orthogonal thing to violating citizens' privacy and shouldn't be considered as leverage or a counterargument.
"Do you have no other way of investigating these criminals?"
"Could you not put an officer out to surveille them?",
Your argument kind of falls in on itself there --> we can surveil them in person, but not digitally?
Why?
I think the tech crowd has this wrong.
The issue - as you have indicated - is not 'By What Means To Surveil'.
The issues are: Legitimacy, Proportionality, Oversight.
Messages, tech, sign language, in person, phones or messaging pigeons, the issue remains the same:
Is there a legitimate reason for access?
Is the intervention proportional to the probability of cause, the ostensible crime, the risk to other citizens and the public good?
Is there authoritative Judicial oversight of the surveillance, and, is there sufficient Congressional oversight of the legality of the program?
Those are the questions.
Should police be able to tap Signal (or anything else) for data on anyone they want, for whatever they want, willy nilly without a Warrant or oversight?
Definitely not.
Should Apple be scanning content for crimes?
Probably not, but that's slightly more complicated.
Should the police be able to access the Signal messages of someone they apprehended at a murder scene wherein other suspects fled the scene, and are therefore likely the suspects accomplices?
Likely yes. Or at least, most people would agree with it both in the pragmatic sense and in the Constitutional sense.
Should the FBI be able to, with special permission of a Federal Judge, watch all cell tower transactions in a 5x5 mile grid, while there's a literal manhunt on for literal terrorists during a literal state of emergency?
Probably yes again, it's hard because the proportionality and tactics are rare and unique.
> Your argument kind of falls on itself there --> We can surveil them in person, but not digitally? Why?
Because, before the internet, when surveillance was a lot of work, this prevented the abuse that is mass surveillance.
Only now do we see how much democracy relied on this natural limitation of state power for civil rights. It was never just the need for warrants that maintained civil rights. Remember the Verizon FISA court order authorized surveillance of millions, and that was just one order of hundreds.
Power corrupts. State power should be sufficient, but minimal. Being allowed to do physical surveillance only is sufficient to reduce crime rates to the point where most people can safely neglect that crime exists at all, and it is less power because it does not scale.
You mention proportionality yourself. Proportionality means that something is not done if the same objective (finding a given murderer) can be achieved in a less rights-infringing way. Proportionality at the policy level, means that a surveillance power may not exist, if its objective (such as safety from murder, i.e. low rates of murder, high chance of finding murderers etc.) can be achieved without the power or with a power that is less likely to be abused or that infringes rights of suspects less. (I admit this is somewhat of an editorialization; the technical meaning of proportionality is in [1]. To be clear the existing legal proportionality principle does not try to directly minimize power; but it does usually present an obstacle whenever new powers are created by law)
Limiting power is simpler and less error-prone than allowing power and adding control structures like warrant requirements for the power. It is thus better, if the outcome is the same.
> Should the police be able to access the Signal messages of someone they apprehended at a murder scene wherein other suspects fled the scene, and are therefore likely the suspects accomplices?
> Likely yes. Or at le[a]st, most people would agree with it both in the pragmatic sense, and also the Constitutional sense.
Sure, but that is not the question. The question is: should Signal be forced to implement its software in a way that allows the police to access its messages?
Likewise, everyone also agrees that in-person surveillance is in general legitimate, but that doesn't mean we all agree that the government should be able to dictate burdensome measures on third parties to make that surveillance easier: say, requiring that the friends of a suspect keep notes on his whereabouts. Changing its architecture would cost Signal a lot of money and goodwill, and the suspect would probably not be very happy that his friends ratted him out, either.
They can easily read all the Signal messages they want by getting the phone Signal was installed on (assuming the user is logged into Signal, which there's a 99% chance they are).
They are not willing or able to do that for whatever reason, but the ability exists.
I agree the technology used is a secondary issue, but law enforcement got lazy when they were able to wiretap phones willy-nilly whenever they wanted. They need to get un-lazy again. They can do everything they need to do without unencrypted traffic; unencrypted traffic just means they don't have to work as hard.
I'm fine with them having to work harder instead of them getting to see all the communications they could ever dream of.
> Should the police be able to access the Signal messages of someone they apprehended at a murder scene wherein other suspects fled the scene, and are therefore likely the suspects accomplices?
I believe under those circumstances, the police should be able to apply to a judge for a warrant, and if probable cause is found and a warrant is issued, attempt to access the Signal messages of such a person. A requirement that they succeed has technological implications; Signal, the device, or both would have to contain a backdoor, or use security measures that are not very effective.
I am an absolutist on this question; I don't think governments should have the authority to mandate backdoors in products or services even if doing so would result in the prosecution of many criminals who otherwise go unpunished. For those who want to balance the government's interest in prosecuting criminals against secure and private communications, the question I would ask is: how often do investigations or prosecutions fail because it was impossible to access encrypted data with a court order? A precise answer might be difficult since the content of the encrypted data is unknowable, but an upper bound could be established by assuming such encrypted data is always incriminating.
Are they though? First it's "only" to catch the super-terrorists. Then it's only to catch the murderers. Then it's for VIP missing person cases. Then it's petty crime. Then it's for political opponents. Then nobody dares ask anymore about "legitimacy, proportionality, oversight."
When those in power grant themselves more power, the onus is on them to prove they can't abuse that power. Of course, the way power works is they have that choice and we don't.
And to be clear, this isn't some hypothetical slippery slope, either - we've literally already seen it happen. E.g. various legislation passed ostensibly to target terrorism is routinely misused. Here's just one example that predates the Patriot Act, even: https://www.hrw.org/report/2010/07/04/without-suspicion/stop....
I don't think your link supports what you're saying at all...? In fact it gives examples of police forces voluntarily limiting its use, essentially the opposite of a slippery slope...
Why do you consider it acceptable for the DoJ put citizens and their communication at risk? Aren't they supposed to work for the people instead of against them?
It wouldn't be a very elaborate rationalization. e.g. "Banks have KYC and AML requirements. We have an information economy. 'Misinformation' is the equivalent of funding terrorism. Therefore, information-exchange platforms will also be required to perform KYC and AML or be prosecuted. Compute providers, ISPs, payment processors, and crypto exchanges [assuming they still exist when this hypothetical is actually proposed] will be forbidden from serving any platforms that don't."
Transparency is critical. If Signal cares about the ethics at least as much as the marketing then they did right by the ethics and by their bottom line.
Maybe that’s what congress should do. Until such time, Signal should follow the law, in letter and in spirit. Even if the US democracy is flawed, it’s still a lot closer to the concept than any alternative.
Yes they fucking should. Nobody should be above the law - nobody. There should not be “two tiers” of law enforcement, be it by class or occupation. Everybody should be held to the same standard.
The single most clear political lesson of the past decade is that using power, even blatantly cynically, when you have it, won't produce much of a backlash. Your fans will just wait until the "other team" does it to complain.
And that's for hyper-partisan issues! I'm not sure there's any truly influential political group that would strongly oppose this. Thinking it's just the politicians who are unaware and/or disagree with the tech-minded is a mistake. The populace is less on our side re: surveillance than we'd hope.
Making it unlawful to operate this kind of service would be a very bad idea, but it's far from clear that it's unconstitutional, and I would expect courts to rule otherwise if Congress decides to impose more logging requirements.
Anybody concerned about these issues should consider donating to their favourite non profit that can have an impact that works in the area. Most HN users can afford $20/year pretty easily (others could afford $200/month and not even notice it)
As they say, “Put your money where your mouth is.”
The ACLU is not what it once was. I will not donate to them. Even the EFF is growing questionable. I would definitely be curious what recommendations people have.
For supporting "this" very specifically, ACLU (in the form of their 501(c)(3) foundation) is directly involved as legal counsel to Signal, so donating to EFF would be less relevant to "this" than donating to either Signal (also a non-profit) or to the ACLU Foundation. But, sure, the EFF is also a good organization.
If your concern is specifically the ACLU's disagreement with recent Supreme Courts on the intended scope of Second Amendment rights, the EFF isn't going to address that either since it's out of scope for their mission. But that's in no way relevant to what Signal had to deal with here, an area in which I think ACLU and EFF are pretty well aligned.
> If your concern is specifically the ACLU's disagreement with recent Supreme Courts on the intended scope of Second Amendment rights, the EFF isn't going to address that either since it's out of scope for their mission. But that's in no way relevant to what Signal had to deal with here, an area in which I think ACLU and EFF are pretty well aligned.
The comments about the ACLU probably aren't about the second amendment. That's never been the ACLU's thing. It's probably about the recent backing away from the 1st amendment which has historically been the ACLU's domain.
They have limited resources, so they cannot fight every single fight. Some people object to their current method of filtering. Apparently, groups tied to neo-nazism/white supremacy are some of the ones they now filter out of consideration, even though they have defended such groups in the past.
Some people think the ACLU should fight specifically for such organizations to make a point that everyone, even hateful bigots, have the same rights.
Personally? I'm on the fence. On the one hand, given limited resources & all else being equal in two cases, why not choose the group of people who are nicer? On the other hand, if you choose the hateful group & set a precedent that even the most hateful people have the same set of rights as others, it's close to irrefutable in future cases with nicer people that they get those rights too.
As soon as you identify a label that can be used to deny free speech, that label will be applied to everything and everybody in an effort to deny free speech. It is extremely important to defend free speech. Some people get confused and think that a defense of free speech is a defense of the label.
Historically, the ACLU is one of the most well known organizations to defend free speech regardless of label. They should not create such a gaping vulnerability in their strategy.
Don't announce to the world that any case involving the word "crypto" will not be defended. Or the words "carpentry", "space", or "elephant". The issue should be free speech, not miscellaneous labels.
I think "failing to support" in the grandparent comment is too weak for some. This is Glenn Greenwald on a recent ACLU amicus brief[1]:
>> This is the first time, at least to my knowledge, that ACLU is explicitly arguing in court that the First Amendment's free speech clause has been interpreted *too broadly* by courts, and are advocating *a more restrictive view* of what free speech means.
I'm not sure about that case in particular, but on your question of
> why not choose the group of people who are nicer?
I'd say the grim batman ACLU of my alternate-history fanfic cares more about precedent than it does about defendants.
If Greenwald's high school teacher found out he was gay and started referring to him with feminine pronouns, because the teacher felt being gay feminizes you, would Greenwald just chalk that up to free speech? Or would he consider it bullying that ought to stop?
What if a white teacher spoke to the only black student in the room using his own version of AAVE, and used his usual English for the rest of the class?
I don't know the answer, but I can understand the ACLU seeing a case where rights come into conflict and choosing a result that seems more just.
There appear to be two different rights in conflict there, so there's some complexity to the situation.
On the one hand there's the teachers' free speech rights, freedom from compelled speech. On the other hand are the students' civil rights protecting against discrimination.
The ACLU in that situation has decided the latter should prevail over the former when it pertains to government workers & their speech while on-the-job.
Does that type of speech, refusing to respect a person's identity, have any additional legal implications given that these are public employees in a public school system? Is discrimination treated differently in private businesses versus government ones? IANAL, just curious
The brief summary here[1] implies that's the basis of the ACLU's reasoning:
>> Mr. Cross spoke in opposition to Policy 8040 at a school board meeting, and refused to comply with the provision...
>> While the teachers may disagree with the policy, they do not have the right to violate it in their capacity as K-12 teachers in the Loudoun County school system.
The brief was unclear about whether the teacher refused to comply in the meeting or in another circumstance, and I don't care enough to check, sorry.
FWIW, I'm mostly in favour of at-will employment, and think that they should be able to suspend or get rid of him for any reason. And I'm in favour of school choice, and think that the views of parents and teachers would be better reflected if the government weren't involved. And I'm a cynic, and believe that the only reason anyone's talking about a Virginia school is the governor's race, and it's all just entertainment for people living elsewhere.
Since several people are asking why the ACLU isn't what it once was, let me answer that.
In 1978, the ACLU successfully defended the right of neo-Nazis to march in the predominantly Jewish town of Skokie, Illinois. This action reflected their commitment to free speech, regardless of how offensive the speech might be. The movie "Skokie" and documentary "Mighty Ira" are based on this event-- I highly recommend them because this is an important piece of U.S. history.
In contrast, the modern ACLU has backed away from this stance. A leaked ACLU memo[1] says that before they take a free speech case, they will consider the "context of the proposed speech; the potential effect on marginalized communities; the extent to which the speech may assist in advancing the goals of white supremacists or others whose views are contrary to our values; and the structural and power inequalities in the community in which the speech will occur."
I understand citizens need the right to voice their opinion without fear of government repression; but citizens shouldn't believe they have the right to insult and behave antisocially toward other citizens.
Any kind of white supremacist behavior is not something to be treasured as freedom, because that enables their harmful behavior against other citizens.
It's only freedom of speech if it protects speech which is actually controversial. This isn't high school. People should have the right to give offense and behave antisocially, if for no other reason than because there is no one who can be trusted with the power to determine what falls into these extremely arbitrary and subjective categories.
Yes we should tolerate the speech of white supremacists and other hateful characters, because the alternative is to see reasonable statements lumped in with them. Where we should draw a clear and sharp line is at violence.
"It is better that ten guilty persons escape than that one innocent suffer."
- William Blackstone
"I disapprove of what you say, but I will defend to the death your right to say it"
- Evelyn Beatrice Hall
You prevent them from gaining power by winning debates against them and demonstrating to everyone why they're wrong.
I mean have you actually seen the "science" used to claim even the existence of different races? It's all misleading statistics and pseudoscience. You group a bunch of people together based on geographical origin and then observe some differences in the averages between the two groups, ignoring that the differences within each "race" are larger than the differences between "races," and that the lines are being drawn arbitrarily, and that even the measured averages could be different as a result of environment or culture rather than genetics.
There is no way for a Nazi to win that on the facts because the facts are against them. Which is why they have so little actual support. There are probably more trolls pretending to be Nazis than there are actual Nazis.
They're used as the boogeyman specifically because there is such widespread agreement that they're wrong.
Individuals who dared to make anthropological, social and genetic studies on ethnic groups were retaliated against with great anger and banished from academic life even if the scientific methodology was sound.
No one dares to do such studies anymore so we can conclude that the "good" has won.
Unless I'm misinterpreting your angle, this sounds like a blatant, hysterical mistruth. Nearly every day I see studies where social and ethnic groups are used to partition and understand the population, particularly with COVID.
So it depends what you mean. If you mean there are no studies done with such partitioning aimed towards the benefit of those groups, I call that bunkum.
If you mean studies are not sponsored where the outcome is of no societal value beyond reinforcing or justifying an established hegemony or excusing discrimination, then perhaps... and good.
You're missing an important point here, that Eric Weinstein brings up a lot. The goal of science is finding truth, but the mechanism of how it happens right now is very tied to the "goal" of each study. Goal-less data collection is apparently impossible to certify and "be acknowledged".
>no societal value beyond reinforcing or justifying an established hegemony or excusing discrimination
Researchers might have hypotheses they want to test, and any hypothesis that's not "good" is not explored. Because
1. real identities and careers are affected
2. collected data isn't good enough for journals
> You prevent them from gaining power by winning debates against them and demonstrating to everyone why they're wrong.
"A lie can travel around the world and back again while the truth is lacing up its boots."
It doesn't matter how solid your facts are if people are listening to vox pops of people who back their own internal beliefs.
> There is no way for a Nazi to win that on the facts because the facts are against them. Which is why they have so little actual support.
They don't need support, they just need an echo chamber to be encouraged. Pop onto stormfront and try to convince a handful of posters there and see how far you get.
> There are probably more trolls pretending to be Nazis than there are actual Nazis.
If x% (where x is some suitably small number) of a group are the only ones who truly believe it, and the rest are just trolls, then increasing the population size leads to both more trolls and more Nazis. It also doesn't matter to anyone whether the person spewing vitriol is actually a Nazi or a troll when the abuse is directed at them.
Our tooling for communicating ideas and finding good ones is equivalent to mob rule, same as a hundred years ago, just amplified. More at my older comment https://news.ycombinator.com/item?id=29051178
> Pop onto stormfront and try to convince a handful of posters there and see how far you get.
I've actually done this - it's far easier than you make it seem. I haven't really started collecting metrics on success, but that's kinda what we're missing - messengers who construct a pyramid of truthy statements and go back up the chain to whoever convinced THEM to a particular viewpoint when they find resistance to change from the opposite side. We're missing tech that incentivizes such behavior.
Of course this requires that everyone has set for themselves a threshold for when they would change their mind about a topic - which I find is far more prevalent among US conservatives (at least online, as far as I can tell) than among US liberals. Of course, ignoring the obvious field that doesn't have a threshold by design - supremacist religion.
My working theory is that supremacist religionists who converted out of it simply replaced it with liberal ideas (and were being supremacist about those), while there are both religionists and non-religionists among the conservatives.
Pretty sure millions were already in Soviet gulags with Pravda and Izvestiya cranking out official propaganda before anyone became a Nazi.
If you want to understand the terror of total state censorship practiced "the first time", then look to Lenin specifically and communism more generally.
Alternately, one can argue this climate of fear and systematic repression was present during the French revolution, but not localized to the entire state.
Cute, but trivially falsifiable. Wearing a Freddy Krueger mask on Halloween doesn't make you Freddy Krueger.
And to the extent that it is true, it implies that we should stop using Nazis as the boogeyman. Because trolls are going to use whatever will get a rise out of people. Best not have it be this that they summon up.
I don't think any quotes are required around the research performed by Rushton & Jensen (2010)[1] and others that they build on. Doesn't it seem perfectly plausible that, much like other physical characteristics (height, body composition, heat tolerance, etc.) the meat in our skulls also varies between ethnic groups? Shouldn't this be researched and investigated like every other observation?
Once you start studying groups by linguistic origin (not "race") things become much clearer since people historically didn't really move around that much, and genetic markers are quite visible (a la 23andMe). I don't think you need to be a Nazi or a troll to be interested in any of this.
You're talking about the differences in averages. But that's meaningless. If the median Asian man is 5'8" and the median European man is 5'9" (or whatever the numbers are), this tells you nothing about any individual Asian or European males, because like 40% of Asian males will still be above the European median and half of Europeans will be below it.
And it's a completely arbitrary line. If Swedes are on average taller than Austrians, are they different "races" then? It's balderdash. There are genes that affect height and whatever else, but they're widely distributed throughout all "races."
It tells you a whole lot about the composition of either end of the bell curve. It means that in a given population the shortest people are much more likely to be, say, Taiwanese (average height 5'3"), and the tallest Dutch (average height 6'½").
Edit: To your point about group taxonomy, usually precision is only available to a level where genetic markers are present and distinct. This in turn comes from groups not interbreeding. For instance, using 23andMe's taxonomy, Sweden belongs to the "Scandinavian" group and Austria belongs to the "French & German" group:
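The tail-composition claim above is easy to sketch numerically. Here's a minimal Python check using a normal-distribution approximation; the means and the shared standard deviation below are illustrative assumptions, not measured data:

```python
import math

def frac_above(threshold_cm, mean_cm, sd_cm=7.0):
    """Fraction of a normal(mean, sd) population taller than the threshold."""
    z = (threshold_cm - mean_cm) / sd_cm
    return 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail probability Q(z)

# Illustrative means: ~160 cm (5'3") vs ~184 cm (6'0.5"); SD assumed 7 cm for both.
short_mean_pop = frac_above(190, 160)
tall_mean_pop = frac_above(190, 184)

# A modest shift in the mean produces an enormous ratio far out in the tail:
print(tall_mean_pop / short_mean_pop)
```

This is the standard statistical point in the exchange: a shift in the mean says little about any given individual, but it dominates representation at the extremes.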
There are two adult brothers somewhere in New York City. They have the same parents. One of them is 5'3" and the other one is 6'½". What's the probability that one of them is Dutch and the other one is Taiwanese? If your answer wasn't zero I would like to see your work.
If they are brothers then I suppose they could be mixed Taiwanese-Dutch. But that really isn't the point is it? I'm talking about trends in populations. It's an answer to the question of "Why do we see different representation of these groups on the height spectrum?", not "Explain the difference between these two individuals".
Edit: Also to assert that, given two random people on opposite ends of the height spectrum, it's much more likely that they are distantly related than closely related (controlling for sex) because (controlling for nutrition) height is a heritable trait.
> It's an answer to the question of "Why do we see different representation of these groups on the height spectrum?"
But we already know that, by and large. It's genetics. And the thing we know about genes is that they align very poorly with "races".
It's like grouping animals by color. Then you discover that brown animals are bigger than red animals, because bears are brown and bears are big whereas cardinals are red and cardinals are small. But sparrows are brown. Using the logic of "race" we put the sparrows in the same category as the brown bears. The black bears go in with the black housecats. The red housecats are with the cardinals.
The fact that you can find statistically significant differences between the color groupings doesn't mean you're not doing something preposterous.
>It's like grouping animals by color. Then you discover that brown animals are bigger than red animals, because bears are brown and bears are big whereas cardinals are red and cardinals are small.
But there are groupings of people by genetic markers which correlate to linguistic/cultural origin. These aren't "races" per se but they are a valid and interesting taxonomy of people. And unsurprisingly, given they are determined by genetic distance, these groups tend to have a more homogeneous physical appearance. When we're talking about IQ, asking "why does it differ across identifiable groups of people?" seems to be a very controversial question to ask or attempt to answer, often getting you lumped in with the Nazis, which was my original point.
>If you take two populations of otherwise identical people and then isolate them from each other, they'll diverge over time. Maybe the land is different, or the weather, and some genes are better adapted to one place or the other so they become more common there.
By this logic every form of life is the same, since all evolved from the same unicellular organisms.
And yet my cat doesn't have leaves while my plant I keep in the pot doesn't meow.
> These aren't "races" per se but they are a valid and interesting taxonomy of people.
The thing about groupings like this is that they only exist in the statistics.
If you take two populations of otherwise identical people and then isolate them from each other, they'll diverge over time. Maybe the land is different, or the weather, and some genes are better adapted to one place or the other so they become more common there.
That's just averages. Originally 50% of the combined population had trait A. Over time, it becomes the case that 30% of the first population has it and 60% of the second population. You can measure the difference in the average. But the people in the first population who have the trait are the same as the people in the second population who have the trait. It just happens to be the case that that trait is better adapted to the place where the second population lives. If you want to talk about something, talk about people who have the trait or don't, rather than where their ancestors lived.
> When we're talking about IQ, asking "why does it differ across identifiable groups of people?" seems to be a very controversial question to ask or attempt to answer, often getting you lumped in with the Nazis, which was my original point.
Oh, absolutely. And if that's true, it's true whether you want to hear it or not.
The salient point is that the Nazis are still wrong even if it is. Because it's all still just averages. If there is some gene that makes you smarter and 55% of "white people" have it and 45% of "black people" have it and as a result the averages are different, the conclusion obviously isn't that all white people are smarter than all black people, so Nazis are still wrong.
There is also no concrete proof that that's the case rather than the disparity being a result of nurture rather than nature. It's legitimately hard to separate them out even when it's not as politically charged as this.
> The thing about groupings like this is that they only exist in the statistics.
You could say that about any grouping of people whatsoever.
> If you want to talk about something, talk about people who have the trait or don't, rather than where their ancestors lived.
But clearly where their ancestors lived is a major factor in identifying the likelihood that a person will have a particular trait. And all of this matters because people have gotten it into their heads that every cultural/ethnic group should be equally represented in every field. When you point out that, for the above reasons, this is just as absurd in executive leadership as it is in basketball, they call you a racist/Nazi.
The same objection is raised when people ask "Why are Ashkenazi Jews disproportionately rich/powerful/successful?" or "Why are African Americans disproportionately not?" and you point to the IQ data. In fact you're a racist for even thinking to study this in more detail. This is why I'll never trust anyone to decide what is and isn't an offensive thing to say.
> The salient point is that the Nazis are still wrong even if it is.
It depends on what you mean "wrong". They're clearly morally wrong for murdering and sterilizing millions of people. They're not however wrong that eugenics programs are just as effective in humans as any other animals. And there's more than one way to do eugenics. Why not, for instance, as a matter of public health, simply incentivize people with desirable traits to produce more offspring?
> It's legitimately hard to separate them out even when it's not as politically charged as this.
I agree. Like most things it seems to be genetic predisposition in combination with environmental factors.
>You're talking about the differences in averages. But that's meaningless. If the median Asian man is 5'8" and the median European man is 5'9" (or whatever the numbers are), this tells you nothing about any individual Asian or European males, because like 40% of Asian males will still be above the European median and half of Europeans will be below it.
The same holds for German Shepherds and Golden Retrievers.
>If Swedes are on average taller than Austrians, are they different "races" then?
But Swedes can still be considered a different ethnic group than Austrians. Does the exact terminology matter? Various groups of people have both differences and also things in common. So what?
The core problem here is that our technology for discourse is basically twitter and "voting with likes". The algorithms are optimized for attention and likes, so the mob wins - not the best argument.
IF we fixed THAT problem (with new technology, an early one is https://www.kialo.com/tour, but it's not good enough), we can have what you want.
Right now, we're effectively the same tech level as the 1900s (perhaps in a more dangerous way) w.r.t finding truth or the "right" arguments
I’ve never been a fan of voting with likes either. I’d be all in for legislatively banning up and down vote systems. You will get minimal traction here though as the political left use these content voting systems to focus effort and gamify division.
As a Black American who has fought against white supremacists in the street I would say the government has no right to prevent their free speech.
At the same time, I’m under no obligation as a private citizen to tolerate their odious speech. This even includes in business settings… At an old company I once asked our CEO to take down “sponsored content” from our home feed that was promoted by a group on the SPLC hate group list (for comparison we also regularly took down ISIS material.)
Maybe try thinking about it for a moment instead of resorting to the boring heuristic of assuming Americans are dumb zealots treasuring rights because they are brainwashed.
People like you need to stop saying things like "I understand citizens need the right to voice their opinion without fear of government repression" because this is exactly the opposite of what you want. You want views that you consider objectionable to be subject to government repression. You don't have to qualify it, just admit it.
The reasons people oppose restrictions on freedom of speech are fairly obvious to anyone who has thought about the question for more than 5 minutes. The moment you start deciding which speech is and isn't legal, you have now established a mechanism to censor. Ban 'hate speech', and those who define the term now control the boundaries of allowable speech, and you can guarantee bad actors will seek those powers and abuse them.
You misunderstood me. The government should not censor speech because governments tend to authoritarianism, but civil organizations also shouldn't support neonazis when they exert their freedom of speech against other citizens.
TL;DR: The government must not censor freedom, and citizens should not support acts of oppression disguised as civil rights.
"Shouldn't behave antisocially", as in e.g. that whole ASBO thing that the Brits had until very recently? I'm very glad the US doesn't have anything like this:
Civil order that it is a crime to breach. So you could get slapped with one on "balance of probabilities" while becoming criminalised for something fairly arbitrary like entering an area or being drunk. I seem to recall somebody was given an ASBO forbidding them from having noisy sex - which they subsequently did and then faced criminal sanction.
>Any kind of white supremacist behavior is not something to be treasured as freedom, because that enables their harmful behavior against other citizens.
But inciting harmful behavior against white people is somehow the right thing to do?
I believe all people should be treated according to the same set of rules.
The EFF takes weird policy positions. They're against speed cameras, but in favor of the status quo (police with guns manually pulling people over for speeding in cities).
That isn't a weird policy position. The speed cameras do ALPR and can store all the results, so they build a location history of all the vehicles that weren't even breaking the law. That's bad.
It's also bad that cops pull people over arbitrarily, but that's not the part in the EFF's bailiwick, and there are other solutions than putting ALPR cameras everywhere.
It’s even worse. The police don’t pull people over for speeding anymore (that’s good, at least from the “guys with guns” perspective). But the bad part is drivers are now emboldened to drive recklessly. These people now kill and injure more than the police — looking at the latest numbers police violence is down while traffic violence is up. IMO speed and red light cameras would do a lot to reduce the use of police while still seeing traffic safety enforced.
Well said. But you could also state your final sentence as “… would do the job that police are supposed to do but aren’t…” I see what you’re getting at, though. Another related area where we could reduce their use is ticketing for vehicle window tint violations. In California, vehicles can’t have tint on the front windows because it’s quite dangerous when other people can’t see drivers. Tons violate it, and it’s historically been “enforced” as a broken-taillight-esque reason to pull over POC and search them for drugs, etc. In areas where parking violations are enforced, the parking enforcement officers could easily check this and issue tickets.
Sorry what's weird about this? I assume they're against the creation of automated government databases of cars, licenses and persons (say with ML) and not tech per se
From my understanding the ACLU used to stand for the civil rights of everybody and even defended KKK members. But now they have taken sides on various social justice movements and targeted groups that oppose these sides. Instead of fighting for civil rights like they used to, they pick and choose which rights they think are okay to violate if it's for the "right cause". It goes completely against their old motto.
General trend has been that they've become more lopsided due to their donor demographics and having to serve them. Providing one counter example doesn't disprove it.
Is there something I can read that establishes the general trend? I know there was a recent leaked memo, but it seems like a weak way to establish general trend.
For the ACLU, they have lost the institutional interest in fighting for free speech. Fighting for the rights of digital Nazis to march, so to speak, is not something the ACLU defends today.
Yep... once no one defends the nazis, anyone can be branded a nazi, and you lose one more right (free speech).
And sadly, the word 'nazi' has lost all of its meaning in recent years, and has gone from swastika-carrying Hitler supporters to someone who doesn't want illegals working below minimum wage to take his job.
If people come to a different country to make a better life for themselves than they would in the countries they were originally from, what's wrong with that? If you're regarding them as sub-humans who don't deserve to work to make a decent life for themselves, rather than directing your anger at the employers who exploit those unable to rise up against the poor pay/working standards/..., we could argue over whether "Nazi" is the best word to describe those views, but it's irrelevant, we should be talking about those views. Do you think you're entitled to a better life than someone else solely based on the fact that you're born in one place and they're born in another? Do you perhaps think the responsibility lies with them to get themselves fired so that that poor employer who always wanted to employ you and pay you a decent wage but was somehow tricked by them can now do so?
There is always someone from some shithole country who's willing to do the work cheaper than you.
I live in a small european country, and literally every foreign worker is lowering the wages of all the local workers - because why offer higher pay to get a local, if you can get some cheap foreigner to do it at minimum wage? And i'm talking about legal workers... if you take in account illegals, that are willing to work below minimum wage, the situation gets even shittier for locals.
You weren't talking about legal workers, you were specifically talking about, this is an exact quote, "illegals working below minimum wage". That's what I was responding to. If you also have issues with people "from some shithole country" employed, working, and paid fully in accordance with your own country's laws and regulations, that's a different discussion, in that case the employers and employees are all doing what's expected and encouraged by your country; this is your country giving them and you a signal that your own expectations are unrealistic.
Yes, and in the second post I was talking about both legal and illegal, and both of them destroy the labour market for the locals... and in both cases, opponents of immigration (legal economic or illegal) are branded as nazis.
Is it all opponents of immigration that are branded Nazis, or is it just those that regard foreigners as sub-human and refer to their countries as shitholes?
For digital civil rights issues, I give my donations to the EFF. I personally think some of the regional ACLU affiliates can be hit-or-miss, but that's certainly not a universal opinion.
Well based on the names you would think so.
But in fact BCCLA fights at the federal level regularly, and is far more active than CCLA.
BCCLA is way more effective and aggressive in pushing for meaningful change in Canada.
Nice! Yeah, I have actually contacted BCCLA a couple times in the past and been really impressed with their thorough responses to my queries. I once asked about a fairly new program that was coming in and they knew way more about it than I expected. Very cool :)
If you use Amazon.com for shopping, and you do, then you can choose Signal Foundation for your benefiting organization. It's a small amount of money, but it's a little bit for every purchase.
Not such a great assumption anymore. I’ve given up my Amazon.com Prime membership and completely avoided them for many months now. I personally know several others who have done the same thing.
I aspire to not hand them AWS dollars one day, but Aurora has been an amazing product for my business.
No, I don’t. Bought exactly one thing from Amazon in the last 18 months literally because I could not get it anywhere else (and I searched for a week before I gave up and bought my mechanical backlit Mac keyboard from the devil)
I'm all for defending freedom of expression, religion, speech, etc. But this is wading into topics that aren't really about freedom, and in some cases they're actively fighting against freedom. AKA it's become way more political and one-sided.
From the descriptions you gave, I did not expect most of those links to end up being "The ACLU harmlessly addresses a trans issue in passing."
I think there's an argument to be made for the ACLU becoming increasingly performative, with catchy Twitter slogans edging out the real work, but this isn't it.
There is no “harmlessly addressing” the trans issue.
People feeling uncomfortable with their bodies and performing mutilation is sad. Some people remove their feet as a kinda fetish. Trans people try to look like something else to make themselves feel normal (evidence is that’s not super effective, but that’s beside the point).
The main issue isn’t that. People can do what they want in my opinion. It’s the compelled speech that the ACLU is promoting. “You have to use someone’s pronouns” is the exact opposite of free speech. It’s about as authoritarian a position as you can take. Imo pronouns are even more insidious because (a) they can change by the day and (b) you rarely use someone’s pronouns with said person, so other people have to “enforce” the proper pronoun use.
No, civil liberties has nothing to do with people using someone’s pronouns. It has to do with the freedom to say what you want to say, not compel others. ACLU is now on the side of compelled speech.
Really hard to take your grievances seriously under all that hate speech.
This isn't me policing your speech, by the way, saying that you shouldn't say something isn't the same as saying you shouldn't be able to say something.
Is the ACLU actively trying to force you to use specific pronouns, or is that just something they say because it's good social media optics?
The tweet in which you allege they’re “claiming there’s [sic] no men” is actually saying the exact opposite? It’s saying there are many different types of men.
Good sources, thanks for sharing. I didn't know a lot about the ACLU prior to this thread(other than in the news), but I agree - this is way beyond the scope of what the ACLU has been known for for years. Is there some other group that sticks to more neutrally standing up for folks, politics be damned?
My name is gender-ambiguous but I am pretty strongly male-presenting in person. People I meet over email/text communication often misgender me. Before 2016, when I met those people in person, they would joke about it. Today, it terrifies them that I might tell someone or escalate it, and I have gotten profuse apologies related to the pronouns.
I don't like that this sort of thing terrifies people. It creates a real bias in people's minds against working with anyone whose gender they don't know. The ACLU used to agree with me.
My free speech rights are being threatened. Does it matter what identity merit badges I own to have rights? You may be surprised what group I ostensibly am one of, and yet I find their pushing of this anti-free speech to be atrocious.
Perhaps it’s offensive to the other side being forced to accept someone born the opposite sex as exactly equivalent? Perhaps it has nothing to do with judging them or their lifestyle? Perhaps by the same logic they shouldn’t care about grouping themselves forcibly into a group their offending at least some of the members of?
Noticed that the last connection time is a date, rounded to the day.
1634169600000 (unix millis)
Thursday, October 14, 2021 12:00:00 AM
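For anyone who wants to verify the conversion, it is a couple of lines of Python; the value also divides evenly by 86,400,000 ms, confirming it was rounded to the day:

```python
from datetime import datetime, timezone

ms = 1634169600000  # last-connection value from the subpoena response
dt = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
print(dt.isoformat())          # 2021-10-14T00:00:00+00:00
print(ms % 86_400_000 == 0)    # True -- exactly midnight, i.e. day granularity
```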
Well done. I immediately thought that having a millisecond granularity of last connection time could be used to roughly correlate who contacted whom, depending on what the "connected" event is considered.
> this subpoena requested a wide variety of information we don’t have, including the target’s name, address, correspondence, contacts, groups, calls.
Couldn't they easily use the phone number associated with the Signal account to find the target's name? Even if the account uses a pre-paid SIM without a credit card attached, the telco could still reveal the location of the device and probably its IMEI, which could be used to find other SIMs associated with it.
I think Signal should stop pretending they're an anonymous service when their identity is based on phone numbers.
>Couldn't they easily use the phone number associated with the Signal account to find the target's name?
What do you mean? They already have the telephone number. It is listed in the subpoena. All Signal is doing is confirming that this phone number has a Signal account - sure, it's not perfect, but I don't really see your point. They don't need Signal to be able to do any of the things you listed.
For what it’s worth, as far as I know, Signal has never claimed to be an anonymous communication service. They are a secure communication service.
This means they minimize the information that is in the clear and do this quite well.
Now to your specific concern about the phone number. Signal only has your phone number and sign-up time. This means that for someone to request your information, they’ll pretty much need that already. If you look at the subpoena you’ll see that the FBI is asking for data associated with the person’s name or phone number. Name is something Signal doesn’t have. You’re right that it’s trivial to get with the phone number (and the FBI has obviously done this), but they must necessarily have this information to even request data from Signal.
In other words, Signal doesn’t provide a means of leaking your name. What using a phone number identifier does do is allow someone to go from name to phone number to Signal request, though that Signal request will turn up very little data.
As other posters have pointed out, the onus is not on Signal to connect a phone number with a person's physical address. Besides, what about burner phone numbers?
If we're talking about pure messaging here and not sending a jpg or other attachment, why not create an app that simply uses plaintext, like a terminal? Generate some 4096-bit keys and make it decentralised. Public keys could be shared among people willing to communicate. Anytime you have a central location where data is parsed, the timestamps and other metadata can be gleaned. The app could even fudge timestamps. I think the future of this is decentralised communications.

At the centre of this entire issue is the notion that someone else thinks they have the right to intercept your communications. I believe a properly-implemented SSH-style plaintext app using big keys would solve this to a point. The app could store all data in a self-encrypted file and self-destruct if tampered with. Security is a process, not a product, as Bruce Schneier is famous for saying, so methinks that the process is as important as the product. SSH using massive keys is a proven thing, and cracking a 4096-bit key will not happen in the short or mid term. In fact, most serious cryptographers say the continents will shift before they can break it.

Just a thought. I'm not a programmer outside of Bash/sed/awk and other *nix tools, so this isn't something I could develop, but as a decentralised tool I think it could work if you were willing to use plain text only. The app could have random numbers as a beacon that can be changed at will, and only those with that random number could communicate with you and you with them. A la Google Authenticator or something similar.
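The "share public keys, no central server" idea boils down to pairwise key agreement between peers. Here is a toy Diffie-Hellman sketch of that core step; the parameters are deliberately tiny and NOT secure (a real system would use a vetted group such as RFC 3526 or a modern curve), and the variable names are just for illustration:

```python
import secrets

# TOY parameters -- far too small for real use; illustration only.
P = 2**127 - 1   # a Mersenne prime
G = 3

def keypair():
    """Pick a random private exponent; the public value can be shared openly."""
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

a_priv, a_pub = keypair()   # Alice publishes a_pub
b_priv, b_pub = keypair()   # Bob publishes b_pub

# Each side combines its private key with the other's public value.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b  # same secret, never sent over the wire
```

The shared secret would then seed the symmetric encryption for the actual messages; no central server ever sees it.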
> 1. For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.
The problem is not the encryption. The problem is adoption and ease of use.
There are very many ways to encrypt messaging data by sharing public keys. There are also several decentralized, federated encrypted messenger apps. They are great options for folks, but frankly aren’t as easy to use or feature rich and thus not as widely used.
A few things:
-this looks like more marketing than substance, to me. prosecutors send out shitloads of these subpoenas all the time (see eg here https://transparency.fb.com/data/government-data-requests/) and I strongly suspect this was just a junior (or old) person who doesn't know what Signal is and sent a routine subpoena. They require next to no approvals. In other words, this isn't a coordinated attempt by DOJ to get Signal to start publishing records.
-Ultimately this issue isn't that big of a deal to law enforcement. In 99% of cases they can just get the defendant's cell phone and look at his unencrypted messages in the Signal and Whatsapp applications directly.
-I think in theory if a lot of platforms start doing end to end encryption globally, then things could get a little more interesting. But as far as I know for a lot of tech stacks (like ones more complex than simple messaging) that's difficult to do.
Identifying users is one thing, reading their messages is another. People can still deny and not answer questions.
What matters is the messages being encrypted, identifying users is already being made possible through other means.
So yeah, using a phone number is good enough, in my view.
There is no perfect security, there is only "good enough" security.
Not to mention that phone numbers are more secure, in my view, than other sorts of digital communications, and are not always monitored in all countries.
all of it? That's a bit of hyperbole. What is a more measured thought of how much of a negative impact there is?
Certainly saying "I know that Janis and Nate talked on this day this many times / for this long" and "Janis and Nate had a detailed conversation covering lemons and lye" have two different levels of private information revelation; and E2E protects against the latter but not necessarily the former, so why does it negate _all_ the goodness?
Yes, this is why I am very suspicious of Signal as a front for the CIA / NSA. A phone number can reveal so much information about a person because many online and offline services now ask for it.
Signal is the best we have on mobile at the moment in my estimation, but after a cursory analysis of Moxie, I totally expect one day it will be revealed he has been compromised somehow. Nation state actors already have baseband roots, so as long as those aren't your threat vector, you are probably, maybe, OK on Signal.
I find it really interesting that Bill Binney says, despite years of me hearing the opposite, that we should all be rolling our own crypto because it's a form of decentralization. The more time goes on, the more I think he's onto something.
The main problem I see is this: a future where only the hackers have privacy, and everyone else apathetically accepts their servitude and abuse. Furthermore, to maintain that privacy, hackers will have to be extremely selective in their friends, due to the invasive nature of the privacy violations from those around us, unbeknownst to them.
I run a Matrix instance on my own hardware for my extended family. I suppose that I could be served with a subpoena/warrant for the data, but the contents of any voice or video calls mediated through my Matrix server wouldn't be preserved.
Likewise, any private chats on the server would remain encrypted and I wouldn't be able to decrypt them even if I wanted to do so.
Since the instance isn't federated, and access is only available through invitation, only those who have access know about it.
As such, I'd say that private chats and voice/video calls through my Matrix instance are pretty secure.
Whenever I read things like this, I think about Crypto AG.
Solid, impenetrable encryption from a neutral and privacy-obsessed country. All a careful front for the CIA.
The encryption was solid as far as I know, but the CIA had a back door.
If there was a backdoor into the system somehow, it would be perfect to have stories such as the above to recruit criminals or even foreign intelligence to adopt it.
As far as I know Signal is 100% legit and delivers what they promise.
and I use it myself.
I can't help but think the account creation date (and the last connection date, although less so) was left uncensored for a reason.
The account creation date is basically equivalent to the phone number and would allow the owner of the account to know a subpoena was requested for them.
Last I checked, Signal still required a phone number to use, so it's an instant deal breaker for a lot of people. I have 3 kids I communicate with, but they don't have a cell number and just use Wi-Fi when they can. If I could use Signal with them I would. Instead I use Wire, since it seems secure and doesn't require a phone number. I can only imagine there are lots of other people with kids in my situation.
Threema is another app I've liked. They have a decent transparency report which shows the limited user data they collect/possess. Link: https://threema.ch/en/transparencyreport
Speaking of donations (a guy from a food bank whom I see in the Safeway parking lot didn't know this, so I think we can assume not everyone does):
Most "donate" pages do not allow for "donor-advised funds (DAF)." They assume you're giving it with your before-tax money and presumably taking a tax deduction for it.
In a DAF, which your financial institution surely offers, you can donate appreciated assets, e.g. your FAANG stock, and take the entire amount as a tax deduction. So if your 10 shares of Facebook (excuse me, "Meta") stock are at 322, you can take a deduction of $3,220 this year.
What's the catch? That money's gone, and you can't get it back. You can only "advise" your DAF to give it to a 501(c)(3) organization, which Signal is. There are no time limits.
The good part, though, is you can probably have your DAF give the money anonymously, so the charity can't bug you every time they're having a fund drive.
While we're on the topic: you can also leave your estate to a DAF. (If you're married or have kids, probably you should ignore this.)
So that money goes to charity, but what charities? You won't be here, obviously. When you're looking into this, see if your DAF administrator allows a "successor trustee." If not, that institution itself (Schwab, Vanguard, whatever) will disburse it.
If they do, you can pick someone whose values you trust to be the successor & disburse the money. (Probably someone younger than you!) You should ask them, or else they'll get a real surprising phone call right after you die.
Another benefit, it sounds like, is that you don't have to pay capital gains on selling those shares.
Like, let's say your intent is to donate $10k to some charity, out of the goodness of your heart and/or as a tax write off. You don't have that in cash, but do in stock.
You could liquidate $10k of stock, pay capital gains on it (if it appreciated since acquisition), then donate it. So you're out the capital gains tax.
The method you describe seems more efficient, since you don't need to sell; you simply transfer ownership of the asset.
Or is there still capital gains to be paid?
I wonder if billionaires are setting up charities as trusts for their kids, then "donating their shares to charity?"
You're exactly right, you don't pay capital gains tax, and DAFs really are the poor man's "tax-exempt foundation."
Billionaires have access to much fancier schemes than this, and I won't even attempt to describe all those. But yeah, I imagine "donating their shares without capital gains taxes" figures into them.
I just noticed you said "trusts for their kids" -- that's something different. If the children can access it, it's not a DAF. But trusts are much more complicated, and someone who understands them (which I don't) can hold forth here.
Yeah, I didn't realize what an enormous difference this made until I ran the numbers.
In your example above, let's say the person purchased those 10 Meta shares for $38 each at the IPO and they're worth $322 each now. That's $3220 in proceeds and a $2840 capital gain.
The taxes on this depend on income level and state of residence, but let's say they're in CA making $300K/year. They'll pay 20% federal capital gains tax + 3.8% net investment tax + 10.3% CA income tax, or $968 in taxes, and they're left with $2252.
On the other hand if they donate the shares to a charity (or DAF), they get a tax deduction for the appreciated amount ($3220), which can be taken against 35% federal income tax + 10.3% CA income tax = $1459.
So in the scenario where they just sell the shares, the proceeds after taking taxes into account are:
Donor $2252
Charity $0
And in the scenario where they donate the shares, they are:
Donor $1459
Charity $3220
In other words, for an effective cost to the donor of $793, the charity gets $3220.
To be clear, this is exactly the same for any gift of appreciated stock -- a DAF is not required to donate appreciated stock in-kind and deduct the full (unrealized) price without paying capital gains. But the vast majority of charities are not equipped to accept in-kind stock donations, which a DAF facilitates by taking stock in-kind and cutting the charity a check.
(Also, DAFs allow claiming the deduction during high-income years and deferring distribution to charities over a longer period of time.)
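The comparison can be sanity-checked in a few lines, using the rates assumed in the parent comment (20% federal capital gains + 3.8% NIIT + 10.3% CA on the sale; 35% federal income + 10.3% CA as the value of the deduction):

```python
shares, basis, price = 10, 38.0, 322.0      # 10 Meta shares, bought at IPO
proceeds = shares * price                   # 3220
gain = proceeds - shares * basis            # 2840 capital gain

# Scenario 1: sell the shares, pay tax on the gain
tax_if_sold = round(gain * (0.20 + 0.038 + 0.103))   # 968
donor_if_sold = proceeds - tax_if_sold               # 2252, charity gets 0

# Scenario 2: donate in kind, deduct the full appreciated amount
deduction_value = round(proceeds * (0.35 + 0.103))   # 1459, charity gets 3220

effective_cost = donor_if_sold - deduction_value     # 793
```

So for an effective cost of $793 to the donor, the charity receives the full $3,220, matching the figures above.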
Isn't that double counting? In the first scenario, $2252 is the amount that they get in their bank account post-tax (marginal). In the second scenario, they get $0 in their bank account post-tax (marginal), and there is a deduction but that occurs on the amount donated so they don't really gain $1459 but rather avoided paying those taxes.
There is at least one intermediate step: it's not prohibitively expensive to set up a charitable remainder trust. You have more control than with a DAF. But you have a fixed cost to set up the trust and some annual administration and tax compliance costs. It can make sense if you plan to donate more than, say, a million dollars.
This is more or less what billionaires do to pass their wealth to their children. Here's a recent article that goes into detail about one particular family's setup.
DAFs and GRATs are completely different vehicles. DAFs are beneficially owned by some 501(c)(3) and withdrawals can only go to arms-length, 3rd party 501(c)(3)s, with no reciprocity attached to the gift.
DAFs are a convenient way to gift appreciated stock (which is already a nice tax gift to the charitable wealthy), but fundamentally not a vehicle for passing money to your heirs.
Your article is about GRATs, which should be illegal.
I don't have this problem, but getting a "fair" appraisal of your art can be tough. Maybe they auction it off, and the proceeds go to their foundations?
Signal is a 501c3 nonprofit - they don't have all that much funding or legal firepower beyond their regular operations. The ACLU also loves them, and getting a letter from the ACLU probably makes matters go away faster than getting a letter from some random lawyer.
What worries me is that even though they don't own the data, they could be forced to push an update that will upload decrypted messages from people's phones. Not owning the client would be better
Not owning any server would also help. Metadata, contacts and groups can easily be recorded if you own the server. Federation is a big reason why I consider XMPP superior to Signal.
Federated servers are still servers. It’s nothing for the FBI to subpoena any server you’ve connected to and then any server that server connected to, etc.
The important thing is minimizing the data that is collected, period. Otherwise you’re just obscuring the data storage.
> It’s nothing for the FBI to subpoena any server you’ve connected to
I can choose to use a server outside the US. I can choose to run a server in my basement and still talk to all my contacts. You simply do not have this freedom with Signal.
Finally, I do not have to trust anyone about what (meta-)data is collected.
If you choose a server in your basement, then it's communicating with federated servers. Those servers that your friends use have the same data that you've sent to them. So they don't even need to find your main server, just collect enough.
An out of the US host is easier for them actually. Say hello to NSA exploits without a warrant!
Have you looked to see what metadata something like Matrix collects? How federation and contact lists work on that server?
I just realized something. One of the only things contained is the account creation date. How hard would it be for the FBI to pull the activation text you received at that time/date? Not impossible, I would imagine.
Edit: What raised my eyebrow is that the subpoena specifically asks for that. Why?
Shouldn't Signal be required to produce all the encrypted data stored for this user, in case law enforcement are able to get the associated private keys off the suspect's phone?
Signal stores messages on their servers until they're delivered at which point they're purged.
Additionally, Signal's encryption scheme gives their messages the "forward secrecy" property which means that acquiring key material at some point in the future does not allow you to decrypt any previous messages. Any encrypted messages that they could provide would be useless.
For more, check out their really interesting doc on the double ratchet algorithm that they use!:
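The forward-secrecy property comes from a one-way key chain. A minimal, stdlib-only sketch of that idea (an illustration of the concept, not Signal's actual Double Ratchet; the seed and labels here are made up):

```python
import hmac, hashlib

def kdf(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a message key and the next chain key from the current chain key."""
    msg_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_ck = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return msg_key, next_ck

# Seed would come from the initial key agreement (placeholder value here).
ck = hashlib.sha256(b"shared secret from key agreement").digest()

keys = []
for _ in range(3):
    mk, ck = kdf(ck)    # derive a per-message key, then ratchet forward
    keys.append(mk)     # (real clients delete mk and old ck after use)

assert len(set(keys)) == 3  # every message gets a distinct key
```

Because the KDF is one-way, someone who steals the current chain key cannot run it backwards to recover earlier message keys, so previously captured ciphertext stays unreadable.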
Related: does anyone know why the Signal-Server code allows for toggling request logging? [0] If this allows for logging raw HTTP(S) requests server-side, presumably this could be grabbing the passwords generated and held on each device used for authentication [1]?
There's also no mention of TLS termination in the Signal-Server repository on GitHub, or TLS between the Redis cluster in use. If FBI or NSA has compromised the AWS VPC, then all of this network traffic would be in the clear to be picked up behind the load balancer(s).
The Account DynamoDB table [2] also seems to indicate more information is tied to the phone number in cleartext than what is indicated by the response?
The HTTP request does not have the user's private keys. Those never leave your phone.
The DB table looks like Signal creates UUIDs to represent users and devices. This isn't really sensitive information, just how you can see and revoke devices associated with your account. Maybe I'm misreading something though. Do you have any reason to believe otherwise?
Reminder that this does not hold true for Apple's fake "end to end encrypted" iMessage: iCloud Backup, which is not end to end encrypted, uploads all of your iMessages* to Apple each night in a format that Apple can read without you (and turn over to the state upon legal demand such as this).
Note that disabling iCloud Backup won't help you, as it's turned on by default and everyone else you iMessage with will be leaking your conversation plaintext to Apple for you.
Disable iMessage. Use Signal exclusively.
* if you use Messages in iCloud, iCloud Backup backs up the cross-device sync key rather than the iMessages themselves, which means Apple gets your iMessages in real time as they sync between your iCloud devices, instead of once per day
I'm worried that the provided information could be incomplete. For example, that user could have messages waiting to be delivered to them. In that case, I think Signal doesn't know the senders but should still disclose the number of those messages and their size.
Signal erases that kind of information, but I'm pretty sure that user must have had some messages delivered to them while Signal was processing the subpoena. So pretending they don't know anything else is just wrong IMO.
It seems theoretically possible for signal to not know the original sender nor the final recipient e.g. if they use their central servers as a tor middleman (I forget the real node type name)
Good for signal but if I were going to play devil's advocate, I would put myself in the shoes of folks in the government that want this information... I would contact Google or Apple and ask them to throw in some non-advertised APIs that take and then send screenshots of the signal app from the OS to a central server. Put another way, if the OS isn't fully trusted, it's still game over, even with all of the encryption in the world.
If you have a phone number, zero-click NSO spyware can provide full control over the target device.
* The FBI could get the user's data from Apple if it's an iPhone (considering that iOS is closed source), or force a malicious update on the user in any number of ways (including pushing a bad update or manipulating safety numbers in Signal).
So why subpoena a messaging app that everyone knows doesn't have the requested information?
It's required. There are statutes tying disclosure of subpoenas to Obstruction charges. This is not a new issue; subpoena secrecy was a thing before there was an Internet.
The primary blind spot in threat analyses conducted by law enforcement agencies is that they do not consider potential threat vectors from bad actors within the state itself.
This threat is why there need to be checks on the power of the state to conduct searches, one of which is privacy technology like encrypted communication networks.
I think that’s uncharitable. Everyone is going through the motions required of them, and this is the public demonstration of those machinations (although Signal is a bit cheeky, which is fun). The next step would be government requiring, through legislation, more invasive logging and data collection (Australia and parts of Europe have already seen the beginnings of this discussion) of messaging apps (“we’ve asked for what we can, they said they don’t have it and aren’t required to have it, what do you want us to do?”).
When encryption and secure messaging is outlawed, only outlaws will have and use it.
> When encryption and secure messaging is outlawed, only outlaws will have and use it.
They don't necessarily need to outlaw it. They may just throw up enough hurdles that it doesn't become a major success. Developing a communication system that is secure, featureful and convenient to use for the general population is not a trivial task. A large effort that can be undermined.
E.g. if they only require logging from communication service providers but not from application developers then this would force a decentralized solution. If they lean on payment providers it might get difficult to charge for phone apps or get donations.
The software could continue to legally exist but see little adoption. Which is enough to enable surveillance.
Isn't this what happened to Protonmail? They were required by legal order to start logging activity for a specific group of users. It's not outside the realm of possibility that the govt could try to force a company to either start logging Signal metadata or provide a backdoored app to a user. Not that it would necessarily work, but I do expect them to try at some point.
With enough effort, anyone can go to jail. America held a taxi driver for 17 years at Guantanamo Bay with no evidence. Tech won’t save you from the state. As always, if your threat model includes a state actor, you are going to have a bad time. For all intents and purposes, their resources are unlimited.
Freedom is won in the courts and the legislature, not in the code (although tech is a useful tool for keeping government intrusions in check).
(I still use and donate to Signal, but have a healthy understanding of its limits)
Yes, but their tyranny must also increase in order to circumvent the technology. They will increasingly resort to actions like you described. Hopefully the population will eventually revolt and put an end to the corrupt government once it becomes unacceptably totalitarian.
Freedom is won through weapons. Encryption is a potent weapon, it can defeat states, militaries. Before computers, it used to be a military tool. It must be democratized, the whole world must use it.
They can put one or two people in jail, but they can't put everyone in jail. If everyone has easy access to end-to-end encrypted messaging and relies on it (for non-nefarious purposes), the government will have a tough time changing that.
This isn't their first rodeo. The DOJ is well aware of what happens when they send subpoenas to Signal. They're not sending it because they're unaware of the probable result.
Impressive, but why do they need to store the exact times of when the account was created and last accessed? I would think a very coarse time down to the month would be good for most system administration needs.
Note that the Signal LLC did not even try to object to providing information about their users. They provided as much information as they have. That is unfortunate.
On the books since 2011. Upheld in a recent decision of France's supreme court despite what some thought to be quite clearly contrary EU caselaw (which takes precedence over national law, roughly speaking)
https://www.nextinpact.com/article/45613/comment-conseil-det...
From that article: “The internet is generally not anonymous, and if you are breaking Swiss law, a law-abiding company such as ProtonMail can be legally compelled to log your IP address.”
Please don't post canned arguments to HN. They point to super-repetitive/generic places, and those usually get nasty, as indignation rushes in to fill the curiosity vacuum.
What do rights have to do with tolerance? The ACLU was trying to protect rights. Now they do it only if that doesn't interfere with their moral standards.
I see this as a change of scope. Once their scope was rights of all, now it's rights of a particular group.
Except it really doesn’t. The United States has been a stable democracy with absolute freedom of speech for two and a half centuries. The same for Switzerland and the UK (British hate speech laws were only introduced in the 1990s).
In contrast the country where the Nazis actually came to power, Weimar Germany, had some of the most extensive hate speech codes of any Western nation in the 1920s.
Further back we had punitive taxes on newspapers, laws on blasphemy, laws on obscenity, laws against "sedition" and most kinds of assembly..
The idea that the UK is a bastion of liberty is mostly a confection, really. The only thing we are a bastion of is liberty for the very wealthy - and only in the last twenty years or so have we really had a strong conception of rights in law. Our supreme court was only created in 2009: in all those historic free speech cases before then (and all the rest) the outcome was decided by literal aristocrats who had inherited a title.
>The United States has been a stable democracy with absolute freedom of speech for two and a half centuries.
By freedom of speech you might consider that law enforcement does not take action against someone for speaking their own mind.
But if companies and activist groups retaliate against someone by denying employment, refusing to conduct business, blocking purchases or home rentals, even instigating against that individual to the point of calling for violence, because he expressed his thoughts, then "free speech" might be pointless because very few will dare to speak their minds.
If authorities aren't going to punish someone, then some groups will.
> "free speech" might be pointless because very few will dare to speak their minds... If authorities aren't going to punish someone, then some groups will.
I agree freedom of speech is never absolute, and it is not equally guaranteed/protected.
But the idea of free speech is not pointless, even when very few will dare to exercise it because of those realities. Those few being there, despite the grave consequences from non-government or even governmental groups, are part of what makes their speech powerful and inspiring. That's why Damore or Snowden were actually boosted by those groups' retaliation.
Freedom of speech does not mean everybody can say whatever without any repercussion, or that government will help defend you. It means government won't retaliate against you, and you have to strategize how you are going to execute it.
I agree with your major points, but with some modifications:
> It means government won't retaliate against you
No, that's the meaning of the first amendment. Free Speech is a principle of tolerance not limited to the government.
If society as a whole believes in free speech, then that society will be more tolerant, both in private and public spaces. Not absolutely tolerant, but more tolerant. The first amendment is the codification into law of this broad principle.
The other thing, which is not talked about nearly as much, are the forces opposed to free speech. You have, basically, a tug of war. On the one hand, society believes that "free speech" is a valuable principle. So it does things like having a generous moderation policy on hacker news and laws like the first amendment are passed. But then you have the opposing forces, the ones who say "free speech is nice, but issue X is so important we must punish people who speak against it". That has traditionally been things like advocating violence.
Now if the set of such issues fills up all of life, then you will lose free speech. So the real problem is not people exploiting free speech -- the paradox of tolerance is a purely intellectual exercise that almost never manifests in the real world. In the real world you have the principle of intolerance, that more and more aspects of ordinary life are put into the "this is violence" bucket and so used to restrict free speech, up until people claim that speech itself is violence. No society has ever come close enough to valuing free speech so much that the paradox of intolerance could possibly kick in.
The 1st amendment explicitly restricts government; I don't think it codifies some real situation of tolerance in society that once occurred. There may never have been such universal tolerance in the U.S., maybe there was an idea of it. Your examples are not good - HN is a private website, and the 1st amendment was passed by a small group of elites, maybe with good intentions, but hardly because of a tolerant society.
I agree with you on the argument "paradox of tolerance" - it talks about an extreme situation that mostly does not happen, as a reason for general policy. This argument seems often misused as a reason to come down hard on any speech that is objectionable to some group.
A more sane policy would be to adopt a default of free speech, be tolerant, and only then decide on exceptions on a case-by-case basis, like promoting Nazism in Europe. I don't think promoting Nazism should be protected the same way other speech is, because of the concrete example we have of the tragic consequences of where it led and the cost the whole world had to pay.
Right. Only the fringe who weren't employable to begin with will speak their minds. That's probably why lately some lunatic ideas seem very loud... and everyone else just keeps their heads down.
> The United States has been a stable democracy with absolute freedom of speech for two and a half centuries.
The United States has had plenty of limits on free speech throughout its history. Letting Nazis or the KKK or whomever speak their mind and publish their literature doesn't mean we have absolute freedom of speech, just fewer restrictions on speech than elsewhere.
And it's worth pointing out that the paradox of tolerance was written as a reaction to Nazi Germany. The Nazis were the kind of intolerant group that paradox was arguing shouldn't be tolerated, not because of their speech alone, but because of their willingness to use violence, and refusal to tolerate any speech but their own.
If one side incites violence with their speech and the other side merely tolerates them for the sake of free speech, the violent side wins. That was the relevant lesson of Nazism - not that the Nazis would never have taken power in a Germany that allowed them to speak and publish freely, because they did.
This is why the KKK is allowed to march, but not allowed to make terrorist threats or incite violence (which they would be if the US had absolute free speech): even tolerating the speech of scoundrels has to have a limit.
I don't think the Church has any hope to regain the power they once had.
These days my biggest fears are communist regimes (like China) and Islam spreading in Europe (we already have some courts partially applying Sharia laws in the UK).
The Church was the Evil Empire long after the medieval period - in some countries, like Poland, people are still being denied reproductive rights because of Catholic ideology. Islamic extremism is a marginal problem compared to this - there are no places in Europe where people's rights are limited because of Sharia.
We’re not even a hundred years out from culturally and legally discriminating against an entire race of people in the US. Democracy has been stable here (and continues to be) but for whom was it representative?
Are you actually serious? You don’t know which race the US discriminated against in the past 100 years?
The answer is black people aka African-Americans (as well as Chinese but that’s just outside the 100 years) [1]
You do realise that black people did not have freedom of speech for almost all of America’s history?
The idea that America is some bastion of free speech is utter nonsense, when you consider that a significant portion of its population was denied this (and much more) based only on the colour of their skin.
I mean, if you think about it, for most of America’s history it’s been a literal fascist state.
This is a distortion of the paradox of tolerance, it doesn't endorse censorship.
The paradox of tolerance only comes into play when views start getting translated into actions. Let the Nazis march down Skokie. Action should only be taken against them if they start acting on their views. Removal only happens in the interest of self-preservation. The paradox of tolerance is not an endorsement of censorship.
The paradox of tolerance arises because (neo)nazis want to abolish free speech, indeed they want to abolish entire groups of people.
What is legally protected as speech and what is actually speech cannot be equal. For instance, speech that involves property damage is not protected, even though that is a common kind of speech in certain circumstances (e.g. graffiti "Romans go home")
Conversely, some groups abuse free speech protections in order to intimidate others, e.g. Westboro Baptist church picketing funerals. I would call this an action because the primary intention is not expression but rather intimidation.
It may be that such actions must be legally recognized as speech in order to maintain the enforceability of speech protections, as intent is a nebulous concept, but it is not obviously so and many jurisdictions don't recognize such things as speech.
> The paradox of tolerance arises because (neo)nazis want to abolish free speech
The argument you're making is that free speech is important so we need to protect it by censoring people who advocate imposing censorship.
But then you're advocating imposing censorship, which implies you should censor yourself. Which is why it's a paradox.
You still have to choose between "don't censor things" and "censor things" with the knowledge that if you choose "censor things" it's not just you who will be adding to the list of banned knowledge.
It's not a paradox if you guarantee everyone an equal chance to express themselves in writing or speech and equally punish violent behaviour, which is what the PP wanted to express, I believe.
Westboro Baptist is a grift. Its founders and leaders are lawyers. They intentionally yell offensive things to inflame public opinion in the hopes that someone assaults them or the police violate their rights. Then they sue for tons of money.
There is no paradox. You either support the right of everybody to peacefully assemble or you do not and if you do not, you don't support freedom of speech.
Of course not and we don't in the US as a matter of fact. In LA they have gang-injunctions, where part of the injunction is that you can't even be near people of the same gang at all even if there is no criminal behavior afoot.
"The 4–3 ruling, overturning a Court of Appeal decision, upheld a San Jose injunction that barred the gang members from "standing, sitting, walking, driving, gathering or appearing anywhere in public view" with each other in a four-block radius. "Liberty unrestrained is an invitation to anarchy,"" wrote Justice Janice Rogers Brown (appointed by Bush).
It’s easy to say the net win for society is privacy. But it’s important to also acknowledge that it comes at a cost — there exists criminal behavior that most reasonable people would agree is bad and should be stopped, but that may reach a dead end with services like Signal. In framing the examination of criminal behavior as a problem, you are suggesting there shouldn’t be ways to uncover crimes.
> What you can't do, is click on a button to spy on people
There's a subpoena in this process that you're glossing over. You can argue that's too easy or too secretive or something, and that's more than fair, but it's not just 'clicking a button'.
I'm not arguing about the subpoena, I'm arguing against the idea that encrypted solutions are bad.
If you have a subpoena to open a safe, and you realize that you have no tools that are strong enough to open that safe, you don't suddenly blame safes. You don't tell banks they should stop using safes. You don't ask them to create weaker safes robbers can break into.
You try another route.
A subpoena is fair. Asking signal to preemptively not encrypt the data in case we need it later is not.
azinman2 didn't say that there should not be encryption, just that there's a cost, and I think that's a fair statement. Sometimes, 'other methods' are not viable and you're not going to be able to stop the bad guys.
A subpoena is basically a rubber stamp after filling out a form, often done in secret with the barest of oversight. A warrant requires at least a bit more justification.
The obvious counter-argument, unfortunately, is that the government has a clear idea of which requests the FISA court would reject, and therefore never bothers making such requests. That would mean the court is correctly preventing executive overreach and there's nothing to worry about.
If there is an automated way to get this data, the agencies will do it, and then retroactively generate the paperwork to make it look ok in court if they feel it’s worth their time.
If structurally there is no privacy except for some friction in generating paperwork, then we already have solid evidence, here in the US in modern times, that you de facto have no privacy.
This is only true if the companies you're asking for data refuse to provide it _without_ a subpoena. Many companies (let's use AT&T as an example) will provide law enforcement whatever data they ask for without requiring a subpoena.
Bad example: regulated telecommunications companies are required by statute to provide certain information on request. A better example is Amazon, with their law enforcement partnerships through Ring.
I assume parent was probably referring more to the subpoena- / warrant- less "creative" solutions that have been discovered, than the typical exhaustion process.
So you want to cost the taxpayer significantly, with potentially months of unneeded work, and expose cops to potentially more danger, to ultimately arrive at the same result? How exactly is this better?
Wiretapping is illegal without a warrant. I believe the spirit of the law there implied that wiretapping of [previous, historical conversations] was _always_ illegal, since a wiretap could only be tracking future conversations by its very nature.
The nature of communication has changed, such that all conversations theoretically have a permanent, historical record, despite the intention of those conversations to not have that historical record. It's called "instant messaging", after all, not "perpetual letter writing". It's meant to be an analogue to talking directly with one another.
The path we've gone down where everyone uses a third party to communicate with each other, and that that third party could theoretically record and retain all communications back and forth in perpetuity does not change the _intent_ of the laws as they were written.
The laws were to protect everyone from unreasonable review of their historical actions.
Perhaps you remember that story - I've completely forgotten the source and am having trouble finding it - about the person taken in the night and thrown in front of a judge. He asked what his crimes were, and the judge said "that's what we're here to find out", as they were going to go through everything he'd ever done to find something to charge him with.
"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him." -- Attributed (possibly apocryphal) to Cardinal Richelieu (1585-1642).
Ironically, under totalitarian regimes, citizens find themselves faced with the choice between either following the unwritten rules of the regime (which require them to break the laws as written) or following the written laws (which are an impediment to the regime, and thus cause one to be guilty of more nebulous crimes like "sedition").
I don't disagree that overly broad laws are a problem in non-totalitarian countries, but I think that regimes like Stalin's are a special case.
>So you want to cost the taxpayer significantly with potentially months of unneeded work and expose cops to potentially more danger to ultimately arrive at the same result? How exactly is this better?
Because my privacy and that of most others who are decent, law-abiding citizens is more important than not making police do their jobs.
How do you think police caught people before apps like Signal? With real police work. Perhaps if they had to spend more time doing that, they wouldn't have time to beat and kill as many unarmed civilians.
>there exists criminal behavior that most reasonable people would agree is bad and should be stopped
Absolutely.
>In formulating your statement that examining criminal behavior is a problem, you are suggesting there shouldn’t be ways to uncover crimes.
I didn't get that at all. Before Signal and other encrypted apps, folks who didn't want to be spied upon would meet in person, in private places or write messages in code.
That didn't stop the police from bringing down many criminals, such as Al Capone, the New York Mafia and many others, did it? Nope, it didn't.
What you seem to be advocating is that everyone's privacy should be forfeited so police can get information without doing, you know, police work.
I'm all for bringing criminals (especially violent ones) to justice. But I'm not willing to give up my privacy so that police can spend their time eating donuts instead of doing their jobs.
Feel free to disagree, but I'm going to keep using Signal and be glad of it -- not because I'm involved in criminal activity, but because I value my privacy.
You’re attacking a straw man. I never proposed anything other than recognizing the cost of encryption. And if you are to honestly do so, then you also need to recognize things happen now digitally that would have been in person before, which ends up leaving clues like witnesses and DNA.
This is the cost of abusing the public's willingness to allow certain exceptions to civil liberties. In a society where the public generally trusts the authorities, this problem wouldn't occur. People would almost always be willing to have their communications available for potential judicially-guarded examination, trusting that only justified suspicion of particularly violent crimes will ever be cause for using it.
But when the authorities transgress once too often, the public in general will switch to services that properly defend their privacy.
We can consider this a game-theoretic outcome of abusing the public's trust. The consequence will eventually be that properly heinous criminals will have better tools for not getting caught.
Those who want to keep the current draconian status quo in place are incentivized to make public any wins, as that would justify the existence of the extreme measures. The fact that they haven't boasted about any win is telling.
Yes, and we should also be aware of a rhetorical strategy used by surveillance state apologists where they alternate between "The lack of recent terrorist attacks proves that mass surveillance works, so we should extend it to fight other crimes" and "The recent attack shows that we need more surveillance powers to protect ourselves".
The beauty of the argument is that it can never be disproved, because all successes or failures theoretically need to be kept secret, no?
Would YOU be happy spending trillions of dollars with no way to see if it was accomplishing anything, while clear negative effects were also occurring?
The widespread abuse of power in government agencies makes this argument a little naive imo. The vast majority of what they do has very little effect on anyone's safety. I'd rather be able to communicate privately and let people keep selling drugs if they want to.
>the widespread abuse of power[...] The vast majority of what they do has very little effect*
doing a lot of work here. To what degree is that simply anti-governmental sentiment rather than an honest evaluation of the agencies in question?
Say you were living in a narco neighborhood in Mexico where cartels regularly shoot up civilians in private wars - have you considered how badly institutions could do in comparison?
Have you considered that the institutions ostensibly serving those civilians might have been infiltrated by the cartels, and that giving them more power might not have the effect you want?
The net effect on society when a government or authority is granted broad powers of surveillance is the abuse of that power to serve the desires of those in power rather than society in general.
Your statement is carefully crafted to sidestep this with the wording, "...there exists criminal behavior that most reasonable people would agree is bad and should be stopped that may reach a dead end with services like Signal...", ignoring that the crime of abuse of power is far greater than any crime that could be prevented when such power is granted.
There will always be "some people" that think this way. But more certainly such powers will be abused by those entrusted with them.
The end doesn't justify the means. Police in democratic societies face limits on what they are allowed to do to stop crimes, uncover crimes, or prosecute criminals - like requiring a search warrant, or restrictions on how long the police can hold and interrogate you, and so forth. But speech in general has always been a private matter; encryption only reinforces society's status quo.
What argument do you have that less encryption is the preferred solution?
I have family members that have gone through violent crime that now have PTSD, and due to lack of evidence because of the inability to read chat logs, the perpetrator is free and the case never brought against him.
Meanwhile Encrochat's non-encryption ended up allowing a multinational set of drug cartels to be taken down.
I (obviously) have no idea about the details of that situation, but since a violent crime can't be committed over the internet via a chat app, there ought to be physical evidence of that crime, no?
If there's some sort of conspiracy element to that, I can see how chat logs might be useful.
But attempting to require folks to provide information they don't have (as is the case here) is a fruitless endeavor.
What solution would you suggest? Get rid of encryption? Force providers to collect the contents of their users' computers and phones?
While, as I said, I sympathize with your family members (and you), such an outcome doesn't justify taking away everyone's privacy.
Especially since the vast majority of people are decent, law-abiding folks.
I get that your experiences and the pain they've caused won't allow you to see things differently, but privacy is important, and I for one, won't give mine up without a fight.
I don't wish to come off as unsympathetic, because no one should have to experience violent crime. But don't you consider the danger of weakening encryption through law? Facebook and Google are giants that have earned all their money because they are able to read everything people say on their platforms in plaintext. Once you weaken encryption for the police, you weaken it for everyone with motives like those of the police or the state.
The US government is too caught up in prosecuting victimless crimes, bullying defendants into taking plea deals (and forfeiting their right to a fair trial), handing out cruel sentences, and using evidence borne from illegal searches (while lying about it).
Until all of that changes I am not interested in giving them more ammo.
They can pull this information from either the sender or any of the recipients phones. If the government knows the sender, they can arrest them and confiscate the phone.
No. They’re suggesting that law enforcement should have a valid reason to request someone’s private data, via a process such as this one.
You have added that last line yourself, and it appears to suggest that you would prefer all of humanity be constantly surveilled in case it may catch more criminals.
The Fourth Amendment clearly states that law enforcement must have a warrant, issued where a judge agrees there's a valid reason to demand private property, with very limited exceptions.
E2E does not require a valid reason. Its only change, as far as law enforcement is concerned, is to stop monitoring even when they do have a valid reason. (Which I think most people feel is an acceptable trade-off.)
> you would prefer all of humanity be constantly surveilled in case it may catch more criminals.
Not only did I not say such a thing (I even said it was easy to argue that encryption is a net win), it’s not something I believe, especially when you put it in such extreme terms. But encryption brings a cost, one that shouldn’t be ignored.
Most people here are making extreme arguments — assuming everything is about mass surveillance and that crimes are more often than not victimless. This ignores the reality that real crimes regularly happen that most reasonable people would wish to stop, and when you add friction to that, there are many cases where justice will not be served.
Apart from just not having encrypted data, the only way to achieve what you're suggesting is with a government backdoor into the encryption.
Any backdoor - any! - will result in your data being exposed, sooner or later. Your Signal messages could then be exposed in a data breach on the dark web for all to see.
It is not worth it to risk everyone's privacy to allow for the chance at easily prosecuting a small number of crimes. Remember - you're not preventing crime this way, just allowing for easy evidence capture. There are viable alternative ways of investigating crimes, as others here have said. There are not viable alternative ways of protecting our data.
So there was no crime before Signal? The "I have nothing to hide so I don't care" argument is so shortsighted. Absolute power corrupts absolutely. Remember this from the Nazi resistance?
First they came for the socialists, and I did not speak out, because I was not a socialist.
Then they came for the trade unionists, and I did not speak out, because I was not a trade unionist.
Then they came for the Jews, and I did not speak out, because I was not a Jew.
Then they came for me, and there was no one left to speak for me.
Now is the time to speak out. By the time you want to protest and push back, it could be too late.
Think about it this way: if the government wants to know something about you, they'll be able to find out. Switching browsers, or search engines, or email providers, or chat apps will not stop them from their goals.
But it can make your life a lot more inconvenient.
> In formulating your statement that examining criminal behavior is a problem [...]
Who exactly said this? It's rather the other way around: flagrantly examining and being able to examine non-criminal behaviour at a whim is a problem. The excuse of potentially being able to spot criminal behaviour is not enough.
The GP did: "I actually believe that law enforcement has the legal right to subpoena information, with a judge's consent, while investigating criminal activity. This is exactly the solution to that _problem_." Nothing was said about spotting at large, but the context was subpoenaing information with a judge's consent while investigating criminal activity.
>"I actually believe that law enforcement has the legal right to subpoena information, with a judge's consent, while investigating criminal activity. This is exactly the solution to that _problem_."
Absolutely. The other side of that coin is that people are not required to keep (or in this case, even gather) information in a way that allows the government to obtain it.
I'd also point out that this isn't about information that could prove a crime. It's about the government demanding information from a third party about unknown persons and the contents of their personal effects.
Given that Signal doesn't collect or have access to such information[0]:
"...this subpoena requested a wide variety of information we don’t have, including the target’s name, address, correspondence, contacts, groups, calls."
It's not possible to provide it. Are you claiming that Signal should be required to gather such information solely for the benefit of the police?
As the Fourth Amendment[1] to the US Constitution says, in part:
"...and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized"
And since the subpoena was asking for Signal to identify the subject (their name), such a demand is clearly outside the bounds of the Fourth Amendment.
I'll say it again: Whether a judge (in this case, it was a grand jury and not a judge, but why split hairs?) agrees or not, Signal can't provide information it does not possess.
I suppose a law could be passed requiring them to collect such information as was demanded, but it's hard to see how that would be defensible on any grounds.
We end up debating trade-offs where people don't agree.
Privacy with end-to-end encryption keeps everyone's communications safe. Criminals, politicians, people working for government contractors, and everyone else. This means criminals can get away with more things. It also means that politicians and surveillance governments have a harder time monitoring regular people or their government challengers.