> The district court said it felt bound to dismiss her claims because of a 1979 Supreme Court case, Smith v. Maryland. That case involved the collection of the phone numbers dialed by a criminal suspect over the course of three days. It’s one of the cornerstones of the so-called “third party doctrine,” the idea that people have no expectation of privacy in information they entrust to others—and it’s outdated to say the least.
Post "the Fappening" and the realization that there are essentially no enforceable protections for information you store in the cloud, I'd say Smith v. Maryland makes as much sense as it ever did. After all, back in 1979, call records were stored in a dusty database somewhere, used only for billing purposes. Today, the corresponding data stored by Facebook, Google, etc, are mined, processed, and correlated across time and space for advertising and marketing purposes. If Smith stands for the proposition that the government has at least as much right to access your data for law enforcement purposes as AT&T does for billing purposes, then replacing "AT&T" with "Google" and "Facebook" and "billing" with "advertising" doesn't undermine the rationale of the case one bit.
Obviously you can argue that it was wrongly decided in the first instance, but that's a different argument.
Some kinds of data have to be recorded, e.g., for certain sorts of financial transactions in excess of a certain amount. But you could operate an anonymous email service or some other sort of service where you don't retain any records. Government entities have been aware of (and monitored) 4chan for years, for example, but I've never heard of any attempt to compel moot to engage in harvesting or storage of data (by compel I mean via litigation or legislation, not some politician saying 'there oughta be a law' on the nightly news).
In this line of argument, the host that houses the servers is also a third party. So there's no need to contact the operator when they could just access the machines.
That doesn't mean some Federal agents haven't paid him a visit and told him to log IPs or else they might start "investigating" the site. If they wanted to, the Feds could easily come up with grounds for seizing the site's hardware. I suspect moot "volunteers" information related to serious crimes.
It is notable that some 4chan competitors have gone out of their way to say they don't log IPs and have distributed their infrastructure.
Point well taken, but it's a bit of a red herring, isn't it? The government isn't forcing Google to sift through your email, or forcing Facebook to figure out who your friends are, or forcing Apple to track your movements via GPS. I think there is a lot of room in Smith to treat that data differently than data the government requires you to disclose.
There is that too. But I'd say their public complaints about securing the data better are pretty clear evidence they want to bias the system outside of legislative channels as well.
> Post "the Fappening" and the realization that there are essentially no enforceable protections for information you store in the cloud
I don't think that is a fair statement. I've got financial/billing information for many people that moves across the internet. We haven't had any serious breaches that compromise that information.
A fair statement would be "Apple et al. do not find it cost effective to secure such data while simultaneously pleasing shareholders and law enforcement."
The moment Apple et al. want to try to secure it more, the Government complains about it publicly and trots out bullshit about catching pedophiles. The Government's methods are actually more dangerous to children, since making that data insecurable guarantees that people who are willing to break the law can get to it.
The Government doesn't want the data secured because it would complicate things for them, yet the Government simultaneously says it has the right to all such data.
It is really evidence of a systemic bias in how the system works. Sure, the Judge can't really do anything [without being accused of activism and likely overturned by appeal] but that doesn't make this 'less bad'.
If it wasn't a general Government position that this was true, they should have muzzled their people and/or issued statements stating it was the position of those people and not the Government as a whole.
If Smith stands for that straw man, then it's even stronger today.
From the horse's mouth:
> First, we doubt that people in general entertain any actual expectation of privacy in the numbers they dial. All telephone users realize that they must "convey" phone numbers to the telephone company, since it is through telephone company switching equipment that their calls are completed.
People dial phone numbers; they don't dial cell towers or GPS location data, and there are of course infinitely many other examples.
> Second, even if petitioner did harbor some subjective expectation that the phone numbers he dialed would remain private, this expectation is not "one that society is prepared to recognize as 'reasonable.'"
Users surely realize that every gmail they send and every Facebook message is both mined for information and also accessible to hundreds of people at these companies. At least, they realize that as much as they ever realized that the phone numbers you dial don't just sit in your phone but have to go out through the wires to the phone company.
As for the second part--the test is whether a subjective expectation of privacy is objectively reasonable. That is to say--it's what a reasonable person would expect knowing the facts, not what the average person subjectively believes. It's possible for your average person to have a subjective expectation of privacy that is not objectively reasonable if your average person is confused about the objective facts.
Looking at the objective facts -- cloud data is mined and processed and accessible to many people at cloud companies -- it's hard to say that any subjective belief that cloud data is private is objectively supportable. It might be widely held, though I think that's changing with every news story of iCloud leaks, but that doesn't make it objectively reasonable.
I give my data to Facebook. Facebook tells me what they're going to do with it, and provides me with service in return. This doesn't mean I've given Facebook permission to sell my data to anyone. While Facebook makes extensive use of my data -- more than is needed to present a wall to me -- that doesn't mean I lose expectations of privacy.
I'd want Facebook to fire people who were looking through my photos without operational reason, as one example.
Perhaps it's a European thing: Companies should only ask for the data they need; they should only keep it for as long as they need it; they need to protect it against loss.
I strongly believe that law enforcement and intelligence agencies should be using validly formed legal documents to get access to cloud (and other) data.
Let's take a simpler situation. Say you tell me that you were with Bob and Jim out on Mill Road last week. Now if a crime has been committed, and a cop is asking questions door to door if anyone knew where you were and who you were with, do you think that the officer should have a signed warrant for me to disclose to them that you said you were out by Mill Road with Bob and Jim? How would the Facebook situation be any different, other than the fact that they make money off you telling them these things?
> Now if a crime has been committed, and a cop is asking questions door to door if anyone knew where you were and who you were with, do you think that the officer should have a signed warrant for me to disclose to them that you said you were out by Mill Road with Bob and Jim?
It's important to distinguish between you being allowed to tell them and them being able to compel you to tell them. I see no problem with requiring a warrant for the second one.
A warrant isn't required to compel someone to testify against someone else. Historically,[1] the balance has been struck in favor of the government's right to all relevant evidence.
[1] The subpoena ad testificandum dates back about 800 years.
I get that judges are resistant to changing something that has existed for a long time, but it still seems like a bad policy. If someone has to be compelled to testify, it will almost always be either because the witness doesn't want the perpetrator to be punished or because the perpetrator is so dangerous that the witness is afraid to testify against them. Compelling testimony in either circumstance (as opposed to, say, negotiating for it in exchange for witness protection) is very likely to produce false testimony. It's hard to see a government interest in false testimony. Moreover, history is not always right. Censorship is as old a practice as the printed word; that's no excuse not to apply the First Amendment to curtail it.
This is also justifying the lack of a warrant requirement based on something that breaks the analogy to the original case. Witness testimony is no one's "papers and effects", unlike stored data.
How do you expect the courts to respond if the government prosecuted AT&T for obstruction of justice for refusing to execute a wiretap without a warrant?
Strictly speaking they don't "make money off of you telling them these things." They make money from advertising in a well-targeted fashion, which is actually a much more complicated proposition, and one that has far more to do with how much sense they can make of the data you provide – not just the acquisition of the raw material.
Using a more accurate view of their model, it becomes clear that FB is strongly incentivized to keep their raw data to themselves. Failure to do so means risking that someone will not only beat them at their own game, but will do so at their expense. Same goes for Google, etc. So yes, it is perfectly reasonable to share a lot of very personal data with them while expecting that much of it will be carefully protected. After all, the stratospherically high value of their position depends on the inability of others to connect the very fine-grained demographic profiles they create with the specific individuals behind them.
Given this reasonable expectation of privacy, it makes more sense to see these third parties more like doctors, priests, or accountants. Yes, they're all third parties too. But they're ones who we absolutely expect to protect our privacy. Given how deeply established the norms of confidentiality are in cases like these, I fail to understand why we've acquiesced so readily to the fairly radical standard set in 1979.
"Users surely realize that every gmail they send and every Facebook message is both mined for information and also accessible to hundreds of people at these companies."
You're kidding, right? My expectation would be that the vast majority of people who use those services have no idea that is happening. The internet is a device now, just like all the other devices in their home: most people have absolutely no idea how they work; they only know how to use them.
For a chap interested in the law, you seem to be rather flexible in your perspective.
An unlocked door on a house doesn't seem to support an objectively reasonable expectation of security, but that doesn't mean it's any less wrong to open it and burgle the place. And I doubt the fact of an unlocked door or window would be supported as a defence in a burglary case.
Data stored with cloud companies are accessible, as though through an open door, for those companies' employees. But that doesn't make it any less wrong for such employees to go trawling for gossip, amateur porn, etc. in such private messages.
The factual difference is that those companies don't just have the ability to trawl through your data, they actually do it.
Imagine you allowed Facebook to send people into your house to take pictures, so they could see what cleaning products and the like you buy, and then sell that information to advertisers. The government wouldn't need a warrant to subpoena those pictures of what's in your house from Facebook.
I think your general argument is a reasonable one if read in isolation. Unfortunately it seems quite possible that rayiner's characterisation of the law is correct, and that the law just happens to be bad and/or anachronistic in this area.
In general, I don't think privacy and data protection laws are anything like strong enough in most jurisdictions today, and I think the data mining industries (and governments) are pushing the boundaries of acceptable behaviour as far as they can as fast as they can so they can then claim that their behaviour is normal and accepted and therefore "OK" before those laws catch up.
The difficulty is that privacy is something of a Pandora's box situation -- once lost, it is almost impossible to truly restore -- with the added twist that it's mostly important to protect against "it would never happen to me" situations, which means the average person doesn't realise or doesn't care what they're losing until it is too late. I'm afraid the data miners will win (or already have won) and for at least one generation the safeguards that privacy should bring them have been lost for life.
Realistically, most people probably won't suffer great hardship as a result. The worst they'll get is some spammy advertising. The people we should worry for are the innocent ones who will nevertheless spend months trying to fix their credit after identity theft, or paying a fortune to accountants when they are investigated for (negligently) suspected tax fraud, or going through a hellish legal process for months to defend themselves against false accusations, or trying to get permission to travel on holiday after being put on some list because something they said was taken out of context, or having their candidacy for public office undermined by character assassination, or in the ultimate worst case who are dead because the SWAT raid/drone strike/other severe and immediate action left them no chance to set the record straight. These are among the reasons privacy and data protection really matter, but to the extent that anyone is really questioning laws and practices today they seem more hung up on Facebook and Google mining data to serve ads.
> trying to get permission to travel on holiday after being put on some list because something they said was taken out of context, or having their candidacy for public office undermined by character assassination, or in the ultimate worst case who are dead because the SWAT raid/drone strike/other severe and immediate action left them no chance to set the record straight. These are among the reasons privacy and data protection really matter, but to the extent that anyone is really questioning laws and practices today they seem more hung up on Facebook and Google mining data to serve ads.
It is rational to worry about something proportionally to the product of how bad it would be and the probability of it happening to you. Getting drone-striked would be worse than losing a job because of data mined from your online presence, but the latter is orders of magnitude more likely to happen to you.
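For what it's worth, that comparison is just an expected-value calculation. A back-of-the-envelope sketch in Python, with the probabilities and harm figures as purely made-up placeholders rather than estimates:

```python
# Back-of-the-envelope version of "worry in proportion to badness x probability".
# The numbers are purely hypothetical placeholders for illustration, not estimates.
scenarios = {
    "SWAT raid / drone strike":      (1e-9, 1_000_000),   # (probability, badness)
    "job lost to mined online data": (1e-3, 10_000),
}

for name, (probability, badness) in scenarios.items():
    print(f"{name}: expected harm = {probability * badness:g}")

# With these made-up numbers the mundane harm dominates by several orders of
# magnitude, which is the point being made; substitute your own estimates.
```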
Privacy does matter. To me, the way forward on privacy is real privacy. Encrypted everything and laws against tracking people on the internet except with explicit consent construed narrowly. I don't think the way forward is trying to paint the government as the bad guy while giving private companies carte blanche to do all the same things.
My personal experience has been that privacy and data protection (not the same thing, but of course closely related in practice) are issues that are very hard to advocate for. This is mainly due to a combination of "it would never happen to me" (which it probably won't, if we consider only a limited range of "it" and what is happening right now, today) and the stable-door-horse-bolted nature of the problem.
Of course, this doesn't mean it's any less worth defending the principle and advocating for change. It just means it's sometimes hard to get past "I don't care what Facebook uses to advertise to me" or "If you've got nothing to hide then you've got nothing to fear" as the default response even from quite smart and rational people if they haven't spent much time thinking about the issue and/or don't fully understand the technology involved and its potential applications.
> Users surely realize that every gmail they send and every Facebook message is both mined for information and also accessible to hundreds of people at these companies.
Well it's not like they ever offered users a choice or educated them....
But haven't federal courts ruled that the government does need a warrant to access the content of emails? Which would imply that users do have an expectation of privacy.
Soon the logic will be: "Before Snowden, it would have been reasonable to expect that your information was private, but now any reasonable person using the internet/phone/etc. would assume that the information will be harvested and stored by the government and therefore has no reasonable expectation of privacy, therefore no warrant will ever be required"
I'm by no means a lawyer, but I'd respond by saying that the judges responsible for the original decision likely didn't realize the consequences the Smith decision would have in the Internet age. So much of the digital content we produce ends up in the cloud, whether in an online backup service, SaaS, email, or otherwise. To say that sharing information with a cloud provider eliminates your expectation of privacy is almost the same as saying that it's not really possible to have any sort of private digital records whatsoever.
The exception to that would be in cases where end to end encryption is implemented, and only the user holds the key. That seems to be where we're increasingly moving in a world where Smith stands, and it's certainly not optimal for law enforcement, who won't be able to access that information even with a warrant.
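To make the "only the user holds the key" idea concrete, here is a minimal client-side encryption sketch in Python. It assumes the third-party `cryptography` package, and `upload_to_cloud()` is a hypothetical stand-in for whatever storage API a provider might expose; the point is simply that the provider only ever receives ciphertext.

```python
# Minimal sketch of client-side ("only the user holds the key") encryption.
# Assumes the `cryptography` package; upload_to_cloud() is a hypothetical
# stand-in for any cloud storage call -- the provider never sees the key.
from cryptography.fernet import Fernet

def upload_to_cloud(name: str, blob: bytes) -> None:
    # Placeholder for a real storage API; it only ever receives ciphertext.
    print(f"uploading {len(blob)} encrypted bytes as {name!r}")

key = Fernet.generate_key()   # generated and kept locally, never uploaded
f = Fernet(key)

plaintext = b"records the provider cannot read"
ciphertext = f.encrypt(plaintext)
upload_to_cloud("records.enc", ciphertext)

# Only the holder of `key` can recover the plaintext later.
assert f.decrypt(ciphertext) == plaintext
```

Under this model, a warrant served on the provider yields only ciphertext, which is exactly the scenario described above.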
Not that I agree or disagree, but the argument is similar to the Second Amendment argument that the framers of the Constitution couldn't imagine M-16s, let alone nuclear weapons.
I understand precedent, but I don't get blind adherence to it. It's dangerous - and, dare I say, lazy - not to revisit old laws and court decisions that could not imagine the present. It's odd to me for a judge not to assume (s)he has leverage to challenge old decisions - and when one does (like challenging government assertions of state secrets), they're derided as "activist."
If we find an old ruling or law still valid, then so be it. But bad (and good) precedents need constant revisiting, or we'd still have prohibition, slavery, and so on.
This certainly does happen for court decisions [1], but part of the legitimacy of the court comes from the idea that it is consistently (and therefore fairly) interpreting law.
It's also important to make a distinction between "the implications of the court's interpretation of the law are different in this new world" and "the implications of this law are different in this new world." For the latter, the legislature, not the court, should fix the law. For the former, it is still within the legislature's power to fix the law in such a way that the previous court decision is invalidated.
It's a nitpick, but consistency and fairness are independent qualities in legal interpretations. As the simplest example, consistently applying an unfair ruling or precedent does not result in fairness. Only by presuming that all previous court rulings (at least those that can be used as precedents) were fair, does consistency result in fairness.
The fun part about the second amendment is the full-auto weapon ban. If this is constitutional, then the same mechanism could be used to regulate other types of guns. If it is not (which current readings seem to hint at), then it should fall.
As a supporter of the second amendment, I see no issues with a full-auto ban, even though, technically, you can obtain a fully automatic weapon if you so wish. There are just a great many hoops you have to jump through to do so.
The reason I have no issue with this is because the government also has a duty to protect the common good. If it is determined by elected officials that specific weapons are far too dangerous to a large enough group of citizenry, then there's no problem with limiting their availability. It is a difficult balancing act: the government may make it difficult to obtain permission, but they cannot outright make it impossible, which is the basis of many lawsuits against gun control laws.
Also, I suppose it depends on how you interpret "arms". I define it to include weaponry we today call small arms, which range from pistols to rifles to machine guns. These are available with an increasing level of vetting, based on lethality, before you can legally obtain them. Any weapon beyond that list most likely just does not apply.
This is why I think the nuclear weapon example used by gun control people is just outright silly.
There is a simple fix: the government cannot possess any weapon that it denies to individuals.
If you cannot own a fully-automatic firearm, the Army cannot own any. If you cannot own an air-to-air missile, the Air Force cannot own any. If you cannot own a nuclear missile sub, the Navy can't have any.
This would eliminate outright bans, although regulatory restrictions that tend to favor people supported by military infrastructure would likely replace them. Perhaps you would be permitted to own and use fully-automatic rifles, but you would also be required to regularly train, register, and qualify in their use, such that it would be a surmountable and nondiscriminatory barrier to ownership, albeit one more easily passed by military employees in the regular course of their duties.
There is no particular barrier currently imposed by the second amendment with respect to requiring that those who do keep and bear arms be proficient in their use. It may even be supported by the "well-regulated militia" clause. You may bear arms, but if you do, you must also be able to use them proficiently.
As I interpret it, the 2nd amendment allows me to own and carry any weapon of military significance, without regard to the threat it may pose to myself and my neighbors, from a simple, throwable rock to my own multi-warhead ICBM with tunable, independently-targeted payloads. It is incredibly obvious that the latter possibility is not very compatible with modern civilization.
If anyone is going to have private weapons of mass destruction, I would certainly want some limitation on their use, like the "odd man" failsafe-override procedure in Andromeda Strain, where those people with ultimate control over that weapon can satisfy some simple, objective criteria that ensure its use is rational, ethical, and viewpoint-agnostic. This should allow an equal possibility for using something in an armed rebellion, or for the government to use it to suppress one.
The government nuclear infrastructure already has safeguards against improper use of arms. Psychological screening, inventory controls, and authorization procedures, to name a few. If private individuals or organizations were required to implement substantially similar safeguards, no one would bother owning any, because the weapons are simply too expensive to maintain, for something that you are never likely to be able to use and still remain the good guy.
If you think of the government as just a lot of regular people doing their jobs, allowing regular people to have military-grade weapons as an expensive hobby is no more threatening. A Wyoming ranch owner with a nuclear missile is not more threatening than a Wyoming silo-bunker lieutenant with a nuclear missile, if they both have to get approval from someone presumed rational before they blow anything up. The latter has his military chain of command. Perhaps the former has to call up the county sheriff for his launch codes. Donning the uniform does not somehow make a person more qualified to handle a weapon. So if something is too dangerous for an individual, it is certainly too dangerous for an individual with a specific employer with an organizational structure geared around dilution of individual responsibility.
I would agree with the training part; I would have no issue with having training be a requirement.
The rest I have to disagree on, simply because I don't view nuclear weapons, and other primarily military-based weapons (air-to-air missiles being one), as "arms". Like I said, it all depends on how you define that particular term. Obviously you have your definition and I have mine. Thus, the eternal debate.
Your attitude allows existing rights to be eroded, circumscribed, or eliminated entirely by manipulating the colloquially accepted definitions of dictionary words.
An arm is a weapon. The right to bear arms is the right to carry weapons.
If you would like to redefine "arms" to exclude certain types of weaponry, surely you have no problems redefining "speech" to include making cash payments to political campaigns and to exclude publishing written content to the Internet? Or perhaps you would like to redefine "unreasonable search and seizure" to exclude the collection, retention, and analysis of all packet headers and unencrypted content transmitted across the public networks? Or maybe a "speedy and public trial" will take place in secret, using secret evidence, without the assistance of counsel, by the military, at Guantanamo Bay.
Consider that you may be part of the problem.
The existence of nuclear, chemical, biological, electromagnetic, and radiological weapons demands that the amendment guaranteeing the right to bear arms be amended, not reinterpreted or ignored. They are, unequivocally, arms. And those arms are arguably too dangerous for any single individual to own or control without restriction. Saying they are not "arms" is simply a lie for political expediency, and undermines the rule of law and the constitutional system of government.
Our collective unwillingness to revise old laws to make them more applicable to current conditions has led to patches and workarounds of dubious worth, and to some people equating fully automatic rifles with nuclear bombs. The "ongoing debate" generates ambiguity, which is exploited by political parasites to our detriment. We cannot afford to endlessly argue. It can be settled conclusively, to be reconsidered later, whenever progress demands it.
All we need to agree upon is exactly what sorts of arms are too dangerous for a randomly selected individual to possess, and apply that standard equally to all people, including those with badges or rank insignia.
I'm late getting back to this but I laugh at your so-called ability to determine my attitude and opinion about anything based on a few sentences you read on a web page. You do not know me, you do not speak for me, and you surely do not have the ability to tell me what I think about any particular subject.
I can easily extend your definition of arms to mean anything and everything, which in the end makes it mean nothing.
> Obviously you can argue that it was wrongly decided in the first instance, but that's a different argument.
Isn't it the same argument?
The thing that has changed over time is the nature, quantity and implications of the data held by third parties. Target now knows a daughter is pregnant before her father does, etc.
Against this level of knowledge stored by third parties, the third party doctrine eats the fourth amendment. Which means that "the proposition that the government has at least as much right to access your data for law enforcement purposes as AT&T does for billing purposes" has to be wrong. The principle is defective. Who you call is the business of the phone company because they have to route your calls; how does that make it the business of the government?
I agree with you, but this also doesn't seem (in my totally amateur reading) to actually be covered by the 4th amendment. If we really want this sort of privacy enshrined, we could really use a new amendment. It should be unsurprising that huge changes in society, like those of the internet age, would require fundamental changes to the Constitution - this is what amendments are for. This idea is laughably impractical though, which either suggests that amendments were made too difficult to achieve, or that the system is working well by making us wait for overwhelming consensus on the question.
> I agree with you, but this also doesn't seem (in my totally amateur reading) to actually be covered by the 4th amendment. If we really want this sort of privacy enshrined, we could really use a new amendment.
The First Amendment read literally would disallow laws against libel or copyright infringement or even fraud. On the other hand, read literally it would only protect "freedom of speech, or of the press" and provide no protection for electronic communications because they aren't spoken or printed. Literal readings are not useful.
There is very little a pro-privacy amendment would accomplish that couldn't reasonably be read into the first, fourth and fifth amendments already. The problem is not the constitutional language, the problem is what the courts are letting the government get away with.
I'm not suggesting that the amendments should be read literally. I'm suggesting that in this particular case there is very reasonable difference of opinion about what should and shouldn't constitute people's papers, effects, and private property for the purposes of federal law, and it could stand to be cleared up by people who are elected.
There is always a very reasonable difference of opinion about what a given constitutional provision actually means.
The reason amending the constitution is hard is that it's meant to act as a check on "people who are elected." Take a look at the amendments since the bill of rights.
11: Sovereign immunity for states
12, 15, 17, 19, 20, 22, 23, 24, 25, 26, 27: Changes to voting rights, elections and issues related to elected officials
13: Abolish slavery
14: Post civil war amendment, reduction in power of the states, new enforcement powers for federal government
16: Grants Congress the power to levy income tax
18: Prohibition (repealed by 21)
Congress hasn't voted to meaningfully reduce federal power in over 200 years.
Not always, but sure, often. That doesn't mean it is pointless to ever bother writing anything down. Just like the difficulty of passing an amendment is a check on the legislature's ability to grant themselves more power, the possibility of doing so is a check on the judiciary's ability to make laws mean whatever the current court wants them to mean.
But if your point is that privacy advocates have a better chance of success through the avenue of a broad interpretation of the fourth and fifth amendments than through a new amendment that puts a broader interpretation in the actual text, then it is a point well taken. The legislature may well prefer to pass the opposite law, judging by its historical record.
Then what's the point of privacy policies if they are rendered useless by Smith v. Maryland? Then the privacy policies should only have two sentences:
"We can do whatever we want with your data once you give it to us. Problem?"
The truth is the government is just using the Constitution/rulings as it suits them. They've changed the meaning of the word "relevant" to cover millions of people within three hops of the target. They are doing the same with Smith v. Maryland. They're just pushing it to its very limit, or even far beyond the limit. But I doubt that if the Supreme Court from back then were to rule in a contemporary case like that, it would rule the same way.
Agreed, as usual. ISTM the EFF is begging the question (in the formal sense of assuming one's preferred conclusion as a premise) here by breezily asserting that Smith v. Maryland is outdated, without making any clear showing of why. Yes, you could look at the history of someone's telephone calls and infer that they planned to get an abortion, or had cancer, or had particular preferences in takeout food...but that was equally possible in 1979. Indeed, the EFF mentions government access to 'telephone books' - and while nowadays any telephone books I receive go straight into the recycling bin, back in 1979 they were the normal way of storing information, and my understanding is that telephone companies maintained reverse directories ordered by number.
This matters because while the government can obviously process data in a mere millisecond these days (like the private sector does), efficiency of execution has no bearing whatsoever on the question of whether people have an expectation of privacy in their phone records; the holding of Smith v. Maryland is that they do not, because by choosing to place a telephone call they have opted to share that information with the phone company, and the fact of the call becomes a mere business record. The Supreme Court's position was that there's a trade-off between anonymity and convenience, and where one opts for the convenience of a telephone call (facilitated by the telephone company) then one sacrifices any expectation of privacy. This matters since if phone calls are still considered business records 35 years on, then warrants are no more necessary now than they were in 1979.
There's a much stronger argument for the privacy of things like email, since one can send encrypted messages on a peer-to-peer basis by the simple expedient of running one's own mail server, even though most people choose to outsource that job to a commercial provider. But phone calls only go through exchanges, and there's no way to bootstrap your way into the phone network that's equivalent to firing up your own SMTP server - if you make a call through Verizon from your cell, how is that fundamentally different from making a call through AT&T from your landline?
> one can send encrypted messages on a peer-to-peer basis by the simple expedient of running one's own mail server
...except that you rely on the rest of the internet to actually deliver the message. So in a sense, the argument is the same: you have a reasonable expectation of privacy on the contents but not the addressing information.
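A toy illustration of that content/addressing split: even if the body of a message is encrypted end to end (Fernet is used here only as a stand-in for something like PGP, and the addresses are invented), the headers the network needs for delivery remain readable to every relay that handles the message.

```python
# Sketch: the body can be protected, but the addressing metadata an email
# needs in order to be routed stays visible to every hop along the way.
# Assumes the `cryptography` package; addresses are invented for illustration.
from email.message import EmailMessage
from cryptography.fernet import Fernet

f = Fernet(Fernet.generate_key())      # key shared only by the two endpoints
body = f.encrypt(b"the part you can reasonably keep private")

msg = EmailMessage()
msg["From"] = "alice@example.org"      # visible to every relay
msg["To"] = "bob@example.net"          # visible to every relay
msg["Subject"] = "(metadata too)"      # also travels in the clear
msg.set_content(body.decode("ascii"))  # opaque ciphertext as the body

print(msg.as_string())                 # headers in the clear, body unreadable
```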
But I think this argument misses a big part of the EFF's point. The reason that Smith v Maryland is outdated is that the nature of surveillance analysis has changed. The legal standard should not be based on some technicalities of implementation, but rather on the actual real effects in terms of policy and expectations.
Yes, technically, the difference between looking at a handful of records manually pulled by a technician in response to a court order and examining the metadata of literally every phone call made using sophisticated machine learning algorithms is one of degree, not kind. However, the difference in result is one of kind, not degree, and it's facetious to argue otherwise.
Basing our legal standard on such a technicality simply encourages the government to find loopholes. Basing it on the effective results makes the standard relevant, meaningful and future-proof.
You don't have to attack third party doctrine to get a sensible result in light of modern technology. A straightforward wrinkle to add to Smith is that unlike the plain text call information in Smith, an encrypted Skype call or encrypted cloud storage account is something people can have a reasonable expectation of privacy in.
The only reason you have to attack Smith head-on is to validate the current monetization structure of the Internet. Encrypted data isn't very lucrative, after all. I don't think that's a valid justification for interpreting the Constitution one particular way.
> A straightforward wrinkle to add to Smith is that unlike the plain text call information in Smith, an encrypted Skype call or encrypted cloud storage account is something people can have a reasonable expectation of privacy in.
You're just hitting the heart of why Smith is broken. The content of a traditional phone call isn't encrypted but Smith still doesn't allow the government to record it without a warrant just because the call is transmitted via third party equipment, am I right? The issue is the metadata. But encrypting the call doesn't hide the metadata which is still needed in order to route the call.
Verizon Wireless knows where you are 24 hours a day, Facebook knows who all of your friends are and when and how often you communicate with each of them and Amazon knows everything you buy and what books you read. Encrypting the content of messages changes none of that. You're already paying money to Verizon and Amazon rather than viewing ads, it doesn't stop them from storing the metadata.
It seems like a crazy ruling to me because it is basically impossible to communicate quickly for farther than about 100 yards on a calm day without enlisting the assistance of a third party.
Does this imply that most people have no expectation of privacy in their communications? Their phone calls? Their emails? Their letters? I find that hard to believe.
It's a lot like handing over your rolls of film to the teenager at CVS to be developed. Discreet. If you voluntarily conveyed the information, you've got a poor claim to privacy in light of Smith v. Maryland.
Snowwrestler's point is interesting: why is a warrant needed for a phone tap? You are clearly enlisting a 3rd party's help, and I think the general public knows that their phone company can listen in on their calls... People are clearly sharing the content of the call with the Telco as much as the number. It seems odd that there are two standards here, particularly as it's the same transaction.
A warrant is needed because there are explicit wiretapping laws that protect the content of phone communication. This was not a legal principle that was bootstrapped from an interpretation of the fourth amendment.
There may be explicit laws for wiretapping phones, but the expectation of privacy in email does rest on the 4th Amendment. Here's an example of a ruling on emails that are held by a 3rd party:
So there does seem to be a weird double standard in the "3rd party doctrine" between content and routing information.
I think if you asked 100 people on a random street what part of email they expect to be private, almost everyone would answer "all of it." I agree with the EFF that the Smith ruling is outdated.
If you went back in time and asked people if they thought they gave up privacy when they dropped off their film, they would say "hell no." Especially since photo lab workers were prosecuted for doing things like keeping copies of photos they like.
You could write someone a letter without putting your name on the outside - this is why I made the point about it being the price of convenience. The content of your conversation is legally something else again.
But there is no way to do that with an email or a phone call. Their metadata and content are stored the same way by the same 3rd party. So why would there be separate legal treatments?
The simplest argument against your position is that in 1979 users did not password-protect access to their information, nor require their data to be transmitted via 'https'. These measures provide at least some evidence of the private nature of the information, which is as well secured as that which the citizen 'saves' on their home computer.
We have an expectation of privacy from the government if we the people say that the government is simply not allowed to look for or look at something, regardless of who else may or may not have access to the something.
"We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.--That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, ...
Whilst the direction of the USG and its Big Brother attitude make me fear for our collective future, it is heartening to see that there is a legal framework within which to challenge these actions.
In the UK... there is no hope. We have no constitution. EU laws that would prevent such actions are overridden with emergency security laws that almost nobody challenges, whilst the 'Liberal Party' don't just stand by, they partake.
Your constitution is unbelievably valuable. I hope you can hang on to it.
Post "the Fappening" and the realization that there are essentially no enforceable protections for information you store in the cloud, I'd say Smith v. Maryland makes as much sense as it ever did. After all, back in 1979, call records were stored in a dusty database somewhere, used only for billing purposes. Today, the corresponding data stored by Facebook, Google, etc, are mined, processed, and correlated across time and space for advertising and marketing purposes. If Smith stands for the proposition that the government has at least as much right to access your data for law enforcement purposes as AT&T does for billing purposes, then replacing "AT&T" with "Google" and "Facebook" and "billing" with "advertising" doesn't undermine the rationale of the case one bit.
Obviously you can argue that it was wrongly decided in the first instance, but that's a different argument.