His comments are actually the most insightful I've seen in the discussion about PRISM:
I have my own suspicions -- which I won't go into here -- about what PRISM was actually about. I'll just say that there are ways to intercept people's Google, Facebook, etc., traffic in bulk without sticking any moles into the org -- or directly tapping their lines. You may find some interesting hints in the leaked PRISM slides [1], especially the second and fourth ones shown there. The subtleties of phrasing are important, I suspect, not because they were trying to be elliptical but because they reveal what was obvious to the people who were giving that presentation.
And like I said, I have both some reason to believe that there aren't such devices inside Google, and that the PRISM slides are actually talking about a somewhat different kind of data collection -- one that's done from outside the companies.
Beam splitters (prisms?) inside the backbone providers. All traffic goes to its destination unharmed, but the NSA gets all the packets. SSL is harder, but all you need is the private keys. Those are hard to get but not impossible for someone with the resources of the government. This is the only scalable way to do what they are supposed to be doing and not involve lots of outsiders. Note that the people who have really clammed up the past few days are the telecoms in all this.
The linked Quora answer (from the co-author of Firesheep) says that even in that case one can't passively decrypt the captured traffic if perfect forward secrecy is used.
Google.com uses ephemeral Diffie–Hellman key exchange, which provides perfect forward secrecy.
So... If I understand everything correctly, it should be impossible to decrypt passively captured HTTPS traffic to/from google.com.
We are not disagreeing with anything he says. We are saying the NSA has the private key.
Ian Gallagher: > so if you have the private key, you can decrypt that key, and then use it to decrypt the bulk-encrypted data.
Like he says, if you have the key, Wireshark can decrypt the data trivially.
If you have the CA master keys, the only thing you can do is perform a MITM attack; you cannot silently decrypt the raw data. And a MITM attack would eventually get detected.
Ephemeral Diffie-Hellman creates a new key per connection in a public-safe manner. You cannot eavesdrop on such a connection, even if you have the signing key. The question then becomes: are the SSL sessions actually using that mode?
> Ephemeral Diffie-Hellman creates a new key per connection in a public-safe manner.
How does that make a difference when you have the Diffie-Hellman key? We are saying they have the Diffie-Hellman keys, not the signing keys, nor the block cipher key that is exchanged. They have the only key that matters.
How are they getting the DH keys without cooperation from at least one of the SSL endpoints involved? They're newly generated at every SSL handshake, you can't just get a mole to hand you the keys once and be done with it. If you had the certificate private key, you could do a MITM, but this requires a LOT more resources and would be much more easily detectable.
But I am still lost on how it would be detectable. From Google's end, some client just disconnected. From the client's end, the internet just gained a tiny bit of latency.
If you had Google's certificate private key, you could pretend to be Google. It's undetectable from the user's perspective. I think we should trust Google to keep their private keys safe, although it would help a lot if they published in general terms how they accomplish this.
The signing key for Gmail's certificate is a 1024-bit RSA key. That key size is simply not safe against an attacker like the NSA today, so we may as well assume they have the private key even if Google didn't voluntarily give it to them.
But while the signing key may allow them to impersonate Google in some circumstances, it doesn't really help decrypting passively recorded TLS traffic to the real Google. For that, they would need to break the ECDH key exchange, and if Google uses reasonable elliptic curve parameters, that's presumably much harder than factoring a 1024-bit RSA modulus, at least with known cryptanalytic techniques.
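For a rough sense of that gap, here are the standard textbook cost estimates (generic figures, not anything specific to Google's actual parameters):

```latex
\text{GNFS factoring: } L_N\!\left[\tfrac{1}{3},\,c\right]
  = \exp\!\left((c+o(1))\,(\ln N)^{1/3}(\ln\ln N)^{2/3}\right),
  \quad c=(64/9)^{1/3}\approx 1.92
  \;\Rightarrow\; \sim 2^{80}\text{ operations for a 1024-bit } N

\text{Generic ECDLP (Pollard's rho), curve group of order } n\approx 2^{256}:\quad
  \sqrt{\pi n/4}\approx 2^{128}\text{ group operations}
```

So, with known techniques, breaking the 1024-bit signing key is roughly 2^48 times cheaper than attacking a well-chosen 256-bit curve exchange, which is why the passive-decryption theory needs more than the RSA key.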
"I think we should trust Google to keep their private keys safe, although it would help a lot if the published in general terms how they accomplish this."
Really, I would think it would be easy for the NSA, etc., to get an operative inside Google, FB, etc., and steal these. Intelligence organizations are very good at this, after all.
>How are they getting the DH keys without cooperation from at least one of the SSL endpoints involved?
One possibility is to actually compute discrete logarithms.
Does anyone know what elliptic curve parameters Gmail uses for key exchange? If the parameters are large, it is not feasible to break discrete logs using known methods, but while I'm usually wary of claims that the NSA is miles ahead of the academic research community, I could perhaps believe they have faster algorithms for e.g. some NIST curves.
If you hold the theory that the traffic is being intercepted and the parties have compromised the TLS keys, the test for complicity is obvious: failing to rotate out the TLS keys, failing to deploy HSTS, and failing to switch to EDH ciphersuites everywhere.
These are all moderately 'cheap' steps if you believe you're being compromised in this manner.
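For what it's worth, two of those three checks are easy to script from the outside. Here's a minimal sketch using only the Python standard library (www.google.com is just an illustrative host; key rotation can't be observed this way):

```python
# Check (1) whether a server negotiates an ephemeral-DH ciphersuite and
# (2) whether it sends an HSTS header. Stdlib only; illustrative host.
import socket
import ssl
import http.client

host = "www.google.com"

# 1. Negotiated ciphersuite: (EC)DHE suites provide forward secrecy.
#    (TLS 1.3 suite names omit the key exchange; all TLS 1.3 is ephemeral.)
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        name, version, bits = tls.cipher()
        print(f"ciphersuite: {name} ({version}, {bits}-bit)")
        pfs = version == "TLSv1.3" or name.startswith(("ECDHE", "DHE"))
        print("forward secrecy:", pfs)

# 2. HSTS: present iff the Strict-Transport-Security header comes back.
conn = http.client.HTTPSConnection(host, timeout=10)
conn.request("HEAD", "/")
print("HSTS:", conn.getresponse().getheader("Strict-Transport-Security"))
conn.close()
```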
I was being a bit devious. I am just so tired of tptacek dismissing stuff I say with asinine arguments and then watching my comment get downvoted to hell.
OK, so they split and copy all the packets. Is nobody else concerned with the complexity of tagging, filtering, rebuilding, and contextualizing that volume of packet data?
Beam splitters are not enough, they would need something to interpret this traffic.
Then you just process basic metadata: size, IP source, destination, timing, and statistical analysis of the binary. Assuming they have ways of converting an IP to an identity, that information alone would be hugely revealing. In fact, basic metadata is what they have admitted to recording.
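As a toy illustration of how far size/source/destination/timing alone gets you (the flow records below are invented for the example):

```python
# Build a per-(src, dst) traffic profile from nothing but flow metadata:
# timestamp, source IP, destination IP, byte count. No payloads needed.
from collections import defaultdict

# Invented example records: (timestamp, src_ip, dst_ip, bytes)
records = [
    (1370563200.0, "203.0.113.7", "198.51.100.1", 1420),
    (1370563201.5, "203.0.113.7", "198.51.100.1", 8800),
    (1370563260.0, "203.0.113.7", "198.51.100.9", 512),
]

profiles = defaultdict(lambda: {"flows": 0, "bytes": 0, "times": []})
for ts, src, dst, size in records:
    p = profiles[(src, dst)]
    p["flows"] += 1
    p["bytes"] += size
    p["times"].append(ts)

# Who talks to whom, how much, and when -- already enough to infer a lot,
# given any mapping from IP address to identity.
for (src, dst), p in sorted(profiles.items()):
    span = max(p["times"]) - min(p["times"])
    print(f"{src} -> {dst}: {p['flows']} flows, {p['bytes']} bytes over {span:.0f}s")
```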
That's why, IMHO, secret-sharing [1] algorithms are so important.
We store only parity data in one data center, and some of the data in another on a different continent. Anything intercepted or lost does not compromise the integrity of the whole, and it also means an ISP cannot discriminate based on the raw binary data.
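A minimal sketch of the idea, assuming a simple 2-of-2 XOR split rather than whatever scheme the parent actually uses: each data center stores one share, and either share on its own is indistinguishable from random bytes.

```python
# 2-of-2 secret sharing by XOR: share_a is uniformly random, so each share
# alone carries zero information about the data. Store one per data center.
import os

def split(data: bytes) -> tuple[bytes, bytes]:
    share_a = os.urandom(len(data))
    share_b = bytes(x ^ y for x, y in zip(share_a, data))
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(share_a, share_b))

secret = b"user mailbox contents"
a, b = split(secret)
assert combine(a, b) == secret   # both shares together recover the data
# An interceptor (or ISP) seeing only one data center's share sees noise;
# k-of-n thresholds would use Shamir's scheme instead of plain XOR.
```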
Here's a much simpler explanation: the Feds submit a FISA order for specific data collection. The companies' lawyers approve it. Then the NSA has a convenient user interface for accessing that data (perhaps in real time?) somehow from the companies' servers (possibly through an intermediary). How else is this data being sent to Ft. Meade? Thumb drives via FedEx?
The dates on the slides might be when a company has erected some convenient access point to grab the data "lawfully" obtained by a FISA order. Microsoft whipped something together quickly. Apple took years to get the UX just right.
Frankly, sucking in ALL of the Internet seems extremely difficult and useless. We're talking GOOG+AAPL+MS+YHOO+Skype+many more. And all for $20M/year? The gov't spends more on toilet paper.
My thoughts exactly. I cannot imagine any hugely sophisticated data collection infrastructure costing a mere $20m a year.
More likely this is software written to take in structured data obtained by subpoena -- as it's generated by targeted users. This "ultimate user data liberation" API may have even been the system at Google that was attacked by the Chinese: http://www.washingtonpost.com/world/national-security/chines...
It's possible PRISM is a separate program that's accounted for differently. We've had pretty good evidence for a while now that the government does have firehose type capability in at least some locations.
For only $20M/yr (which is nothing by government standards), I could definitely see that being a roadmap for building the user friendly endpoint to obtain the relatively small number of legally obtained records from each provider.
> A target's phone call, e-mail or chat will take the cheapest path, not the physically most direct path - you can't always predict the path.

> Dates When PRISM Collection Began For Each Provider
This is complete conjecture, but this reads to me like the NSA set up its own backhauls and set up peering agreements at artificially low prices to get traffic going over their pipes. Is there historical data for route announcements available anywhere? There are a lot of specific dates that could confirm/disprove this.
This is expounded upon in some news articles, particularly that data is routed not by the most efficient geographic route but by the cheapest by dollar price.
It's a very important tidbit of information that divulges the heart of the matter -- the NSA is a backbone operator[1]. It's a tall claim to make; however, their involvement in internet exchange points (and other regional network access points) would be far more difficult to conceal.
Acquire access to major routers, send routing commands, route target traffic into their hidden networks, and avoid physically wiretapping anything. And this framework can be done globally:
Your target's communications [...] flowing into and through the U.S. is as easy as announcing BGP route advertisements globally.
Actually, the Great Firewall of China started investigating realtime, fine-grained control of the national routing infrastructure as early as 2003.[1] This allows them to apply routing policies to routers nationwide in seconds. One observable effect is that a single address can be null-routed immediately after TCP resets fail to block it. It is believed the recent HTTPS MITM of GitHub was also aided by this routing framework. And the GFW is viewed by the Chinese government as a national security framework. No wonder the USG is doing the same thing.
[1]: Liu, G., Yun, X., Fang, B., Hu, M. (2003). A control method for large-scale network based on routing diffusion. Journal of China Institute of Communications, 10.
There are two features in the slides that need to be accounted for in any explanation.
1. The second slide, which implies that the fact that the internet traffic passes through the US is relevant.
This implies either that some physical interception was necessary (from outside the companies, or from the inside with their cooperation), or that it was legally necessary (in order to require the companies to deliver the information). And yet the issue of what the NSA could require US companies to reveal would seem orthogonal to which traffic passes through the US, since in that case the issue would be what was stored on US servers.
2. The implied cooperation from the companies in the slides. According to the Washington Post, the "Special Source Operations" in the logo refers to "the NSA term for alliances with trusted U.S. companies". Given this, and the use of the term "providers", it seems unlikely that no cooperation from the companies is involved.
From the above, I would propose that the PRISM scheme must involve some combination of physical interception, and cooperation from the companies involved.
For example, the NSA may be intercepting traffic then asking the companies for private keys for some of the traffic they intercept. This would fit the above facts in that it requires the traffic to pass through the US, and also requires cooperation from the companies.
I wonder if he's hinting at spying on XMPP -- a lot of the companies implicated use this protocol, and the PowerPoint timeline is aligned with a lot of companies' mobile growth as well as XMPP usage -- also, this might explain why Google is suddenly dropping XMPP support (rather than just being "closed").
It sounds like this employee was not even aware that Google's "Transparency Report" specifically does not include the number of FISA orders that they have received:
"Update 2013-06-07: at the time that we wrote this post, we asked Google whether its Transparency Report included data about secret FISA court orders that would send data to the NSA. The response we received was extremely vague, but seemed to possibly be "no". In the wake of yesterday's revelations that the NSA was harvesting data from Microsoft, Yahoo!, Google, Facebook, AOL, PalTalk, Skype, Youtube and Apple, Google has now clearly confirmed that the numbers in its Transparency Report do not include the number of orders or targets for NSA surveillance."
This is especially interesting, because it means Google has acknowledged the distinction between NSLs, which include a gag order, and some other even more secret directive, which services NSA surveillance.
It's also worth noting that if Google was willingly supplying anything PRISM-like, this employee would not be allowed to know without a Top Secret clearance. Any Google employees who do have such clearance face the threat of prison if they tell him or anyone else.
> It sounds like this employee was not even aware that Google's "Transparency Report" specifically does not include the number of FISA orders that they have received
Are you concluding that from this statement?
> "I'm not sure what the details of this PRISM program are, but I can tell you that the only way in which Google reveals information about users are when we receive lawful, specific orders about individuals -- things like search warrants. And we continue to stand firm against any attempts to do so broadly or without genuine, individualized suspicion, and publicize the results as much as possible in our Transparency Report."
He may be ignorant of that fact; however, that last phrase, "as much as possible", would cover it. And his assertion that "the only way in which Google reveals information about users are when we receive lawful, specific orders about individuals" would still cover FISA, because FISA, as far as we know, is specifically for the targeted surveillance of individuals.
Moreover, FISA, as it applies to Americans, requires a court-approved warrant for that surveillance. So again, what the OP is claiming is non-contradictory, because FISA requests are legal actions that both target individuals and go through an approval process.
This is entirely different than what PRISM is alleged to be, which, well, we don't know exactly what it is, but involves the warrantless surveillance of Internet usage with the collaboration of Internet companies...currently, Google is outright denying being a part of that.
Also worth noting, national security letters, which Google does oppose, are controversial because, among other things, they do not require a warrant. This is not the case with FISA and American citizens.
Ah, you are right on the PRISM distinction, I mixed up details with the "metadata" thing, which I'm sure is not accidental. A lot of the responses from government officials are to the effect of "these aren't a concern, because to get the metadata we need to go through FISA courts for every individual", which is just a red herring now.
So, there are:
1. NSLs, published on the transparency report
2. FISA orders, which go "through a court" but are by all accounts just rubber-stamped; not on the transparency report
3. PRISM, which is intended to bypass FISA entirely, according to the Guardian article. Either Google is lying, the NSA is doing it without Google's knowledge, or there is subtle wordplay involved (so, lying). Or, that it's been grossly misreported.
You're one of the few whose opinion I would trust on this: how much data would the NSA need to derive TLS encryption keys from encrypted data? I feel like I read once that you could possibly use statistical methods to derive the key if you had enough encrypted source data.
The technique they'd be using to do that would be new to science.
Here it is worth pointing out that Google did something a while ago to make the NSA's job much harder (if their goal is to read everyone's mail): they became aggressive advocates for TLS forward secrecy.
TLS forward secrecy involves the ephemeral Diffie Hellman ciphersuites (which your browser supports). Instead of deriving a session key and using the server's RSA key to convey it, the DHE ciphersuites run the DH protocol to derive a session key between the client and the server, and then use the RSA key to sign the exchange ("breaking the tie" if there was a MITM). The RSA key is never used to convey a session key; if you got all of Google's TLS keys, you would not be able to go back in time and decrypt old sessions that used a DHE ciphersuite.
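To make that sequencing concrete, here's a toy sketch of a DHE handshake (textbook-sized numbers, no real signature step; purely illustrative, not Google's implementation):

```python
# Toy DHE exchange. p and g are absurdly small textbook parameters; real
# deployments use >=2048-bit groups or elliptic curves (ECDHE).
import secrets

p, g = 23, 5  # group parameters, agreed during the handshake

# Each side picks a FRESH secret exponent for every single connection:
client_secret = secrets.randbelow(p - 2) + 1
server_secret = secrets.randbelow(p - 2) + 1

client_public = pow(g, client_secret, p)  # sent in the clear
server_public = pow(g, server_secret, p)  # sent in the clear...

# ...and here is the only job the long-term RSA key has: it SIGNS
# server_public plus the handshake nonces ("breaking the tie" against a
# MITM). It never encrypts or conveys the session key.

# Both sides derive the same session secret independently:
assert pow(server_public, client_secret, p) == \
       pow(client_public, server_secret, p)

# The ephemeral exponents are thrown away after the handshake, so stealing
# the RSA private key later doesn't let you decrypt a recorded session.
```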
It is also worth pointing out that some of the hardest-working people in security show business, including Adam Langley and Michal Zalewski and Justin Schuh, people with unimpeachable reputations in my field, work for Google on these problems.
Only if you assume he is fully informed. Do you think Google's chief software architect would fall within "need to know" if Google had been ordered to hand over their SSL private keys? I doubt it.
From the US government, we got confirmations -- half-assed confirmations, but still confirmations -- that PRISM is real; that we maybe have some information wrong and should stop asking and talking about it, but that it's real.
Meanwhile, the companies and their architects all deny the claims in a way that's very convincing.
I can't tell you what to believe, but maybe a place to start is: the world is a confusing place, full of miscommunication and gray areas. Why should we expect the world to be as it is in the movies, in which bad guys are obviously bad and there is an "ending" in which things are clear?
For starters, I just re-skimmed the Washington Post report and noticed that it has since been amended:
> It is possible that the conflict between the PRISM slides and the company spokesmen is the result of imprecision on the part of the NSA author. In another classified report obtained by The Post, the arrangement is described as allowing “collection managers [to send] content tasking instructions directly to equipment installed at company-controlled locations,” rather than directly to company servers.
I think I'm wrong that that was one of the actual changes since yesterday... either way, the Post is allowing for the possibility of a miscommunication/misinterpretation by its source. Business Insider also alleges other hedging by the WaPo here: http://www.businessinsider.com/washington-post-updates-spyin...
At this moment in time, we still do not know two things: who the leaker is, beyond a "career intelligence officer," and the contents of this 41-slide presentation other than the 3 or 4 slides that the Post and the Guardian have published. The identity or motives of the leaker aren't necessary to know, but it's kind of problematic that we are missing the context around the so-far published slides that explain the PRISM program. For example, it kind of changes things if the first two slides say "PROPOSAL" or the last slide says "APRIL FOOLS".
Maybe they do, but I don't think you can assert that from what the NYT is reporting in that link. The article is talking about FISA requests:
> The companies said they do, however, comply with individual court orders, including under FISA. The negotiations, and the technical systems for sharing data with the government, fit in that category because they involve access to data under individual FISA requests. And in some cases, the data is transmitted to the government electronically, using a company’s servers.
The data sharing is effortless. But the process to get it is still the bottleneck, because in the NYT article, the companies still assert that they conduct a lawyer review of the requests. And, moreover, FISA mandates that when the request involves an American citizen, a court-approved warrant is required.
If you're arguing that FISA is wrong and that companies should be doing everything they can, including sending the requested data in dot-matrix printouts, to hinder the process -- no argument from me there. But the question at hand is whether they are part of PRISM, which, according to the reports so far, is a program that is different in implementation and legality than FISA.
(And if you want to argue that sending any data upon a legal review is not at all different than direct, near-real-time access to a company's servers...I'm sure some infrastructure engineers would disagree with you, among other kinds of people required to make such a pipeline happen)
Don't believe. Ever. Agnosticism is a philosophy that extends well beyond religion.
Instead, focus on what you want to be true and figure out how to make that true. You probably don't want PRISM to exist, or more generally, you probably don't want an extensive surveillance program of PRISM's caliber or greater to exist. In America, or anywhere? Be specific. Figure out exactly why each component is problematic. Understand that.
Then start going about making it harder to exist. Learn about and teach ways to circumvent. Talk to people and convince them of your well-founded views. Turn the world into a place where PRISM won't exist. It's a project that can easily exceed your lifetime.
> Be specific. Figure out exactly why each component is problematic. Understand that.
Totally agree.
> But it's better than belief.
As someone dealing with what might be called a mild epistemological crisis, I can sympathize with the advice to apply agnosticism liberally, but I'm not sure belief can be avoided. If you want to take action for or against something, you have to ask: what if your information about that thing is malformed or incomplete, due to human error (either your own or somewhere in your chosen network of cognitive authority) or even malice? The answer is that you'll never know for sure. You could always be the victim of your own or someone else's bias.
So I'm just saying, action requires a leap of faith, sometimes just a microscopic leap, but a leap nonetheless. Belief is, alas, inescapable.
No, it can't. You will always believe something, positively (X exists; X is true) or negatively (X does not exist; X is false). The point, however, is that your final resting point should never be "I don't know what to believe anymore."
> If you want to take action for or against something, you have to ask: what if your information about that thing is malformed or incomplete, due to human error (either your own or somewhere in your chosen network of cognitive authority) or even malice? The answer is that you'll never know for sure. You could always be the victim of your own or someone else's bias.
This is precisely why it's useful to apply agnosticism liberally. What you've described here is agnosticism. You don't know. You never know. If you have the fortitude for it, it is usually reliable to say that you're always wrong in some respect. But you should still act. Paralysis is worse than misjudgment.
Or to use the words of a now famous movie, "There is no certainty, only opportunity." The only reason he turned out to be right was movie magic, and nevertheless, in the movie, he admitted that he didn't know what the right action was after all.
I'd really like to know whether Google shared its SSL/TLS certificates with government agencies. I asked the author (Yonatan Zunger) a short question and he replied to say he doesn't know anything. I followed up with:
> It seems you care very much about user privacy:
>> I would no longer be working at Google [but for] the fact that we do stand up for individual users' privacy
> Will you be willing to find out whether Google has shared its SSL/TLS keys for public sites with government agencies? As a person interested in user privacy, I imagine this will be of as much interest as whether they have direct access to the data.
> As you mentioned, direct access would require Google to build something. Intercepting SSL/TLS traffic would not.
> If Google shared its SSL keys, then the NSA can intercept all inbound and outbound traffic, and would be able to capture virtually all Google user data. They can store the data and search it. They wouldn't particularly even benefit from direct access to Google's data, if they're planning to store it themselves anyway.
> It would be great to hear an affirmation from Google that it has not shared its encryption keys with governments.
I can't figure out how to link to my comment, but it appears in the comments on Yonatan Zunger's post. (See original post)
Yeah, I saw the headline and thought "We have a chief architect?", but then I saw who it was, and he really does have a longtime track record of critical contributions to both G+ and Search before it.
What might have happened is this:
1. NSA bribed someone to obtain Google's SSL certificate private keys
2. NSA installed wiretaps outside of all Google datacenters
3. NSA hired someone to write software that would reconstruct Gmail inboxes, login activity, etc. based on the decrypted traffic
If Facebook, Google, etc. are indeed innocent they should change SSL certificates immediately, store the new ones in secure cryptographic hardware, stop offering non-HTTPS access, and start inspecting network equipment for the wiretaps.
>"the only way in which Google reveals information about users are when we receive lawful, specific orders about individuals -- things like search warrants."
Things like search warrants? What has been described in the PRISM slides is an interface in which an NSA agent can access a subject's data at will, in a few clicks and an affirmation that "yes, this person is a terrorist".
Also the US government has confirmed that systematic collection of user data was indeed happening for non-US citizens. What of that? Are they users of lesser rights in the eyes of Google, etc?
> Things like search warrants? What has been described in the PRISM slides is an interface in which an NSA agent can access a subject's data at will, in a few clicks and an affirmation that "yes, this person is a terrorist".
Google has officially denied being a part of PRISM. The way you phrased this statement makes it sound like the OP is sneakily leaving out the PRISM implementation, which would be sneaky if Google were a part of PRISM.
(this doesn't mean that Google isn't flat-out lying about everything... but you seem to be alleging a contradiction where there is none, at least in the OP)
> "Also the US government has confirmed that systematic collection of user data was indeed happening for non-US citizens. What of that? Are they users of lesser rights in the eyes of Google, etc?"
NSA's mandate since its inception has been to conduct surveillance of foreign communications suspected of being a threat to America... So while it's still worth arguing whether the surveillance they conduct is unethical or unproductive, it's not going to be worthwhile arguing whether they do it at all.
edit: removed sarcastic phrasing that was too assholish for a Friday after work.
>> Things like search warrants? What has been described in the PRISM slides is an interface in which an NSA agent can access a subject's data at will, in a few clicks and an affirmation that "yes, this person is a terrorist".
I'm not sure where you're getting that from the slides. They don't explain how an analyst uses the system.
>>Google has officially denied being a part of PRISM. The way you phrased this statement makes it sound like the OP is sneakily leaving out the PRISM implementation, which would be sneaky if Google were a part of PRISM.
Other media reports from insiders (not information in the leaked slides) make it sound like PRISM is more of an NSA-internal thing that aggregates data and presents it to analysts. Based on the description in the slides, it sounds like the NSA has some integration with the on-site premises of the target company.
So it could be as simple as this: PRISM is just a way of automating the whole process of getting data from a valid warrant, which would be consistent with the denials made by tech companies. They don't have any contact with 'PRISM' per se; they have just set up their systems to spit out data when they get a valid warrant.
I was quoting and disagreeing with the grandparent comment...which I interpreted as implying that Google is a part of PRISM (which may actually be true, but that would be begging the question in this argument)...
But I do agree with the grandparent that PRISM, as described in the original Washington Post report and in the excerpted slides, does seem to allude to "an interface in which an NSA agent can access a subject's data at will"
Here's the Washington Post:
> There has been “continued exponential growth in tasking to Facebook and Skype,” according to the PRISM slides. With a few clicks and an affirmation that the subject is believed to be engaged in terrorism, espionage or nuclear proliferation, an analyst obtains full access to Facebook’s “extensive search and surveillance capabilities against the variety of online social networking services.”
This is not about what the NSA does, this is about whether Google and co are handing out foreign users' data to the NSA. It appears they are.
I hope every non-american netizen realizes that their private communications are being systematically collected by US intelligence. It is now impossible to trust an american company.
Before the Internet, the ECHELON program was shown to have been used for private industrial espionage on foreign companies [1]. I assume this is happening on another scale entirely now, with the cooperation of Google, Apple, FB and MSFT.
> Google has officially denied being a part of PRISM.
They have denied knowledge of the use of the word "PRISM" to describe anything that they are participating in. So, they've denied nothing in that regard. It just means PRISM is the NSA codename.
> First, we have not joined any program that would give the U.S. government—or any other government—direct access to our servers. Indeed, the U.S. government does not have direct access or a “back door” to the information stored in our data centers. We had not heard of a program called PRISM until yesterday.
"Any program" would be broad enough, but he didn't say that at all. He said they are not part of "any program that would give any government direct access to their servers." I could drive a truck through the holes left in that wording.
Do they have indirect access? Some API perhaps? Do they have any means by which they can automate the export of data for whoever they want, perhaps after clicking a checkbox that says that the target is officially under surveillance? Is there some form of data sharing that is brokered through a trusted non-government entity?
He also goes on to say that they follow the law (meaningless if the law says to hand over the data), and they frequently push back (which orders do they push back on? Probably not the orders that they aren't allowed to legally push back on).
He states that they don't follow "broad orders for all data", but this is easily satisfied by the description in the Guardian article that says an analyst simply has to certify each request by saying they believe that there is a 51% probability that the request is legitimate. Obviously, no one at Google even could challenge such requests.
Nothing in the names policy prohibits pseudonyms. If your given name is "Jason Ramirez", you're welcome to have a separate Google+ account with the name "Nancy Young".
You only have to prove people know you by a name if you want to go by "#RS", "Albert Einstein", or "GreenLife Rx"; all three of those are likely name violations for reasons that have absolutely nothing to do with knowing who you are.
He actually responded to just this question in the comments (and he's a very senior 10-year engineer, so it's plausible).
"I have various reasons to believe that the odds that I would have known about it are higher than those for most employees -- but it still wouldn't be a certainty. That said, the odds are good enough that I would be fairly surprised, and rather furious, to find out that such a thing had been happening without my knowledge."
We pretty much already have confirmation that PRISM exists. [1] It's a little funny all of these companies expect us to believe that none of them are involved.
This denial is a lot stronger than Page's, which was full of weasel words. Unless Zunger is just lying, which somehow seems unlikely (forgive my naiveté), it'll be interesting to know how PRISM actually works. But I will say that his faith in his company sounds misplaced, since Google's more official denials are so weak.
He hints at a couple of reasons for his confidence in his post.
One of his assumptions was that people would notice surreptitiously installed hardware or software doing the monitoring. I don't think this is unreasonable. Hoovering up all the private data in Google is bound to be a big job, regardless of whether you are sieving it on site or transferring it off site. Even if it would only take a few people to install and manage, everyone else would probably be tripping over it constantly. Accessing all of Google's data means you'd be tied into nearly every system.
It's possible to keep projects secret inside a large company like Google, but only if an exceedingly small number of people know about it, and they have good reasons not to tell anyone else. A system with as much surface area as you'd probably need to hoover off all of Google's data would be vulnerable to almost every engineer and datacenter worker accidentally running across it while debugging other problems. Even -- especially? -- if they didn't realize the significance of it immediately, they'd probably ask their coworkers about this weird extra code/jobs/hardware, and pretty soon everyone would have heard about it. Once everyone inside of Google had heard of it, I think you'd have a very hard time keeping the secret from leaking into the outside world.
Right, that mostly paraphrases his arguments. But it doesn't convincingly eliminate a few possibilities for how this works.
One possibility: Google obviously has some capacity to honor search warrants and NSLs. And presumably that involves some technical artifacts somewhere: admin-level API access to data and some sort of external endpoint through which the government can actually make those requests. So that's all stuff we can confidently say is already there humming along happily, whether used for nefarious purposes or not.
OK, so given that those exist, how much volume do they support? How hard would it be to modify them to bypass the scrutiny process? Or give the NSA access to those endpoints instead of just domestic law enforcement? In other words, these are just changes to the process, totally invisible to anyone without explicit access to it. It might not involve any weird hardware at all, and could operate with very few people in the know.
Another possibility: Google handed over its TLS keys and just let the taps happen upstream.
That's why the confidence of a senior person that there isn't fishy hardware running around makes the question of how PRISM works more interesting. But it certainly doesn't make the project impossible.
I don't think I can guess how well whatever the search-warrant APIs are would scale to "look at everyone" without speculating overly much about the design of the system. I would argue that, regardless of the access method, if someone is looking at the terabytes or petabytes or yottabytes or however many bytes of emails there are, you're once again back to a huge amount of network or CPU or whatever utilization. E.g., even if the database access is allowed and off the record, the database admin should still be wondering why the load is so high, as if everyone in the world were reading all of their email simultaneously. And then you're back to gossip and everyone internally knowing about it.
The TLS keys are an interesting angle. Are TLS keys typically one (small set) per site, or would each server typically generate its own unique keys? Even with the latter the surface area might be small enough within Google that no one would accidentally stumble upon it.
With all of the conspiracy theories floating around, what about something fun, like the government proved P=NP and is just able to secretly crack the encryption keys, passwords, etc., like that silly movie about the traveling salesman problem where they turn algorithms into an action-packed thriller.
Seriously though, it seems like the general consensus is that no one on either side is actually lying, but the companies involved are using weasely language that can be interpreted to exclude any kind of indirect access, SSL keys, etc? I'm surprised at the tone of the statements the companies are giving, because were it to come out that the companies are giving indirect access, their adamance would just make them look terrible...It's as if the companies in their statements, at least FB and Google, sound shocked, hurt, etc. to find out people would assume that of them.
It's widely known and admitted that they have indirect access, by issuing warrants and being provided with slices of the data. You must be talking about some specific form of indirect access?
Also worth reading is the comment thread: "We don't sell our users' information to anybody... we do use user information to target ads: this approach has made us what's technically known as 'large stacks of cash.'"
"might have also denied knowledge of the full scope of cooperation with national security officials because employees whose job it is to comply with FISA requests are not allowed to discuss the details even with others at the company" http://www.nytimes.com/2013/06/08/technology/tech-companies-...
Welcome to the evil Google team. It was nice knowing you. You will not miss me, but goodbye.
One thing crossed my mind: could the fact that this thing blows up right when China's president is coming to the US to talk about cyber espionage and individual liberties be a coincidence?
That's a good point. It does seem possible that the slides are fake and someone is trying to play the public against Internet companies, or alternatively, that someone is playing the public against the US government.
>> the fact that we do stand up for individual users' privacy and protection, for their right to have a personal life which is not ever shared with other people without their consent, even when governments come knocking at our door with guns, is one of the two most important reasons that I am at this company
The indisputable fact is that your company gathers a lot of private info online about its users for profit, make that for extra, extra profit. Best case scenario, it makes NSA's job super easy, all they have to do is ask for it. Worst case scenario, the government knows all our thoughts as we type on Google search and everything else we do on Google. I do not trust that such info will stay private. Not anymore.
>> the national security apparatus has convinced itself and the rest of the government that the only way it can do its job is to know everything about everyone. That's not how you protect a country. We didn't fight the Cold War just so we could rebuild the Stasi ourselves.
Yeah, WTF do they think they are, Google or Facebook?????
> I have my own suspicions -- which I won't go into here -- about what PRISM was actually about. I'll just say that there are ways to intercept people's Google, Facebook, etc., traffic in bulk without sticking any moles into the org -- or directly tapping their lines. You may find some interesting hints in the leaked PRISM slides [1], especially the second and fourth ones shown there. The subtleties of phrasing are important, I suspect, not because they were trying to be elliptical but because they reveal what was obvious to the people who were giving that presentation.
> And like I said, I have both some reason to believe that there aren't such devices inside Google, and that the PRISM slides are actually talking about a somewhat different kind of data collection -- one that's done from outside the companies.
Any ideas what he could be thinking?
1. http://www.washingtonpost.com/wp-srv/special/politics/prism-...