Google Exposed User Data, Feared Repercussions of Disclosing to Public (wsj.com)
953 points by tysone on Oct 8, 2018 | 258 comments



non-paywall version: http://archive.is/rpuA1


I get a TLS problem:

An error occurred during a connection to archive.is. Cannot communicate securely with peer: no common encryption algorithm(s). Error code: SSL_ERROR_NO_CYPHER_OVERLAP

SSLLabs probably has the same problem: https://www.ssllabs.com/ssltest/analyze.html?d=archive.is
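
If you want to reproduce this outside the browser, here's a minimal sketch (Python standard library only; host and port are the obvious assumptions) that attempts a handshake and prints either the negotiated cipher or the failure:

  import socket
  import ssl

  HOST, PORT = "archive.is", 443

  ctx = ssl.create_default_context()
  try:
      with socket.create_connection((HOST, PORT), timeout=10) as sock:
          with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
              # Success: print the TLS version and the cipher suite agreed on.
              print("negotiated:", tls.version(), tls.cipher())
  except ssl.SSLError as exc:
      # Roughly the same failure the browser reports as SSL_ERROR_NO_CYPHER_OVERLAP.
      print("handshake failed:", exc)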


For some reason, this only happens with Cloudflare DNS. I had to revert to Google because all archive.is links didn’t work.


Jeez, just run your own resolver instead of dumping all your internet access data on $AntiPrivacyCo.'s reception desk.


They anticipated this reaction and have made some significant privacy promises about the data they receive via Google Public DNS:

"We delete [the] temporary logs [which include your full IP address to identify things like DDoS attacks and debug problems] within 24 to 48 hours."

"In the permanent logs, we don't keep personally identifiable information or IP information. After keeping [the data we do keep] for two weeks, we randomly sample a small subset for permanent storage."

And importantly:

"We don't correlate or combine information from our temporary or permanent logs with any personal information that you have provided Google for other services."

Source: https://developers.google.com/speed/public-dns/privacy

Unless you think they're lying or unable to enforce this policy, this addresses most of the common privacy concerns I've heard in this context.

(I have worked for Google in the past, but I have never been involved at all with Google Public DNS or its privacy promises.)
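
For what it's worth, the quoted policy boils down to a pipeline like the following sketch. This is my own illustration, not Google's code; in particular the sampling rate is invented, since "a small subset" is unspecified:

  import random

  PERM_WINDOW_DAYS = 14   # anonymized logs are kept for two weeks...
  SAMPLE_RATE = 0.01      # ...then a "small subset" survives; 1% is purely illustrative

  def retained_records(records, age_days):
      # Temporary logs (the ones with full IPs) are a separate stream,
      # deleted within 24-48 hours; this models only the permanent logs.
      if age_days < PERM_WINDOW_DAYS:
          return records
      # Past the two-week window, keep only a random sample for permanent storage.
      return [r for r in records if random.random() < SAMPLE_RATE]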


The key point in all the beautiful promises Google makes is that you need to extend them with "for now". That summarises the healthy skeptical stance, especially with a company that has already given us plenty of examples of why we shouldn't trust it.


Having seen how Google operates on the inside, I trust them to be fully able and willing to comply with the promises they do make, better than most companies. The typical tech startup overpromises and underdelivers with respect to data deletion and security, or simply does a really weak job at those things without making any promises either way.

Adhering to these promises is a separate question from changes of policy in the future, of course, and just as separate from failures in the areas of product design or ethics. Many parts of the conglomerate that calls itself Google have gotten worse in all of those areas over the last several years, though I am still a big fan of how GCP is progressing.

But none of this makes me think that they're retaining more Google Public DNS data than they claim. Given how little of that data they retain for the long haul, the risk of bad retroactive impact from a change in policy in this area is quite low. The risk is admittedly higher for other consumer services which do retain identifiable data over a long period of time.

Conversely, the risk is lower for G Suite and GCP offerings and for European residents, given the concerns and compliance obligations of business customers and the obligations imposed by the GDPR.


In the past, I would agree with you, but with the recent legislative push for privacy around the world, especially in the EU, I doubt very much that any company as big as Google will be able to have much wiggle room to make privacy policies weaker in the future.


That explains why I kept having these errors... switched back to Quad9 and I'm good.


Or we could support the authors -- let's not make HN the kind of community that encourages posting ways around paying journalists for their work on articles that keep our industry in check.


Giving a random American company my payment details is unfortunately asking too much.


A ‘better’ non-paywall version: https://outline.com/mNDfrH


Does anyone else get rubbed the wrong way by this sort of irreverent infringement? Even if you have zero concern for journalists’ copyrights, it puts this forum at risk.


The fair use exemption makes specific reference to "purposes such as criticism, comment, news reporting" - those are just the first three listed, and all three apply to an HN post.


This is a gross misrepresentation of fair use doctrine. Fair use requires a "transformation" of the work. It does not permit reproducing a work in its entirety without permission just so you can have a discussion about it.


If it put the forum at risk, they wouldn’t put the “web” link after the title of the post, which basically accomplishes the same thing. This kind of thing is officially sanctioned.


Websites like WSJ manage to get listed high in the Google rankings by presenting the actual article, instead of a paywall, to the Googlebot and to requests with Google in the Referer field (as I understand it). I gather that they do the same thing to the Archive scraper. As far as I'm concerned, that's a cheap trick the website performs, and using a cheap trick to get around it seems fine to me.
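
Presumably the gating is server-side logic along these lines -- a hypothetical sketch, since nobody outside the publisher knows the real rules:

  def should_show_full_article(headers):
      """Hypothetical paywall gate keyed on request headers."""
      user_agent = headers.get("User-Agent", "")
      referer = headers.get("Referer", "")
      # Serve the full text to the crawler so the article ranks in search...
      if "Googlebot" in user_agent:
          return True
      # ...and to readers arriving from Google, in the old "first click free"
      # style. An archive scraper sending a Google referer gets in the same way.
      if "google." in referer.lower():
          return True
      # Everyone else sees the paywall.
      return False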


Nope


The other way of looking at it is that it can boost publication revenue and journalist income by driving more traffic to the publication than it would otherwise get, of which some may convert to subscriptions (if the publication can offer sufficient incentive).

Obviously, no user would be able to justify buying subscriptions to every publication linked from HN.

And if there was no paywall bypass, then HN couldn't link to it and it would get no HN traffic and no discussion on HN at all.

Allowing paywall bypass means the publication gets the HN traffic and discussion it wouldn't otherwise get, and the possibility of converting some of that traffic to subscribers who wouldn't otherwise subscribe.

For what it's worth, the owners/operators clearly don't think it puts this forum at risk, as the sharing of paywall bypass links is explicitly allowed/encouraged according to the guidelines and moderator comments.

And the very fact that the publications themselves allow bypass via certain referrers (e.g., Facebook) suggests they don't have a problem with it.


  The other way of looking at it is that it can boost publication revenue... by driving more traffic
The same argument has been made in defense of software piracy for over a generation.


Which is why the freemium model evolved.

But we’re not talking about piracy here. If the publishers thought of it that way they’d block all access to archive sites.

Anyway, rather than a snarky dismissal like this, do you have a constructive suggestion for a solution that works well for everyone?

If the answer is “no paywalled sites on HN ever” please say so.

But if you have a more nuanced suggestion that would be a huge help!


Pirates of all media types tend to also buy legal media far, far more than people who never pirate.


While I think posting paywalled links is basically advertising, and I therefore have no problem with people posting accessible versions, I am curious about the same question. I think the question about legality is legit and shouldn't be downvoted (unless I'm misinterpreting the guidelines). Not sure I'll like the answer and the potential new rule, though.


In the United States I imagine WSJ would have to send a DMCA takedown request to archive.is.


I'm not a regular reader of WSJ so why would I subscribe just to read one article?


Nope.


Not I


No I don’t, and no it doesn’t.


WSJ deserves zero respect.


Then don't read their articles


No.


>We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug.

Wait, so they only keep two weeks' worth of logs, and within those logs they did not find anyone abusing this flaw. How can they be certain about any time earlier than two weeks prior?


Wow. “We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks.”

The wording of this is really pushing the boundary of plausibility.

I fail to understand how this would protect privacy. Access logs with no profile data logged wouldn't compromise privacy, would they?

Can anyone confirm the timing of the google blog post? It seems the WSJ article was posted at a similar time.

This leads me to believe that the most likely reason we are hearing about this now is comment requests from the WSJ. When Google realised it was out, they published.

Google is trying to avoid using the words "data breach", as they may get into hot water in the EU.

Love how it is buried in the article.

My guess is a similar thing has been happening with Android permissions. Data has been leaking through, and they have just not admitted to it.


> Access logs with no profile data logged wouldn't compromise privacy, would they?

True, but access logs without profile data would prevent you from knowing _which_ profiles were accessed. This matches the actual claim in the article that they would be "unable to determine which users were affected".


Right, but the only "profile data" they would need to add to the logs to know would be a user ID. Not really any private info.


User IDs are definitely private info, since they'd allow you to follow a user's behaviour etc.


"user 1234567 viewed post 89564385943"

I can see how that's private info, but would this be:

"client x viewed user 1234567's profile"


If the user id can be joined with other data to give a name or similar then it is considered personally identifying in terms of GDPR.

I am not a lawyer. You should hire an appropriately qualified lawyer to review your data hygiene practices.
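
A toy illustration of the joining problem (all data invented):

  access_log = [("user_1234567", "client_x")]  # "client x viewed user 1234567's profile"

  # Any second table that maps the ID back to a person makes the log entry
  # personally identifying -- an account table, for example:
  accounts = {"user_1234567": {"name": "Jane Doe", "email": "jane@example.com"}}

  for user_id, client in access_log:
      person = accounts.get(user_id)
      if person:
          # The "anonymous" log line has become personal data under the GDPR.
          print(f"{client} viewed {person['name']}'s profile")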


Sure, if user 1234567 isn’t particularly popular.


Neither is private info. Private in the legal sense.


  The wording of this is really pushing the boundary of plausibility.
Maybe Google is running short of disk space.


>Google is trying to avoid using the words "data breach", as they may get into hot water in the EU.

As far as GDPR goes, technically this breach happened right before they would have faced large repercussions for it.


It sounds like it came up as a result of their Project Strobe audit. I would guess they launched this project precisely because the penalties were about to get bigger and they knew they had skeletons.


The company which considers every single bit of data to be "gold" decided not to keep their API access logs for more than 2 weeks? Wow!


Is it possible that your impression of the company is (was?) off?

I'm not surprised. They (claim to) do something similar with the logs of their DNS service: two weeks of anonymized logs after which they "randomly sample a small subset for permanent storage".

https://developers.google.com/speed/public-dns/privacy


I had the same reaction, so this indeed hints that I had the wrong expectations. One plus point for privacy, one minus point for handling a breach so badly.


It's not like they don't have the storage space for more. Heck, the full logs for all Google+ usage probably fit on a USB stick. :)


Our default policy is to keep generic RPC server logs for O(weeks). It's best practice as we log a lot of structured data that can be large -- especially at our QPS. Furthermore, we have data retention timelines to keep.


>Our default policy is to keep generic RPC server logs for O(weeks).

Out of curiosity, when was this policy adopted? After these security holes were discovered?


Doesn't the statement "That means we cannot confirm which users were impacted by this bug" indicate it was adopted before the hole was discovered?


I too find Google only keeping 2 weeks of logs unbelievable.


You might find it surprising, but at Google it's common to do things like not returning anything when aggregating data from a small number of users. That's before even looking at projects like RAPPOR or ESA. Source: I had access to sensitive data many years ago.
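
The "small number of users" guard is simple to sketch; the threshold below is invented, and the real mechanisms (like RAPPOR's randomized response) are far more involved:

  K_THRESHOLD = 20  # assumed minimum distinct users per reported bucket

  def safe_counts(events):
      """events: iterable of (bucket, user_id) pairs.
      Returns per-bucket user counts, suppressing any bucket
      backed by fewer than K_THRESHOLD distinct users."""
      users = {}
      for bucket, user_id in events:
          users.setdefault(bucket, set()).add(user_id)
      return {b: len(u) for b, u in users.items() if len(u) >= K_THRESHOLD}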


The regulatory costs of GDPR mean that for every piece of log data, you want to think about whether or not you really want to keep it.

If you don't have a good business case for keeping it, you're often better off erring on the side of deletion.


Both the breach and the fix happened months before GDPR went into effect, though.


Generally you want to build systems in compliance with future regulations before they kick in. GDPR compliance at big companies is a multi-year effort.


Even before the GDPR, Google had to contend with the NSA / GCHQ illicit access events and the China hack.

They had plenty of experience to suggest to them that keeping highly-detailed logs around indefinitely could do more harm to their users than good.


Such a policy has existed since the very beginning of Google APIs, and is well documented within the company. Anyone who worked at Google should be aware of it.


I have my doubts here too.


Developers have a natural, and understandable, urge to include data in the logs that they need for debugging. If you purge logs, it is simpler and easier to have higher confidence that you are retaining only the data you intended to retain.


Why would someone abusing this API stop doing so? If nobody did so for the two weeks prior to the fix, it's a very reasonable guess that roughly nobody did so ever.

It doesn't sound like the logs are detailed enough to prove the antecedent, though, so it doesn't really matter if the logic is sound.


Google and privacy in the same sentence... and a clumsy one indeed...


Nowadays I tend to trust a company that had a security vulnerability or data breach once and handled it gracefully, rather than a company that says it had no security breach. Making a mistake is only human; your true test is what you do after you find it.


This!

It's not about the mistake that led to the breach. It's about what you do once you become aware of it as a company, as a team, and as an individual.

I am quite confused by how poorly this was handled by Facebook recently, and now Google follows suit.


I agree. I think a company being forthright about a breach quickly matters more than its history of breaches. Both Sony and Equifax had security breaches prior to their large hacks.


So is this an example of a company handling it gracefully? They couldn't tell what the impact was but kept it a secret for 6 months.


Pretty sure he is saying the opposite.


It would make a nice change if the majority did the same.


Company finds a security vulnerability caused by a bug. Logs show that it has never been used by anyone. It patches the vulnerability.

[Honest question] Should the company announce it publicly?

PS: Keeping in mind that this is part of the Murdoch vs. Google war going on for about 10 years:

https://www.npr.org/sections/money/2009/11/murdoch_vs_google...

https://www.thedrum.com/news/2017/03/28/timing-everything-ru...

https://www.theverge.com/2018/1/22/16920254/news-corp-rupert...

[Edit: added the "Honest Question" tag]

Edit 2: Related post by Google:

https://blog.google/technology/safety-security/project-strob...


I don't know if it should or it shouldn't, but it absolutely is not the norm for companies to announce those vulnerabilities publicly. Every year, most moderate-and-up-sized tech companies (really, a pretty big swathe of the Fortune 500 outside tech, as well) contract multiple penetration tests, and those tests turn up thousands upon thousands of sev:hi vulnerabilities, none of which are ever announced.

An obligation to announce findings would create a moral hazard as well, since the incentives would suddenly tilt sharply towards not looking for security vulnerabilities.


>An obligation to announce findings would create a moral hazard as well, since the incentives would suddenly tilt sharply towards not looking for security vulnerabilities.

A good point. There is also the fact that the average Internet user has no clue what a vulnerability, a bug, or even a log is, or what it means. Data mining, web scraping, or data harvesting -- no clue.

I just saw a TV report this weekend that stated CA hacked FB. On second thought, maybe that's better than trying to explain that even though "thisisyourdigitallife", you really need to spend some time and effort to understand what it all actually means.


While that is true, it's worth pointing out that Google's Project Zero has a "disclose by default" approach to vulnerabilities they find, even if there is no proof that they were exploited.

The default P0 timeline is 90 days... do we know when Google found this vulnerability in Google+? Does Google apply the P0 deadline to their own vulnerabilities? Is it fair to expect them to?


No, it's not reasonable to apply P0's public vulnerability research norms to internal security research.

P0 "competes" on an even playing field with everyone else doing public vulnerability research and, to a reasonable approximation, has access to the same information that everyone else does. Internal security assessment teams have privileged information not available to public researchers, and rely on that information to get assessment work done in a reasonable amount of time.

When P0 discovers a bug, it has (again, to an approximation) proven that any team of researchers could reasonably find that same bug --- everyone's using roughly the same sources and methods to find them (albeit P0's are done at a much higher level of execution than most amateur teams). That's the premise under which P0 bugs are announced on a timeline: what P0 has done is spent Google engineering hours surfacing and refining public information.

If you want to go a little further into it: the 90 day release window has a long history in vulnerability research. It's the product of more than a decade of empirical results showing that if you don't create a forcing function, vulnerabilities don't get patched at all; vendors will back-burner them indefinitely. Google's internal teams don't have that problem: when Google bugs get found by internal teams (and, presumably, by external ones), they get fixed fast. There's no incentive problem to solve with an announcement window.

Another lens to look at this through is the P0 practice of announcing after the publication of patches, regardless of where the window is. That's because, again, P0 is doing public research. Typically, when a P0 bug is patched, the whole world now has access to a before/after snapshot that documents the bug in enough detail to reproduce it. At that point, not announcing does the operator community a disservice, because the bug has been disclosed publicly, just in a form that is only "available" to people motivated to exploit the bug.

And again: not at all the case with internal assessments.


Thanks, well explained.


But doesn't the Project Zero bug hunter demonstrate a breach by testing for the hole and creating an external report (private or not)?


> While that is true, it's worth pointing out that Google's Project Zero has a "disclose by default" approach to vulnerabilities they find

This wasn't a Project Zero bug, was it? Project Zero is a very special team with a distinct and notable charter. They aren't "Google's Security Folks". Certainly Project Zero has discovered and disclosed bugs in Google products in the past.


It's the norm in healthcare (HIPAA): disclosure is required for breaches that affect 500+ persons, and even <500-person breaches have to be reported annually to HHS and to the individual at the time of discovery.

https://www.cms.gov/Outreach-and-Education/Medicare-Learning...

edit: less-than sign wrong way


> It's the norm in healthcare (HIPAA): disclosure is required for breaches that affect 500+ persons, and even <500-person breaches have to be reported annually to HHS and to the individual at the time of discovery.

> https://www.cms.gov/Outreach-and-Education/Medicare-Learning...

> edit: less-than sign wrong way

Breaches, not vulnerabilities. The discussion is not whether or not breaches should be disclosed[0], but whether newly discovered and believed-to-be-unexploited vulnerabilities should be disclosed.

[0]: They should of course, after a reasonable period in which to patch the vulnerability used.


Easy fix, just design your system so that you can’t confirm whether there ever was a breach because you deleted all the old data


If the implication is that Google deletes the logs to avoid having to disclose breaches, you've got it completely backwards. The default is deleting disaggregated log data as soon as possible after collection. There is a very high bar that has to be met for retaining log data at Google, and generally speaking it's easier to get approval to log things if you set it up so that they're deleted after a short time span. Not sure if that's what happened here, but that would be my guess.


> believed-to-be-unexploited vulnerabilities

You cannot (realistically) prove a negative. If you have a vulnerability, you must treat it as though it has been exploited.


That is the kind of argument that carries a lot of force on a message board, but is not at all lined up with how the world actually works. In reality, almost nobody operates under the norm of "any vulnerability found must have been exploited", and even fewer organizations disclose as if they were.

You can want the world to work differently, but to do so coherently I think you should explicitly engage with the unintended consequences of such a policy.


Sometimes you can, if you have comprehensive logs that cover it.

edit: Within reason, anyway. Obviously if your vulnerability includes write access to logs or something then you're poked.


I think in this particular case, their policy statement in the sister article from Google blog indicates they couldn't really say that in this case.

> We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API.

^ the above statement, but couched with this:

> We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused.


I wasn't arguing one way or the other on the issue, just reframing it so everyone's on the same page.

Devil's advocate: Do you believe that proactive security assessments would still be performed if each vulnerability found was required to be disclosed as though it had been exploited?


> It's the norm in healthcare (HIPAA): disclosure is required for breaches that affect 500+ persons, and even <500-person breaches have to be reported annually to HHS and to the individual at the time of discovery.

It is required by law to report breaches of data, though I can assure you that in practice, this does not happen nearly as often as you'd expect or hope.

There is, however, no requirement to disclose vulnerabilities for which there is no evidence of exploitation or data breach, or to disclose vulnerabilities that were provably never exploited.


IANAL but it doesn't seem that the way they define a breach (https://www.hhs.gov/hipaa/for-professionals/breach-notificat...) includes issues that provably haven't been exploited.


I used to write those letters when I worked in insurance. They had to be reviewed by legal, needed to make it clear what level of threat was involved without divulging certain kinds of info and only occurred when an actual breach of some sort had happened.

In my case, it was usually not a computer issue. It was usually a case of "We sent a check or letter to the wrong address" and it was weirdly common for the reason to be "Because your dad, brother or cousin with a similar name and address also has a policy with us and you people are nigh impossible to tell apart."

And we couldn't say anything like that.

Point being that divulging the issue comes with the risk of making the problem worse. So it's not as simple and straightforward as it seems.


That's a requirement in general for CA.

https://www.oag.ca.gov/privacy/databreach/reporting

If there is a reasonable belief that data was exposed, all of the exposed CA residents need to be notified, and if > 500, the Atty General of CA needs to additionally be notified.


Again, that's breaches, not vulnerabilities. Literally every company has vulnerabilities.


Agreed! Apologies, as I think my use of the term "exposed" left that ambiguous. I should have used the original term "acquired". The first line from the link says the following:

> California law requires a business or state agency to notify any California resident whose unencrypted personal information, as defined, was acquired, or reasonably believed to have been acquired, by an unauthorized person.

With links to more specifics in the CA Civil code.


Is there a required timeframe for doing so?


"Logs show that it has never been used by anyone"

Is it 100% confirmed that the logs would show it?

What they said was "We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused."

That seems only to say they couldn't find anything. Not that it absolutely didn't happen.


>Is it 100% confirmed that the logs would show it?

The answer is clearly no, as they only had two weeks' worth of logs out of the three years during which this bug existed. Here's what they're saying:

"We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug [...]"

https://blog.google/technology/safety-security/project-strob...


You can't prove a negative. All you can do is hope that your logs are not tampered with and that they show that nobody used the hole that you are aware of.


I don't know how you could prove whether anyone exploited this or not, unless you found a breach list posted on the open internet... even if you had the access logs:

> This data is limited to static, optional Google+ Profile fields including name, email address, occupation, gender and age. (See the full list on our developer site.) It does not include any other data

This is such a bogus statement out front. The first time I read it, I didn't even see "the full list" mentioned. The full list is much longer than this seemingly innocuous list of properties of a person. It includes such gems as:

> A list of places where this person has lived.

> A list of email addresses that this person has,

> The hosted domain name for the user's Google Apps account.

It's a little worse than they painted it to be, maybe not much, but at least they're being transparent, I guess...


That's sort of the point. You can only prove positively that a breach has been exploited because there can be proof of that. But there can never be 100% ironclad proof that it wasn't exploited.


Well a little bit of detail as to whether exploiting this bug even creates a log entry might be interesting.


If it does not they have other problems.


It is, in fact, far worse than ucaetano portrays it: They don't have logs for it, and hence, have no way to know whether it was used or not. Google had the two preceding weeks of logs, out of the three plus years the vulnerability existed.


>PS: Keeping in mind that this is part of the Murdoch vs. Google war going on for about 10 years:

Changes the facts of the story 0%


Given that nobody here knows the full facts, only the subset of the interpretations published by the WSJ, this is very relevant.


I wish I could trust the WSJ's coverage, but I have to agree that they do seem to have an odd tendency to go off the rails on certain issues. A recent example that comes to mind is their Gmail API article, which at best badly misrepresented the situation:

- https://www.wsj.com/articles/techs-dirty-secret-the-app-deve...

- https://www.androidpolice.com/2018/09/21/opinion-no-one-get-...

Given that, and the articles you pointed out regarding Murdoch's history (coupled with his unusual willingness to compromise the editorial independence of his media properties), I'm not sure the WSJ is a reliable source in this case. Not that I expect them to blatantly lie, but half-truths and misdirection can go a long way (cf. comments in this discussion that question whether the WSJ is making a mountain out of a molehill here).


> Logs show that it has never been used by anyone.

Citation needed. Google's blog post wasn't very clear, but it sounds like an API that more than 400 developers use returned more data than was intended. Google thinks the developers didn't use this information, but logs wouldn't help Google come to this conclusion.

Murdoch has a vendetta, but Google isn't being transparent here either.


> Logs show that it has never been used by anyone

Some other article I saw quoted somewhere said that they only kept logs for a short time for this service. I wonder how they ruled out exploits older than the logs?


I have no idea, but another thing to consider is that they probably have longer-term access to the binaries of all the applications that ever used this API, and they certainly have tools for automated/large-scale inspection of applications.


One obvious example is that you can continue monitoring for evidence of attempts to use the vulnerability. A delayed honeypot, so to speak.

In other words, you don't have evidence that this vulnerability wasn't abused in 2 weeks, you have evidence that no one abused it in ~6 months. Still not perfect, but a more compelling argument that it wasn't abused.
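
Concretely, that could be as simple as an alert left on the patched code path. A sketch, where the endpoint, the log format, and the "privateField" name are all hypothetical stand-ins:

  import re

  # Matches post-patch requests that still try the over-broad field
  # selection the bug allowed ("privateField" stands in for whatever was exposed).
  PROBE = re.compile(r"GET /plus/v1/people/\S+\?.*fields=.*privateField")

  def probes_since_patch(log_lines):
      # Any hit after the fix is evidence that someone knew about the bug.
      return [line for line in log_lines if PROBE.search(line)]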


Seems likely they actually didn't. Apparently, the further down this comment thread you go, the more times this has already been mentioned. (I read the top comment in the thread having already understood this, and thus interpreted it as a hypothetical question. But in this case, it wasn't provable at all, according to Google's own statement.)


>Logs show that it has never been used by anyone. It patches the vulnerability.

>We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug.

They have no idea if it has been used.


It is generally in the company's best interest to publicly disclose the vulnerability, and its effects.

The other parts to your question are:

1. Should the company be compelled to disclose the vulnerability?

I don't think that is reasonable. Enforcing this would be a nightmare anyway.

2. If they did not disclose the vulnerability, but it becomes public knowledge another way, should there be any recourse?

I think that should be decided and enforced by the users. Unfortunately, that is becoming steadily more difficult, as companies like Google grow, and there are fewer viable/available alternatives to their products.

3. Does a company have a moral imperative to share this information?

I think that the action to share such information takes a higher moral ground than to do otherwise.


> Company finds a security vulnerability caused by a bug. Logs show that it has never been used by anyone.

That's not true. The logs show that it has never been used by anyone the two weeks they had logs for. It looks like the vulnerability existed for about three years. Given this is Google+ we're talking about, it's entirely believable that someone widely exploited the bug in the past, but stopped because Google+ is dead and no one updates it anymore.

> [Honest question] Should the company announce it publicly?

Yes, and they did. They just waited for six months to do it.


"We made Google+ with privacy in mind and therefore keep this API's log data for only two weeks"

Does anyone read the article anymore? Or do they just read it for what they want to see and ignore the rest?


> Logs show that it has never been used by anyone.

They don't have log coverage for 90%+ of the time period, so that's not something their logs could even possibly show.


Isn’t the news section of the Journal walled off from the editorial section? As a regular reader of the Journal’s news section (but not the editorial), I find its reporting to be quite solid usually.


Your statement: "Logs show that it has never been used by anyone."

The Wall Street Journal: "Because the company kept a limited set of activity logs, it was unable to determine which users were affected and what types of data may potentially have been improperly collected, the two people briefed on the matter said. The bug existed since 2015, and it is unclear whether a larger number of users may have been affected over that time."


“We don’t know of any misuse” is very different from “we know it was not misused”. Thank you for being clear about which statement is true.


I think they should announce it. It seems pretty optimistic to have an application with buggy code that exposes user data, yet claim that there is no chance that an error or oversight prevented it from being logged. Saying "if we didn't detect the hack then it didn't happen" doesn't inspire confidence.


Getting insight into all security bugs of all cloud providers could be interesting -- it certainly would increase transparency. Companies have security bugs all the time that they fix when found internally, without telling anyone.


Yes, it's been part of the regulations in the EU, even before GDPR. Being forced to disclose in a timely manner ensures it can't be swept under the rug because they squint at the logs just right.


What did Google do to piss off Murdoch? Fox News spends most nights railing on Google also.


That didn't stop them so far from making everybody else look bad. The latest in Google's unfair competitive practices:

"Google discloses Microsoft Edge security flaw before a patch is ready" https://www.theverge.com/2018/2/19/17027138/google-microsoft...


Having a security disclosure policy and following it to the letter for everyone (including Google) is suddenly an "unfair competitive practice".


Is the security disclosure policy 3 business days?

https://www.theverge.com/2013/5/23/4358400/google-engineer-b...

Do you think Google acted in a fair or unfair manner?


It's interesting you have to go back to 2013 to find something.

1. There's actually nothing here to suggest this was done as part of Project Zero or any part of Tavis's job. In fact, this was before Project Zero even existed, AFAIK.

2. He published details about it in march (O(60) days), as he said. It was still a security bug then, just missing a working exploit.

3. This thread produced a working exploit.

I'm gonna go with "either unrelated or fair".

Unrelated if it wasn't done by Tavis as part of his job, and fair if it was given the timeline and disclosure policies that existed at the time.


Yes, because when you go public with the issue you are able to control the narrative. It's bad judgement to assume that these issues will never reach the public eye, especially knowing that a patch was successfully applied... instead, Google looks like it is having trouble living up to its "Don't Be Evil" motto.


No, they would have to issue something like 10 reports a day. Bugs in websites that are never exploited are rarely published, for multiple reasons. Definitely not something to blame Google for.


If it wasn't exploited, that makes it even better... it shows that you are being aggressive in identifying issues and applying corrections. Take advantage of opportunities to show transparency in a good light, as well as meeting your commitments to be transparent when events have not gone your way. This shouldn't be about how many reports you have to issue; Google can afford the staff to make that happen.


That's probably not how end users would see it though. If there was a report every other day that google or facebook found and fixed a security bug people would either ignore it or trust another company that wasn't as open much more.


Unless they are 100% certain it hasn't been exploited, yes. The reputational and legal risk to appearing to not disclose / cover up an issue is far larger than the issue itself. That changes if they are absolutely certain it was not exploited: then it's just a bug that they fixed and there's no impact beyond that.


How many vulns do you think companies find internally daily? Should every vuln be publicized?


Every single root escalation bug could potentially leak huge amounts of data. And the definition of a successful exploit is one where you don't find out for a long time, or perhaps not ever.

So does that mean that every single time you patch a remotely exploitable hole in your web server, et al., you have to file a report with every single government saying that there might have been a leak of all data from all of your users, but you're not sure? It would be like California's Proposition 65 warnings, where there are so many "could be potentially harmful to a fetus" labels that they all fade into the noise.

Just think of all of the CVEs reported by Microsoft, Red Hat, etc. Any one of those _could_ be a vulnerability leading to the loss of user data, and there is no guarantee that you would be able to detect it via logs.


If they potentially expose sensitive data, yes. Again, if an organization is certain that it hasn't then I'd say no. Sure, certain is a high bar but there's absolutely no way for people to make informed decisions and/or mitigate issues otherwise.


That's going to be on the order of thousands or tens of thousands of bugs per year for each large company. It also discourages businesses from performing pentests since finding anything leads to shitty press. And this is largely useless information since it gives no info about unknown bugs. This makes it seem like small companies with no security apparatus are actually safer because all of their bugs remain unknown.


Okay... and how do you enforce this?

Because the reality is that this would essentially require policing every commit that ever makes it into public-facing serving code, at every company.

To call that unreasonable is vastly understating matters.


I'm not advocating any external enforcement action. Merely taking the position that if there's an internal conversation which goes along the lines of 'that vulnerability could have been really bad but based on what information we have it doesn't look like it's been exploited... should we tell anyone?' then the answer should be yes, it should be disclosed. That sounds like pretty much the conversation that happened inside Google only they decided 'no, better not... we might get in trouble'. That's a red flag right there.


> Logs show that it has never been used by anyone

Yes, because the #1 thing on the mind of someone who gained unauthorized access (e.g. a remote execution vulnerability) to a system is to cover their tracks, which includes things like doctoring logs.


A vulnerability that allows some unauthorized access to user data via the API and a vulnerability that allows editing logs are very different types of vulnerabilities.


Well, the comment I was responding to didn't specify:

> Company finds a security vulnerability caused by a bug

Remote execution vulnerabilities do exist...


The logs live somewhere else. If you had some magic exploit that let you run code on Google systems AND delete logs, you could do much more damaging things than just reading G+ data.


This is the data that could potentially have been exposed for each person [1]. In their release about shutting down Google+ they made it seem like a lot less.

[1] https://developers.google.com/+/web/api/rest/latest/people


Thanks for this. I hadn't been able to track down a clear statement of what was exposed, which is an awfully important question in a case like this. It's frustrating that "someone on HN dug up the API" is the most reliable way to get information like this, and speaks ill of how this was handled even after it was disclosed.


Companies internally find and fix security bugs all the time and don't talk about it if no known breach occurred. Is there a requirement to do this? Maybe there should be a requirement to document that due diligence occurred to understand whether it was exploited?


How do you even differentiate this from routine server maintenance that installs kernel patches, etc.? Theoretically those are all vulnerabilities that exposed data, and every single company learning of them through CVEs should be reporting potential breaches, if you hold companies to that bar.


I would think it should be required to report. Just because you don’t know if a vulnerability was exploited does not mean it was not.


I assume cloud providers have hundreds of security issues that are found internally over the course of a year. Requiring reporting would certainly be a step forward, and testing in production would maybe be seen for what it is: an engineering anomaly and a failure to perform due diligence.


That’s fair. I suppose I would aim for a distinction between minor and major flaws. What would be a reasonable threshold?


You are going to be off by at least an order of magnitude. You'd see multiple reports per day from a diligent company.


I don't see the benefit. We'd all just end up with a barrage of emails we don't really care about.


The notifications don't need to be email. If there's no evidence of a breach, I think it would be reasonable for them to disclose into some kind of vulnerability database. Maybe someone could later determine whether the vulnerability was exploited, based on some data dump found on the dark web or something.


They only kept 2 weeks' worth of logs. So the correct statement is: in 2 of the 156 weeks the bug was accessible, no known breach occurred.


So the joke is finally true? Google+ hacked, data of all 20 users exposed.


Maybe 20 actual users, but Google+ is filled with accounts from Google users and automatically populated with content from other sites, like comments on YouTube.


Now that Google+ is going away, can we have the +string operator back in Google Search, to force inclusion of a single string (instead of having to use double quotes)?


Is that why that was changed???


Yep, that was exactly why (or at least, very strong circumstantial evidence):

https://productforums.google.com/d/msg/websearch/H4XbbwWmtAY...

I don't think it was ever officially announced/admitted anywhere, though. But it was exactly around the time that Google+ was rolled out.


One could say it was used more than Google+ itself was.


Related discussion here: https://news.ycombinator.com/item?id=18169243.

Normally we'd treat these as dupes of each other (and initially we did that), but there seem to be two stories here: one about the data breach and one about Google+. So I guess we'll leave both of them up.


Maybe even three stories; there's another bit in that Google blog post about Google making their API permission prompts for Gmail, Drive, Calendar, and contacts more fine-grained and locking them down with policy measures.


This is extremely problematic for us. We're bootstrapped and now Google is suddenly asking for up to $75,000 (or more) for a security audit.

This despite the fact that we've been publicly asking for more limited OAuth scopes for years (cf. my HN posting history and my tickets on the Google issue tracker), and the fact that we've had zero security incidents in over three years. All to ensure that we're not risking exposing user data that we don't even have, have never accessed, and shouldn't need access to in the first place.

https://cloud.google.com/blog/products/g-suite/elevating-use...
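
For context, the scopes in question are just a parameter in the standard OAuth2 authorization request; a minimal sketch (client ID and redirect URI are placeholders) asking for read-only Gmail access rather than the full https://mail.google.com/ scope:

  from urllib.parse import urlencode

  params = {
      "client_id": "YOUR_CLIENT_ID.apps.googleusercontent.com",
      "redirect_uri": "https://example.com/oauth2callback",
      "response_type": "code",
      # The narrowest Gmail scope that fits the app's need.
      "scope": "https://www.googleapis.com/auth/gmail.readonly",
  }
  print("https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params))

The complaint above is that for years a suitably narrow scope often didn't exist, so apps were forced to request more access than they needed.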


It appears (the wording is somewhat vague) that apps which only store user data on the device do not need to go through this external security assessment.

I agree that this is Google pushing their security costs onto developers and effectively killing bootstrapped SaaS apps.


Does this limit things like Zapier integrations as well? Surely they can't confirm the end security of every usage?


> Surely they can't confirm the end security of every usage?

If attachments are going from Gmail to Google Docs then they're probably fine, I'd imagine one audit could cover all those types of apps. For things that send email to Slack or whatever I'd expect that Slack would need to pay to have that audited.


I used to work in a place where we had Trello boards, database integrations, and spreadsheets all triggering emails through Gmail (via Zapier) - it was startup-hacky, but it worked.


It's a fancy bit of PR-fu right here from Google, like releasing a jobs report right after a big hurricane hits so people don't notice it. A data breach is one thing, but the cover-up should put the nail in the coffin of Google's image as the benevolent good guys. They are basically Comcast now.


HA... Google is definitely doing Evil.

I, for one, experienced it personally when they invited my friend and me to discuss buying our app. They just wanted our secret sauce, and after baiting us with the promise of working together, they kicked us out and said the race is on.

Now, there were no guarantees we'd be working together, but two guys with no connections get tricked by Google and told the race is on? Us vs. Google, who is later granted patents for what we met them about.

Now everyone says that's just how Silicon Valley is... it's expected. Hmmm, things change!


[flagged]


Well, your employer was granted patents for what we met them about, and emphatically told us the race is on.

So you're saying it held no value to your employer, and that they have every right to treat little-guy dreamer inventors who don't have the right connections like dogs?

Also, it wasn't and isn't just an idea, but rather algorithms we created in 2013 (and have improved since) that just worked, and work now... demo videos below:

Turn audience & their devices into a stereo system https://vimeo.com/71647538

Drive In Movie app(listen to movie's audio on your device) https://vimeo.com/93899424


I'm not aware of any Google product that has this functionality. What is preventing you from selling your apps?


We are still around (improving our algorithms) and offer our tech to clients for various events. We all have full-time jobs and families to sustain.

We'd love to find those connections who sincerely want to guide us (and are well connected) so our next big meeting with a FAANG or another tech company is a win for all! I believe if we had gone into the meeting well connected in the Valley, things would have been different! We wouldn't have been treated like dogs!


Why do so many Googlers on HN have such unpleasant personalities? "Did it ever occur to you" that you make your employer look even more awful?


Isn't making accusations of theft and industrial espionage unpleasant? This kind of accusation crops up in the industry time and time again, and I don't think it should be accepted unchallenged, whether between small companies or large ones.

I've seen these claims so many times: people claiming some investors took their deck and gave it to their portfolio companies, people claiming another company copied them. The instances where this actually happens usually occur when an already-proven market product is copied (e.g. look at FB copying Snapchat recently); I've never seen it with a pre-success product.

Think of it this way, if I had the idea for capacitive touch smart screen phone in 2004 and met with Apple, could I claim they stole the iPhone from me? There is so much more to an iPhone than just the 'idea'.

Now, if you have some non-obvious algorithm, which if you asked a senior engineer to design, could not come up with it in a few weeks, my opinion would be different. Like if you invented a fundamental new type of homomorphic encryption, which, could be explained on a single page, but enables a fundamentally new type of distributed computing.

There are indeed, some ideas which are very 'dense' in value purely by their description alone, but they're few and far between.

In any case, online forums are rife with people making conspiratorial claims, in economics, in politics, everywhere, and I think a technical community like HN should demand a higher degree of evidence.


Are you saying I was not pursued/invited by, and met with, Motorola ATAP (Google had bought them in Jan 2013), now Google ATAP, in April 2013? That I am making this up?

Did I make up this NDA I signed (https://ryanspahn.com/motorola-google-Expired-NDA2013.pdf) and this letter from when Google absorbed ATAP (https://ryanspahn.com/google.JPG)?

I've got emails from the jerk who invited us out there... who baited us, then said: here is the door, and by the way, the race is on. Goodbye.

I met with many others too... like Samsung, and that guy was an upstanding gentleman. Google was awful... they treated us like dogs!

I have no reason to lie, only to tell my story to warn others and highlight that Google no longer follows its motto, "Don't be evil."


I don't think you're making it up, it sounds like you had a bad experience, and someone from Motorola biz-dev/m&a was being a jerk, but without knowing how the interaction went, it's not really my place to pass judgement. In a company of 88,000 employees, there's a non-zero probability of getting a bad interviewer.

I'm sorry to hear you had that experience. Where I part company is the added interpretation. If I was in your situation, and had a negative experience, I'm pretty sure I'd be angry, and feel wronged, and feel 'used', it's only natural.

I mean, Dropbox tried to sell themselves to Apple, and Steve Jobs didn't like the price; he went on to say "you don't have a product, you have a feature" -- actually worse than that:

"And so he started trolling us a little bit, saying we're a feature, not a product, and telling us a bunch of things like that we don't control an operating system so we're going to be disadvantaged, we're going to have to figure out distribution deals, which are risky, and sort of a bunch of business-plan critiques. But then he was like, 'Alright, well I guess we're gonna have to go kill you, basically.' Maybe not in those words, but pretty close."

But do I believe Apple "stole" Dropbox? Nah; cloud-based backups, sync, etc. are pretty straightforward, and although there is innovation at the UI and syncing-protocol layers, iCloud is not really a Dropbox clone.

BTW, I encourage you to continue to try and innovate around your idea. There may be a use case beyond "speakers", like emergency alerts or security; think "California earthquake imminent in 10 seconds!". There could also be a use case for synchronized sound-based gaming at parties. There's loads more to do, and having had some bad experiences shouldn't discourage you; it kind of comes with the territory of presenting ideas and startups -- you really get shit on a lot.


I agree ideas are a dime a dozen and are not patentable... syncing audio between devices on the same network or on separate networks is not, as an idea, novel. It's all about the steps taken and whether they are unique enough to be strong IP, and about whether you, the inventor, have access to capital for the patents and the right connections to truly make things happen. That's what my friends and I are working to get better at amidst daily life. Access to capital and, more importantly, to people who can help with audio-syncing technology isn't easy to come by here in Baltimore. We have reached out to those in our network who have sold companies, yet none have had any experience making deals with Google, Samsung, and others (what strategy do you use when all of them just want to know your algorithmic steps and whether they are unique?). It's been a crap-shoot for us, and our meeting with Google was a learning experience, but an unnecessarily harsh one, and far worse than our meetings with others like Samsung, which were all very professional and respectful!

Indeed we have not given up and the recent news that Google was awarded patents for SpeakerBlast type technology has lit a fire under us even more.

Well I'd enjoy learning what you do at Google. Are you on the Chrome Audio team ;-)

*Edit: weird that your first post here was flagged.


Well, I came to Google via an acquisition in 2010, I worked for years on the GWT (Google Web Toolkit) compiler, and related web compilation tooling. Then I switched to the Apps team (Inbox and Gmail) for a few years. Now I work in Research and Machine Intelligence.

I'm not on the Chrome audio team, but I have had experience with Chrome Audio, as I implemented an OpenAL layer for GwtQuake and the web version of Angry Birds for Chrome using it. See https://www.youtube.com/watch?v=F_sbusEUz5w for an example of PlayN, a cross-platform (Web, Android, iOS, Flash) library I worked on in 20% time, and https://www.youtube.com/watch?v=aW--Wlf9EFs which details some early experiments with image processing, and the porting of Quake to Chrome in 2010.


Thanks for sharing.

Looks like you lived in Maryland too at one time. I grew up in Towson (now in Bel Air in Harford County) and all my family is here. I'd move out there for a dream job: inventing/building/designing interesting tech for a FAANG company. The road there looks to run through an acquisition.

Well it was nice chatting with you!


Is it a data breach if they know that nobody used it?


It is not a data breach if they know, but they can't really know, because they don't keep the logs for more than 2 weeks.


No, I get surprisingly decent service from Comcast plus I'm obviously their customer not their product. How about "more weaselly than Facebook"?


> I'm obviously their customer not their product

This is the same comcast we're talking about right? The one that's spent millions lobbying for the right to monitor what their customers do online and sell it to advertisers?


It's a rigged game for sure. Would Comcast and the other ISPs be fighting so hard for these rights if they didn't have Google and Facebook as models of success using them?


Probably not; but regardless of "why", you're definitely also one of their products :)


> I'm obviously their customer not their product

Are you sure you are not both? AT&T gives you a discount on your monthly fee if you allow them to inject their own ads. It was opt-in, but it would not surprise me if they just do it to all users. Plus, if you are using their DNS, I'd assume they are slurping in all of that data.


> surprisingly decent

Ay, there's the rub, innit?


thanks!


A few years ago, I had a Google account. My account got disabled because I tried to buy an app from the Android Store. Besides the fact that I never managed to get my account back, something funny happened a few years later.

I had two Blogger blogs connected to the account, and my Twitter was connected to my Blogger as well. I lost those blogs too, and I couldn't get them back. At some point, I noticed some suspicious tweets in my timeline and realized that they were from my blogs! So, Google freed up my blogs but didn't remove their connected accounts. Whoever got the account probably didn't have any idea that those addresses were connected to a Twitter account as well! But the share-to-Twitter setting was on, and whatever she/he was posting was ending up in my Twitter account!

Point being, Google is leaking from strange places! Add Google+ to your Blogger and you'll risk much more, I guess!


I think these three passages really sum up where Google's moral compass is these days:

>"A memo reviewed by the Journal prepared by Google’s legal and policy staff and shared with senior executives warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica."

>"The document shows Google officials felt that disclosure could have serious ramifications. Revealing the incident would likely result “in us coming into the spotlight alongside or even instead of Facebook despite having stayed under the radar throughout the Cambridge Analytica scandal,” the memo said. It “almost guarantees Sundar will testify before Congress.”"

>"Internal lawyers advised that Google wasn’t legally required to disclose the incident to the public, the people said. Because the company didn’t know what developers may have what data, the group also didn’t believe notifying users would give any actionable benefit to the end users, the people said."

These statements and tactics seem to be taken from the same playbook that Big Pharma, Big Tobacco, or any other soulless megacorp uses. As long as it's legal, they don't care whether it's right. Did their arrogance prevent them from entertaining the idea that disclosure would have provided users with the "actionable benefit" of considering whether or not they wanted to delete their Google accounts?


Huh, why did this post get removed from the front page? It's less than an hour old and had acquired a lot of points and comments in the meantime, yet I can't find it in the first 100 pages as of now.


It was buried as a dupe of https://news.ycombinator.com/item?id=18169243. We're just trying to figure out which URL is the best one for this story. It's a bit harder than usual. In the meantime, I've restored the current thread.


So, I couldn't understand what "exposed" means in that article. Was any user's data actually obtained by someone not authorized to do so, or was access to the data merely possible?


They don’t quite know:

> We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API. We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused.

https://www.blog.google/technology/safety-security/project-s...


Just possible. Similarly, the recent FB hack didn't actually penetrate 50 million accounts -- that was just an upper-bound estimate based on how many accounts were "exposed to the risk" of being compromised, probably because they were noted as having been touched by the buggy "view as" function.


Update: It looks like on October 2nd, Facebook clarified that 50 million users actually had their login credentials stolen, and an additional 40 million were unconfirmed but known to have been touched by the buggy "view as" feature:

https://newsroom.fb.com/news/2018/10/facebook-login-update/


The buried lede is that Google is shutting down Google+. (EDIT: for consumers: Google is keeping it as an enterprise product)


Technically it's shutting down all consumer functionality for Google+.


And from this day forth, Google+ will sit alongside Google Reader in the pantheon of betrayals that HN commenters will bring up every single time Google announces a new product.


I doubt anybody cares about Google+, honestly. In fact I'm happy to see it die, after all the pain it's caused me (YouTube integration)


So, are you actually interested in seeing less hating on Google? Or just what?

I did consider replying to the buried-lede comment with "I guess I can stop occasionally wondering if I should get familiar with Google+." Then I decided to tweet it instead, because that's insubstantive, or could be interpreted as such.

I'm actually a fan of Google, but never got into Google+. I don't openly fangirl the company, in part because it gets so much open hatred here.


This is a fair clarification per Google's followup: https://www.blog.google/technology/safety-security/project-s...

> At the same time, we have many enterprise customers who are finding great value in using Google+ within their companies. Our review showed that Google+ is better suited as an enterprise product where co-workers can engage in internal discussions on a secure corporate social network. Enterprise customers can set common access rules, and use central controls, for their entire organization. We’ve decided to focus on our enterprise efforts and will be launching new features purpose-built for businesses.


What's the point, then? Google will (or should) spend just about as much effort keeping it live for enterprise users as it would for the rest of us.

I don't use it often, but occasionally find useful communities there, especially concerning technical subjects. Now all of that is going to disappear.

It's annoying that Google apparently prizes the opinion of enterprise customers enough to half-abort the plan to shut down Google+, while for some reason maintaining a stubborn insistence on removing its access for the rest of us. Yes, this will be another point to add on the list of reasons to never become invested in a new Google product.


> 90 percent of Google+ user sessions are less than five seconds.

If most people are visiting by accident and immediately leaving, it's probably actively causing usability problems and should be shut down.


I don't understand how that follows. Google doesn't need to fix its purported usability problems.

It can let the non-corporate users enjoy the fruit of their labors at keeping Google+ running for corporate users at almost no additional cost.


So, pivot to Slack-alike. I'm sure it'll last.


They could have cornered that market with Google Wave, but killed the project before it took off.


If it makes you feel any better, whether something is popular or not has never been a barrier to Google killing stuff.


What does "consumer" mean in that context?


Corporate/internal use of Google+ for G Suite customers will probably continue. Google uses G+ internally, so it's unsurprising they intend to hang onto the corporate side of it for now, at least.


If a social network falls in the woods and no one's there to log out, did it really go down?


What happens to all the content and communities on it??


Yes, they should have announced it no matter what the logs said.

Depending on the logs is the worst idea ever in terms of breach determination. I don't know how many times we've had 40 IoCs, but just because there isn't a log file (often because no one splurged for a SIEM and the syslog collector broke beyond repair months ago), management acts like they've won the legal-liability / cybersecurity lottery.

Obviously it’s not as black and white as that, but the burden of proof should be on the companies to show that no malicious use happened right after they go public with a breach.

Going public with this kind of information, even if nothing happened, could have driven much better behavior across the United States if not the world by setting the example. But Google chose the path of self-protection and short-term gain.


Wait, was this a vulnerability or a breach? Because if every vulnerability is now a breach, there are millions more than we know about.

Microsoft sends out monthly security patches, and each fix in there addresses a vulnerability; every Windows server has multiple vulnerabilities fixed every month. Is every company that uses Windows now required to determine whether any of those vulnerabilities were actually exploited? This seems like a bottomless hole.


A lot of the time I feel Google gets a bad rap on HN, as the comments are so often filled with hyperbole. In this case, however, Google did a very poor job of disclosing this leak in their announcement post sunsetting Google+. I would have much preferred an incident report explaining what really happened, even if they couldn't find any examples of abuse.


Conveniently for them, they only kept two weeks of logs (this is a three-year-old bug). I might implement that at my company: take two weeks to patch and test the security hole, then review my two weeks of logs for any evidence of a breach, then tell customers we haven't found any evidence of illicit access.


Not only that, but does anyone actually believe they only kept two weeks of logs? I find it very suspect that the company known for amassing data keeps some of it for only two weeks. How convenient for them.


Just this weekend, I set up a domain name, set up email, and set up apps and accounts to replace Google with open-source software and servers I generally control (I use some third-party services that I feel I can trust, like Fastmail and Namecheap). I then turned off and deleted all of the data from Google that I could without deleting my Google account (I need to forward this long-standing email address to my new one, and I don't want to lose my Google Music ratings and playlists right now).

It wasn't hard, just about 5 hours of work and then a few hours to set everything up as I like it. I pay ~$5 per month for email/calendar/contacts through Fastmail, ~$10 per year for each of 2 domains, and ~$5 for an Android app to sync my CardDAV/CalDAV accounts with my Android phone. I have almost completely deleted or disabled the Google apps on Android, although I'm not ready to run LineageOS quite yet. I even use an OSM-based maps app, which doesn't work as well as Google Maps, but it is sufficient; navigation sucks compared to Google Maps, but that's the price you pay for doing this sort of thing.

I'm not super-paranoid about government surveillance and I didn't care about Google tailoring ads to me like most folks here, but after all of the data breaches and such, I decided that controlling my own data is worthwhile just to make me feel better. Now, I am able to do most of the stuff I could do before, maybe 70-80% as good as with Google for some things (like maps), but I have peace of mind.


Every time I read comments like this, I shake my head. Despite recent breaches, I still trust the big players--Google, FB, Microsoft, etc.--with my data from a security perspective far more than I'd trust myself to be able to manage security properly on my own servers or trust a smaller shop.

Security is hard. There are many, many more compromises of small firms and self-maintained servers than of these big players, it's just that they don't get major media coverage in 99% of cases.


Yes, I trust Google, Microsoft, Apple, etc. with my data, within reason. I would, however, like one breach to not be catastrophic. Also, Google discontinuing Inbox made my Fastmail experiment permanent.


I would explain it with sampling bias.

Take 100 people, and suppose 1 of them decides to stop using Gmail, replacing it with a custom setup. 99 decide to stick with Gmail. The 1 person who spent hours on a custom setup is more likely to leave a comment sharing their experience, tips and tricks, etc. The 99 won't have something noteworthy to post about.

End result is you see disproportionally more comments from people who do something drastic and unusual compared to ones who don't.


Yes, the very vocal minority is absolutely a thing and tends to create an inaccurate impression.


The problem is that you need to trust a lot more than just Google.

1) Google

2) Every government that has the power to compel Google to release your private information, from Chile's to the Cayman Islands'

3) Every agent empowered by any of the governments from 2)

4) Every person and/or organisation that could be furnished with your data as part of some sort of "discovery" process by any of the agents listed in 3

(for the US: essentially anyone and everyone you've ever had any sort of commercial relationship with)

This means that if you, say, have a divorce case, expect your entire Gmail contents to be used against you by your significant other.

The problem is well explained here:

https://www.stangelawfirm.com/Articles/Facebook-Evidence-and...


>This means that if you, say, have a divorce case, expect your entire Gmail contents to be used against you by your significant other.

This has nothing to do with Google. In a divorce case, you would be compelled to disclose your email, not Google, so using another provider wouldn't matter, because you would be compelled to turn over any emails there too.


Sure but you have a lot of options if you control your own emails that you don't have if someone else does. For instance, you can actually delete your emails, and know they are deleted (obviously I'm saying before any such case). The same option does not exist on the cloud.

(Incidentally, this is exactly why most corporations these days have a pretty short email retention policy, something like 3-6 months. As long as you delete the mails before a complaint, or at least before discovery is granted, there's nothing wrong with deleting them, even if it is to avoid them being used against you later. There are other advantages too: it helps me stay organized, and it prevents me from obsessing over things that have slipped so far down the priority list they'll never happen, which is very soothing. Plus it breeds the good habit of not storing important things in your inbox. A small sketch of automating such a retention policy appears at the end of this comment.)

I would like to point out that the criticism is often leveled that you'd only do this if you're guilty. Aside from the fact that this is the "if you've got nothing to hide ..." argument against privacy, it comes with a lot of false assumptions: for instance, that such information will not spread, that laws won't change in bad ways you can't control, that you can trust law enforcement infinitely and indefinitely, and so on and so forth.

Also, a wise man once said: "In theory, theory and practice are the same. In practice however, ...". What you control is yours, and with proper security measures no power the police or justice system has can break that control. That's as it should be.
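On the retention-policy point, here is a minimal, illustrative Python sketch of automating it over IMAP. The host, credentials, and 180-day window are placeholders, not a vetted tool; deletion is irreversible, so try it against a throwaway mailbox first:

    # Purge messages older than `days` from an IMAP inbox.
    # Illustrative only: host and credentials are placeholders.
    import datetime
    import imaplib

    def purge_old_mail(host, user, password, days=180):
        cutoff = datetime.date.today() - datetime.timedelta(days=days)
        before = cutoff.strftime("%d-%b-%Y")  # IMAP date format, e.g. 08-Oct-2018
        with imaplib.IMAP4_SSL(host) as imap:
            imap.login(user, password)
            imap.select("INBOX")
            # Find everything received before the cutoff.
            _, data = imap.search(None, "BEFORE", before)
            for num in data[0].split():
                imap.store(num, "+FLAGS", "\\Deleted")
            imap.expunge()  # permanently remove the flagged messages

    # purge_old_mail("mail.example.org", "me", "an-app-password")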


I've been working my way down this path. Fastmail for email and CalDAV/CardDAV, DavDroid to sync, K9 as my email client, OsmAnd for maps, Firefox instead of Chrome. I try and remember to use SkyTube not YouTube but it isn't flawless. I use KISS as my launcher on Android.

I still haven't eliminated Google Photos (I do appreciate the free sync on my Pixel 2 XL for peace of mind), Google Translate is still super useful for offline translation and the camera-translate feature, and I haven't yet been able to replace Google Maps for transit directions. I still use a Chromecast, which I guess is next to go; maybe I'll actually set up an HTPC of sorts, or perhaps just use Kodi on my Xbox One S, as that seems to work well enough.


Your security is only guaranteed by your obscurity. The moment this becomes standard practice and people start using popular software to handle personal services, this version of security will become laughable again.


I think even that is being too generous. If you've ever set up any kind of public-facing server, no matter how obscure, you know that scans for vulnerabilities are constant. Whoops, you didn't install that urgent Apache update because you were on vacation with no SSH access? You're pwned.

Centralization does have drawbacks, but in terms of security it is a major step up from home-run servers in many ways.


I can understand this position, but I'd be curious what your thoughts are on how to best (I realize there is no perfect) keep your data private from snooping employees, hackers, or law enforcement.

I've thought about this over and over, and it's hard to come to a solid conclusion about keeping personal data safe (in this context I mean emails and files you may store in the cloud, not browsing history, social media posts, etc.). There are so many options, each with drawbacks, and I'm not a security expert. So every time I get excited about trying a new privacy-oriented service, or about setting up my own instances, somebody inevitably points out the terrible pitfall in it and I get discouraged.

1. Don't use the internet or internet services, period. <- Not tenable for most of us.

2. Use services that market themselves as privacy-focused. <- Can't actually trust those services, even with E2E encryption, because they could be running different code from what you think they're running.

3. Use regular cloud options, but stack encryption on top - VeraCrypt volumes or Cryptomator with Google Drive, GPG for email, etc.; a minimal sketch of this appears at the end of this comment. <- Really difficult to set up with a nice, reliable way of accessing data on mobile/desktop/etc. No security audits on a lot of the open-source software.

4. Host your own services - i.e. a Nextcloud 14 instance on EC2 with an S3 backend, then use client-side E2E. <- Difficult to make sure you set the service up in a safe way, and nowhere near the code-auditing resources of, say, a giant corporation.

5. Spread what you do out over multiple services - Fastmail for email, Dropbox for cloud storage, Standard Notes for notes, etc. <- A real pain.

I know there will never be a consensus on this, but I'd love to hear your thoughts on the best way to keep my personal files and notes personal to me. Let's assume I'm not a target of any spy agencies or whatnot, but I want to make it very, very difficult for anyone but me to read my personal notes and files.
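Regarding option 3 above: the "stack encryption on top" approach can be as small as this illustrative Python sketch. It shells out to the GnuPG CLI, which it assumes is installed; the file name and inline passphrase are placeholders, not a vetted implementation:

    # Symmetrically encrypt a file with gpg before it ever touches
    # a cloud provider; only the .gpg output gets uploaded.
    # Illustrative only: real use needs proper passphrase handling.
    import subprocess

    def encrypt_for_cloud(path, passphrase):
        out = path + ".gpg"
        subprocess.run(
            ["gpg", "--batch", "--yes",
             "--pinentry-mode", "loopback",
             "--passphrase", passphrase,
             "--symmetric", "--cipher-algo", "AES256",
             "--output", out, path],
            check=True,
        )
        return out

    # encrypt_for_cloud("notes.txt", "a long diceware passphrase")

The drawback noted above still applies, though: conveniently decrypting on mobile is the hard part.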


I've been considering a switch to Fastmail myself, and I have a domain on their free trial right now; I'm going to pull the trigger at the end of this month. The only thing that sucks is going around and changing my email address everywhere. Like you, I'm doing it for peace of mind, knowing I can switch providers down the line if I need to. I'd rather pay US$50 a year than be as tangled up in Google's services as I was.



The cynic in me wonders how long the two-week log policy has been in place.


Person 1: Google exposed the private data of hundreds of thousands of users of the Google+ social network.

Person 2: What is Google+?



Now that G+ for consumers is going away, can we have Reader back?


As unexciting as some of the 'business requirements' I've been developing at work have been, just remember: you could have been working on Google+ for the last 8 years.


So was Google+ basically just dragging along as a zombie, and this PR nightmare made it suddenly not worth even keeping around?


Google's Project Zero is always taking others to task for security, yet not a peep from them on this. Their front page today is focused on bugs in Safari, Linux, and Windows. Perhaps they should focus on their own products.

It's complete hypocrisy to affect commitment and then slink away when it comes to your own products. That raises questions about conflict of interest and credibility.


The Project Zero team is specifically supposed to find zero-days in other products to make everyone safer. Google has other security teams for finding bugs in its own products; this isn't Project Zero's fault, nor does it really have anything to do with them. Project Zero is one of the decent teams at Google trying to do good by everyone. They can't find every single bug, and if they changed their mission to focus only on Google products, we might not have found all sorts of big bugs (e.g. Heartbleed) for a lot longer. Probably best to leave them alone and let them get on with their business.


Because they didn't find the bug. Why would they report it?

They've reported plenty of Google/Chrome bugs in the past. Your claims of hypocrisy are very off base.


I’m surprised Google’s own Project Zero did not catch this one.


When your purview is as wide as theirs, you can't catch every single vulnerability; I'm sure there are plenty of things in openssl, linux, etc. that they (or anyone else) haven't caught yet too :)


Yes, but it's damn right that they should start in their own backyard.


Wow, the Wall St. Journal and Murdoch really do not like Google.

This article makes this sound like something it appears not to be.

Did this all start when Google fired Damore? Or does it date further back?



No mention of the Google-Murdoch spat is complete without a link to this classic: https://europe.googleblog.com/2014/09/dear-rupert_25.html


>We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug.

They only kept two weeks of logs, yet this bug was accessible over a three-year window. So out of roughly 156 weeks, they can rule out data access for only the 2 weeks they have logs for; the other ~154 weeks, about 98.7% of the exposure window, are simply unknown. I think that's pretty pathetic for a company that stores your precise location, search history, photos, email, text messages, calendar, social network, date of birth, etc.

This is a very big story. No wonder Google executives sat on it for six months.


GDPR proving to be great once again. The case for a US equivalent gets stronger. And more importantly, all these fuck ups will ensure that whatever bill gets drafted isn't just what the Facebook/Google lobbyists find acceptable.


What data was breached? If the answer is none, there is no GDPR action to be taken.


I don't know, but Google doesn't either:

>Because the company kept a limited set of activity logs, it was unable to determine which users were affected and what types of data may potentially have been improperly collected, the two people briefed on the matter said. The bug existed since 2015, and it is unclear whether a larger number of users may have been affected over that time.


Looks like "we don't know if data was leaked" is now standard language that accompanies every security-bug disclosure, to avoid GDPR fines.


Then, to be clear, is your position that companies should be punished (fined) if there has ever been the possibility that user data was compromised? It's possible that a time-traveling quantum-powered encryption-breaking mind-reader from the future has seen your personal data. Should we fine everybody who knows anything about you?

Reckless endangerment deals with the possibility of something bad happening, but notice that word "reckless."


I didn't say GDPR would apply in this situation, and the WSJ story suggests it wouldn't because of when it was discovered. All I said was that GDPR was great (obviously we'd only see the benefits from it after it had gone into effect), and this latest scoop bodes well for the political movement to enact equivalent legislation in the US.


Non-paywall link?




You linked the wrong article. Fixed: http://outline.com/mNDfrH


See no evil, yeah?


What would the EU fine for Google be, now that GDPR is in force? 2.2 billion dollars?


A company in my country recently got fined for not protecting the data of ~735k users (7% of the country's population); the leak included emails, passwords, phone numbers, full names, and more, and they failed to disclose it properly to the regulator and the users. They got a ~$60,000 fine, with GDPR as part of the reasoning.


GDPR enforcement would come into effect only if there was a breach and it was not handled.

From what is in the story and from what we know, there has been no breach.

As 'tptacek has noted, it is very unusual to announce a security bug without a resultant breach.


What in the story indicates that there was no breach? The story says that they didn't keep a large enough set of activity logs to determine whether data was improperly accessed, not that there was no breach.

> Because the company kept a limited set of activity logs, it was unable to determine which users were affected and what types of data may potentially have been improperly collected, the two people briefed on the matter said.


This is not what the article says: "Because the company kept a limited set of activity logs, it was unable to determine which users were affected and what types of data may potentially have been improperly collected, the two people briefed on the matter said. The bug existed since 2015, and it is unclear whether a larger number of users may have been affected over that time."

'We don't know who was affected and what data may have been collected' is very different from what you're saying. It also opens up lots of questions, such as why a company with Google's resources would not persist security-critical logs indefinitely.



