According to the FAQ, "It’s also possible that Google does not use student data for any of these purposes—but unfortunately, Google has refused to articulate the reasons," so it seems the EFF's position on Internet-hosted applications is that a privacy policy should describe the specific uses of each kind of data.
I think it's a tenable but extreme position, because basically they are objecting to Google reserving the right to develop new features in an empirical/data-driven way.
I think most people don't think of e.g., their privacy w/r/t tax data being compromised when their tax prep software company mines it to make data entry simpler, or to make it easier to understand the consequences of various filing choices via visualization, etc. Similarly, I don't think Google is invading my privacy when it takes my search queries and uses them not only to produce SERPs for me but also to notice that when people type "cyombinator" it is likely a typo for "ycombinator".
As I said, I think reasonable people can disagree about it, but I think they are demanding that the wall have a shape like "this data will be displayed back to you while you work on it and to your teacher during the grading period and otherwise it will not be used," whereas many people are totally happy for the wall to be "we will use this data to help you learn and allow your teacher to evaluate your progress." Lots of people object to advertising to kids, not so many people object to building a better word processor or kids-oriented theorem prover or whatever, including ideas that haven't been thought about yet but are (in the developer's honest opinion) wholly motivated by the purpose of education.
EFF basically doesn't trust a cloud software company to have any discretion, but I think most people are willing to take an informed risk when they entrust their data to someone else's software or computational resources; otherwise they'd write the software themselves and run it on their own device, and so forth.
Your example seems fine to me. My kids use Google Docs for school, and use it both for collaborating with other students at home, and to submit homework to the teacher. That seems to fall in your example.
The risks that you describe also sound reasonable for adults to assume. However, Google, being who/what they are (organizing the world's information), shouldn't be surprised when "watchdogs" ask more of them, particularly in this case. They deliberately entered this education space. It's not just about plain advertising (to me). Whatever profiles are built from children's use of Google's services (that they are required to use through school) should be carved out from their normal data harvesting and user profile nurturing. It should be up to Google to develop a "win/win" model whereby they can protect students and properly monitor app performance. It doesn't sound too challenging for a company like Google.
The problem I have is Google is doing this A/B testing on their public apps in the wild. I don't know if I'm getting the best interface at this moment or am being a guinea pig for some experiment. And not only UI experiments. Remember when Facebook reordered timelines to measure the effect on the reader's mood?
I think experiments should be done in controlled conditions with mock data and informed participants. When you then sell something to the public I expect it to be a finished and stable product. I can then build my workflow around your product and know that it won't be made obsolete overnight because you applied some enhancement in a patch.
When Google used to explicitly mark their apps as "beta" we'd joke about them being in eternal beta mode. It really looks like that's not a joke.
> The problem I have is Google is doing this A/B testing on their public apps in the wild. I don't know if I'm getting the best interface at this moment or am being a guinea pig for some experiment.
My problem with that reasoning is that you're assuming that there is a best interface and that what it is is already known. A/B testing is usually done to determine which choice is the best, how will they know if they don't test? Also who decides what is best? Does best mean that it's incredibly easy to use for 90% of tasks but terribly hard for the other 10% for some reason, or that it's moderately easy to use for 100% of tasks?
> I think experiments should be done in controlled conditions with mock data and informed participants.
The problem with this is that the mock data won't get you applicable results in a lot of cases. Along with that, an informed participant affects the selection, further reducing how well the results reflect the real population of users. You've also likely agreed to this kind of testing in the TOS, so you might be considered the informed participant that you mention.
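For what it's worth, the A/B mechanics being argued about above are mundane. A minimal sketch of deterministic bucketing, the usual way users land in one variant or the other (hypothetical names, not any vendor's actual system):

    import hashlib

    def assign_variant(user_id, experiment, variants=("A", "B")):
        # Hash user+experiment so assignment is stable: the same user
        # always sees the same variant for a given experiment.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # Toy comparison: count exposures and task completions per variant.
    exposures = {"A": 0, "B": 0}
    completions = {"A": 0, "B": 0}

    def record(user_id, completed):
        v = assign_variant(user_id, "toolbar-layout")
        exposures[v] += 1
        if completed:
            completions[v] += 1

Whichever variant completes more tasks per exposure "wins" -- which is exactly the empirical question mock data in a lab can't answer.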
Businesses have experimented with "other people's data" forever. The clerk at the video store chats with customers, learns which ones are chatty, what recommendations work out, etc. This is considered to be good service. But a cloud software company does the same thing with an algorithm and a CPU and suddenly it's an outrageous violation of privacy. Suddenly we need informed consent protocols to change the signage on the store front or the font on the web page. Seems like an overreaction to me, assuming that all cloud companies are out to take zero-sum advantage of you.
Do you seriously not see a difference between the video store clerk recording information people volunteer and the collection of personal data such as browsing history?
Nobody cares about Amazon using their own sales records and server logs to generate recommendations. The problem comes when technology companies decide that means they get to use any data.
If you're fine with Google using personal data, are you also fine with FedEx opening up every package they deliver to you?
> take zero-sum advantage of you
Nobody said "all" or "zero sum", which doesn't apply here, but "take advantage of you" is pretty much a description of capitalism. On HN, this is usually called "monetizing".
Why is the boundary of a firm relevant? If Amazon buys a shoe store, does that make them more legitimate in their use of your shoe shopping data than before? This seems like an arbitrary choice that is biased in favor of big companies.
Incorporation isn't the boundary. Again, why are you conflating business data with the personal data of someone using a product that has nothing whatsoever to do with the business transaction?
If Amazon buys a shoe store, they get the sales records and any other related data. They do not get to know where you walk with their shoes.
If there is any confusion here, it is because of the recent trend in Services as a Software Substitute that makes the business's server necessary for normal use of their product. Some people seem to think this lets them open the packages they are conveying or storing.
Of course they do. Traffic analysis is always a problem. They obviously know the src/dst addresses and the package dimensions (including weight), as those features are necessary for the delivery.
They still don't know what's inside the package. I really don't see why this boundary is hard to understand. With snail-mail (FedEx, USPS, etc.) there are even laws that protect the boundary between the envelope and the private contents. Why would you think software would be different?
> I think it's a tenable but extreme position, because basically they are objecting to Google reserving the right to develop new features in an empirical/data-driven way.
It's depressing how perfectly normal and natural the idea of tracking users has become.
I know this isn't a popular opinion on Hacker News, but why should there be any tracking whatsoever when using an OS? Are people saying that they're perfectly fine with their Windows/Mac/Linux distribution tracking everything they do? The apps they use? The sites they visit? Because that's basically what ChromeOS does. It even tracks the documents you print to your desktop printer (routed through Google's cloud print service).
You aren't even anonymous when you use ChromeOS - you must be signed into your Google account. Just as a reminder, your Google account = your name, your date of birth, your location and optional phone number. In other words, some of your most personal and private information all tied to the activity you conduct on ChromeOS.
Google states that it does not read the content of your emails in GMail (emails are scanned by computer), but you could argue that your browsing habits are just as private and personal. No such assurances on how they handle this data. Does Google disassociate the identity of the user from their activity? If not, then who sees this data at Google? How long is this data kept for? These are important questions, but you won't find answers in Google's privacy policy. (And you're unlikely to find many in the tech community asking Google about them either.)
I would never recommend an OS that tracked the activity of students so relentlessly, especially when many students are too young to understand the horrendous privacy implications of using ChromeOS. Other companies like Microsoft are rightfully criticised for their tracking in Windows 10, yet Google gets a completely free ride. It's just baffling.
The point is to avoid the problem, rather than to stumble into a state from which it is difficult to restore the status quo ante.
Please do not mistake "anticipating risks" for "fear of uncertainty and the unknown." This is a risk for which the likelihood currently appears low, but the consequences could be grave and hard to reverse. At least to my mind, that warrants a bit of attention.
The point is that there isn't a nightmare problem supported by evidence, and that irrational fear causes a cognitive bias which leads to such slippery slope fallacies (e.g. Y2K scare).
You're more likely to be negatively impacted by slipping in your shower, yet I bet you have spent more energy hand-wringing over tracking, just as parents have given undue attention to vaccines causing autism.
Many of the "nightmare scenarios" are from the recent past of the West, or non-Western countries. Hoover's FBI, the Stasi, and so on; but also Erdogan's Turkey and current China.
I'd like to give this some thought actually. For now here's an observation.
Situations which produce large amounts of "externalities" seem to produce this strange scenario where we fight against potential nightmare scenarios. Externalities here being defined as effects of the new process that occur outside the expected system. Global warming from burning fossil fuels or secondhand smoke are prime examples, though easier to think about because the outcomes are tangible.
This brings up the question, "How do we handle this thing that affects people on a large scale?" It's useful to ask, "What's the worst outcome of this we can imagine?", which might be along the lines of 1984. Terrifying, because the government is normally the vehicle used to regulate large-scale externalities, and data collection could provide it with a large amount of power, something the government often desires. Perhaps unsurprisingly, the government started to abuse this power very quickly and in secret, although to a lesser extent than 1984. It's limited largely by technology (which is improving at a rapid rate). How we deal with this problem now may help us avoid this dystopian situation once the technology catches up.
On the other hand, we also have to look at what we get in exchange. Burning fossil fuels is so essential that maybe it's worth raising the temperature of the earth until we can develop something better. Or maybe it's just a large exercise in game theory. In the case of data collection, we get a few extremely convenient services (search, automation, etc.) in exchange for a lack of privacy in nearly all forms, and the systems that are abstracted above us (businesses, government, economy, etc.) are shifted toward optimizing for their respective interests using the information they can glean from the collected data. These systems become arguably more efficient and more powerful.
These conversations effectively shape how we handle the new thing and are overall good. Privacy (and the effects of privacy) isn't really an easily quantifiable thing, but it does appear to be something that people think is worth having. The end result is most likely to be somewhere in the middle of the road; we can burn fossil fuels, but there are certain standards on the emissions to limit the externalities.
So where exactly is the tradeoff "worth it"? It's different for everybody. For me, I think all my data that is seemingly private (I wouldn't let my friends read my emails/texts; anything that's a file in a password-protected account) should be encrypted and impossible for anyone outside of that conversation to access; currently this is not the case. Facebook is a bit of a grey area because people seem to use it as if it were private and the friends feature implies that it is, but the reality is that it is not. All of this falls apart completely when everything we previously assumed was private turns out clearly not to be, as with organizations such as the NSA, which provides a very questionable amount of value.
Comparing tracking with climate change is a new one as far as nightmare analogies go.
This comment is a good example of how fear, uncertainty, and doubt work. Your entire argument, premised on "externalities", could also apply to vaccines causing autism, or a GMO-driven dystopia.
To a degree. Vaccines and autism have some different qualities. One, there doesn't appear to be any link between vaccines and autism and two, if there was a link it's closer to the "do we kill 5 people or 1 person" ethical question with a game theory twist.
I don't know enough about what a GMO dystopia would be to comment.
I like the fuel / climate change analogy because we know the general outcome (increased temperatures) and a number of likely/possible effects, but it's hard to say exactly how much life will change for the majority of people and there is a very obvious short-term benefit to going along with it. There is current evidence that it is actually a real problem as well. Fossil fuels are the revenue model for many industries, which is also the case with data collection.
While it's a lesser problem, data tracking has similar qualities. We know we lose privacy, but how much, and what will the effects of losing that privacy be? There is incentive to use these 'free' services. The NSA, constant data breaches by hacking groups and government trying to strong-arm companies into weakening encryption show that it's a pretty big deal. Lavabit was effectively shut down in a very shady legal manner because it didn't want to provide data on all of its customers when just one was being targeted.
If privacy really doesn't matter, give me read-only access to your emails. Then tell me when you leave the house each morning and when you arrive home. Machine learning is pretty fun; we can often learn more about you from data you think is meaningless than by reading your emails. Some portion (perhaps most) of the information on you will eventually make its way outside of wherever it was initially stored. Oh, and it's stored forever, and you can't anticipate the future uses of or incentives around that data, because they haven't been invented yet. Let's have the discussion and decide on some kind of limitations and ethical practices, please. Relying on people to be reasonable got us the current NSA.
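To make the "meaningless data" point concrete, here's a toy sketch with invented timestamps standing in for, say, a phone dropping off the home Wi-Fi each morning (hypothetical event source, invented data):

    from datetime import datetime
    from statistics import median

    # Invented metadata: times the phone left the home network.
    events = ["2015-11-30 08:04", "2015-12-01 08:11", "2015-12-02 07:58",
              "2015-12-03 08:07", "2015-12-04 08:15"]

    minutes = [dt.hour * 60 + dt.minute
               for dt in (datetime.strptime(e, "%Y-%m-%d %H:%M") for e in events)]
    m = int(median(minutes))
    # Five "meaningless" timestamps already yield a daily routine.
    print(f"typically leaves home around {m // 60:02d}:{m % 60:02d}")

Five data points nobody would think to protect already tell you when the house is empty.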
Also, if you're so afraid of tracking, why don't you live remotely in the wilderness? That's exactly how pointless these types of non-sequitur, hyperbolic questions are.
They don't read all emails because it is easier for a program to look for certain phrases/words/etc... but in the end, they are still looking at them even if it is indirectly (unless you encrypt everything).
While I think it's great (but insufficient) that students and kids have greater privacy protection than adults, it begs the question why all of us shouldn't enjoy the same level of protection.
Just because one becomes an adult does not mean that an adult is less deserving of the same level of protection.
One difference is that an adult can choose not to use Google. They can also (usually) choose to install ad-block or ghostery or a similar plugin and thwart the tracking.
If using Google is mandatory for a class, then the students don't have any choice in the matter. And I think school computers are locked down to prevent installing browser extensions, so that option isn't available either.
Also, because it's a pet peeve of mine, it raises the question.
Google can also be mandatory for a company you work for, or a client you provide services to. Yeah, you could opt out by not working for that company/client, but that's no different from choosing to attend another school.
There's really no difference. It's all about network effects. We're seeing Google docs, Dropbox folders replacing what Microsoft Office was in the past. Vendor lock-in by network effects.
You may like it or not yourself, but let's not pretend it isn't what it is.
Plus there are scenarios where choice is not available, for example, Comcast as the only ISP in town etc.
I suppose one could choose not to use the Internet in that case but that's not really a viable solution.
It's an asymmetric fight in which individuals are on the losing end, and given what we now know post-Snowden, I'd argue that greater privacy protection is absolutely necessary and should be available to all those who desire it.
This argument is irrelevant, as whether adults can give informed consent or not is not important (at least) here. E.g., adults have the right to vote, at least in some countries, including the USA (whether they cast an informed vote or not is not even checked, just assumed), but children don't have this right. It is outright wrong and unethical on Google's part to steal a kid's information, especially after declaring they are not doing it.
Is the EFF's problem that this information is stored on Google servers?
Syncing settings to an account seems like one of the prime selling points for a school using chromebooks. A child loses their chromebook, or gets issued a different one the next September, and all their favorites, apps, etc. are there ready to go when they sign in.
Let's try this exercise. Replace "Google" with "Lenovo".
Lenovo sells computers to schools. The computers upload everything entered on them by minors to Lenovo, without consent. Everything uploaded is data-mined. You can switch it off in some obscure setting.
You would have people carrying pitchforks here instead of saying "really?". Google doesn't get a pass. This is shady. Should be off by default.
The entire premise of the Chromebook is that nothing is stored locally and you're almost exclusively using web services. That's how they're expected to work. You can't use one without knowing that your data isn't stored on that specific machine because in many cases you can't even access your data without an internet connection.
I've seen no evidence of data mining - I can't even think of a use for this data, much less a use that would be worth the risk considering their growing popularity and how well Office in schools worked out for Microsoft. I imagine just them using Gmail and Google apps is far more valuable than any fruitless advertising to people that can't spend money.
> The entire premise of the Chromebook is that nothing is stored locally
A piece of hardware doesn't have a premise. I installed Linux on my Chromebook. Is that wrong?
At what point did we stop decoupling software and hardware?
I'll pass on that notion.
> I've seen no evidence of data mining
You're kidding, right? I promise you, Google is keeping tabs on every byte passing their servers. So is everyone else. Hell, so am I with my servers.
> I imagine just them using Gmail and Google apps is far more valuable than any fruitless advertising to people that can't spend money.
What? Google makes money through advertising. They can't advertise if they don't know what you like. They only get paid when you click. So yeah, they data mine GMail and all the other Google Apps to know what ads to show you. And they show them on all of their properties.
That's why there's a switch there to stop sharing with other services, etc. and that's why they're now promising to have that off by default.
From the article: "Google does not use student data for targeted advertising"
So... what's the problem?
The data is uploaded for a clear and legitimate need, the ability for schools to loan out chromebooks on demand. The data is not used for advertising.
Near as I can tell the complaint is this nebulous "it's being data mined" with no elaboration or evidence.
"EFF’s filing with the FTC also reveals that the administrative settings Google provides to schools allow student personal information to be shared with third-party websites in violation of the Student Privacy Pledge."
"Google told EFF that it will soon disable a setting on school Chromebooks that allows Chrome Sync data, such as browsing history, to be shared with other Google services."
The data still exists in aggregate form. Even if Google went against their business model of using as much data as possible for targeted advertising, as recent data breaches have demonstrated, it is foolish to think the data will stay with Google forever.
Data that is sitting around unused is tempting to use the next time they hit a financial hardship, and it's a very tempting target for various types of thieves and governments with national security letters.
The proper way to handle this problem would be for Google (or whomever) to be liable for mishandling the data they hold. If Google wants to have this data, they would then have a strong incentive for keeping it secret. Even better, it would encourage Google to only keep data for as long as it was needed ("leftover" data becomes a potential liability risk with no benefit).
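A minimal sketch of what liability-driven retention could look like in code, with hypothetical record types (nobody's actual pipeline):

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Record:
        payload: bytes
        purpose: str        # why the data is held at all
        expires: datetime   # past this point it is pure liability

    def purge(store, now=None):
        # Drop every record whose retention period has lapsed; under a
        # liability regime, "leftover" data is risk with no benefit.
        now = now or datetime.utcnow()
        return [r for r in store if r.expires > now]

    store = [Record(b"...", "chrome sync", datetime.utcnow() + timedelta(days=90))]
    store = purge(store)

The point isn't the mechanics, which are trivial; it's that once retention costs something, every record needs a stated purpose and an expiry.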
Yes, because it has to, to serve the sync purpose. And the EFF is not challenging the sync purpose. The sync purpose is the entire reason schools are using chromebooks in the first place.
So no, the problem is not that the data exists on Google's servers.
> it's a very tempting target for various types of thieves and governments with national security letters.
OK now you must be trolling. Government NSL requests for 2nd graders chromebook data? Seriously?
> Yes, because it has to, to serve the sync purpose.
If you bothered to read my post, I never said that couldn't be done. They simply need to be liable (both civil and criminal) for any problems that arise.
The idea that Google gets to collect all the data on people they want without any responsibility for how they handle that data is insane.
> OK now you must be trolling.
The fact that you think my post could be trolling suggests you aren't taking the risks from data aggregation anywhere near as seriously as you should be.
I'm not - those are examples, not an exhaustive list. That said, data stays around forever, while 2nd graders eventually grow up. If you want a more likely example, try insurance companies.
> They simply need to be liable (both civil and criminal) for any problems that arise.
OK, but why bother saying that when they already are? Like, yes, Google is liable for data Google holds. Water is also wet.
> The idea that Google gets to collect all the data on people they want without any responsibility for how they handle that data is insane.
That idea appears to be something you made up, though?
> I'm not - those are examples, not an exhaustive list. That said, data stays around forever, while 2nd graders eventually grow up. If you want a more likely example, try insurance companies.
Since you're sticking by this, please name a single thing a 2nd grader could do that literally anybody but their parents would give a shit about a week later, much less 10 years later?
> Since you're sticking by this, please name a single thing a 2nd grader could do that literally anybody but their parents would give a shit about a week later, much less 10 years later?
Try government NSL requests for your high school data. Things you searched for, or wrote, or did otherwise. Don't think they wouldn't reach for it if they know it's there and it can be used against you in any kind of way. Maybe that data is gone when you grow up; or maybe data storage is cheap and they just collect everything.
> context here is primarily about under-13 year olds
My mistake, I didn't see that. It's certainly more of a stretch that they would go after pre-high-school data; but it might not matter, if in fulfillment of the NSL Google has to provide said archived data anyway.
> And I'm still gonna go with nobody is issuing NSL requests for your high school drama.
That's not how counter intelligence works. You gather everything you can about a subject because it tells you more of who they are and what makes them tick, or break.
My problem is you're trying to exercise "reason" here. It's unreasonable to suspect that they would issue an NSL for school data; I agree. But that doesn't mean they can't, or won't, or that such data wouldn't be exposed or used in another way. And that's the problem. It's not a non-issue because they're kids. It's less interesting because they're kids, but not easily ignored because of it.
It's rather obvious why people are worried. It would be like having a lion in a cage with a deer. Sure, maybe the lion isn't interested in eating because it's full, or maybe it never intends to eat the deer because it has gotten used to other kinds of meat. Or maybe it gave its word that it won't. All that is possible. But instead of having to spend every second of your life watching whether the lion eats the deer, we should structure the system so that it's impossible for the lion to ever be next to the deer.
Google is an advertising company, and slowly they have been adding spyware-like capabilities to their products (address-bar keystroke logging in Chrome, injecting every single search result with JavaScript, "accidentally" listening to users' microphones for hotwords without their express permission, etc.)
It is perfectly natural to be very wary of such companies. I suspect people would be fine with Google providing the chromebooks as long as the contract legally binds them to never collect the data.
>Unsubstantiated claims are also called conspiracy theories.
No, they are not. Conspiracy theories have a connotation of being ridiculous and outlandish. That is how the general public understands the term. A company like Google whose bread and butter is datamining, being accused of mining additional data is not a conspiracy theory.
You overlooked that small detail where for services where the cloud does not require plaintext, Apple encrypts everything with a key that never leaves the client.
Now you overlooked that small detail where it's Google that is exploiting the data, not me. And doing so on behalf of itself, its partners, its future partners and assignees in perpetuity, and your ex-wife's lawyer.
Chrome's sync data is supposed to be encrypted at least with your Google account password. They could be doing all that (mining, tracking) but there is a clear, legitimate purpose to this feature: providing the same environment across computers.
Syncing data between chrome environments is different than mining usage data across google's properties. It sounds like that's the beef here. Children are required to use the Chrome systems, and parents have no say in the matter, or control over how Google uses the data.
By default only passwords are encrypted afaik. There's a setting you can choose that will encrypt everything, but then you need to provide a passphrase.
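For the curious, the shape of passphrase-based sync encryption is roughly the following; a sketch using the Python cryptography library, not Chrome's actual implementation, whose key-derivation parameters differ:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_for_sync(passphrase, plaintext):
        # Key is derived client-side from the passphrase; the server
        # only ever stores salt + nonce + ciphertext, never the key.
        salt = os.urandom(16)
        key = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=200_000).derive(passphrase.encode())
        nonce = os.urandom(12)
        return salt + nonce + AESGCM(key).encrypt(nonce, plaintext, None)

This is why the passphrase option matters: with it, the provider holds only opaque blobs; without it, the provider holds the keys too.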
People seem to be conflating "tracking user behavior" with "tracking user behavior without consent or transparency". Yes, there are plenty of people opposed to the former, but that is a moot point with respect to Google.
The latter, however, is symptomatic of a much bigger issue: there is a pervasive belief amongst many of the silicon elite that users simply aren't capable of making effective decisions regarding tracking, and therefore it is best if they are not allowed to make those decisions. I have heard this directly from many people, and each time it leaves me surprised.
If Google were to provide real transparency into the information they track about me, that would be fantastic. I likely wouldn't even look at it, but I would know that organizations like the EFF and ACLU would serve as ombudspeople for the public. Furthermore, whichever of the big internet players does this first will likely generate a tremendous amount of brand loyalty and free marketing.
I suspect current behavior won't change without legal intervention, which will potentially be adversarial, which is a shame since there are people at these companies who are much more qualified than lawmakers to anticipate and plan for the future.
It's amazing that a company (and industry) that self-identifies so strongly with taking novel approaches to solving hard problems can get mired in such status quo bias.
> you can see all the data google collects about you. You can export it or delete it.
A) In no way do I believe that is all the data Google tracks about me. It's just what they choose to present to you.
B) How do I have any guarantee that it's actually deleted?
(Tedious disclaimer: not speaking for anybody else, my opinion only, etc. I'm an SRE at Google. I can't respond to most of the things in this thread, so don't bother asking.)
"If Google were to provide real transparency into the information they track about me, that would be fantastic."
"In no way do I believe that is all the data Google tracks about me."
The second comment here is why there's no point in doing the first one. It doesn't matter how much information we release about this, when people are determined not to believe it.
It is a frustrating experience for the people who work on them to pour time and effort into making sure a privacy policy is really precisely accurate about what is happening, and then see threads like this where people will go looking for loopholes that aren't there, and because English is fairly ambiguous they'll eventually find a way to misinterpret the words to support what they wanted to believe all along.
Here's an interesting hypothetical question: if the primary effect of the conspiracy theorists is to throw bricks at the people who are trying really hard to make sure the bad things never happen, then which side are the conspiracy theorists really on? (I make no claims about whether this is what's happening, because I can offer no evidence, I just think it's an interesting question)
Honestly, I used to trust Google a lot. Then things like the NSA leaks happen, and other questionable Google changes, and I'm sorry, but I lost that trust. I use as little closed-source stuff as possible now.
I'm sure it is frustrating, but you'd have to thank the people in charge for that distrust. I wasn't born with it, the cloud-providers earned it. I used to think this was tin-hat territory myself.
IMO, the solution is to stop storing things in the cloud, and start giving me the tools to do it myself. I have no idea how to work that out financially. If Google made a "maps app" that was open-source and I could run on my own server, but charged for it, I would probably buy that because I like the service so much. But as it stands, I try to avoid using it as much as possible, because I simply can't trust Google.
EDIT: Forgot to answer this
> if the primary effect of the conspiracy theorists is to throw bricks at the people who are trying really hard to make sure the bad things never happen, then which side are the conspiracy theorists really on?
At the end of the day, Google just doesn't do enough to make me think that they are "making sure the bad things never happen". I'm sure there are a lot of people who are trying hard to make that true, but how can I know that is true for everyone? What can you offer me other than root access to your servers? I'd honestly like a good answer to this, because I just don't have one.
And the amount of data/power that Google has is just too much for me to think that they won't "get greedy" some day. If up until today, Google was a good company and wasn't abusing anything in any way, and suddenly tomorrow they turned evil, why would they erase my data first? They still have it, and I have no control over whether or not Tomorrow-Google will keep it.
> ...then which side are the conspiracy theorists really on?
The side of trying to validate whether or not things are as they say on face-value. Whether thats Google's side or not is up to them.
"Press reports that suggest that Google is providing open-ended access to our users’ data are false, period."
"Any suggestion that Google is disclosing information about our users’ Internet activity on such a scale is completely false."
And you have chosen not to believe a statement made in the strongest possible terms. If you won't believe this, then I do not think there are any words that you would believe, so there's no point in trying to get more published.
> What can you offer me other than root access to your servers?
Indeed, and we can't do that because it would invalidate the very security that you want.
> If up until today, Google was a good company and wasn't abusing anything in any way, and suddenly tomorrow they turned evil, why would they erase my data first? They still have it, and I have no control over whether or not Tomorrow-Google will keep it.
The easy solution here is to not keep any of that data in de-anonymised form for longer than is necessary, but then people don't want to believe the privacy policies which say this is happening...
I'm not saying Google deliberately handed data over to the NSA. I'm saying the NSA (and others) are trying to get to it one way or another. It's not necessarily that Google is untrustworthy, but that if the data does get out, however that happens, it will be quite devastating.
> And you have chosen not to believe a statement made in the strongest possible terms.
A) It's not in the "strongest possible terms". Quite a bit of weasel-wording there IMO.
B) How does a statement's strength validate it? I can strongly say any lie I want.
> Indeed, and we can't do that because it would invalidate the very security that you want.
I agree. And I would like a solution to this. There's just not a good way for me to trust cloud providers.
> The easy solution here is to not keep any of that data in de-anonymised form for longer than is necessary, but then people don't want to believe the privacy policies which say this is happening...
I don't see how that's at all relevant.
A) Just because one asshole on the internet doesn't believe the policy doesn't mean Google should just give up on it.
B) That doesn't address my point. Tomorrow-Google will still get access to my data, unknown to me. I can't verify the "trustworthiness" of a company every day.
There's history.google.com where you can see pretty much everything that you've ever done with a Google product. You can even delete everything and disable tracking if you need to.
"… Google’s “Sync” feature for the Chrome browser is enabled by default…"
"… since some schools require students to use Chromebooks, many parents are unable to prevent Google’s data collection."
Doesn't "enabled by default" mean that parents should be able to disable the sync feature?
That being said, I would assume that any tracking features would be separated from syncing features on a machine built for a student. Google appears to be attempting to correct that, after EFF's prompt: "Google told EFF that it will soon disable a setting on school Chromebooks that allows Chrome Sync data, such as browsing history, to be shared with other Google services."
> EFF’s filing with the FTC also reveals that the administrative settings Google provides to schools allow student personal information to be shared with third-party websites in violation of the Student Privacy Pledge.
What? Is the EFF complaining here that Google gives schools the ability to share their students' data with third parties and that is wrong (by Google)?
It is not actually true. The school district acts as the agent for the child and it is their responsibility to spell out in contracts what the vendor can do with the data (in most cases). They are bound by various state and federal laws on what they can allow as well as what they must require of vendors, but in general the parents don't have to provide consent on a case-by-case basis. Of course, parents can talk to the district about it and potentially have the child not use the service, but this is not something vendors would ask of each parent.
Note that "The surveys were supposed to be anonymous, and when concerns were raised about pupils being identified, assurances were made they would be destroyed." turned into the survey results being handed to the thought police.
(There are various programs trying to combat "islamic extremism" in schools in the UK, all of which seem to be proceeding in clunky bureaucratic ham-fisted ways.)
Honest question here, why detach it from the post it was responding to? It's makes it a totally out of context post unless you've read the post it was originally linked with. I mean, if you want to flag the response as off-topic, fine, but why detach it as well?
We do that because off-topic tangents lower the quality of the subthread they're embedded in. Providing a link to the original parent gives everyone access to the original context if they want it.
> Begging the question. I do not think it means what you think it means.
The transitive use "begs the question X" (where X is some actual question) is commonly used that way and well-understood (and this usage is, while idiomatic in structure, fairly obvious from the usual definitions of the words involved.)
This is readily distinguished from the intransitive use "begging the question" or "begs the question" to refer to the petitio principii fallacy (an idiom based in what was, at the time, a poor translation into English that hasn't gotten better with age.)
Even better, the intransitive use can be seen as (though this reverses the etymology) a special case of the transitive use in which the question begged is the original one the argument sought to address -- the justification for the conclusion which has been assumed in the petitio principii fallacy -- so that the transitive use that you complain about both generalizes the special intransitive case and rationalizes it with common usage.
We already have a perfectly fine and valid phrase: "raises the question X".
I guess there is a difference here between American and British English, not so much in what is formally accepted, but in common usage. American English seems to quickly adopt grammatically/syntactically/semantically incorrect phrases that become popular; "touch base" or "call out" or even "${X}gate" comes to mind.
> English seems to quickly adopt grammatically/syntactically/semantically incorrect phrases that become popular
If either of the uses of "begs the question" is more grammatically/syntactically/semantically incorrect, it's the original intransitive use, which is an abomination of failed translation into English, through Latin, of a straightforward Greek description, one which entirely fails to capture the meaning of the original and was an abuse of the English language at the time it was created. Its only claim to any kind of correctness is its history of use, through which a clear understanding has developed despite its problems.
Of course, the newer, transitive use also has a solid (though shorter) history of use, is clearly understood, and, moreover, is structurally distinct from the intransitive use, so it doesn't create any ambiguity or confusion with that use.
If you think British English is as pure as you claim, maybe you haven't spent enough time outside of southern England, or have listened to too much BBC. Just go hang out with some chavs for twenty minutes, and I'm sure your world will be turned upside down.
England is full of bad English speakers (and, no, that wasn't some jab at immigrants). In particular if you aren't middle class or above.
To clarify: I'm not actually British, but I have lived in England, and I'm perfectly aware that many people in England speak bad English. I was referring to written communications being at least semi-formal, like a professional blog post or a newspaper article, and I merely suggested that the appearance rate of neologisms is greater in AE than BE.
"Begs the question" is now commonly used in that way. This comment does not really contribute to the discussion. It feels like a "Well-actually" from Recurse Center's social rules.
I agree it's off-topic, but I'm not sure I agree it's a "Well-actually". Language is imprecise enough as it is, we don't need to make it even more so by letting common mistakes become accepted practice.
Bad analogy: imagine if the Python docs were to recommend handling try-blocks with "except: pass" simply because many people do it.
Off-topic coming up. I feel the one thing that will really get Google in trouble is this hypothetical scenario, right out of a Bond movie:
Google has the ability to look at all our internet history. They have the ability to read our emails. They can match up IP addresses with street addresses. They can most likely figure out what most people do for a living. (Yes, some of this spying is illegal, unless it's for advertising purposes?)
Could you imagine looking at the Internet history, and emails of the Titans of business around the world? Looking at the information that stock/investment types pass around.
Looking at all this information, collating it, data mining it, etc., and then buying and selling stocks/bonds/etc.?
Yea, I know it's illegal. It just seems like it would be tempting? I know the SEC is probably, or I hope they are, watching out for this kind of hypothetical behavior, and no, I don't think the founders of Google would ever even think about doing something like this. If I worked there and had access to sensitive files on those servers, it just seems like it would be hard not to look at that information and make a few bets? Yea, I know they have great internal security and strict policies, but there's always a guy who would be willing to break the rules? I don't think I could resist looking at that information and trying to predict the future? (In reality I would never do anything like I proposed, but it sure would be tempting?)
People are attacked physically over tiny sums of money. Occasionally someone is murdered over small and moderate sums of money.
If people can be tempted by money to commit terrible crimes of violence, we should expect it would be easier to tempt someone with either larger amounts of money or less risk of getting caught.
Mining data - from any source - is a lot less risky than murder, with a very low probability of being caught. Sometimes, selling data can be incredibly profitable.
What is the probability that all of the current and future people at Google will overcome that temptation? Even if people currently at Google somehow resist the temptation, the open-ended nature of the data means that someone will eventually break.
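The compounding behind "someone will eventually break" is easy to check with invented numbers (purely illustrative, nothing measured):

    # Hypothetical: each person-year of access carries a tiny chance p
    # of abuse; the odds that *nobody ever* abuses shrink exponentially.
    p = 0.0001            # invented per-person-year probability
    n = 10_000 * 20       # invented: 10k people with access, 20 years
    print((1 - p) ** n)   # ~2e-9: near-certainty that someone breaks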
There is a degree of fear, uncertainty, and doubt whenever a new technology appears, but there are consumer watchdogs and regulatory bodies that keep it in check - whether that's the EFF/FTC with tracking or the EWG/EPA with GMOs.
The majority of the populace is okay with these technologies, because honestly, you're probably more likely to be injured in a car accident than be negatively impacted by tracking.