"Opting out doesn’t work. It’s nonsense to tell people not to carry a credit card or not to have an email address. And “buyer beware” is putting too much onus on the individual. People don’t test their food for pathogens or their airlines for safety."
This is a brilliant line, and it works so well because he is absolutely right: it should not be the consumer's burden to make sure their products are safe, be it from listeria or a spy device disguised as a children's toy.
I don't know what needs to happen for people to get outraged, but privacy needs to be a mainstream political issue. Perhaps we need to see what Google and Facebook actually have on us...
The comparison that Bruce Schneier makes is not correct.
Governments are likely to control food pathogens and airline risks because they need living, productive, happy voters.
But governments also have a fundamental distrust of all individuals and therefore want as much surveillance as possible. By allowing big companies to collect that data, they gain access to it as well.
Therefore I think governments are not motivated to improve privacy at all. On the contrary.
Governments care about food pathogens because ultimately they are held accountable by the people. They are not a malignant force, parasitic on society. Governments solve collective action problems by getting everybody to play by one set of rules and punishing people who don't.
I know what you say is popular in certain political circles, but it's a false idea about government. Worse, it's a cynical meme that corrupts people into making government less effective, both by discouraging them from holding government to account and by discouraging them from trying to change the rules that governments enforce.
The best way to make your idea of government true is to stand idly by and do nothing.
That was actually a very nice read! I just finished the book and certainly remember the meat factory scenes the most, but I couldn't help but feel Upton would have been disappointed with how the message he was ultimately trying to convey came across.
Don't let the perfect be the enemy of the good. I know he was disappointed with the outcome, but it was still an improvement during a fairly capital-friendly administration.
(Though Teddy Roosevelt certainly pissed off monied interests when he felt they were acting immorally)
People downvoting you are probably taking your statement as normative, whereas it's purely positive. It's not about what the govt should do; it's about what it does.
For example, the police, the IRS, DHS, etc. (and hence the state) WILL be helped in their jobs by increasing surveillance, and so will always be trying to increase its scope.
Whether it's _worth_ the loss of privacy is of course a valid debate.
It is not a positive statement. It sounds like one, but it's actually a judgement about how effective government is. I'm quite certain that it is false.
It would certainly be interesting to hear some evidence or justification for this position, given that the record of governments with respect to privacy over the Internet era (at least) has been uniformly dismal and getting worse.
It certainly seems like the positive statement "government is not motivated to protect individual privacy and will not effectively do so" is well-supported by current experience.
That's an explanation of how this is happening, not an argument that it should happen. What governments do is not some mysterious black-box that we can't influence.
I think we've all seen the story over and over: populist politician gets in power and forgets everything. They start to sacrifice principles in return for staying in power in order to ostensibly do the right thing later. But later never comes.
So, maybe theoretically we can influence it, but understanding what a self-interested, pragmatic government bureaucrat values is incredibly important.
I think the parent comment and idea behind it needs to be explored further. Maybe there is a solution, but it will perhaps require an entirely new type of government. Democracies, as we know them, don't seem capable of solving this problem.
Politicians aren't the problem. They don't exist in a vacuum. They're a symptom.
Democracy is only the problem insofar as we (here in the US) do not have one; we just pretend we do. Democracy and Republic both mean a government by the public. Voting once a year, donating money, and writing stuff on the net are utterly inadequate for that end.
Public government needs an educated and involved citizenry. It means putting in quite a bit of time.
> populist politician gets in power and forgets everything. They start to sacrifice principles in return for staying in power in order to ostensibly do the right thing later. But later never comes.
Remove the word "populist" and you've just described most politicians ;-)
1) This mechanism is already present in the current electoral system and is very evident in the election of Trump.
2) This is the reason to only replace 25% of the parliament per year, so that not all experience is lost. It gives more power to non-elected public officials, which can be good or bad.
3) By removing reelection you also remove career politicians and reduce corruption. No need for campaign contributions or campaign promises. Also, the drafted people would go back to their social circles to be judged by their peers after their term of office ends, where they would have to own their decisions.
1) Not nearly to the extent it would be. Unless you'd like to take the viewpoint that the upper class don't buy better educations than the rest of the country.
2) You still have a maximum 4 year experience level. As someone who grew up in a college town, I'd estimate having to explain the basics to 1/4 of the elected body every year would require significant resources.
Additionally, handled incorrectly, you imperil the stability of the legislature. Imagine if every year were Trump election year: Mr. John and Ms. Jane Doe go to Washington intent on toppling the system as it stands because they're unhappy with the way things are.
3) Fair, but the social censure of being judged is a few orders of magnitude smaller than the power these people would wield while in office. If you think politicians are corrupt now, imagine if Exxon offered $x million in bribes to an average person off the street to approve oil drilling in wilderness refuges.
There is a lot wrong with the wealthy monopolizing political power, but at least it semi-insulates them against outright financial persuasion.
If jury duty is random why not political duty? I think your idea (is it your idea?) deserves a lot of attention. Maybe 2 years of service. 4 is a lot. That combined with Swiss-style direct democracy would go a long way I feel. And universal basic income.
Jurors are asked to do a couple fairly common tasks that most people do frequently throughout their lives: (1) listen to different sides of a story and decide which side to believe, and (2) decide the motives for a person's acts.
A legislator is asked to judge complex policy questions often requiring extensive specialized knowledge. The arguments for and against are often long and complex. It's not a job where you can take some random adult off the street and expect them to do it competently.
I don't think it's a problem of defining ever more detailed or different government rule sets, but a problem of the feedback between the people governed and the people governing. The institutions of law provide some help - mostly by beneficially slowing down how fast things can change - but the maintenance of a stable government is a balance of what politicians and leaders do and what people allow. A much more dynamic and fragile relationship than most of us care to contemplate.
There are more than just "politician corrupted by power" stories; there are also many "politician does a good thing and fades into the background" or "politician leads well for a long time" stories too. These are less exciting and usually receive far less press, because they are remarkable in their unremarkable outcomes.
Honestly, I'd prefer to have a government repeatedly fail to implement its own agenda than one which whips violently to and fro like a tree in a storm. When we let the mob into the halls of power we get a lot of bloodshed and little good. The intransigence of the bureaucracy is a feature, not a bug.
Furthermore, since corporations are not subject to FOIA, governments routinely outsource to avoid transparency, accountability, oversight, responsibility, etc.
I don't think it quite works to try to talk about governments as if they are individual, coherent entities.
There are government agencies whose business is to collect information. The people working for these agencies will do so _to the extent that they are allowed to_, because that is what they perceive their mission to be. Other people's job is to set limits, and they will set exactly the limits they think they have a mandate to set. They get their sense of that mandate from many possible sources, but in a republic one of the most important sources is public opinion.
Which is where you get to the real kernel of the issue: In the USA, surveys indicate that most Americans remain in favor of large scale state surveillance.
There's a curious abstract thinking process of "government can do some good in the world" that spawns the bureaucratic leviathan capable of all kinds of mischief.
I read some of the ancestor comments and wonder why they always seem willfully blind to it. Is there a still, small voice in them that says, "A little waste is to be expected" or "WE are in control of it, I am certain"? Utopian dreamers and busybodies, ugh.
Governments are made up of people, and what governments want, are afraid of, or are motivated by is the sum of what those people want, are afraid of, and are motivated by.
I don't think that's true, even under your model of a malicious government. Officials don't want their private conversations monitored, so they ensured that it would be a crime to secretly record conversations.
Likewise, once these tools are used to compromise their privacy, they will demand better safeguards.
In a less cynical world, what the government actually needs is the mandate of the people to rule. Therefore protecting people's personal interests could be in its interest, if the people demanded it.
This is the crux of the whole privacy issue for me.
I understand there is way too much money in advertising, and therefore data collection, to ever go away. What I want in an ideal world is transparency. If I opt-in to using web services that collect + sell my browsing data, I want to know what data is collected, how much, and who it is sold to and for what. Then I can make an informed decision about the services I'm utilizing and if I think they are handling data responsibly.
As far as I'm concerned, targeted ads are fine. If you're going to berate me with visual pollution for crap I don't need, at least let it be somewhat relevant to my life. What I don't want is "targeted information", "curation", or any other buzz word that describes showing me filtered content based on an algorithm. Of course that is already happening, but it could be much, much worse.
Ideally instead of a wall of text preceding the use of a service, or even worse hidden away somewhere under a fine-print footer link, I'd like to see services have to do something like this:
"You've just entered some search text. May we store this text and future search queries?
[ ] yes, anonymously to help suggest searches for other users
[ ] yes, along with your user name to help refine future searches, and to improve the targeting of our advertising
[ ] oh god no, forget I asked"
When clicking a search result:
"Before we redirect you to this page, may we record the fact that you clicked on it?
(similar choices)"
And as they build up history:
"From information you've previously allowed us to save and information we've bought from authorized third parties, we've determined that you are a 35-year old green-skinned right-wing vegetarian with the following sexual fetishes: <redacted>. May we share this information with our advertisers?"
I would hope things would change if permissions were this explicit.
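If it helps make the idea concrete, here's a rough sketch (purely illustrative, not any real service's API) of how a service might record these per-action consent choices; every name and field here is made up.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Consent(Enum):
    """The three choices from the hypothetical search prompt above."""
    ANONYMOUS = "store anonymously, to help suggest searches for other users"
    IDENTIFIED = "store with user name, for refinement and ad targeting"
    DENIED = "do not store"


@dataclass
class ConsentRecord:
    """One explicit answer to one prompt, kept alongside the data itself."""
    user_token: str   # account name, or a throwaway token for ANONYMOUS
    action: str       # e.g. "search_query", "result_click"
    choice: Consent
    asked_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def may_store(record: ConsentRecord) -> bool:
    """The service checks the recorded answer before persisting anything."""
    return record.choice is not Consent.DENIED


# Example: the user allowed anonymous storage of search text only.
grant = ConsentRecord("anon-4821", "search_query", Consent.ANONYMOUS)
assert may_store(grant)
```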
If you got asked 5 questions for every page you visited, what would end up happening is people would start clicking yes without reading. You would get so overwhelmed by requests that you would start paying less and less attention to them.
This would quickly become the incessant prompts about cookie usage that the UK enforces. Like most alerts users quickly become blind to them and click through to get to what they are trying to accomplish.
This is on the right track. I'd like to see it go farther though. Each of these pieces of information has value of X to an advertiser. Flip the relationship management, so vendors treat their users as content producers, and pay some fraction of X to access the user's information. This would need better trust technologies, but is the right path. It's the path groups like the Project VRM folks at the Berkman Center for Internet and Society are working toward, and ties in to Cory Doctorow's concept of Whuffie. Also ties into my concept of Communitivity enhancing software technologies - technologies that help people do community.
It's easy to think "more logging/transparency never hurts" until you actually try it out and see how much data you get that you don't really care about. The EU cookie law is nothing but an annoyance already - making it even more laborious and universal is not going to help anyone. It'd basically be EULAs everywhere.
Perhaps if companies that collect and share data were required to maintain a list of their transactions, press release style? Statements could be generalized such that it is understood how all users were affected collectively by a particular decision. It might be a workable middle ground.
> If I opt-in to using web services that collect + sell my browsing data, I want to know what data is collected, how much, and who it is sold to and for what.
Hm...
1. I install an Android app.
2. Android app requests access to data sources A-Z.
3. I can safely assume that the company which owns this app is collecting all data their app running on my device sends it through data sources A-Z.
4. I can safely assume that the company is leveraging all that data in whatever way suits them, including selling it to third parties.
5. As far as who it is sold to and for what purpose, I can safely assume I won't be getting access to that information, ever. There was a very simple question asked on this forum a while back merely requesting the names of the companies that traffic in people's personal information. You had to scroll way down the list to even see someone willing to name some of the obvious data brokers.
It is very hard to turn privacy into a mainstream issue when billions of people actively share everything they see and do online with everyone, all for a dopamine "like" fix. Of course, they want it both ways: share only what they want, and everything else should be private.
Humorous anecdote: A few years ago, an old colleague of mine got fired and posted all kinds of negative things about the company all over Facebook. The company got word of some of this (slander? libel?) and contacted her via an attorney demanding she stop or be sued. This person then made a post on Facebook about how "... people need to mind their own business!". I think the irony was completely lost on them.
There was a great piece on NPR this weekend: They developed StingRay and other cell-tracking devices to fight terror, then Obama expanded the program, and now Trump is "doubling down" on using surveillance tech to find and deport illegals -- and the government is paying police depts to have the tech...
Snowden was right, and now it's way worse.
What is even more abhorrent than this is that tech companies (yes, even ones that YC is funding) collude with the government to expand it even deeper.
> Perhaps we need to see what Google and Facebook actually have on us...
A great approach to spreading awareness is doing exactly this. There are groups like the one linked below who have been staging immersive theatre pieces which simulate a scenario where a devious actor has gained access to your data. And they do this with the audience's actual social media data (voluntarily given and erased after the show) so that the implications really hit home to them.
The elephant in the room is that most of the information people get today is filtered through the same entities that are benefiting from all of these privacy invasions. The conflict of interest is real and intense.
That doesn't even touch on the other prong of the problem, which is that the ones responsible for legislating privacy rights also benefit greatly from the lack of it.
> privacy needs to be a mainstream political issue
No way. The political process will not produce a workable, sustainable, reasonable solution.
The only way that privacy becomes important to existing large companies is if their users demand it -- and leave for competitors if they don't get it. I'm afraid that average people don't understand why privacy is important, and it will take a change in cultural mindset to improve the situation (read: a generation).
Another (and better) way for users to regain privacy is a technological shift in which new companies with a better business model and technology replace the current batch of incumbents. This is the more likely outcome, as billion-dollar profits tend to make companies conservative and ripe for disruption.
The political process should represent the will of the people and find solutions that reflect it. The free market is not necessarily the best solution, because it is arguable that we don't really live in a free market; consumers aren't all that educated or rational. The problem is that companies will find ways to subvert their users into thinking privacy is not important. They already have a large captive audience and they can condition them to think as they please.
I do not share your faith in democracy :-) At best, the government represents the will of a voting majority and imposes it upon the rest of the population. So what you really have is minority rule.
Look at public funding for science, NASA budget, etc to see the "will of the people" in action on technical topics.
> consumers aren't all that educated or rational
Be that as it may, I would rather make decisions for myself than have them made on my behalf, "for my own good," by someone who doesn't know me and whom I will never meet.
An idealist to the end, eh? Without money to throw around, those forms of "participation" are - especially at the federal level - about as useful as earning participation trophies.
Less, even; since the latter at least get you free paperweights.
--
Besides which, arguing for them means you've effectively bought into class based discrimination, as many working poor wouldn't be able to afford anything but an email or letter. And if you think those form a reliable bedrock of democracy I'll stop humoring you as having anything useful to contribute to this discussion.
"...you've effectively bought into class based discrimination... And if you think..."
PS- That's actually pretty offensive. My friends worked on DREAM Act, expanding our local voting rights for Latinos (and everyone else), protections for sex workers, better conditions for migrant farm workers. I support as able (donations, petitions, warm body for hearings, etc). I see the effort involved AND their results. And, frankly, they're the only groups that are getting the job done these days. So I can't speak to what you think is and isn't possible. But maybe take a look around you, find your local heroes, lend a hand.
I encourage you to read the book The Waxman Report, to better understand how policy and legislation are forged. It's a lightweight, layperson intro. TLDR: It takes decades to research, build support, reach consensus. And then big changes apparently happen all at once.
People I know have been working on paid parental leave and domestic care legislation for EIGHTEEN YEARS. Last month, they were happy to report that their legislation was passed and signed.
Prior examples are marijuana legalization, marriage equality, background checks for gun purchases, education and healthcare for immigrant children. Etc.
--
What is privacy? What does it look like?
No one knows. There is no agreement. There is not even consensus on the questions, much less the answers. So imagining a technological solution (implementation) isn't even remotely possible.
This privacy policy stuff is wicked complicated. I've been working on voter privacy and protecting healthcare records since 2004. I'm still learning new things, causing me to revisit my positions.
And only now, some 12 years later, I've finally figured out what to ask for, a proposal on how to fix this mess.
Being the operative word. If you wish to pretend America's political process can approximate a hypothetical ideal on paper you're lying to yourself.
Until campaign finance regulations are back in place (the FECA amendments in the 70s would be a good starting point), until parties are wrested away from being the nigh-privatized dichotomy that led to the latest election's outcome, until the revolving door between government and the private sector is stopped, none of the rifts between that hypothetical and reality will be mended, because they are all symptoms of those key issues.
> Perhaps we need to see what Google and Facebook actually have on us...
Maybe we should hit them where it hurts by actively promoting and cajoling friends and family to use adblockers. Collect all the information you want, Big Internet, but good luck monetizing it.
It's impractical to opt out, yes. But one can compartmentalize. So I have a rich online life using my real name. But it's not in English. Everything that Mirimir does online is interlinked. But it's not at all linked to my real name. Or to my other online personas.
For sure, I'm a zealot about this. I use nested VPN chains and Tor, and multiple VMs on multiple hosts. But anyone can use a VPN service that respects privacy. Such as AirVPN, IVPN or PIA. Not HideMyAss, however ;)
It was stupid mistakes that led to the takedowns of Silk Road, KickassTorrents, The Love Zone, Sheep Marketplace, AlphaBay, and Sabu of LulzSec.
However, Operation Onymous did rely on Tor deanonymization data provided by CMU researchers to the FBI. The CMU group had exploited a bug in Tor to get ISP-assigned IPs for numerous users and onion servers.
Even so, Operation Onymous would arguably have failed if those users and onion servers had accessed Tor through nested chains of at least three VPN services. And used firewall rules, and isolated VPN and Tor gateways in separate machines/VMs. Isolated from workspace machines/VMs. Also, operators and users of those onion servers would have remained safe if they had done the same.
In a few other cases, such as Freedom Hosting and Playpen, the FBI used NIT malware to deanonymize users of onion sites that had already been confiscated. The malware bypassed Tor to send IPs and other information to FBI servers. But only Windows users were affected, because the NIT only ran on Windows. Also, users that used firewall rules and/or separate Tor gateway machines (such as Whonix) were unaffected. And finally, even Tor browser users would have remained safe if they had accessed Tor through nested chains of at least three VPN services.
So yes, use nested chains of at least three VPN services to access Tor. And watch your OPSEC. Compartmentalize, compartmentalize, compartmentalize!
So, the approach is insufficient only if you're sloppy.
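Not Mirimir's actual setup, but for anyone experimenting along these lines, a minimal Python leak-check sketch: before doing anything sensitive, confirm that the address the outside world sees is not your ISP-assigned one. The echo endpoint and the placeholder IP are assumptions, not part of the parent's method.

```python
import json
import urllib.request

# Any public IP-echo service works; this one returns {"ip": "..."}.
ECHO_URL = "https://api.ipify.org?format=json"

# Your ISP-assigned address, checked once from a bare connection.
# (Placeholder from the RFC 5737 documentation range.)
ISP_ASSIGNED_IP = "203.0.113.7"


def visible_ip() -> str:
    """Return the address the rest of the internet currently sees."""
    with urllib.request.urlopen(ECHO_URL, timeout=10) as resp:
        return json.load(resp)["ip"]


if __name__ == "__main__":
    ip = visible_ip()
    if ip == ISP_ASSIGNED_IP:
        print(f"LEAK: traffic exits directly as {ip}; fix firewall/routing first.")
    else:
        print(f"Visible exit is {ip}; at least the final hop is not your ISP.")
```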
>I don't know what needs to happen for people to get outraged, but privacy needs to be a mainstream political issue.
Almost totally orthogonal to discussions about how bad the lack of privacy is getting, but the answer to the question posed is certainly some tangibly bad result of that lack of privacy in my everyday life. So far there really are none. Certainly none that the average person would attribute to a lack of privacy.
The question is, can the curation and rating be done by private agencies like Zagat instead of the Meat Inspection Act? Can it be turtles all the way down? Or must there be monopolies?
Yes, money talks. Which is why free software that is guaranteed to stay free is a resilient solution if we can develop it.
Privacy violations exist so middle men can make money. Cutting out the middle men solves the privacy problem AND motivates people by saving them money.
> Cutting out the middle men solves the privacy problem AND motivates people by saving them money.
Too many middle men. How many middle men are in between you and me right now with this post? Probably 5 companies at a minimum? People are interested in saving money, but not at the expense of their time or convenience. I agree with you, but it's just never going to happen.
We have representatives that are supposed to care about these types of technical issues so we don't have to. In our republican form of government their job is to make sure that government is working for us. The real problem is that billionaires and corporations have convinced a significant portion of the public that government is evil and should be destroyed and that any kind of commercial restraint is un-American communism.
I am in that camp. Except, I'm very much on the left side of the political spectrum. I don't think the government should prohibit this behavior. I would rather they established regulations that ensured notice was given and loss/abuse of this data was prosecuted.
Not really. Surveillance as the primary way essential services are paid for is, in relative terms, an extremely new business model, and one that's already adverse to meaningful competition. It's escaped regulation so far mainly as a product of public ignorance and private lobbying.
The EU and independent US states are already starting to legislate for consumer/citizen protections. The public response has generally been positive.
It's easy to forget that 'surveillance capitalism' was seen by most outside of our bubble as a trivial little industry until fairly recently.
Surveillance? If the government doesn't monitor us directly, they will just go after all the products and services we use that monitor us already. I don't even think I'm being pessimistic here, but the line in the sand for legitimate privacy was drawn about a decade ago and we crossed that line. I really think this is the new norm. It's like asking us to use on-prem hardware instead of AWS: sure, it's still an option, but hardly anyone cares where the hardware is... This is just how it is now.
Wouldn't people have said the same about cable TV in the 90s, CFCs in the 80s, slavery, child labor, etc. etc. ad nauseam?
I believe that the system is democratic enough that if people are organized and ask governments nicely, they can have privacy protections.
It's really only Google and Facebook that profit substantially from this crap, and there are good arguments that it would be beneficial to the rest of the economy to knock a bit of wind out of their sails for a while (a more diverse range of competitors, a stimulant to VC-investment to try new monetization strategies).
Cryptography is not magic, and it will become the standard. Privacy can either be provided, demanded, or created. Since nobody wants to demand it, and it is not being provided, there is only one option.
Yes, cryptography will become the standard. No, that won't protect you from the little brothers mentioned in the interview.
Your email will travel securely, so the government can't read it. It'll travel securely from Chrome on your Android device to a server somewhere in a Google datacenter. Your web browsing will be encrypted with https, so your ISP will need to use their DNS logs and traffic analysis in order to deduce parts of your browsing history. If you turn off location services, your location won't be stored with Apple, but your Telco needs to know in which cell your phone is located in order to route calls there. And it'd be a terrible waste to just throw that data away the day after, now wouldn't it.
Encryption helps. And we need encryption. Everywhere, and as quickly as possible. But privacy is also a political issue, and a technical solution alone won't resolve it, no matter how advanced.
Oh, and not to forget: Ease of use is the way to mind share, not security. WhatsApp is by far the most widespread secure messenger. Not because of its security; in fact its security used to be terrible. The way in was ease of use, with security added to the mix at a later time. And, more importantly, without disrupting the user experience.
I would add that unless data security is the default, it is ineffective. Moreover, it's demonstrably false that data security is hard on usability. Skype became popular because easy-to-use data security made it possible to build a p2p communications product. Most users didn't know or care they were using encrypted communications.
I appreciate Bruce Schneier's pragmatism and his acknowledgement that the problem is bigger than an individual can reasonably be expected to solve, if that individual wishes to participate in modern society. Too often, privacy concerns are met with, "Use Tails + Tor + a hosts file + a burner phone + a burner laptop, etc. etc." But Grandma isn't going to do that, and frankly neither am I. While an individual chooses to use online services, at a certain point societal and career expectations make it not really much of a choice at all. There must be a better way than placing all of the burden on the individual.
One particularly horrifying example of this is dating. In Brooklyn where I live, dating has moved online so thoroughly that many women I've talked to simply refuse to consider men they meet in person as potential partners. It's definitely still possible to date the old fashioned way, but it's obviously more difficult. I don't date online, but I've been blessed with fairly good looks, a successful career, and decent social skills. I can definitely empathize with guys who feel they have to choose between giving up their privacy and giving up finding relationships. And that's a completely unreasonable choice.
The way we form relationships is one of the most personal parts of our lives.
> many women I've talked to simply refuse to consider men they meet in person as potential partners.
There's no way for me to phrase this question without it sounding very insulting, so I apologize in advance, but - is this something women have told you, or are you inferring this from being turned down by women you meet in real life?
It's something women have told me. Women have good reasons; it's much safer to vet people online before you risk interaction in places that you go in your day-to-day. What I said should not be construed as a criticism of women's choices to date online. If anything, I think online dating is the way of the future; I just hope it moves toward decentralized models that give people more control of who they share their information with.
If I were in your place, I would ask them what prevents them from vetting someone online after meeting them offline. Is a Tinder profile any more likely to contain truth than my words?
I'm not going to do that. If they're not interested in me I don't see any reason to persuade them. I would rather date people who are enthusiastic about dating me.
There's also an implicit assumption you're making that the women I've talked to about this are women I'm trying to date. That's not usually the case.
> There's also an implicit assumption you're making that the women I've talked to about this are women I'm trying to date. That's not usually the case.
Yeah, that was part of my potential insult.
I'm married with a kid, and most of my friends are paired up, so this topic rarely comes up for me. Thanks for letting me pick your brain.
There's a difference between getting to know a prospective partner and loading your personal information into a computer system so it can automatically match you with potential partners (and also do who-knows-what-else).
Perhaps this is a good reason to have some sort of technocratic element in the federal government. I lean extremely far right, but in this case, this seems like one of the few things that the federal government should be doing-- breaking up stout monopolies that can't be competed with.
I'm unfamiliar with the decision, but why split up a company like Microsoft in 1999, but leave Facebook and Google alone?
And as Jim Clark, co-founder of Netscape, said in a recent interview on This Week In Startups, the problem was that Microsoft was not just bundling their browser (not a big deal), but pressuring OEMs with the threat that they'd be unable to sell Windows, or would lose access to it, if they bundled Netscape too.
They used their monopoly position to force vendors to keep Netscape from being bundled. That's where they went too far.
It's such a strange arbitrary law though. At the root, it's about preventing companies from using their position of power to gain an unfair advantage in other markets.
But huge corporations like Google, Facebook and Amazon are using their scale and their positions to take over markets. They can easily take control of any market they want, for them it's just a matter of doing so in a way that doesn't upset regulators. Regulators are already the bottleneck for them.
"AT&T was, at the time, the sole provider of telephone service throughout most of the United States. Furthermore, most telephonic equipment in the United States was produced by its subsidiary, Western Electric. This vertical integration led AT&T to have almost total control over communication technology in the country, which led to the antitrust case, United States v. AT&T. The plaintiff in the court complaint asked the court to order AT&T to divest ownership of Western Electric"
i.e. AT&T used its network monopoly to maintain control over hardware manufacture.
The breakup was a settlement of an anti-trust lawsuit over them leveraging their handset hardware monopoly to subsidize network costs so as to protect their telephone service monopoly, IIRC.
No, it can be against the law merely to have (and keep) a monopoly.
It is against the law to acquire, or to perpetuate, a monopoly by any combination or conspiracy in restraint of trade.
Although you're quite right that the move into the browser market was a big part of the case against Microsoft, there were other pieces to that case. There was a whole bunch of work around APIs/ABIs and in particular, denying other parties access to secret or privileged APIs in order to cripple potential challengers to the existing OS market (e.g. Java/Sun).
(Notwithstanding that the Microsoft strategic work around the browser stuff was a very correct reading that the browser was destined to become the de facto OS.)
Yes, it's complicated and there's a whole century+ of interesting jurisprudence. But it's not sufficient to just declare it's OK to have a monopoly -- you can be at risk of antitrust suits even just 1. having a de facto monopoly and 2. doing the "normal" smart business things to hang onto it.
I disagree with this argument – it can be illegal to participate in uncompetitive business practices while holding a monopoly position (even when those practices would be competitive in another market) but that does not make it illegal to have a monopoly position. The parent is right - having a monopoly position in a market is not illegal.
This is a good point. And with all the big internet companies doing so much to "help" us by setting up complex AI filters to catch and filter hate speech, what they're effectively doing is shutting the door to competition. They're raising the bar to the point where there will be no new Facebook or Google. New companies won't be able to afford the CPU cycles or manual content reviewers.
"I lean extremely far right, but in this case, this seems like one of the few things that the federal government should be doing-- breaking up stout monopolies that can't be competed with."
Doesn't sound like you are far right at all then. I'm wondering if we're actually a good judge of our own political leaning. I think that I'm libertarian (little 'l'), but Facebook ad policy thinks I'm far left. Maybe Facebook knows me better than I do.
Sounds to me like the GP believes in minimally constrained competition, where monopolism is one of the many undesirable constraints on their way of life. They are correctly identifying large corporations as analogous to overbearing governments. That, to me, is a classic right-wing belief, and one that I can empathize with, as a staunch Decentralist Green.
The original libertarians were actually far left. There is a large portion of the left that hates the State as much as American "Libertarians" do, but from a different perspective (which I believe is far more consistent). It seems to surprise a lot of Americans to learn that you can be socialist and libertarian at the same time:
Eh, I think it is more a problem with "broad umbrellas" and simple labels being inherently incapable of accurately describing something as complex as one's own politics, rather than a lack of self-judgement. I know precisely what my politics are, but the right/left dichotomy is too simple for me to even pick a side, for example; both tribes have ideals that I very much agree with.
That test was accurate in pinning what I do believe my political leanings are, but the questions were awful and loaded. In the surrounding exposition and in the questions themselves the authors demonstrate what I believe to be a staunch leftist / progressive leaning. A lot of questions I read thinking "just because of how they worded this I have to say X, because they are making an absolutist statement about something I'm not absolutist about".
He has a point though. Right now if you have an android smartphone, use gmail and google search you're telling google basically everything you do, where you are, what you have an interest in and the people you know.
Each of these services in isolation can know a great deal about you but being able to correlate the data makes it so much worse.
That's why I try to avoid putting all of my eggs in the same basket, I have an android phone but I use duckduckgo for search, my own server for email and firefox for browsing the web. If Mozilla, my server host or ddg decides to betray me (or gets hacked) at least they only have access to a slice of my life.
> That's why I try to avoid putting all of my eggs in the same basket
This is what I had been trying up until six or seven years ago. At that point it just got too complex to build and maintain.
With ISPs selling our location and traffic data, I think there's no engineering your way around the problem now. Perhaps the best we can do is damage mitigation.
Can you recommend a good source of info about running a personal email server? I have heard that it is extremely difficult to get mainstream email providers to accept email from unrecognized sources, but if that is not the case, or if there is a good method for dealing with that madness, then I am very much interested in managing my own email.
I just use dovecot, postfix and spamassassin. It did take me many hours and a lot of googling to configure it but it's very low maintenance after that.
I don't have any problem getting my emails accepted by gmail and friends.
The trick is to use an IP range that's not "fishy" (that basically precludes hosting email on your home connection, everybody expects spam from those and they're blacklisted everywhere). Then use DKIM, SMTPS, DMARC, SPF and be very careful not to allow any kind of open relay for spammers and you should be mostly fine, at least in my experience.
There are many websites online that offer to test your email setup for obvious flaws (open relay, missing headers etc...), for instance https://www.mail-tester.com/ and https://mxtoolbox.com/diagnostic.aspx . You should also check if some blacklists have your IP or domain blacklisted for some reason, and then request a delisting (after making sure that you're actually not sending spam because of a bad config): https://mxtoolbox.com/blacklists.aspx
It's definitely not plug-and-play, but it's pretty interesting if you don't mind system admin. You also have a lot of flexibility if you want to filter and automate your emails in any way. I was also pleasantly surprised by the efficiency of spamassassin: properly trained, it produces very few false negatives and almost no false positives.
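If it's useful, here's a small sketch for sanity-checking that the SPF and DMARC TXT records mentioned above are actually published. It assumes the dnspython package and uses example.com as a placeholder for your mail domain (DKIM sits under a selector name you chose, so checking it is setup-specific).

```python
# pip install dnspython
import dns.resolver

DOMAIN = "example.com"  # placeholder: use your own mail domain


def txt_records(name: str) -> list[str]:
    """Return all TXT strings published at `name`, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]


# SPF lives in a TXT record on the domain itself.
spf = [r for r in txt_records(DOMAIN) if r.startswith("v=spf1")]
print("SPF:  ", spf or "missing")

# DMARC lives in a TXT record on _dmarc.<domain>.
dmarc = [r for r in txt_records(f"_dmarc.{DOMAIN}") if r.startswith("v=DMARC1")]
print("DMARC:", dmarc or "missing")
```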
This is the right answer. Collecting, retaining, using, and selling all this crap should simply be illegal. For Google, for Amazon, for Target, for Visa. Any of 'em.
The more pragmatic libertarians ought even be on board with this, since even if they don't care about massive corporations having all this stuff, allowing its collection gives de facto access to it by the government, and you can't effectively opt out even if you take extreme measures like not communicating with anyone except over encrypted comms and not having any kind of cell phone—you are still in pictures others post to social media, for example, or mentioned in others' unencrypted messaging/email conversations, etc. Your only hope is to become a hermit, basically, which is unreasonable.
...and a police force with arrest powers, and corporate compliance offices that follow the letter of the law, and federal investigators that try and parse code...
And once you get big, you can't get any more revenue.
Those will fail on first amendment grounds. I've been saying for years that you won't have privacy without a constitutional amendment/rewrite, but that's looking more and more likely every day.
Well, I suspect that prohibiting data-brokerage basically helps big players like Google, because it eliminates smaller players who don't have access to such large amounts of data. See [1].
On the other hand, prohibiting ad-based monetization schemes lowers the incentive to excessively track users.
Of course, you could say that everything is proportional to the number of users, but I suspect that's not how it works.
Data-gathering companies can and do co-operate, or someone wanting to learn something can just buy the data from several of them - price per bit of data should be the same. So breaking them up isn't really a solution to this problem.
As for why monopolies aren't getting broken up, I'd say a combination of increased corporate control of government, and global trade necessitating ever larger companies to compete, because tariffs are 'evil'.
People want to carry around little location aware Internet connected computers studded with sensors and that run third party apps, and there is just no hope of securing these systems against all possible vectors for unauthorized snooping. There is no technical solution. Ultimately I think this can only be fixed with legislation.
If there is money to be made in invading peoples' privacy, it is going to happen unless there are regulations in place that make it costly by imposing fines.
Here's a simple starter idea: extend HIPAA type protection to the most sensitive forms of PII like location information, photos not explicitly shared, microphone data, and health sensor data. Sale or other release of this information without explicit per-sale or per-release user consent is illegal. Leaks or intentional distribution results in fines that start at $10,000 per incident.
Gather your users' locations and sell them? That'll be $10k per user per 24 hour period in which any location data points were leaked.
Microphone and camera data should be subject to further protections. It should be illegal to store such data for longer than what is needed for legitimate algorithmic uses, or to use such data for anything other than its explicitly intended purpose unless the user explicitly shares it. So something like Siri could leverage cloud compute to parse your verbal commands, but it had better throw that data away afterwards... leaks would be $10,000 per user per audio recording.
The only exemptions should be for things like IP addresses since this would require fundamental re-engineering of the entire Internet. These do reveal some location data but it's nowhere near as accurate (and hence intrusive) as device location data. There also are techno hacks like VPNs that can be used to obscure such data if a user wishes to do so.
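To get a feel for the scale of the schedule proposed above, a trivial back-of-the-envelope calculation (the user count and duration are made-up numbers; only the $10k per user per day rate comes from the proposal):

```python
# Hypothetical penalty from the proposal above: $10,000 per user per
# 24-hour period in which location data points were leaked or sold.
FINE_PER_USER_PER_DAY = 10_000

users_affected = 1_000_000   # assumed: a mid-sized app's user base
days_of_leakage = 30         # assumed: one month of selling location data

total_fine = FINE_PER_USER_PER_DAY * users_affected * days_of_leakage
print(f"Total exposure: ${total_fine:,}")  # Total exposure: $300,000,000,000
```

At those rates, routinely selling location data stops being a cost of doing business and becomes an existential risk, which seems to be the point.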
Edit: as far as government snooping goes, that also must be fixed at the legislative level. There are legitimate reasons for governments to conduct surveillance, but these must be subject to strict regulation and oversight. It's the only way. Government agencies like the NSA, CIA, and FBI (and their equivalents in other places) are well funded and very good at what they do, and there is no hope of preventing them from leveraging the Internet for surveillance unless the legislative branch explicitly regulates their actions.
TL;DR: the only solution here is the rule of law. Techno-fixes won't work and are a cop-out to avoid confronting the dysfunction of our political system.
> Snowden’s revelations made people aware of what was happening, but little changed as a result.
Things got worse, and that's to be expected. After all these things were made public, if there was no mass outrage the perpetrators learned a valuable lesson - they can increase their activities without much fear. Pretty sure at this point the NSA doesn't even have to play games with masking/unmasking and filtering US citizens; they might as well stop jumping through hoops and just record and search everything they like.
I recently kicked every Google app off my phone. The dark UX pattern of shared logins (login to google express = login to gmaps, etc.) was too annoying to deal with.
It's not that I've stopped using their services, but instead switched to their web version. Same with Twitter et al. Even Uber has very serviceable web versions of their app.
As a bonus I get better battery life, and the use of Safari's content blocker, so YouTube web is even better than the app. I'm also able to silo these services into their own browser if I don't want to keep logging in every time due to private browsing.
I totally agree! It's easy and convenient to run a PC on FOSS, but it's a real pain to do the same with a smartphone. I just tried to run a FOSS OS on my phone. I gave up because in order to flash the OS image I had to install non-reproducible binaries from some website, and there was no sane way around it.
The difficult part isn't getting people to understand that Internet privacy is important or at risk; nearly every person that I know (tech-savvy or not) understands that companies like Google and Facebook are in the business of collecting their data, and don't want them to. However, companies like these have human habit on their side: most people use these services daily, and will continue doing so unless it's easier or more convenient to do otherwise. That means that in order to reverse the trend of users continuously using these services, an alternative must be created that's better in every way, and default on all platforms that people use.
In the case of Facebook, it's a chicken-and-egg problem: many people that I know will not use another social media platform because nobody else that they know uses it, but nobody else that they know uses it because they don't use it.
It's worth remembering how Facebook got to the userbase that it currently has -- they didn't start out targeting "everyone on the planet", which is an almost impossibly hard challenge because of the network effects you note.
Instead, they started off providing closed social networks for existing groups of users -- colleges, starting with a small number of elite colleges and slowly broadening out from there -- until eventually it became a public social network and the de facto standard at least within the US.
The networks that have attempted from the start to get everyone to sign up have mostly failed. E.g. Google Plus, which had some feature advantages over FB at the time, started out much too ambitiously in my opinion and suffered from a "Potemkin village" issue where (unless you had a circle of friends who all joined it) it felt empty, and I suspect many users never used it more than a few times.
The only recent network I've seen to go the Facebook route is Nextdoor, which at least in my anecdotal experience seems to be becoming a thing. By providing a semi-closed social network based on physical proximity (neighborhoods), it provides immediate value to a new user. I'm not sure of the details of their rollout strategy, but at least at one point they insisted on mailing you a physical postcard in order to verify your address (much like FB's original validation on specific .edu email domains).
Anyway, if anyone out there is thinking of challenging Facebook on the social-network front, I would put some significant thought into the rollout strategy and aim not to compete with Facebook circa 2017, but instead to compete with Facebook circa 2004. As other networks have opened themselves up, there's a constant vacuum at the lower, more-specific, more-exclusive, closed end of the spectrum which provides a lower barrier to entry.
I think Discord has better potential to be the next social media juggernaut; I know most of my social media interaction goes through Discord. As our lives move more and more online, Discord seems to be the digital equivalent of the Roman forum, and I appreciate the ability to jump into a conversation with my friends in the same way I would IRL.
Mostly I appreciate the ability to be parts of different groups, the ease with which you can join groups, and the fact that you can host your own friend group. I realize this isn't Discord's mission statement or intent but it is a position they find themselves in.
I love how, in a thread about the uncertainty of internet privacy, you're here singing the praises of a walled garden service that owns all of your chats. Another walled garden interested in collecting information and slinging ads to pay back VC funds is the last thing the internet needs, and the fact that it's so successful is a better indicator than anything else that the consumer doesn't care a lick about privacy.
But there are better chat solutions out there. Discord is pretty awful: terrible audio quality, limited room sizes, the aforementioned privacy concerns stemming from no self-hosting, and a lack of interoperability with other chat protocols.
When technologies like Matrix [matrix.org] exist and we've had superior dedicated VoIP programs (Mumble et al.) for decades, there's nothing appealing about Discord. It's the AOL mail of chat clients: great for people with zero interest in technology, but why?
Maybe someone should start a new social network just for college students again, as Facebook seems to have shifted focus from that specific demographic. The twist could be allowing a hybrid approach where you have the choice of anonymous posting and the exclusivity of verified accounts.
> and don't want them to...an alternative must be created
Of course, collecting user data is fundamental to providing these services for FREE - without it, YouTube's algorithm would be awful, Google Maps wouldn't be able to show real-time information, and you would be paying for all of the services users currently get for free.
If it's free, you are usually the product, and information just so happens to be the currency of choice. I very much doubt people would pay for search, maps, youtube, keep, plus, email, translate, android, allo, duo, docs/sheets/slides, chrome, earth...
I'm already extremely pessimistic about these things.
Computers were a mistake and represent the greatest threat to freedom in human history. The ability of those in power to mass-produce perfectly obedient machines that can perform complex tasks without rest allows for a nightmare society. Additionally, machine learning asymmetrically benefits those with the resources to fully leverage large amounts of data collection and compute power, AKA not you or me.
Any state in history would have loved to be able to watch its citizens at all times and know what they're doing and likely thinking. It just wasn't feasible until now. The last big line we haven't crossed, but inevitably will, is the automation of force.
But at that point, society would basically be full-on Cyberpunk. Screw suicide, form a gang of runners and start planning heists on manufacturing facilities and factories, lay traps for driverless truck shipments or hijack them electronically, etc. Hide in the margins, among the slums and tenements to which overpopulation and consolidation have relegated vast swarms of people. Heck, I've already gotten transdermal implants set and removed by people who maybe weren't, you know, licensed practitioners.
The moral rule of law is already out the window, and financial laws already may as well not exist. The only practical compulsion that law has left is that many people can still have an acceptable life while complying with it. Enriching yourself by skimming the margins of those with enough capital to print money wouldn't exactly be immoral.
Automation of force, or something along those lines, would pretty much cinch it. I think plenty of people would decide it's time to have a chat with a local lathe/mill owner, stick a flamethrower on their quadcopter, and loot the few places where opulence still existed. Get away with a couple decent jobs, and you could pretend you were in that upper echelon of 'people with lots of money' all along.
Right now, if you're good at a trade like construction, welding, medicine, engineering, etc, you can probably find stable employment options and at least afford to exist. But when those people of practical means start to get systemically marginalized, I'll bet we'll see a huge resurgence of people trying modern versions of The Italian Job.
Well, sure, if we're talking about cyberpunk 'heist' novels/RPGs like Shadowrun & Neuromancer. But it's really interesting to see just how 'cyberpunk' our society has become, isn't it? We even have most of the technology in place nowadays.
(Well other than the advanced tech they use like cyberdecks and stimsims which aren't really feasible)
> Computers were a mistake and represent the greatest threat to freedom in human history. The ability of those in power to mass produce perfectly obedient machines that can perform complex tasks without rest allow for a nightmare society.
I think about that a lot. It suggests to me that the real problem lies not with technology, but with something unavoidable and self-destructive about human nature.
Look at the root cause. Humans are driven by the exact forces that caused evolution. Those forces are to gather resources and maintain social dominance. Anything other than that will never be a force that shapes the world.
"Not being fully clear about their means" is just a manifestation or symptom of someone's quest to acquire more resources. I repeat that you must look at the root force and not the symptom.
What about the fact that we're the only species to hoard way more resources than we'll ever need, to the point where we're quite literally destroying the environment and life around us in order to maintain our over-consumption of resources?
I like reading your darkly pessimistic perspective on the issue as I think a huge component to the problem is the fact that so many aren't pessimistic enough about technological development.
Everyone seems to think that, prima facie, or even de facto:
technological progress == good.
But there is nothing that guarantees that's the case.
A shiny new device or service launches and everyone is gung ho to jump on it--never mind questioning its negative impacts--what is the environmental and economic impact of this service (e.g. cryptocurrencies consuming electricity, a 'material' resource, and producing nothing but a representation--e.g. consumption of material with no production), who does the service marginalize, how does it propagate further cultural and global divisions...the list goes on
No one ever stops to ask these questions--or those who do are not nearly loud enough. We seem to follow a policy of progress by all means until we've developed our way into a future in which the continued subsistence of humanity (at least on earth) is untenable.
> We seem to follow a policy of progress by all means until we've developed our way into a future in which the continued subsistence of humanity (at least on earth) is untenable.
I just don't see how you got to this conclusion. In what way could technological progress make it untenable for humanity to continue?
Not him but you see this very thing now. Look at pollution and CO2 emissions for an example of how technological progress can directly threaten humanity.
How did you come to your conclusion about computers so firmly?
Also in what way is this worth taking your life over? Does it really make it impossible to enjoy it?
I'd go farther: I'm unsure that communication technologies past approximately the telegraph have actually improved quality of life more than they've harmed it. At least in developed countries (simply don't know enough about life in un-/under-developed countries to have an opinion).
"Snowden’s revelations made people aware of what was happening, but little changed as a result."
Actually I'd be willing to bet the level of spying the alphabet agencies are doing now is 10 times worse than it was when Snowden stole those documents.
I once asked Bruce Schneier at a conference to scare me.
He told me to consider the associations between data. Bits and pieces that Google knows about me, that Facebook knows about me, that Amazon knows about me...think of all of those little meaningless bits of data being associated all together to build a picture perfect model of my life, and then sold to advertisers, who know enough about your life to attempt to manipulate it at every step. And if advertisers can pull up every saleable bit of data about me with enough accuracy to sell me products that I actually want, then anyone with enough money and desire can get that same data, use the same associations, and understand more about me than most of my closest friends, all before taking one step away from the computer.
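To make the association point concrete, here is a toy sketch (my own illustration, not anything Schneier spelled out) of how individually harmless fragments, joined on a shared identifier such as an email address, add up to a profile. Every name and data source below is invented:

    # Invented data sources keyed on a shared identifier (an email address).
    from collections import defaultdict

    search_history = {"alice@example.com": ["knee pain", "divorce lawyer near me"]}
    purchases      = {"alice@example.com": ["ibuprofen 200mg", "moving boxes"]}
    social_graph   = {"alice@example.com": ["bob@example.com", "carol@example.com"]}

    profile = defaultdict(dict)
    for label, source in [("searches", search_history),
                          ("purchases", purchases),
                          ("contacts", social_graph)]:
        for person, items in source.items():
            profile[person][label] = items

    # Each source alone reveals little; the join hints at a health issue,
    # a likely divorce, and who else might be touched by it.
    print(dict(profile)["alice@example.com"])

Real data brokers obviously link on far messier keys (device IDs, location traces, fuzzy name matching), but the principle is the same.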
I gave it a thought and decided that I'm not afraid about the data that I know that others know (or may know) about me.
Well, based on the fact that seeing an actually helpful advertisement is an extremely rare occasion (just my experience), and that it usually requires explicitly training a suggestion engine with multiple rephrased queries explaining what I want to see, I'd say either they don't know that much, or the association part isn't here yet. Not that it matters, though - that could change in the future.
What I'm actually afraid of is that there is some data I'm not even aware others know about me. Or, more specifically, data I don't want others to know about me that could have leaked without my consent, or without me even being informed.
E.g. I know that my phone sends various data to the third parties, that I've authorized it to send. I've evaluated it and had actually decided that I'm OK with the pros/cons. However, if, for example, that phone somehow eavesdrops and sends an audio stream that I've never consented to share - that would be scary.
I like using the example of your healthcare premium rising ever-so-slightly when you make questionable decisions like buying a 6-pack of beer. All the data has been collected, it's now just a matter of legislatures/corporate lobbyists passing the right laws.
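To illustrate the mechanism (purely hypothetical: the categories, weights, and base premium are all made up, and I'm not aware of any insurer publishing such a formula), the scoring could be as simple as:

    # Hypothetical risk weights per purchase category; nothing here is a real insurer's model.
    RISK_WEIGHTS = {"alcohol": 0.02, "tobacco": 0.05, "gym_membership": -0.01}

    def adjusted_premium(base_premium, purchase_categories):
        """Scale a base premium by a factor derived from purchase history."""
        factor = 1.0 + sum(RISK_WEIGHTS.get(c, 0.0) for c in purchase_categories)
        return round(base_premium * factor, 2)

    print(adjusted_premium(400.00, ["alcohol", "alcohol", "gym_membership"]))  # 412.0

The scary part is that nothing technical stands in the way; it's only a question of what the law allows.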
If advertisers have so much data, are able to know you so intimately and manipulate every step of your life then how come the vast majority of online advertising has such mediocre results?
Not trying to be snarky, it's just to me something doesn't quite add up in this picture.
I am honestly surprised there is not more ultra-targeted advertising to either ultra-high-net-worth individuals, or to individuals in key positions within large institutions.
E.g., there are ads all over the DC Metro for huge defense-sector projects, like a particular company's bid for fighter-jet engines. They're spending huge sums on these ad buys, which presumably are aimed at only a handful of people who actually have influence over the procurement process. Similarly, there are lots of ads in trade publications aimed at buyers, of whom there might only be a few dozen in a particular niche industry. It only follows that this is, the ad-buyers believe, the best they can do.
But consider what they could do if they really drilled down and tried to target the specific individuals with control over the money: instead of a shotgun ad buy in the subway or in a magazine, they could build a model of that person's life -- where they go, what they buy, what makes them happy (at least, happy enough to be externally perceptible), what pisses them off enough to complain about it, etc. And then you could Skinner-box the living shit out of them.
In the limiting case -- I'm thinking here of someone who works in government procurement; maybe not even the person who makes the ultimate decision, but the person who builds the briefing slide deck for the person who makes the decision, or the advisor, or the advisor's assistants -- for the price of a big ad buy, you could probably hire up a bunch of unemployed acting students and follow them around for a few months. Every time something good happens to the contract or in negotiations, make sure they have a really, really good day. Someone offers them a seat on the train, or lets them into traffic, anonymously buys them coffee, randomly compliments their shoes, pulls out of a parking space just as they're looking for one... every little thing just goes right. And every time the negotiations aren't going well, make sure they have a really shit day. They get cut off in traffic, get coffee spilled on them, yelling everywhere, can't even get the machine they want at the gym, takeout place is closed for a special event, rental house down the street is having a loud all-night party again... Pretty soon you'd condition them that when things go right for your company, and when things move fractionally closer to the outcome you want, they have a good day. And when they don't, they don't. It's advertising by gaslighting, basically.
AFAICT the only reason this isn't done is because nobody's really tried it yet, perhaps out of some remaining shred of propriety. I'm not even sure it would be illegal, necessarily (you'd have to get some lawyers to work around anti-stalking laws, I suppose, but they are pretty weak in a lot of states). While there's nothing that would have prevented you from doing this 50 years ago with an army of P.I.s to gather the information, now you could build up all the dossiers in advance and have them ready to go, pretty much turnkey, on anyone you thought you might want to influence. Or, more likely, a company could set all of that up and then offer it as an arms-length service to other companies looking to achieve a particular outcome.
No reason, I suppose, why it might not be going on right now.
Interesting premise. Could even be the plot of a present day sci-fi novel (an executive goes about their day of seemingly random inconveniences amidst a high-stakes battle between advertising firms directing the exec towards their clients). Almost reminds me of The Game, which was the greatest film ever made and I will hear NO argument.
That being said, most of the wealthy people I know personally tend not to use computers when they can call up concierge services to handle complex tasks for them. That the wealthy people I know happen to be older and not as used to using computers for every problem may make this more of an age thing than an economic one.
Why do you think there isn't such advertising? For example, in B2B sales, you can and do show specific targeted ads to the IP ranges belonging to particular companies or social media accounts identified as their employees (to which you want to sell your product), you also do social media profiling of particular people before you're going to visit them, so that you can tailor sales pitches to their personalities.
It's just that this isn't done in a scalable way, but as a part of high-touch sales activity involving human salespeople.
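The IP-range half of that is trivial to sketch. Assuming you have (or bought) a mapping from target companies to their network blocks - the company names and CIDR blocks below are placeholders - matching a visitor takes a few lines:

    import ipaddress

    # Placeholder mapping of target accounts to their (assumed) network blocks.
    COMPANY_RANGES = {
        "ExampleDefenseCo": ["203.0.113.0/24"],
        "ExampleBankCorp":  ["198.51.100.0/25"],
    }

    def company_for(ip):
        """Return the target company whose ranges contain this visitor IP, if any."""
        addr = ipaddress.ip_address(ip)
        for company, blocks in COMPANY_RANGES.items():
            if any(addr in ipaddress.ip_network(block) for block in blocks):
                return company
        return None

    # A visitor from 203.0.113.7 would be served the campaign aimed at ExampleDefenseCo.
    print(company_for("203.0.113.7"))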
Well it definitely put the nail in the coffin of social media (except HN) for me. I was already creeped out after the Snowden revelations but hearing Schneier in person was like a smack to the face.
I use DuckDuckGo instead of Google in most cases, and I avoid Facebook (and particularly Messenger) unless absolutely necessary, and I have become much more conscious of my online activity.
Facebook is a glorified address book. Messenger is a convenient way to get in touch with friends from outside the United States, because we aren't always on Skype and regular cell/SMS costs money.
But nowadays Messenger is more of a way to start by saying "let's Skype"
There is no single good answer; that's why people struggle with it. The solutions are disparate, and vary depending on your threat model. Answers to these problems aren't available and packaged in a one-size-fits-all solution that your average middle-class family has the time or ambition to take advantage of. Some solutions exist, however--and more are being developed.
In a lot of ways, I see the issue boiling down to how much you're willing to be inconvenienced.
One point that I think gets too little attention in every discussion of privacy is that you are not always in control of your data, and you never can be fully.
Say we consider a privacy-paranoid individual who is taking great care not to put his personal data online. And then we take a look at his or her parents or friends or colleagues who most certainly will keep his or her real name bundled with phone number(s) and (e-mail) address(es) in their address book synced to their Google or Apple or Microsoft account. It just happens, there is only so much you can do about it. Your data is out there and it's only the question of the security measures the data holder implemented that's keeping it safe.
(Basically, it's nothing new, many people have your data, but possibly only in the "offline world" -- think insurance, prospective or current employers, even your go-to car repair guy. It's just that we normally have a law for protecting the offline data and ways to enforce it and almost nothing alike in the online world.)
Privacy is something that has to be enforced on a population, and not simply recommended to an individual. The only institution that can reasonably do that is the government, by introducing data protection laws, abiding by them, and allowing a third party to verify they really keep their promise. Until then, we might not have a choice.
Just because people have accepted something doesn't mean they deem it "acceptable".
When your choices are "use a service and be monitored" or "don't use the service", it can be quite limiting if you really need that service to, say, do your job and feed your family. Yes I know there are alternative services, but often they either 1) don't work well enough to really be a viable alternative and/or 2) don't provide any more confidence that they are not monitoring you also.
Of course all of this is made worse by the fact that the monitoring is mostly happening in the background, so it isn't "in your face" all of the time. I think most people who are aware of it have chosen to just ignore it as best they can, because there doesn't appear to be any practical way of avoiding it.
> Just because people have accepted something doesn't mean they deem it "acceptable".
Actually, that is exactly, literally what it means. Perhaps you mean that just because we find something acceptable, we don't deem it moral, or optimal, or desirable. The distressing reality presented in the interview is that we accept non-benevolent, non-ideal options because they are good enough to be accepted, and not bad enough to be rejected. But accepting something, by definition, means you consider it accept-able, or, "able to be accepted."
Yeah I knew someone would come back with that -- but that's only true if you tie a very strict one-to-one meaning between every use of the verb "accept" and the adjective "acceptable".
Words have subtle meanings. If you paid attention to my post, I was clearly drawing a distinction: people accept the fact that they really don't have a good alternative to using, say, Google, but they may not find it acceptable that Google data mines their searches to sell their info to advertisers.
So while they "accept" their situation, they clearly don't find it "acceptable". Much as you might accept the fact that you have to work for a boss who is a jerk, because you have no other employment options. So you don't find the situation acceptable, but you have to accept it anyway.
I don't think people in an unacceptable situation 'accept' it. With your example of unacceptable workplaces, they express their resentment in other ways. Maybe they find the minimum they can produce without being disciplined and only do that much. Maybe they spread malicious gossip at the office. Maybe they go home and abuse their spouse or children. Maybe they become alcoholics, or political extremists, or religious fanatics. Maybe they become depressed and commit suicide. The world offers plenty of examples. People trapped in situations they find unacceptable don't suffer in silence; they become pathological, dangerous people.
The issues of unacceptable mass surveillance can and will, eventually, cause some kind of response by people who don't accept it. If there are a lot of those people, I doubt the response will be pretty.
Here again is another example of the subtlety of words. The fact that someone "accepts" a situation doesn't automatically mean that they are happy about it, and it also doesn't mean that they do nothing to change it. It can just mean that they realize that is the way it is for now, and they deal with it as best they can -- often while looking for ways to change the situation.
Also, while it is true that people do often respond to unacceptable situations in extreme ways, it is most definitely not the case that people always do so. There is such a thing as patience and perseverance, and people often accept that they are in an unacceptable situation for a time (often a very long time) while actively seeking to change it. People can overcome adversity.
The phrase "People trapped in situations they find unacceptable don't suffer in silence; they become pathological, dangerous people." is most definitely not universally true. It does happen yes, but many people also just press on and keep looking for a way out until they find it.
And yes, some of them do suffer in silence without becoming "pathological, dangerous people". One common example is a case where an unacceptable situation permits them to achieve something that is more important to them. For instance, people work some pretty nasty jobs (either due to jerk employers/employees, or just the necessary work conditions of the job -- some jobs are hot/smelly/hard-on-the-body, etc). But there are people who take and/or keep these kind of jobs in order to feed their families. Often they have tried to find other work, but lack the necessary skills, or there just isn't any other work available. So these people work under these conditions only because that's the only route to feeding their families. It is an admirable thing to do, and I find it sad that so many people don't respect others who make that kind of sacrifice in order to take care of others.
When I place a sticker over my laptop's camera, people think I am a freak or acting like a hacker. In fact, I just want to protect my freedom. However, society doesn't care about freedom. People care only about their popularity and don't want to be alienated. I think this is where the problems come from.
I could wear a tinfoil hat and claim I was protecting my freedom. That doesn't make it a great idea.
Of course nobody wants to be alienated. We do a lot of stupid shit to fit into society. But with the range of potential attacks on your freedom, putting a post-it note on your webcam is right up there with tinfoil hats.
There are real actions you could be taking to defend your freedom - like contacting your representatives in government, or donating to organizations that fight for your rights, or telling your friends and family why they should care. You can vote with your dollars, and vote with your actual vote, and your feedback.
After all that's done, fine, put up the post-it note. But you can at least write "remember to get milk" on it so people don't assume it's a tinfoil hat.
I wouldn't call it "tinfoil hat" because the cost is extremely low relative to the benefit. Negligible effort in exchange for peace of mind. It's a reasonable trade-off. You can never be sure what all your software is doing unless you piece together your own Linux system from scratch.
That's what a tinfoil hat is. Nobody wants to watch your particular webcam, just like the government does not want to influence your particular brain. But people put on the tinfoil hat anyway because it's cheap and it gives them peace of mind.
You could also just implement strong security guarantees according to best practice for your OS, but the piece of paper is definitely much cheaper and more effective for this one purpose.
I believe "tinfoil hat" implies illogic as well. Reasonable trade-offs aren't illogical. Also, it's been proven that it's entirely inside the realm of possibility that state agents are infiltrating our systems and gathering sensor data, and it's far more likely than alien mind probes that can be thwarted with tinfoil...
What I never understood about tape people is that they almost always forget the existence of a microphone. The microphone is far more invasive than a camera in my opinion.
Use clear tape, electrical tape or some tape that matches the color of your machine.
Honestly though, your explanation should be sufficient. If it isn't, you'd have to wonder about the technical understanding of the person who's questioning you in the first place.
When a company includes in their TOS a clause that prohibits their customers from speaking negatively online about their product, everyone draws the pitchforks because that's an unacceptable violation of freedom of speech. Why doesn't the same happen for the right to privacy?
Perhaps people feel powerless or don't care. People are also powerless against the food industry or don't always care about security. One of the roles of states and governments is to fix those situations.
Law makers should focus their efforts on strongly protecting the fundamental right to privacy. It should be made easy (and free) for anyone to challenge those abusive EULAs, TOS and other contracts that require end users to abandon their rights in order to use a service. If the contract is deemed abusive, the service should be blocked until the contract is rewritten.
> In the 1970s, Congress passed a law to make a particular form of subliminal advertising illegal because it was believed to be morally wrong. That advertising technique is child’s play compared to the kind of personalized manipulation that companies do today. The legal question is whether this kind of cyber-manipulation is an unfair and deceptive business practice, and, if so, can the Federal Trade Commission step in and prohibit a lot of these practices.
Three things here:
1. Nobody has ever proved subliminal messaging can actually subvert a person's will, which was reflected in court cases. Even a modern experiment set up by the BBC (apparently the only such study since the 50's) showed no effect.
2. The FTC has never said anything about subliminal messaging, so it's unlikely they would now.
3. Subliminal messaging never helped pay for users' free services.
Let's face it - we live in a different world. The old ideas of privacy, whatever they were, are erased when there's a carrot attached and no stick. Schneier is doing a great deed in trying to drum up support for increased privacy regulations, but this is a stupid argument toward that end.
I'm dejected by two things - that privacy invasions are increasing over time and that there's barely any outrage (or outrage that lasts) against these. I feel more and more trapped when I see that most people I know don't care about privacy on the Internet or Internet based services. I point them to how it could be bad for them (including Martin Fowler's excellent article, "Privacy protects bothersome people" [1]), but it all falls on deaf ears. Or they shrug their shoulders in defeat saying it's a lost cause and that it's better not even to think about it.
I'm truly stumped, and can't imagine what tragic event or events will wake people up and get them to take action at a personal level, along with organizing and campaigning for privacy. As of now, I doubt if there will ever be a mass resistance for several decades (leaving the gates wide open for more invasions and power grabs).
I think there's something to be said about looking at the business incentives of the people you deal with on the Internet. Google, plain as day, is using you as training data. DuckDuckGo bills itself as the privacy-minded alternative. You shouldn't be surprised if Google violates your privacy. If DuckDuckGo does, that's the end of their business model.
The only thing people should fear is the downside risk of submitting themselves to, and becoming reliant upon, walled gardens; people only give such gardens their power en masse… perhaps most have decided that the convenience is worth it for now.
The mass surveillance/advertising state runs on abundant and prevalent hardware and software full of flaws that can be exploited by those who seek to do so, from nation states, to well-financed actors, to individuals.
The common man has less to lose than a general conducting an affair with his aide over Gmail… and hey, when you are a relative nobody in society, what's a couple of iOS 0-days from your dev experience sitting around, waiting for a better day? I guess succumbing to fear porn from our best institutions is an option…
I've come to believe that total surveillance is the perfection of democracy, not its antithesis.
We are experiencing a fundamental phase shift in the entire structure of society.
The true horror of technological omniscience is that it shall force us for once to live according to our own rules. For the first time in history we shall have to do without hypocrisy and privilege. The new equilibrium will not involve tilting at the windmills of ubiquitous sensors and processing power but rather learning what explicit rules we can actually live by, finding, in effect, the real shape of human society.
Internet privacy is a farce. From the way the technology was designed to the way the information that's collected is used/misused. I do think there are equal parts buyer beware on the client side as well as ethics on the implementation side.
I think that "internet privacy" will be akin to "the war on drugs" in the long term. A good idea but in the end just another way for government line it's pockets.
Until the internet is completely redesigned the best hope we've got it encryption and VPN's and that's a strech.
> My hope is that technologists also get involved in the political process — in government, in think-tanks, universities, and so on.
How does one do this? What might Bruce Schneier be thinking of that the EFF, Liberty, Privacy International, etc... are currently not doing?
In some sense, it seems like this might be solved "generationally", once the majority of elected representatives are "natively" computer-literate. Based on the average age of politicians, that might take 30-40 years.
Google, for instance, has a ton of data about me. They don't sell that data directly to advertisers, because if they give up that data then it loses all value to Google because the advertisers would then be able to resell it. So instead they let advertisers target specific demographics of people, leveraging what they know about me. As long as I use an adblocker and individual information isn't sold to insurers for example, what's the harm to me?
Democracy and freedom, in general, depend on people who fight against the powers of the time from trampling rights and controlling people through different means. So even if you're somehow not troubled by anyone who has or can obtain more (aka damaging) information to use against you, there are many people who struggle all the time so your life can be better (and not get worse). And for those people, corporations that collect a lot of data, who are in turn answerable to government requests (like NSLs in the U.S.), are landmines that could trip any moment and endanger them and everyone else they work with. Society has a lot to lose if we don't value privacy, because privacy is fundamental to have and to retain freedom in the constant struggle that we have with various entities.
So if you care about freedom for humans, you must care about privacy. Otherwise it'd be hypocritical.
For a better and concise article on this, read "Privacy protects bothersome people" [1] by Martin Fowler.
The data is stored on Google's server, therefore can be accessed by the NSA, FBI, CIA and by the However-Many Eyes' intelligence agencies. All of those places can be breached, can accidentally leak your data or can even change their mind about keeping your data secret. And given that they're most likely storing this data indefinitely, that chance is 100%. It's going to become public at some point. How long it takes until that happens and how accurate/harmful that data then still is to your current life is a different question. But personally, I see no reason to unnecessarily risk that.
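A back-of-the-envelope way to see the "given enough retention time, assume it leaks" point: if each year carries some small independent chance p of a breach or disclosure, the cumulative chance over n years is 1 - (1 - p)^n. The 1% annual figure below is an assumption for illustration, not a measured rate:

    def cumulative_leak_probability(p_per_year, years):
        """Chance of at least one leak, assuming independent annual risk."""
        return 1 - (1 - p_per_year) ** years

    for years in (10, 30, 50, 100):
        print(years, round(cumulative_leak_probability(0.01, years), 3))
    # 10 0.096, 30 0.26, 50 0.395, 100 0.634

It never literally reaches 100%, but it only climbs for as long as the data is retained.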
Consider lobbying your government officials to improve regulations concerning this. Maybe get together with other people in your city who care about this particular issue.
This needs to be changed at a regulatory national level, but it's usually the case that major cities or states lead the charge in the US.
The system can be made to work for you, if you're willing to work with it.
I've gone down the rabbit hole of trying to protect my privacy. Basically, unless you move to the mountains with no cell phone, no internet access, and grow your own food, someone somewhere will be tracking you. You'll go crazy trying to close all the loopholes (ready to fry the RFID chips on your car tires?)
Maybe fear isn't the right reaction. We have almost no privacy, and the world isn't ending. Many of us go to work or school, just as before we were so heavily surveilled.
Also, who makes money off us being afraid of not having privacy?
The already linked Panopticon shows that a lack of privacy can lead to severe mental problems.
The Stasi subdued an entire country via surveillance, and that was before the internet, before CCTV, and before everyone voluntarily carried a microphone in their pocket.
Watching The Lives of Others (Das Leben der Anderen) had a profound impact on me. I was aware of ECHELON, Carnivore, DCS, Wm. Binney, etc. before Snowden, but since Snowden/Greenwald, it really became visceral. It's been a rough ride for me, and many people I know.
No one I know in my world uses Facebook, and we only use Google for search because the experience with other engines is lackluster. Give me a better search engine and I'll use it, if the obvious abuses aren't too egregious.
There is another way to opt out of data collection, and that is to deep-packet-rewrite every communication out of your computer from every application (where suddenly encryption is your enemy) :P
a Basket of Figs and a Letter, did by the way eat up a great part of his Carriage, conveying the remainder unto the Person to whom he was directed, who when he had read the Letter, and not finding the quantity of Figs answerable to what was there spoken of, he accuses the Slave of eating them, telling him what the Letter said against him. But the Indian (notwithstanding this proof) did confidently abjure the Fact, cursing the Paper, as being a false and lying Witness. After this, being sent again with the like Carriage, and a Letter expressing the just number of Figs that were to be delivered, he did again, according to his former Practice, devour a great part of them by the way.
Can someone explain to me why this so-called "privacy" is such a big deal on the internet? Can't the internet, or at least a major part of it, be considered a public place? I mean, when I walk in the street, I do not wear a mask (I suspect that'd be illegal in my country), yet I do not worry about someone using surveillance cameras to track all my activities. Sure, it could technically be done, but I'm not that important: nobody would bother. Therefore I vaguely consider myself anonymous when I walk in the street, not because my identity is hidden, but because it's mixed in with thousands of others: "hidden in plain sight", as they say.
Couldn't things be similar on the internet? Or do we really all have to hide our identity like criminals?
You know, every now and then there's a leak in voter registration data, and people start freaking out at the thought that others might know what their political affiliation is.
Well, that kind of leak is really no longer necessary. Your browsing, consuming, and posting histories can all be used to deduce that information and much, much more about you.
Try keeping any of your political opinions secret these days. Short of not saying anything to anyone, it's hard to keep it secret (especially from a determined, well-funded adversary) unless you jump through a million hoops.
The reason people freak out about voter registration data being leaked is that they know that once their political affiliations are known, they can be targeted by their political enemies. Now this kind of targeting is trivial without even needing voter registration data.
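As a crude illustration of how little it takes (the domains and weights below are invented, and a real profiler would use far richer signals and models), a handful of browsing-history entries already suggest a lean:

    # Invented partisan-signal weights; positive leans one way, negative the other.
    PARTISAN_HINTS = {
        "examplerightnews.com":      +1,
        "exampleleftnews.com":       -1,
        "donate-campaign-a.example": +2,
        "donate-campaign-b.example": -2,
    }

    def lean_score(visited_domains):
        """Sum of partisan hints over a browsing history; 0 means inconclusive."""
        return sum(PARTISAN_HINTS.get(d, 0) for d in visited_domains)

    print(lean_score(["examplerightnews.com",
                      "donate-campaign-a.example",
                      "weather.example"]))  # 3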
>Short of not saying anything to anyone, it's hard to keep it secret
To add to this, many people are doing just that on their social networks. Now that there is a precedent for potential employers to "Facebook Stalk" applicants, and we all know that the major world governments have under the table agreements with all the major social networks for data, our profiles are just another carefully curated facade of our lives.
It might take weeks for someone to identify you in real life based on a grainy still image from a camera. And then one would still have to correlate among people, other cameras, and locations to learn anything about you without surveillance.
With the internet, because of data aggregation, I can go on Pipl and find all of that about you and more in 2 minutes.
The power with which one may invade privacy over the internet is unprecedented. Let's say you frequent a coffee shop commonly visited by republican groups. In just a few minutes, in theory, someone could find out about every single time you visited, and use that against you. Replace republican coffee shop with sex shop, doctor, psychologist, democrat gathering...some things are better left hidden, because people are terrible.
The Jevons paradox notes that an increased efficiency of some process or system increases the total utilisation. In particular, it makes previously nonviable applications viable.
Postal junk mail, telemarketing calls, email spam, popover/popunder ads, malware, robocalls, fake news, chatbots, and more, are all responses of previously nonviable applications becoming viable.
The underlying limitation seems to be the scope of attention of attackers based on private / personal data, and/or of the systems in which they operate. The role of AI in extending that institutional bandwidth ... strikes me as rather frightening.
There've been previous discussions of similar topics which point to the prospect of, say, automated lawsuit filing, or debt collection (already a problem), or more. The prospect of some trained algo running over deep, rich data, seeking arbitrage opportunities, strikes me as undesirable.
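The viability point is easy to see with a toy expected-value calculation (every number below is invented for the example): a campaign that loses money at postal-mail prices per contact becomes profitable once automation pushes the cost per contact toward zero.

    def expected_profit(contacts, cost_per_contact, hit_rate, revenue_per_hit):
        return contacts * (hit_rate * revenue_per_hit - cost_per_contact)

    # Postal junk mail: $0.50 per piece, 0.1% response worth $50 each -> loses money.
    print(expected_profit(1_000_000, 0.50, 0.001, 50))   # -450000.0
    # Robocalls/spam: $0.001 per contact, same response rate -> now profitable.
    print(expected_profit(1_000_000, 0.001, 0.001, 50))  # 49000.0

Cheaper targeting doesn't just make existing abuse more efficient; it creates whole categories of abuse that previously couldn't pay for themselves.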
You might not be important now. But you might be "important" in the future due to your affiliation with a political group, religion or even your skin color. You never know which way the political winds will blow. And data is kept forever.
For one thing, security cameras don't follow you around, and are generally poor quality images and possibly slow frame rate. I'd rather be watched through security cameras than through a movie camera following me around.
Before the internet, it was very hard for the average person to find out stuff. Maybe not having a ring on the finger meant they were single. Or were they divorced?
Now, it is just too easy. And commonplace.
Sure, all of our life's events could always have been collected. The people who did it were called biographers. Now they are called everyman.
Data had always been collected. Just never easily retrieved.
For me, I am appalled that some people I know think absolutely nothing of googling a person they just met. To me, they were not reared properly if they do that.
Not really that true anymore. Especially in cities like NY and London there are practically no public spaces not under 24/7 CCTV surveillance and all that video is cheap to preserve forever.
I don't think this is a good argument. Posting personal information on a public forum is completely different from letting it sit in some database that no human is likely to look at. The latter can be a reasonable trade-off to use a free service.
Continuing with my analogy that would be like shouting my name and address in the street. Why would I do that?
My point is that I tend to consider myself anonymous on the internet because I believe nobody is interested enough in me to bother gathering the data. Or, if it's done, it's a robot that does it for statistical purposes [1]. No human cares.
But if I publish personal info in plain text, like email, phone number and so on, surely there will be trolls that will have fun with it, or thieves that will try to use it for profit. I have zero reason to do that.
1. BTW I believe this has tremendous scientific value, from an anthropological point of view. It'd be a shame not to do it.
So you don't think it is a big deal because you believe your private data is not important.
The issue is that perhaps your data is not as important right now, but it is possible that one day you might do something that upsets someone in power, and any information about you might be used against you (and there would be years and years of your past available, since that data wouldn't be gone).
Just look at Snowden, how they tried to use every petty detail about him. He was careful though and did not leave much, but there were strong forces trying to discredit him.
Maybe you think you would never do anything like what Snowden did, but what's considered bad depends on what the current administration thinks. With our current president, it feels like insulting him on Twitter might be a good enough cause.
The second issue is that with big data, a lot can be inferred about you from the data you did provide; often it can reveal more about you than you know yourself.
That data can then be used against you. Here's one example of a company that does this, and in fact it not only tries to learn about people but actually to influence them [1]. It's suspected that they are behind Brexit and the Trump victory.