
But what if the vast majority of the population is right?

What if there are no detrimental long term effects to our society when our digital privacy is compromised? What if the majority of the population ignoring the risks and simply being productive in the areas they work best is the best thing for society?

I'm not arguing this point of view, but I think the risks are often exaggerated by the security conscious.




> But what if the vast majority of the population is right?

Honestly what would be the odds of that?

The track record of the vast majority of the population isn't exactly stellar. We tend to be concerned about whatever some smaller minority tells us to be.

(Note that I say "we". We are not immune: Just look at the amount of strictly incompatible opposing viewpoints on HN. At least half of those must be wrong for each viewpoint, making the aggregate of even this collection of relatively smart people dumber than a sack of bricks. Okay, maybe two sacks of bricks.)

So what then? My point is, it's much better to base your assumptions and viewpoints on the particular merits and flaws of an idea, than on whether or not the majority of the population agrees with it.

It's just not relevant. Not at all. The only relevance might be how to steer the majority public opinion, if you want to effect change. A very wise man once said: THINK FOR YOURSELF, SCHMUCK!

My personal view is that the "security conscious" (which includes myself, to some extent, I guess) are in possession of a lot more facts than the majority of the public. Also, their track record is pretty good. Especially since the Snowden revelations, nearly all of the things that used to be dismissed as tinfoil-hat territory turned out to be exactly right. Even RMS' "wacky paranoia" turned out to be not so crazy after all.

Heh, even the "tinfoil hat" itself turned out to be useful, in a sense: wrapping your phone in tinfoil prevents you from being tracked (and it's easier than removing the batteries). At least this works perfectly for GSM signals (just try calling a phone wrapped in tinfoil); I haven't tried it with WiFi or Bluetooth.


> My personal view is that the "security conscious" (which includes myself, to some extent, I guess) are in possession of a lot more facts than the majority of the public. Also, their track record is pretty good. Especially since the Snowden revelations, nearly all of the things that used to be dismissed as tinfoil-hat territory turned out to be exactly right. Even RMS' "wacky paranoia" turned out to be not so crazy after all.

Right, but I'm coming from a pragmatic stance when I put forward the position of the majority of people being "right".

That is, maybe everything the security conscious predict comes to pass. And maybe it has no practical effect on the quality of our lives. That's what I'm suggesting.

Maybe I still go to work, live in the same house, have the same family, and do all the same things I would have otherwise done. Only if I'm security conscious, I feel slightly more worried about it all.

Again, I'm not arguing this personally. Just entertaining the thought.


> That is, maybe everything the security conscious predict comes to pass. And maybe it has no practical effect on the quality of our lives.

I see your point.

Except, I--and the "security conscious" with me--believe that it merely has no practical effect on the quality of our current lives, until it does, and when it does, it's going to be pretty horrible and also kinda too late.

That is, when your current surveillance police state suddenly turns into a much worse, bad-wrong oppressive surveillance police state that has the habit of, say, arresting innocent people one or two degrees separated from "activists", keeping them in jail for a week or two, and only letting them out on the condition that they'll inform on whoever they suspect. This can happen in a flash. It has happened many times before in history, all over the world.

There was a Reddit post that very clearly described a personal experience of such a change happening in an (unnamed) Middle Eastern country: http://www.reddit.com/r/changemyview/comments/1fv4r6/i_belie...

So if the security conscious' predictions are also right about this, then it probably pays to heed their warnings.

What you seem to be saying is, maybe the security conscious were right about all those predictions, maybe they are right about new future predictions, EXCEPT the part where they predict the terrible consequences this ultimately will have on the quality of our lives.

Personally, I find that gamble a bit dangerous.


I think one has to distinguish between no privacy and privacy controlled by some central entity, though. If essentially everyone knows everything about everyone, that might work (though I have some doubts about how one could get there from here, especially as it would need some fundamental changes in our economic system). But having the control concentrated in one place seems to me to be very risky indeed, as that is a huge pile of power in the hands of a few.


Is it power? What does it allow one to do?

If the average person in the population simply does not care if their private details are exposed, transferred, or looked at, then does the person who holds those details hold power over anyone?

Again, I generally think having some privacy is a good thing, and personally do not use my Facebook or other social accounts outside of a professional context. But sometimes I wonder whether most people care at all, and whether that is actually detrimental to society.

Security advocates can often sound like doomsday prophets, suggesting that the downfall of society begins with private companies amassing personal information.


Yes, it is power, tons of it, in a wide variety of forms.

That someone doesn't care has little effect on how others can use the information - the only power that removes is the power to embarrass. Any party that you depend on economically can still use the information to their advantage (and your disadvantage). And mind you, that might not only be for irrational reasons - any statistically significant correlation is a perfectly rational reason for some company to refuse you as a customer or to increase prices for you, for example. In securities markets, there is even a name for using private information about planned transactions to gain an advantage: it's called front running, and it's illegal because it is considered to be essentially stealing the customer's money.

But also, much of the power is not power over individuals, but power over society as a whole, in that such direct access to inter-human communication allows you to find patterns in social dynamics and thus to predict future actions, and what it would take to change the outcome. In essence, that is what marketing is all about - but of course, its applicability is not limited to selling you washing powder; it can also be used to "sell" political ideas. And in the case of Facebook, they can directly manipulate what people get to see, of course.

And then, there is the intersection between the two, in that there are some people who themselves have more power than others, possibly over you - and if someone gains some power over them directly, that means they might transitively also be gaining power over you.


It would be good to hear about some more concrete examples of this sort of power use (or abuse). Selling political ideas already happens (FOX news), and they don't even need to know your personal details.

I do not like my personal information being collected, I find it tacky and generally only submit details when it's absolutely necessary. But I'm struggling to see the extreme short and long term consequences of a society which submits their private data in this manner.

Your suggestions about charging more for certain customers already happens, on airline ticket websites. But it's not a dire consequence, it's a tacky, classless act by greedy shortsighted people.

Sometimes I feel like the people collecting personal details really don't have any power at all. That personal details are overvalued and simply attract funding for these companies.


Concrete examples are difficult in the same way that concrete examples for the abuses of "non-democratic societies" are difficult. I mean, it's not necessarily difficult to find some, but it's difficult to see the big picture from most examples.

Yes, FOX news already happens, but I would argue it's not a good thing, and making it more effective is thus probably even worse?

And yes, I guess much of the risk in a way is "tacky, classless acts by greedy shortsighted people", but that does not mean that they don't have any real consequences. Corruption is similar - and the effects in some economies are quite devastating, even though the individual bribes might not be that expensive. Big effects can arise from small individual inconveniences.

But I also think that much of the risk lies in the future, with improved analysis algorithms. I think a reasonable model to assume is one of computers that can think and learn similarly to a human, just with much higher input bandwidth for simple facts and a somewhat limited free reasoning ability. That assumption may go a bit too far, but I think it's still a much better model than thinking of it as an improved spreadsheet. Look, for example, at Google Translate - that is in essence a computer learning the translation between languages from humans, without actually being taught anything explicitly. It's no big magic, and yet the results are quite good overall.

And private companies aren't the only ones playing that game, of course, the NSA has a similar power dynamic, and the borders aren't all that clear anyhow, of course, as any data piles that private companies hold tend to also attract intelligence agencies and the like.

But let me try and show some concrete examples of where personal information is or could be used in order to gain power:

In the political arena, I think that gerrymandering is a good example: Parties use known correlations between the personal information they hold and voting behaviour in order to increase their chances of winning the election (instead of making the election as representative as possible, which is what would make for a functioning democracy).
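
As a toy sketch (every number here is invented, Python used purely for illustration) of why predicting individual voting behaviour is so valuable to whoever draws the district lines:

    # Toy gerrymandering arithmetic; all numbers are made up.
    # 1000 voters: 400 predicted to favour party X, 600 to favour party Y,
    # split into five districts of 200 voters each.

    def seats_won_by_x(districts):
        # Count districts where X's predicted voters outnumber Y's.
        return sum(1 for x, y in districts if x > y)

    # Proportional drawing: every district mirrors the overall 40/60 split.
    proportional = [(80, 120)] * 5
    # X draws the lines instead: "pack" Y into two districts, spread X thin elsewhere.
    gerrymandered = [(10, 190), (10, 190), (127, 73), (127, 73), (126, 74)]

    print("proportional drawing:", seats_won_by_x(proportional), "of 5 seats for X")    # 0 of 5
    print("gerrymandered drawing:", seats_won_by_x(gerrymandered), "of 5 seats for X")  # 3 of 5

Same electorate, same predictions, different map: the minority party wins. The better the per-person predictions, the tighter the margins a party can safely draw.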

Or a company could primarily fire people who have predispositions for certain illnesses that could reduce their efficiency later on, or it could, right from the beginning, only hire those who are not affected. If the pool of workers is large enough, that reduces costs. And as a company needs to be competitive, it actually might not even be able to avoid this once competitors start such a practice.

Similarly for insurance companies: From the perspective of the insurance company, their financial goal in a competitive market is to get rid of any customers that will cost them more money than they pay, so whatever data they are able to get their hands on, they will probably try to use for predictions, and as above they will be forced to do so once some competitor does it. From the perspective of society as a whole, though, insurance is particularly important for exactly those expensive cases, as that stabilizes the social structure, while an insurance industry that only insures people who don't need insurance is essentially worthless for society.
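
To make that concrete, here is a hypothetical sketch of the screening step such an insurer would be tempted to run; the applicants, thresholds, and cost model are all invented:

    # Hypothetical risk screening by an insurer; all data and numbers are invented.
    applicants = [
        {"name": "A", "age": 34, "chronic_condition": False, "annual_premium": 1200},
        {"name": "B", "age": 58, "chronic_condition": True,  "annual_premium": 1200},
        {"name": "C", "age": 45, "chronic_condition": False, "annual_premium": 1200},
    ]

    def predicted_annual_cost(person):
        # Toy cost model: a baseline plus crude surcharges for correlated risk factors.
        # Any extra personal data the insurer gets hold of simply becomes another term here.
        cost = 400
        cost += max(0, person["age"] - 30) * 15
        if person["chronic_condition"]:
            cost += 2000
        return cost

    # Keep only the applicants expected to be profitable; drop the rest.
    accepted = [p for p in applicants if predicted_annual_cost(p) < p["annual_premium"]]
    rejected = [p["name"] for p in applicants if p not in accepted]

    print("accepted:", [p["name"] for p in accepted])
    print("rejected:", rejected)   # exactly the people who most need the insurance

The more personal data flows into the cost model, the sharper the selection: the people who actually need insurance are precisely the ones who get priced out.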

Or suppose a totalitarian leader gets elected. No easier way to make sure that no one challenges your power than to rank the social graph of your country by number of edges and put in jail anyone who is too well-connected. One important historical case of this type came after Nazi Germany had invaded the Netherlands, where they had all the census information on Hollerith punch cards, including a person's religion. That information was collected without any bad intentions in mind, and yet it was ultimately used to easily find the Jews and kill them.
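
As a purely hypothetical sketch (the graph and names below are invented) of how cheap that ranking is once somebody already holds the communication metadata:

    # Ranking people by how well-connected they are in a communication graph.
    # The edges are invented; think call records or chat metadata.
    from collections import defaultdict

    edges = [
        ("ana", "ben"), ("ana", "carl"), ("ana", "dora"),
        ("ben", "carl"), ("dora", "emil"), ("ana", "emil"),
    ]

    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1

    # The "too well-connected" people, most connected first.
    for person, n_contacts in sorted(degree.items(), key=lambda kv: -kv[1]):
        print(person, n_contacts)

A few lines over data that was already collected for other purposes; that's what a pile of data being a pile of power looks like in practice.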

Or remember the case of Daniel Ellsberg? Nixon's people broke into his psychiatrist's office in order to try and steal his file, so they could use information from it to discredit him.

Also, how about the use of cellphone location data for drone strikes against people who have had no chance to defend themselves in court - what the US government calls "targeted killings"?

Well, I guess that's enough for now ... ;-)


Thank you for the more detailed examples.

I agree that your examples present a worse case than I had initially thought. Although I wonder if there are positive benefits for society that balance those out (I can't think of any in particular).

So if there is a net overall negative effect for society, how big is the effect? Is it on the scale of a nuclear war, or more along the lines of the anti-vaccination movement (causes real problems, but not the end of the world)?


Well, on the one hand, there are positive effects of the technological development, such as easy communication for people, which in turn might help strengthen social structures. Those positive effects, I would think, do not technically depend on such a centralized structure, though; they could instead be delivered by federated or peer-to-peer systems with much the same benefits but without the centralization, using cryptography where possible to protect information from eavesdroppers.

Then, well, yeah, arguably there are areas where lots and lots of centralized data collection in principle might be useful for solving real problems. For example, I would imagine that epidemiological studies would be much easier if researchers had access to all medical records of all people, and possibly that could be useful for fighting certain diseases. But then again, we do have some rules in place that allow collection of such data for the really bad stuff, and statistical analysis of anonymized data, so maybe we aren't really losing all that much.

I think the overall effect is more at the catastrophic end, though I would say it's more of a cold war than a nuclear war, at least in the short term: surveillance does not usually kill you directly, but it can blow up with horrible consequences.

BTW, your anti-vaccination example might be chosen badly: If the anti-vaccination movement were to gain traction with a majority of people, that could indeed be pretty close to the end of the world, at least the world as we know it. It's only a relatively minor problem (on a societal scale) because relatively few people are taken in by it.



