
You would be surprised just how few people know Facebook owns WhatsApp. When I mention it to my non-tech friends, they are first surprised, then nod their heads like it's no big deal, and then a few weeks later exclaim utter surprise at some new privacy intrusion.

Someone should start a project with the sole purpose of mining all kinds of personal data about FB employees from Facebook/Google and publishing it as a Kaggle dataset. Wonder how they would feel about that?




I'm not sure how you see that helping.


The main issue, which you and I both see, is the sheer asymmetry of the whole thing. We are in this weird situation where the individual, the typical cognitive miser who even on his/her best day cannot possibly take all the preventive actions, is up against tireless machines with perfect memory and extraordinary pattern-recognition abilities, working all day to mine just that little bit more information to then hand to the advertisers.

But I see your point, and certainly would like to see more constructive suggestions than mine.


I see so many potential ways of aggregating this kind of information in massively privacy-intrusive ways on a day-to-day basis. And it's terrifying how many of them are stopped by nothing more than my unwillingness to sacrifice my morals for it.

Because I know very well how easy it is for people to think "oh, well, but that one little thing isn't so bad" when faced with bills to pay or a raging boss. Many of these things really aren't all that bad in isolation. Except it doesn't take all that many "one little things" before you have a total privacy disaster.


If employees are having trouble saying "no" to unsafe, unethical, or unlawful projects, then a professional association or union is needed. A professional association can create duty requirements external to a company; it's easier to say no to your boss if you have the excuse that "as a member of $ORG, I have to follow $ETHICS_RULE".

Alternatively, a union can put pressure on companies never to ask for certain things or to meet a standard on any privacy issue. Unions are usually seen with hostility in the tech industry, but they are just another tool; a union can be made for specific purposes and ignore e.g. wages or anything else.


If there is one thing our world does not lack, it's amoral people. Would you prefer that Facebook only hire them?


How often do you see doctors being hired who are not members of the AMA (or similar professional associations)? Their Code of Medical Ethics[1] isn't perfect, and certainly there are individuals who have ignored it for $REASONS, but at least they have created a culture where it is expected that doctors will at least try to avoid unethical behavior.

> only hire them?

I suspect this is the knee-jerk hostility toward unions I was referring to. If a strong union were created that only addressed ethical behavior, how long would Facebook be able to hire from a dwindling pool of non-members? The entire point of a union is that it's a way to put pressure on specific business practices.


I see a few problems.

Dwindling pool of non-members: Facebook is an especially bad example here, because they have enough money and clout to get around this.

How often do you see doctors being hired that are not members of the AMA: Doctors need to be on location, but this restriction doesn't apply to software. Facebook can always find talent in a country that doesn't have an 'AMA.'


More importantly: you don't need to hire only amoral people. You just need enough people with "flexible" enough morals, in suitable positions, to justify actions that in themselves may not even seem particularly amoral, so that certain types of functionality can get built without having to hand the work to the staunch defenders of morality...

In most organisations "everyone" will know who the "difficult" people are when it comes to privacy and other issues. Sometimes that means they are the ones you go to when you e.g. want to be certain everything is right. But if you have something you think is OK, yet suspect they will raise issues with it, you just go to someone more "flexible" in the organisation instead.

Unless the organisational culture itself strictly punishes this kind of behaviour and rewards protecting privacy even in instances where doing so might hurt revenue, there will be plenty of room for amoral people to find each other and "work around" safeguards.


Day 1: "Here at Facebook, we're only hiring amoral people from now on!"

Day 87: Facebook declares bankruptcy. None of the money can be found. The servers have already been stolen by the surviving employees. Administrators arrive at HQ to find only a few broken chairs and a vast pile of shredded paper.

(to explain the joke, there is a downside to hiring amoral people)


Amoral people aren't always stupid. Amoral cops refuse bribes when the (probability of being caught) * (cost of losing their job) is above the bribe amount, and parasitic employees know they will earn more long term if they don't kill the host company.
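(Illustrative numbers only: with, say, a 10% chance of being caught and a $100,000 cost of losing the job, the expected cost of taking a bribe is $10,000, so a purely self-interested cop turns down anything smaller.)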

If Facebook takes care to only hire smart amoral people they will last much more than 87 days.


> If there is one thing our world does not lack, it's amoral people. Would you prefer that Facebook only hire them?

If they are currently hiring only amoral people and people who are afraid of expressing their moral outrage, then there is absolutely no difference from just hiring amoral people to start with.


Isn't there? Do you see no possibility that the latter group might find cause to overcome their fear?

I mean, to be clear, I still think this whole line of discussion around the imaginary (im/a)morality of Facebook employees is pretty far off base. But the question bears asking all the same.


The problem is that it is not black and white. People will often get presented with some hair-raising proposition, turn it down, and later get presented with something slightly bad and go "well that's much better" and consider it acceptable even if perhaps it's pushing boundaries.

I agree with you. E.g. in the UK we have the BCS, which does have ethical rules you are expected to know and apply (its membership is just a small proportion of the UK tech industry, though, in part because it is not prestigious enough for e.g. employers to ask for, while the membership requirements make joining a hassle for a lot of people), but at the same time it is not sufficient.

Especially given that a lot of things first become truly problematic in aggregate.

E.g. developer #1 gets asked to pull in the phone contact list to tie your local contacts to your Facebook friends, to enable extra functionality (let's say a "call" button when you view their profile) that seems entirely benign.

Then developer #2 gets asked to match on phone numbers that have already been pulled in, possibly without even being aware that the phone numbers he is working on are not necessarily just phone numbers of Facebook friends but also unrelated contacts.

You can say that they should have verified, but it is often very easy to assume that it's fine and not think about the consequences. E.g. it doesn't seem so unreasonable to suggest a friend-of-a-friend. The problem in the article is that it is not suggesting friend-of-a-friend but contact-of-a-contact, which is an entirely different relationship. But if you're told "here you can find a bunch of phone numbers for each user; build a friend-of-a-friend recommendation feature", it is not that strange if people assume it's actually friend-of-a-friend - people like to assume the best.
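To make that distinction concrete, here's a minimal sketch (Python, with made-up data purely for illustration; not anything Facebook actually runs) of how differently the two relationships behave:

    friends = {            # explicit, mutual friendships
        "alice": {"bob"},
        "bob": {"alice", "carol"},
        "carol": {"bob"},
    }

    uploaded_contacts = {  # phone numbers pulled from each user's address book
        "alice": {"+15550001", "+15550002"},  # +15550002 is her doctor's number
        "dave": {"+15550002"},                # dave has the same doctor saved
    }

    def friend_of_friend(user):
        # Walks only explicit, mutual friendship edges.
        direct = friends.get(user, set())
        return {f2 for f1 in direct for f2 in friends.get(f1, set())} - direct - {user}

    def contact_of_contact(user):
        # Links any two users whose uploaded address books share a phone number,
        # regardless of whether either is friends with that number's owner.
        mine = uploaded_contacts.get(user, set())
        return {other for other, nums in uploaded_contacts.items()
                if other != user and mine & nums}

    print(friend_of_friend("alice"))    # {'carol'} - a real social connection
    print(contact_of_contact("alice"))  # {'dave'}  - they merely share a doctor

The second function will happily link two strangers who simply have the same dentist, doctor, or plumber in their address books, which is exactly the aggregate problem described above.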

Here's an example from my own past, that I did stop, but only at the last minute, when I realised what was about to happen:

An old boss asked me for a database dump from a "sort-of-still-client" that was leaving us. Nothing odd about that - they kept asking for more up-to-date copies to make their migration easier, and kept paying us for a year after they'd migrated their site in order to be able to continue to use their old reporting facilities.

So I prepared the database dump. Then I asked him how to deliver it, and he asked me to pass it to X. X was not the client, but someone in a new corporate parent. If my boss had asked me to deliver it to him rather than to X, I'd have done it without further questions; he would have passed it to X and the damage would have been done.

What X wanted to do was mine it for potential customers. The almost-ex-client was not in any way competing with the new corporate parent, so it would not have harmed them, but apart from likely violating our contracts with them, it was also a blatant Data Protection Act violation (UK).

My former boss thought this wasn't a problem because we were passing the data internally in the same company and we held the data in our system legally anyway. But the point is the data had been provided by the customers of our client for a specific purpose, and was handed to us for a specific purpose, and that purpose no longer existed. We certainly had not been given permission to use the data for sales. It was hair-raising when I realised what he wanted to do.

He accepted it when I explained why, but it was rather shocking that it took an explanation for him to realise it in the first place.

He was stupid enough to think his suggested use was remotely ethical, and that's the only reason I caught it: if he'd realised how unethical (and illegal) it was and still wanted to do it, he'd have asked me to provide the data to him, which I would have - that'd have been routine. If he'd asked me to put it up for download and provide a username and password, I also would have, assuming reasonably enough that he was intending to pass that info to the client. After that incident, though, I started being more sceptical about providing him with data without first knowing the purpose, and started making sure the client had actually requested it.


It is easy to imagine that the people who build and maintain Facebook conceive of their behavior as immoral. I'm not sure it is accurate or helpful.


I would like to have more constructive suggestions to offer, too. It's not a simple problem, though, and it will not be quickly solved. Threatening Facebook employees (doxing people is a threat) does not seem likely to make anything better.


Well, Facebook is "doxing" non-members by virtue of shadow profiles and by encouraging people to tag everybody in pictures. Counterintelligence could be a valid way to preserve a democratic society.


The cases aren't parallel. A shadow Facebook profile exists that describes me, but it would be absurd to imagine that Facebook will use this information to, for example, send a SWAT team to my house to perform a forced entry - something which has been known to result from the kind of action here discussed. If you make available the necessary information for 4chan and like ilk to do such things, 4chan and like ilk may very well then do so, simply because to do so will briefly amuse them. Is that something for which you're comfortable with the idea of being responsible?

Don't get me wrong. I have no love whatsoever for Facebook, and I would very much like to see a world where no Facebook does or even can exist. But there's a difference between recognizing the problems that result from Facebook's existence, and imagining Facebook and its employees to be deliberately inflicting such problems on people and thus deserving of threatening, even violent, action in imagined response.


Your aversion to threatening employees reminds me a bit of the old "just following orders" canard.

Developers are not sweatshop workers beholden to the company store. They have a plethora of employment options. If they willingly choose to work for such a company, the case could be made that they have made themselves legitimate targets for having made this choice.


That case could indeed be made. It has been made in the past, many times, with results that hardly amount to an endorsement. But perhaps you feel differently. If so, I would urge you to consider the possibility that immoral actions, taken in response to immoral actions, do not thereby become more moral. There's also the more utilitarian concern that threatening people in this fashion is not likely to engender sympathy among the undecided, or those who have simply not considered the question, and it most certainly will not engender sympathy among those whom you choose to target.

I might also counsel a certain restraint in your rhetoric, such that you fight shy of hyperbole such as likening Facebook to the NSDAP; ideally that would be your lookout and no one else's, but since we're arguing at least nominally on the same side of the issue, your statements reflect somewhat on mine, and I would prefer they not do so negatively.


Your aversion to threatening employees reminds me a bit of the old "just following orders" canard.

It's not that. It's that in this very short life we have, it's not only not helpful (in the longer run) to pursue actions which knowingly hurt people for the sake of some perceived greater good (unless absolutely necessary) -- it leads one down a very dark path.

My solution? I'd prefer to educate people about the simple fact that most of these social media sites just don't do very much to improve our lives, are a huge soul-suck and time sink generally, and basically not worth the gargantuan amounts of time and emotional energy we invest in them.

So that eventually FB, WhatsApp and all the others will hopefully just die of starvation without a single shot fired (or employee being threatened or doxxed).


It's a moral hazard¹, or possibly an externality; the people writing the algorithms that violate people's privacy are not themselves the victims.

Normally in a market system you want to keep the chain between cause and damage short enough to be comprehensible for the people causing it; otherwise, there's no good way to make them avoid it.

¹ https://en.wikipedia.org/wiki/Moral_hazard


Aren't they? How common do you imagine it to be, among Facebook employees, not to have a Facebook account?


Of course they have FB accounts. That isn't the point. The authors of these algorithms introduce - often without conscious intent - their own biases. They bring their own background, morals, etc when they design an algorithm.

This is a general problem with creating an algorithm to supplement or replace anything previously done by humans. Even if the algorithm is given accurate and unbiased data (which is rare), the choice itself to use an algorithm in the first place and the design of the algorithm also contain bias.

Sometimes this bias is intentional, as with "redlining", where housing loans were denied to black applicants using various proxies for race. I suspect that in most cases the bias is accidental, which is why it is very important to check the results carefully for any unintended bias. In a situation like Facebook's, simply asking users first (opt-in) whether they would like to participate in "local friend discovery" would be a great start.
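As a rough sketch of what such an opt-in gate could look like (hypothetical field and function names, not Facebook's actual code):

    # Hypothetical sketch: only users who have explicitly opted in are ever
    # handed to the contact-matching step; the default is opted out.
    def eligible_users(all_users):
        return [u for u in all_users if u.get("friend_discovery_opt_in", False)]

    def run_friend_discovery(all_users, match_contacts):
        for user in eligible_users(all_users):
            match_contacts(user)  # non-consenting users never reach matching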


You're not wrong. Does it seem likely, though?

I mean, at this point you're asking Facebook to do something which is directly inimical to its interests, in that people opting out of "local friend discovery" truncates its social graph, or at least reduces the weights it can put on some edges, and thus makes its information less valuable for targeted advertising.

It would be nice to imagine that the people who make such decisions would make that one out of the goodness of their hearts. I do not think this likely. In the absence of a strong financial incentive to do otherwise, I would expect to see things go on pretty much as they have been, i.e., getting gradually worse over time. Threatening Facebook employees with physical harm seems like a severely counterproductive strategy toward applying such an incentive, but I'm not sure what to suggest in its place, because I've tended more in the direction of finding ways to convince people the problem actually exists - itself a regrettable necessity.


> people opting out ... makes its information less valuable for targeted advertising

I have very little sympathy for a business model based on surveillance and manipulation. Figuring out how to generate revenue is Facebook's problem. Lots of unethical behavior would be valuable in various business models. Facebook can police themselves, or they will eventually invite (probably less desirable) legislation crafted by pissed-off people.

> Threatening Facebook employees with physical harm

For the record, in almost all circumstances I am a pacifist. I would never advocate physical harm. That said, I have nothing against revealing the private information of the people who insist on doing the same as a business model.

> directly inimical to its interests

> I'm not sure what to suggest in its place

That's easy; you arrange it so they want to do the necessary due diligence of making sure any new algorithm is both necessary and safe. We accomplish this with liability. Data needs to be toxic. If you collect data and store it for long periods of time, or aggregate it with other types of data, then you are responsible for problems that arise from your databases. In the case of this doctor, if any problems happen to her patients because of Facebook's disclosures, then Facebook is the liable party.

They can decide the level of safety required. Either transmit the data for the users blindly and enjoy immunity like a common carrier, or inspect the data and pay for the problems that derive from that inspection.

> finding ways to convince people the problem actually exists

That's always a good idea, but in the meantime it is not the responsibility of the user to understand information theory before critiquing Facebook's claims. Blaming the victim is never the right answer.


> I have very little sympathy for a business model based on surveillance and manipulation.

I have none whatsoever. But Facebook, as it is today, is a thing that is. I don't see that imagining the current state of affairs to be other than it is helps anything. I'm also not hugely in favor of looking to government for a solution to this problem, because the United States government, for all its many and various qualities, has an extremely poor track record on legislation related to technology, and I do not see any reason to imagine their response to Facebook would buck the trend. At best, it'll be ineffective in its stated aim. At worst, it will be that and also inimical to a lot of other businesses which don't actually belong in its crosshairs to begin with.

> I have nothing against revealing the private information of the people who insist on doing the same as a business model

This implies an inaccurate conception of Facebook's business model, which has really nothing to do with revealing private information in the way you describe. I don't think Facebook lies when it says that such disclosures are accidental. I don't think that honesty is any excuse here, but you seem to be imputing evil where there's no reason to believe any exists; the problem is not that Facebook schemes at inflicting misery, but that its financial drive to monopolize an ever larger swath of human interaction increasingly creates misery as a side effect. We can acknowledge this, and work to put an end to it, without erroneously painting anyone as a monster.

You claim, too, not to advocate physical harm, and to be in general a pacifist. Those are nice claims to make. I hope you don't find yourself in the position of having to defend them after a release of Facebook employees' personal information results in someone being SWATted, or driven to suicide, or otherwise assaulted, battered, murdered, or likewise mistreated, as a direct result of an action with which you say you see nothing wrong. You might protest at that time that your rhetoric is unrelated, and your responsibility nonexistent. After all, I'm sure you yourself would never actually dox anyone, even if you do say it's fine to do so. Such protestations are not likely to find many sympathetic interlocutors.

> We accomplish this with liability. Data needs to be toxic.

This is an excellent point! It deserves to be found in better company than you have given it here.

> Blaming the victim is never the right answer.

I invite you, quite seriously, to review my HN comments on the subject of Facebook - they are quite plentiful, you'll have no trouble finding them - and identify any case in which I may accurately be said to have blamed the victim. My entire perspective on this matter is what it is because I am a victim! How do you suggest anyone go about making any kind of beneficial change more likely, if no one recognizes the need for it? How do you suggest such recognition come into existence, if not by finding ways to explain to people that there is a problem? Would you rather just sit back and wait until there's enough of a critical mass, of people who've been chewed up and spat out by the gears of Facebook's advertising data generation machine, for a groundswell of public opinion to arise organically? That seems a bit cruel to me.


We probably agree on quite a bit. I'm not trying to accuse you of victim blaming - or anything else - so if I have implied otherwise I apologize; that was not my intention. It wouldn't be my first miscommunication.

My reference to victim blaming was targeted at the ideas in the thread - often stated by Facebook and others in the surveillance industry - that people should know not to use Facebook even when they have not had an opportunity to learn how modern technology works. Education is a great idea, but that takes time. (I've spent 20+ years trying to educate people about the internet, encryption, and privacy in the modern age.)

> to be in general a pacifist. Those are nice claims to make.

I have the scars and hospital bill to prove it. Fortunately I was lucky and the (tool assisted) beating didn't do a lot of permanent damage.

> an action with which you say you see nothing wrong.

I never said I saw nothing wrong with it, only that Facebook should have to accept what they do to others.

I do understand Facebook's business model. I also understand some of the VP-level people involved, because I taught some of them how to program. These are people who are perfect examples of being born into "privilege", who need some real experience of how the rest of the world actually lives. I don't wish them harm, but I won't shed a tear if they get a harsh dose of reality.

(I've probably not worded this optimally; I'm trying to restrain my language because these people piss me off)

> I don't think Facebook lies when it says that such disclosures are accidental.

I'm sure they're telling the truth. I'm suggesting that they are being negligent in their use of automation. If they had any experience of the problems most people face in the real world, they would have known that problems like the one at this doctor's office would happen.


I suspect you're right about the extent to which we probably agree. I also don't think it's so much that you implied I was victim blaming, as that I'm a bit more raw on this topic than I had suspected, and that made it easy for me to find cause for indignation where none in fact exists. I'll keep an eye on that in future; thanks for taking it so equably.

> I never said I saw nothing wrong with it

You said you have nothing against it. If there's a substantive difference between the two, I fail to see it. And while I can only consider it honorable, if admittedly also incomprehensible on a personal level, to choose to submit to a beating rather than betray a personal conviction on the subject of pacifism, it still seems at odds with such a conviction to advocate action which is well known often to result in the infliction of serious harm upon those who are its maleficiaries. I suppose it's possible there is a way to reconcile those, but if so, that's something else I currently fail to see.

On the other hand, it's clear that your perspective on at least some of the people we're discussing is vastly better informed than mine, and intellectual honesty would require that I respect that fact even were I otherwise disinclined to do so. The impression I've gathered in general is that most people who work for Facebook genuinely believe they're improving the world by doing so. Would it be accurate to say that that's especially true for the VP-level people you describe? And in general, it would be interesting to hear whatever else you'd like to describe about Facebook's internal culture and the effect it has on people who partake of it.

> I'm suggesting that they are being negligent in their use of automation

Another point on which we agree. I don't know that it merits the kind of punishment you seem willing to countenance. But I gather also that you're angry about this, in a way that I'm not, and that can easily produce a certain clarity of perspective.


"Aren't they?"

No, they are not. For example, it is now common knowledge that Mark Z bought up all the nearby houses in every direction to get more privacy. [1] Do you and I have similar access to resources?

Suppose your identity is stolen and you find yourself penniless because someone hacked into Facebook which also affected your friend who works at Facebook. Who is more likely to be in great financial distress the next day? Who is more likely to know the full impact of the situation?

Also, if someone in Facebook were to be negatively affected in some way, they probably have friends inside who can help them out. Do you and I have a direct line to a similar friend? In fact, we are likely to be the very last people to know of any such exploitation.

Besides, the closer you are to the algorithm, the more likely that you know how to circumvent it, even exploiting some simple bugs that others are not aware of.

And how about opting out? As a technologist, how hard do you think it would be for an insider to add himself/herself to the opt-out database, and also make sure that there were no hiccups in the process? Contrast that to something as simple as opting out of junk mail - have you been 100% successful?

I just made four observations about how you and I do not possess the same advantages as an insider at Facebook. What are the odds that something can slip through four different test cases you set up and still turn into a bug in production? Minimal, don't you think?

You make really good points about not countering immoral action with more immoral action. But your notion that FB employees could somehow become unwitting victims of their own technology sounds seriously far-fetched to me.

[1] http://www.businessinsider.in/Mark-Zuckerberg-Just-Spent-Mor...


I don't know that Mark Zuckerberg's access to resources typifies that of Facebook employees in general, but I see what you're saying, and you make good points here which I'll have to consider at leisure.


Air pollution is an externality even when caused by people who breathe.




