
The main issue, which you and I both see, is the sheer asymmetry of the whole thing. We are in this weird situation where the individual - the typical cognitive miser who even on his/her best day cannot possibly take all the preventive actions - is up against tireless machines with perfect memory and extraordinary pattern-recognition abilities, working all day to mine just that little bit more information to hand to the advertisers.

But I see your point, and certainly would like to see more constructive suggestions than mine.




I see so many potential ways of aggregating this kind of information in massively privacy-intrusive ways on a day-to-day basis. And it's terrifying how many of them are stopped only by my unwillingness to sacrifice my morals over it.

Because I know very well how easy it is for people to think "oh, well, but that one little thing isn't so bad" when faced with bills to pay, or a raging boss. Many of these things really aren't all that bad in isolation. Except it doesn't take all that many "one little things" before you have a total privacy disaster.


If employees are having trouble saying "no" to unsafe, unethical, or unlawful projects, then a professional association or union is needed. A professional association can create duty requirements external to a company; it's easier to say no to your boss if you have the excuse that "as a member of $ORG, I have to follow $ETHICS_RULE".

Alternatively, a union can put pressure on companies to never ask for certain things, or to meet a standard on any privacy issue. Unions are usually seen with hostility in the tech industry, but they are just another tool; a union can be made for specific purposes and ignore e.g. wages or anything else.


If there is one thing our world does not lack, it's amoral people. Would you prefer that Facebook only hire them?


How often do you see doctors being hired who are not members of the AMA (or similar professional associations)? Their Code of Medical Ethics[1] isn't perfect, and certainly there are individuals who have ignored it for $REASONS, but at least they have created a culture where it is expected that doctors will at least try to avoid unethical behavior.

> only hire them?

I suspect this is the knee-jerk hostility toward unions I was referring to. If a strong union were created that only addressed ethical behavior, how long would Facebook be able to hire from a dwindling pool of non-members? The entire point of a union is that it's a way to put pressure on specific business practices.


I see a few problems.

Dwindling pool of non-members: Facebook is an especially bad example here, because they have enough money and clout to get around this.

How often do you see doctors being hired that are not members of the AMA: Doctors need to be on location, but this restriction doesn't apply to software. Facebook can always find talent in a country that doesn't have an 'AMA.'


More importantly: you don't need to hire only amoral people. You just need enough people with sufficiently "flexible" morals, in suitable positions, willing to justify actions that in isolation may not even seem particularly amoral, so that certain types of functionality get built without ever being handed to the staunch defenders of morality...

In most organisations "everyone" will know who is "difficult" when it comes to dealing with privacy and other issues. Sometimes that means they are the ones you go to when you want to be certain everything is right. But if you have something you think is OK, yet suspect they will raise issues with it, you will just go to someone more "flexible" in the organisation instead.

Unless the organisational culture itself strictly punishes this kind of behaviour and rewards protecting privacy even in instances where doing so might hurt revenue, there will be plenty of room for amoral people to find each other and "work around" the safeguards.


Day 1: "Here at Facebook, we're only hiring amoral people from now on!"

Day 87: Facebook declares bankruptcy. None of the money can be found. The servers have already been stolen by the surviving employees. Administrators arrive at HQ to find only a few broken chairs and a vast pile of shredded paper.

(to explain the joke, there is a downside to hiring amoral people)


Amoral people aren't always stupid. Amoral cops refuse bribes when the (probability of being caught) * (cost of losing their job) is above the bribe amount, and parasitic employees know they will earn more long term if they don't kill the host company.

If Facebook takes care to only hire smart amoral people they will last much more than 87 days.
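The "smart amoral actor" argument above is just an expected-value calculation. A toy sketch (all numbers invented for illustration):

```python
# Toy expected-value model of the rational amoral cop described above.
# All figures are made up for illustration.

def accepts_bribe(bribe, p_caught, cost_if_caught):
    """A purely self-interested actor takes the bribe only when it
    exceeds the expected cost of being caught."""
    return bribe > p_caught * cost_if_caught

# A $1,000 bribe against a 5% chance of losing a $100,000 job:
print(accepts_bribe(1_000, 0.05, 100_000))   # 1000 > 5000  -> False
# The same bribe when detection is very unlikely:
print(accepts_bribe(1_000, 0.005, 100_000))  # 1000 > 500   -> True
```

Which is exactly why "amoral" doesn't imply "self-destructive": as long as getting caught is expensive and likely enough, the rational move is to behave.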


> If there is one thing our world does not lack, it's amoral people. Would you prefer that Facebook only hire them?

If they are currently hiring only amoral people plus people who are afraid of expressing their moral outrage, then that is no different from just hiring amoral people to start with.


Isn't there? Do you see no possibility that the latter group might find cause to overcome their fear?

I mean, to be clear, I still think this whole line of discussion around the imaginary (im/a)morality of Facebook employees is pretty far off base. But the question bears asking all the same.


The problem is that it is not black and white. People will often get presented with some hair-raising proposition, turn it down, and later get presented with something slightly bad and go "well that's much better" and consider it acceptable even if perhaps it's pushing boundaries.

I agree with you. In the UK, for example, we have the BCS, which does have ethical rules you are expected to know and apply. But its membership is just a small proportion of the UK tech industry, in part because it is not prestigious enough for employers to ask for, while the membership requirements make joining a hassle for a lot of people. And even then, it is not sufficient.

Especially given that a lot of things only become truly problematic in aggregate.

E.g. developer #1 gets asked to pull in the phone contact list to tie your local contacts to your Facebook friends, to enable extra functionality (let's say a "call" button when you view their profile) that seems entirely benign.

Then developer #2 gets asked to match on phone numbers that have already been pulled in, possibly without even being aware that the numbers he is working with are not necessarily just the phone numbers of Facebook friends, but also of unrelated contacts.

You can say that they should have verified, but it is often very easy to assume that it's fine and not think about consequences. E.g. it doesn't seem so unreasonable to suggest friends-of-friends. The problem in the article is that it is not suggesting friend-of-a-friend but contact-of-a-contact, which is an entirely different relationship. But if you're told "here is a set of phone numbers for each user; build a friend-of-a-friend recommendation feature", it is not that strange if people assume it really is friend-of-a-friend - people like to assume the best.
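To make the failure mode concrete, here is a toy sketch (all names, numbers, and structures invented) of how "match on phone numbers" silently turns friend-of-a-friend into contact-of-a-contact:

```python
# Invented example: developer #2 thinks the table holds numbers of confirmed
# friends, but it actually holds every number scraped from address books.
uploaded_contacts = {
    "alice": {"+15551001", "+15551002"},  # bob's number, her dentist's number
    "bob":   {"+15551000", "+15551003"},  # alice's number, his landlord's number
}
# Only numbers belonging to actual users resolve to an account.
phone_to_user = {"+15551000": "alice", "+15551001": "bob"}

def recommend(user):
    """Suggest 'friends of friends' by walking shared phone numbers.
    Non-users (the dentist, the landlord) ride along with real friends."""
    suggestions = set()
    for number in uploaded_contacts[user]:
        owner = phone_to_user.get(number)  # resolves only for actual users
        if owner and owner != user:
            for n in uploaded_contacts[owner]:
                if phone_to_user.get(n) != user and n not in uploaded_contacts[user]:
                    suggestions.add(n)
    return suggestions

print(recommend("alice"))  # {'+15551003'} - bob's landlord: a
                           # contact-of-a-contact, not a friend-of-a-friend
```

Nothing in the code looks wrong in isolation; the problem only appears once you know what the input table really contains.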

Here's an example from my own past that I did stop, but only at the last minute, when I realised what was about to happen:

An old boss asked me for a database dump for a "sort-of-still-client" that was leaving us. Nothing odd about that - they kept asking for more up-to-date copies to make their migration easier, and kept paying us for a year after they'd migrated their site in order to continue to use their old reporting facilities.

So I prepared the database dump. Then I asked him how to deliver it, and he told me to pass it to X. X was not the client, but someone at a new corporate parent. If my boss had asked me to deliver it to him instead of to X, I'd have done it without further questions, he would have passed it to X, and the damage would have been done.

What X wanted to do was mine it for potential customers. The almost-ex-client was not in any way competing with the new corporate parent, so it would not have harmed them directly, but apart from likely violating our contracts with them, it was also a blatant Data Protection Act violation (UK).

My former boss thought this wasn't a problem because we were passing the data internally within the same company, and we held the data in our system legally anyway. But the point is that the data had been provided by our client's customers for a specific purpose, had been handed to us for a specific purpose, and that purpose no longer existed. We certainly had not been given permission to use the data for sales. It was hair-raising when I realised what he wanted to do.

He accepted it when I explained why, but it was rather shocking that it took an explanation for him to realise it in the first place.

He was stupid to think his suggested use was remotely ethical, and that's the only reason I caught it: If he'd realised how unethical (and illegal) it was, and he still wanted to do it, he'd have asked me to provide the data to him, which I would have - that'd have been routine. If he'd asked me to put it up for download and provide a username and password, I also would have - assuming reasonably enough he was intending to pass that info to the client. Though after that incident I started being more sceptical about providing him with data without knowing the purpose first, and making sure the client had actually requested it.


It is easy to imagine that the people who build and maintain Facebook conceive of their behavior as immoral. I'm not sure it is accurate or helpful.


I would like to have more constructive suggestions to offer, too. It's not a simple problem, though, and it will not be quickly solved. Threatening Facebook employees (doxing people is a threat) does not seem likely to make anything better.


Well, Facebook is "doxing" non-members by virtue of shadow profiles and by encouraging users to tag everybody in pictures. Counterintelligence could be a valid way to preserve a democratic society.


The cases aren't parallel. A shadow Facebook profile exists that describes me, but it would be absurd to imagine that Facebook will use this information to, for example, send a SWAT team to my house to perform a forced entry - something which has been known to result from the kind of action here discussed. If you make available the necessary information for 4chan and like ilk to do such things, 4chan and like ilk may very well then do so, simply because to do so will briefly amuse them. Is that something for which you're comfortable with the idea of being responsible?

Don't get me wrong. I have no love whatsoever for Facebook, and I would very much like to see a world where no Facebook does or even can exist. But there's a difference between recognizing the problems that result from Facebook's existence, and imagining Facebook and its employees to be deliberately inflicting such problems on people and thus deserving of threatening, even violent, action in imagined response.


Your aversion to threatening employees reminds me a bit of the old "just following orders" canard.

Developers are not sweatshop workers beholden to the company store. They have a plethora of employment options. If they willingly choose to work for such a company, the case could be made that they have made themselves legitimate targets for having made this choice.


That case could indeed be made. It has been made in the past, many times, with results whose nature I do not find to be an endorsement. But perhaps you feel differently. If so, I would urge you to consider the possibility that immoral actions, in response to immoral actions, do not themselves become more moral. There's also the more utilitarian concern that to threaten people in this fashion is not likely to engender sympathy among the undecided, or those who have simply not considered the question, and it most certainly will not engender sympathy among those whom you choose to target.

I might also counsel a certain restraint in your rhetoric, such that you fight shy of hyperbole such as likening Facebook to the NSDAP; ideally that would be your lookout and no one else's, but since we're arguing at least nominally on the same side of the issue, your statements reflect somewhat on mine, and I would prefer they not do so negatively.


> Your aversion to threatening employees reminds me a bit of the old "just following orders" canard.

It's not that. It's that in this very short life we have, it's not only not helpful (in the longer run) to pursue actions which knowingly hurt people for the sake of some perceived greater good (unless absolutely necessary) -- it leads one down a very dark path.

My solution? I'd prefer to educate people about the simple fact that most of these social media sites just don't do very much to improve our lives, are a huge soul-suck and time sink generally, and basically not worth the gargantuan amounts of time and emotional energy we invest in them.

So that eventually FB, WhatsApp and all the others will hopefully just die of starvation without a single shot fired (or employee being threatened or doxxed).




