Facebook recommended that a psychiatrist’s patients friend each other (fusion.net)
346 points by deep_attention on Aug 30, 2016 | 220 comments



This is one of the many dark patterns that Facebook uses. It simply does not respect any boundaries the user might wish to have in place...

Install it on your phone? Anyone you have in your phone's address book gets to see your picture under "people you may know".

Someone in your family joins Facebook and friends you? Now everyone you are friends with gets prompted about whether or not they know your family member.

Want to delete some pictures you uploaded to Facebook? It's extremely difficult and they must be deleted one by one.

Other than LinkedIn, I'd say FB is the prime innovator of UI dark patterns that exploit users' unwitting behavior for profit.

The youngest generation of internet users gets this, which is why they largely do not use Facebook. Soon they will realize that IG and WhatsApp are connected, and will avoid those too.

What's interesting to me is that the recommendations are fundamentally not useful. It's easy to look someone up by searching for their name without the privacy-invading helpful suggestions.


Some girl from a dating site Googled my phone number, found my name, searched for me on Facebook and then Facebook suggested I friend her, providing me with her full name, which I did not previously know.

If you search for someone on Facebook, then Facebook will suggest to that person that they friend you. Seems a massive privacy hole to me.


I DID NOT KNOW ABOUT THAT PATTERN

I've been aware of how Facebook attaches people to you for a long time, so I deactivated that one, made a new Facebook several years ago primarily for development purposes, different name, different email address, friended a few people from one particular circle, never installed it on my phone

Occasionally I will get random friend suggestions about people in different chapters of my life

Facebook didn't have my address book, or a big enough graph to make these connections

I hadn't considered that those might be people merely searching for my name or variations of it


I didn't know about this either, but it sounds like you may have found another possible source of the leak: patients putting the name of the psychiatrist into FB search.

If B searches for A and C searches for A, does that imply a relationship between B and C? Especially if they live nearby? Who knows :(
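To make the speculation concrete, here's a toy sketch of the kind of co-search inference being guessed at (entirely hypothetical; nobody outside Facebook knows whether search activity actually feeds the recommender, and the names and log format are invented for illustration):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical search log: (searcher, target) pairs.
searches = [("B", "A"), ("C", "A"), ("D", "E")]

# Group searchers by the profile they looked up.
searchers_by_target = defaultdict(set)
for searcher, target in searches:
    searchers_by_target[target].add(searcher)

# Any two people who searched for the same person become a candidate edge.
candidates = set()
for target, searchers in searchers_by_target.items():
    for pair in combinations(sorted(searchers), 2):
        candidates.add(pair)

print(candidates)  # {('B', 'C')}
```

In practice a real system would presumably weight such a signal by locality, search frequency, and other features rather than emitting a suggestion from a single shared search.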


In this case I'd almost certainly guess that it is through the phone number. LinkedIn is particularly creepy for this.


I noticed that too. The more often you search for someone and click on their profile, the more facebook will promote the connection both ways.

I think it's kind of interesting to 'figure out' how the facebook machine works. How a simple interaction, a location, a conversation or even a purchase on amazon shapes your news feed and the feed of people around you.


It's interesting to me that you think the youngest generation of internet users gets this. I do hope it's the case. I always thought it would be the people around at the start of the net who would get it. I was around pre-Net and remember how dangerous and scammy everything was, so I never signed up for things like Facebook or used my real name, etc. My first thought when I saw Facebook was that it's a great way to get stalked and killed, or targeted for whatever people can think of.


> youngest generation of internet users get this.

They get it because in the jungles of many junior high and high schools, anything that can get you bullied will get you bullied, and young people quickly realized that all the accidental oversharing (by themselves or by their parents, elders, etc.) was easily exploited by bullies.

Thus they came to prefer simpler networks with simpler security/sharing models and features (like automatic photo deletion) that respect user privacy.


I know it seems like it's 'dark pattern' week on HN, but not everything is a dark pattern.

Dark Patterns are user interfaces that are designed to trick users.

Facebook requests the permission to go through your stuff and if you read their data use policy, they go so far as to tell you in detail exactly the information they're taking from you, as well as how they use it.

Sure, it's a little bothersome when the information that you've given them goes farther than your personal preference, but it's not a 'dark pattern', it's just a feature that you don't like.


> it's not a 'dark pattern', it's just a feature that you don't like

It's a similar sort of dark pattern to the practice of putting the important details of a contract hidden in a massive block of text rendered in a tiny font.

Yes, they are technically being upfront about what is going on, in the same way that two pages of 8pt legal boilerplate informs the signer of the details of a written contract.

If it weren't a dark pattern it would be very easy to turn off the undesirable bits, and users would rarely be surprised by the consequences of the default settings.

Let's not forget that contrary to our poor performance on abstract logic puzzles, humans of all levels of intellect are superbly good at reasoning about potentially embarrassing social situations. Hence FB must work hard to de-emphasize the way FB actually works to make people consent to many of the default permissions. That is in my opinion the definition of a dark pattern.

It is the gray area enabled by these practices that makes FB's content interesting... because accidentally over-shared content is interesting to us about a small percentage of the people we are friends with. It's nearly a law of human nature that we are fascinated by obscure details of a small percentage of people for all sorts of reasons (sexual interest, jealousy/aspiration, schadenfreude, stalking, etc.) and we all have some small group of people who are interested in our obscure likes/posts for the same reasons.

Rarely do we overtly interact with such people (in either direction) because it is socially awkward, but FB generates revenue/engagement off of the lurking that we all do and the blindness people have that they too are the target of such lurking by others (which is why the dark pattern works)... it's what makes FB scratch a particular voyeuristic itch for people and why it's been so successful. LinkedIn works the same way but for things like job changes, promotions, etc.


> Sure, it's a little bothersome when the information that you've given them goes farther than your personal preference, but it's not a 'dark pattern', it's just a feature that you don't like.

Convincing users to accept a feature they would otherwise opt out of if they had a reasonable choice and/or fully understood the feature seems like a textbook dark pattern to me. Hiding the data collection policy inside a giant EULA or using a carrot feature as a lure is indeed intended to trick the user. The fact that many services have adopted these tactics does not change their being dark patterns; it just means dark patterns have proliferated and become the norm.


Indeed, I don't believe the problem here was that Facebook was tricking its users into handing over their phone/email contact information (i.e., this person's client list). FB is explicit about permissions in this sense, although most users agree to everything without thinking twice.

The real problem is how the information was utilized by the recommendation engine, which is known to be creepily effective at matching people (people who just met for the first time, for example). FB is investing heavily in AI here, so this is the natural outcome: the results are very effective but have some unintended side effects. The side effects are largely due to the fact that this connectivity happens in the background, outside of a place where the user can control privacy settings on particular contacts.

So I'm not sure there is an easy solution here. Mining contacts and social information is Facebook's business. It's what you hand over to use the service, and it's why many people stop using Facebook voluntarily, or carefully limit what information they allow access to. I never allow FB to access my phone's contacts, for instance, and their mobile app still works fine.


Fundamentally, though, that's the problem with the modern web: the service users get is not a transaction in the sense that the user knows what they're giving up for the service. It's all hidden under an innocent permissions check (if it's Facebook) or not said at all (if it's LinkedIn). The user provides permission for a small pittance like their email address or phone number, and it snowballs into having every want, need, and action tracked and catalogued to make the service owner money. A product not intent on tricking the user into giving up every bit of the data on their life would ask if it could use individual bits of information to serve them ads and sell their information.

It takes more than it asks, and the fine print is there to cover its ass, when in reality if users were asked about what information they were willing to share, they would be much more uptight. The users have no real idea what is happening with their data, what they've given up, or how it's used to make the company money. It may be true that most users don't care, and some might even prefer the outcome in the form of "relevant" ads (if the choice is between "irrelevant" ads and paying for the service). It certainly is transforming what is in the public sphere about people, and the lessening of privacy can certainly be used as a weapon (and it is, to the extent that it is a big powerful force arrayed against a person independently figuring out what they want to spend their resources on).


> I know it seems like it's 'dark pattern' week on HN

OK, what is a dark pattern by your definition?

Dark patterns are a relevant topic on HN because many startups are measured in terms of user engagement and the growth of their user-base.

What is the difference between advertising and information? Growth hacking techniques and clickbait? Nudges and dark patterns?

These things are interesting because the line is blurry, and many patterns (dark or otherwise) that used to work suddenly stop working. This is why banner ads worked for a while and why interruption ads are becoming more and more common, and why adblock is becoming more and more common.

The world is not static, and so there is not ever going to be a consistent definition of what constitutes a dark pattern... it depends on the audience. In the first world, most 70 year olds are now on Facebook, and they are vulnerable to many patterns that the younger generations are not.

Just as scammers send senior citizens envelopes that look like social security checks but are actually ads, Facebook offers something that looks like a way to voluntarily share information but is actually often involuntary.

I think FB should take a hard line against dark patterns and be content to grow based on the massive network effect it can get without them.


Pretty sure we need a new term if a term is being co-opted to mean something it's not.

Something like "malign algorithm", or "encroaching design" might do.


Dark patterns ask you to sit in a chair, without telling you that you can get paid to stand. (masking benefit without restricting access to the benefit; misdirection but without total erosion of trust)

This is about exploiting the exposure of unique identifiers (phone number mapped to email), and an interloping tattle-tale ratting out their correlation to the same owner.

It's something more akin to a Prisoner's Dilemma, except people aren't cognizant that "They Are The Product", so no one thinks of themselves as prisoners ratting out conspirators.
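The "tattle-tale" mechanism described above can be sketched as a union-find over identifiers: one uploaded contact card that lists both a phone number and an email merges two shadow records the service previously held separately. This is purely illustrative; the identifiers and the data structure are my invention, not anything known about FB's internals:

```python
# Union-find over identifiers. A single uploaded contact card containing
# both a phone and an email "rats out" that they share one owner.
parent = {}

def find(x):
    """Return the canonical representative for identifier x."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def merge(a, b):
    """Record that identifiers a and b belong to the same person."""
    parent[find(a)] = find(b)

# The service already knows these identifiers from different sources.
for ident in ["+1-555-0100", "me@work.example"]:
    find(ident)

# One user's uploaded contact card lists both on the same entry.
merge("+1-555-0100", "me@work.example")

print(find("+1-555-0100") == find("me@work.example"))  # True
```

The point of the sketch is that the person being correlated never acted at all; a third party's upload is what collapses the two identities into one.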


LinkedIn is pretty bad at trying, though. I got a new job not long ago; they congratulated me on it two days or so later, and even now they still keep sending me emails about my next future job. I've barely gotten my feet wet at this one... What in the world?


I had a recruiter on LinkedIn do that to me. They literally said "I know that you're starting a new job next week, but are you interested in hearing about other job opportunities?" No, but now your recruitment company is on my blacklist, thanks for the heads up.


What's their benefit to putting in the extra work to delay such emails based on your start time? What if some people actually wanted the emails regardless of a recent new job? What if they don't trust the start times that people claim?


This is exactly like this case: https://mako.cc/copyrighteous/google-has-most-of-my-email-be...

I used to freelance, and FB started showing me one of my clients as a suggestion even though I used a work email for that. How? My client must have installed the FB app on his phone, his email client must have synced my email to the phone, and now even FB had my email.


> Want to delete some pictures you uploaded to Facebook? It's extremely difficult and they must be deleted one by one.

THIS is a huge pain point for me. I would ideally like to delete all my Facebook photos and timeline/wall posts from all of history. However, I cannot find a greasemonkey or tampermonkey script which will actually accomplish this. There are a few that claim to, but none actually worked for me (outdated).

Has anyone figured this out?



I have coworkers' phone numbers on my phone so I assume that's how Facebook is recommending them to me. I like my coworkers but this is kind of creepy. Wish there was a way to turn it off, both directions.


Even if you could "turn it off" and FB would not show the recommendation to you, they would still know about the connection, and those same people would still get you as a recommendation. So "turn it off" would only mean "hide the underlying connection".

The only way to prevent this is to hide/obfuscate/limit the information that FB is accessing about you. And that would be a huge feat in and of itself.


The whole premise of FB is to "connect people", whether or not you like it, so it's not an entirely unexpected behaviour from them. It's also the reason I don't use FB, and never will.


Well it doesn't matter if you use FB or not.

I bet the patients of the doctor found each other through each having her phone number. Hey, they both are "friends" with this person; maybe they should be friends as well.

This doesn't require the middleman to have an account. You are inadvertently acting as a conduit for people to connect to one another.

FB likely has a "ghost" account for you anyway that they use to do this connection. So it is like you are using FB in some alternate universe.


and the thing that sucks for us non-FB-using people is that Facebook still has a pretty complete profile of our photo likenesses, names, emails, phone numbers, and other website profiles.


And if he joined Facebook, he would instantly be recommended many of the people he knows.


TLDR#1: The investigation still didn't reveal exactly how this happened.

TLDR#2: The recommendation to "prevent" these issues on the individual's side is: "Lisa's medical community has started recommending that patients concerned about privacy not log into Facebook or other social media accounts at medical offices, or even leave their phones in their cars during appointments."

This is about as practical as recommending people just figure out how to fly and occasionally levitate into the upper atmosphere to go out of the cell tower's range, move a few kilometers west, and then fly back down to earth to scramble all these tracking algorithms.


So basically: don't install Facebook or WhatsApp; try to use the Facebook website if you can; or better yet, don't use Facebook at all on your phone if you value your privacy.

It's sad that we are at this stage but it's mostly our fault for being so complacent with companies doing these kinds of things.

If people stopped using their service when they did these kinds of things they would change their behaviour really quickly but most people don't know or care that this is happening.


An individual's data sharing with Facebook is less of the issue, here, though. You personally not using it doesn't prevent you from becoming the common thread that ties others together.

I'm not on Facebook, but anyone who has allowed Facebook to see their own contacts, in their phone or email, has shown whether or not they are connected to me in some way. Without me ever even having an account with Facebook, they can correlate this data from users to see who is likely to know one another through a shared connection to me. Just because my particular node on the relationship tree has more blanks than it would if I were a Facebook user does not mean I don't create a node at all.

My guess for this Facebook issue in particular is that the doc potentially did absolutely nothing herself; rather, all of her patients had mail and phone contact lists that included her, and that common thread, along with the same geographic area, was enough to trigger a recommended match. In other words, this was equally likely to happen even if the doctor never had a Facebook page of her own.
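The shared-contact guess can be sketched in a few lines. This is a speculative toy model, with invented users and numbers, not Facebook's actual pipeline: the doctor uploads nothing, yet her number sitting in several patients' uploaded address books is enough to pair those patients up:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical uploaded address books: user -> phone numbers they carry.
uploads = {
    "patient1":  {"+1-555-0100", "+1-555-1111"},
    "patient2":  {"+1-555-0100", "+1-555-2222"},
    "patient3":  {"+1-555-0100"},
    "unrelated": {"+1-555-9999"},
}

# Invert the index: which users hold each number?
# Note the owner of +1-555-0100 (the doctor) never uploads anything.
holders = defaultdict(set)
for user, numbers in uploads.items():
    for num in numbers:
        holders[num].add(user)

# Any two users sharing a contact become "people you may know" candidates.
suggestions = set()
for num, users in holders.items():
    for pair in combinations(sorted(users), 2):
        suggestions.add(pair)

print(sorted(suggestions))
# [('patient1', 'patient2'), ('patient1', 'patient3'), ('patient2', 'patient3')]
```

A real recommender would presumably require more than one shared contact before surfacing a suggestion; the article suggests that in sensitive contexts even this single shared thread may leak.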


I think we're in for a slow painful transition until people (in aggregate) intuitively "get" exactly how invasive and unfriendly data-correlation can be when you expose yourself -- and your friends -- by sharing seemingly-innocuous facts with our welcomed digital overlords.


That doesn't seem to be very important. It's not the doctor who wants privacy.

The people who want privacy allowed Facebook to scrape their contact lists and monitor their locations. They then expected Facebook not to correlate this data with others who contact and visit the same doctor. Why not?


> They then expected Facebook not to correlate this data with others who contact and visit the same doctor. Why not?

Because that would be a dick move.

But clearly that is not enough to dissuade companies from doing this kind of thing, because they have no morals.

And that is the crux of the problem. They don't give a shit what would be considered "reasonable behaviour" for a human being, because they are just giant correlating machines with access to data they shouldn't have been given access to by people who don't know better.

At the end of the day, we are allowed to have reasonable expectations of others, including companies, so I take issue with any implication that they should have known better. We are allowed to have these reasonable expectations. And we will be constantly disappointed. But we should maintain them, I might even say that it is a duty to do so.

Saying "they should have known better" is giving up the fight prematurely. They shouldn't have to know better. They should be able to expect that their privacy (a right) will not be violated.

It is an ideal, not a reality, but it is something to work towards. One step might be to sue the hell out of Facebook for this.


Suing Facebook for knowing something people told them ought to be interesting. Please tell HN all about it if you ever pull the trigger.


Sherman, set the wayback machine for about 110 years ago...

The people who want meat didn't demand tours of the meatpacking factories. They then expected the meat they bought to not be unsanitary and diseased. Why not?


> So basically: don't install Facebook or WhatsApp; try to use the Facebook website if you can; or better yet, don't use Facebook at all on your phone if you value your privacy.

I've been doing this for years now, and you know what? I don't miss it at all. I use the Facebook website from my computer, and that's A-OK.


I don't use Facebook at all and don't miss it.


Neither do I, but as I've described at tedious length here a couple of times recently, I do miss the social life I had a few years back, and which opting not to sign up for Facebook cost me.

It would be really great if the choice not to use Facebook did not often entail serious negative consequences. For one thing, I'd be a lot less annoying on the subject. Unfortunately, that happy state of affairs does not appear to obtain in either case.


> I do miss the social life I had a few years back, and which opting not to sign up for Facebook cost me

You don't know if it was "opting not to use Facebook" that led to the decline in your social life, because you didn't test whether "being on Facebook" would have prevented that decline.

(I've seen many articles that state exactly the opposite: that people who have fewer social interactions spend more time on Facebook.)


Actually, I do know. Rather than rehearse again the means by which I know, let me refer you to the two most recent times I've done so:

https://news.ycombinator.com/item?id=12009198

https://news.ycombinator.com/item?id=12362818

(Maybe it's necessary to point out that I'm aware of the possibility that all the apologies I ever received on this subject were lies. Given that no evidence exists to support that conclusion, parsimony would require it be disregarded even were I otherwise inclined to imagine that all my friends actually hated me and didn't want me around, and were willing and able to deceive me by presenting the impression of sincere regret for having forgotten to include me.)

I'm still working on a better metaphor than that of an abusive relationship; while that one speaks strongly to me, my experience suggests it does not do so to others, which renders it useless for my purposes. Any suggestions you might have to offer on a more effective replacement would be welcome.


One metaphor that comes to mind is that of an employer asking for the same data (personal email credentials). I'm sure some people may (to their detriment) comply with such a request while others would not. Some may be so offended that they would quit immediately or start looking for a job elsewhere.


I like that, especially given the perennial habit of some employers to mine Facebook for reasons to mistreat employees, and also to ask for prospective employees' Facebook account credentials. And there seems no reason to expect the same weirdness that comes from likening a web application to an abusive partner.

"I mean, imagine you're interviewing for a job, and the hiring manager asks you for the password to your Facebook or your email. Yeah, maybe you can say 'sorry I don't give that out' and still get the job - but at that point, do you want to? And what the hell is even going on that that's a question you get asked?"

Yeah, I think maybe you've given me my new metaphor. Certainly I look forward to trying it out. Thank you very kindly!


One of the requirements for getting hired by eBay was to disclose my eBay account(s), if any. Ordinarily that would be pretty intrusive, but I decided it was reasonable for them. ;)


I think the abusive relationship metaphor is perfect.


So do I, but it seems to weird people out pretty strongly when I use it. That could be to do with me rather than with it, but it's rare in my experience that I'm unable to bring people with me to at least some extent, so I've been going on the surmise that it's more a problem with what I'm saying than with the way in which I say it. I could be wrong about that, though, and how would I really be able to tell?


don't use Facebook at all on your phone if you value your privacy.

This has been my strategy for years, since the first time my entire contacts list got snarfed in.


>So basically: don't install Facebook or WhatsApp; try to use the Facebook website if you can; or better yet, don't use Facebook at all on your phone if you value your privacy. It's sad that we are at this stage, but it's mostly our fault for being so complacent with companies doing these kinds of things.

Exactly: it's our fault. None of this privacy-invading stuff is secret, it's all over the news. At this point, if you get burned by Facebook, it's your own fault for using it.


You're being overly complacent. Facebook might find it harder to track people who don't sign up to it, but they still have shadow profiles for tracking non-users. You get tracked by what your friends and contacts share about you as well as what you choose to share. If anyone who put you in their mobile contacts let Facebook's app loose on their phone, then smile, you're already on Facebook.

Privacy is an environmental issue, not a transactional one. With the current system, there's really no opt-out short of opting out of social life altogether.


You'll need to stop everyone else in your life from using Facebook, too.


Facebook comes preinstalled on some phones unfortunately.


> ...it's mostly our fault for being so complacent with companies doing these kinds of things.

> ...most people don't know or care that this is happening.

You seem to contradict yourself.


Only if "our" is taken to include "most people". If we, the people who are aware of the situation, were less complacent, we would inform more of the general population (who don't read HN) and potentially convince others they should care too.


Yes. Hello. It's not a simple problem, at least not if you want to reach people instead of merely haranguing them.


Especially since odds are the data being correlated comes from the contact lists in her patients' phones. Even successfully confounding geolocation won't solve that.


The other tricky thing is, even a single lapse (forgetting to turn the phone off, or bringing it with you) is enough to undo all prior vigilance. I don't see it as feasible, especially for people who are anxious or distracted by something more important on their mind.


You know, it really says something about our industry that our preeminent modern accomplishment requires everyone to choose, blindly and unaware, among effective social nonexistence, espionage-grade opsec, or the kind of radical transparency that no one but maybe a performance artist would even have contemplated just a few years ago.


Yeah, I know this can get brought out a lot in these circumstances, but remember how everyone thought Richard Stallman / rms was crazy for being so disconnected from the Internet? Somehow that doesn't seem the case anymore.


Keep in mind, when you consider that, that Stallman is also almost entirely disconnected from the world, as well. Granted, that's by choice, and it seems to suit him. But it casts a great deal of doubt on the value of his perspective, especially his perspective on those people who choose otherwise.

It's sort of infuriating, if I'm honest. He has a lot of insight with regard to, for example, the extent to which Facebook abuses people who use it. But the best he can muster by way of response is "Well, don't do that, then, and if you do, then to hell with you." Which is, to say the least, not helpful.

Edit: And on further reflection, the insight he does have is hardly unique. The more I consider what I've heard and read him to say, the less I find myself able to see what he actually has to add to the kind of nuanced conversation which needs to take place on this subject.

Further edit: So your response, while of value, kind of misses the point I set out to make, in that Stallman's situation falls neatly into the "effective social nonexistence" category. The question I'm asking is larger, and more along the lines of: How the hell did we let our industry become something of which Facebook is the exemplar, and is this really something with which we're okay?


Eh. He and you have different lifestyles. I also can't fathom the lifestyles of the shepherds back home who go without human contact for months at a time, but they have a lifestyle.

Just because you don't want to switch your lifestyle, or we trade convenience for exposure, doesn't mean that Stallman's lifestyle is wrong.

It's just different choice, and we are comparing apples to oranges here.


He satisfies himself with contempt for those who find it not so easy as he has to choose the life he's chosen. I won't be satisfied until we have a world which makes no such choice necessary. Hardly apples to oranges; more apples to orchards.

As it happens, I recently attempted to open a conversation on this subject with the man himself, in a public forum. I was polite, if uncompromising. He was profane, and preferred to have a minion disconnect my microphone rather than address the point I raised. Many people in the hours that followed found it worth their while to seek me out and thank me for making the attempt. Perhaps more thanked him for his response. I hope I may be forgiven for finding that improbable. In any case, he found no reason not to sign my Emacs manual when I approached him not long afterward, so I can at least hope that, however fundamental the differences in our positions on this matter and however unlikely the prospect of fruitful debate, there may remain at least some modicum of mutual respect.


Effective social nonexistence? Come on. As someone who does not use Facebook and Twitter, yet miraculously has an active social life, I find your dichotomy to be false.


That's great. As someone who also does not use Facebook and Twitter, and saw the result this had on the social life he'd enjoyed before they rose to such preeminence in the field of mediating human social interaction, I find your experience to be lacking in universal applicability.


Just so I understand: You're saying you used to have a robust social life, but now somewhat less so, because the people you used to interact with will no longer do so outside of Facebook and Twitter? I don't want to sound harsh, and maybe I'm just too old, but it seems kind of incredible to me. You can't just call them up or E-mail them?


I don't know what you mean by 'robust'. What I'm saying is that I used to have a satisfactory social life - on the order of parties or other similar events call it once a month more or less and "let's get together for beers" or similar on a reasonably regular basis besides - which in essence no longer exists.

I can certainly understand that it seems kind of incredible to you. It seems extremely incredible to me! I still haven't quite got wholly around it. And, yes, I can "just" call people up, or text or email them, and sometimes even get a response. I still occasionally do so, and still occasionally get together with one or two people at a time for a few drinks and to catch up. The problem comes in where you try to organize something on a larger scale, or where someone else does so. It would be technically inaccurate to say that to do so is impossible without using Facebook. But I've certainly found it ineffective to try to set things up via email, which was not the case a couple of years ago.

I would not be surprised to learn that I'm as old as, or older than, you are. Most of the people in my former social circle are somewhat older than I am. I'm pretty sure this isn't just a "people in their twenties" thing.

(Edited to add that I don't understand why you're getting downvoted, and I wish people wouldn't without explaining why they're doing so. Certainly, if it's out of some misguided assumption that I've chosen to put myself and my experience out there without being prepared to address people who express entirely reasonable incredulity and doubt about my veracity, let me take a moment to note that such action on my behalf, while certainly appreciated, is entirely unnecessary.)


Hmm, interesting. Thanks for your perspective. All I can offer is that I've not seen this phenomenon in my own social circle (age:40). I have always considered "You can't have a social life without Facebook" to be pure hyperbole from people who never knew a world without FB.

I suppose we can agree that "it depends" on who your friends are. Oh, and I don't sweat downvotes on HN--any quick way to bury unpopular opinions can be hard to resist for some people.


> I have always considered "You can't have a social life without Facebook" to be pure hyperbole from people who never knew a world without FB.

Oh, don't get me wrong! I don't believe it impossible to have a social life without Facebook, and I hope I don't come across otherwise. But I have found it a great deal harder than seems at all reasonable, quite aside from the fact that it's absurd in the first place to have to develop a new social circle because opting out of Facebook sufficed to estrange me from my old one.


Some social groups do not communicate outside of a single service.


Or have the office assistant provide a metal tin container to use as a Faraday shield. "Put your phone in the box to prevent it reporting that you're here today."


That would create very conspicuous network exit/entry points at the office. According to Zoz's DEFCON 22 talk[1] about modern OPSEC, these are specifically targeted by various agencies, so I'm sure Facebook finds entry/exit data points just as interesting.

[1] (warning: strong language) https://www.youtube.com/watch?v=J1q4Ir2J8P8#t=2291


Or, even more interesting: it is clearly someone deliberately taking their phone off the network at specific times in a specific place.


Or just disable location access for the Facebook app.


Which works fine for iPhone owners, and on the 15% of Android devices running Marshmallow, in the relatively rare case where someone knows this can be done and acts upon that knowledge. Everybody else is hosed.


You can disable the location access entirely on Android, and enable temporarily only for the short moments when you really need it.

Most Androids have a slide-from-top quick menu where you can toggle it with one tap. Honestly, Google Maps is the only app on my phone that really needs location access.


Just because you turn it off doesn't mean it's not tracking you. Have a look at Google Location History: even without GPS it can still pinpoint you very closely using cell tower triangulation. Stallman was right; we carry the world's most advanced tracking device in our pockets.


Don't use the Facebook App, use the website

Don't share location via the web browser

Still poor substitutes but ....


So, I deactivated my account maybe 6 months ago, and uninstalled the app long ago. Since then, I moved halfway across the country and, using a brand new laptop, a fake name and number, and a throwaway email address, created another profile so I could use their API.

People You May Know still had old high school friends, my old real estate broker (??), and someone I starred on GitHub. I have absolutely no idea how they connected that account to my old one, considering Google Mail is the only other service I've used on that laptop.


If you're not using a plugin such as Facebook Disconnect, pages that have a "Like this on Facebook" embed or similar can accidentally or deliberately reassociate you with your prior identity. Consider this scenario:

1. You log in to your account and get redirected to http://example.com/?user=3834

2. That page has an embedded Like button.

3. When your browser requests the button from Facebook, the referrer is "http://example.com/?user=3834", which is a URL that you visited a lot when your old Facebook login was active, and which was never visited by any Facebook user apart from you.

There are other similar ways they could link you to an old identity if they wanted to, some not necessarily blockable by these plugins, but the above would be simplest.
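To make the scenario concrete, here's a toy sketch (hypothetical names and data structures, not Facebook's actual pipeline) of how a URL that only one account ever visited can link a "new" identity back to the old one:

```python
# Toy illustration of step 3 above: a Like-button request arrives with
# a Referer header, and the tracker checks whether that exact URL was
# previously seen from only one logged-in account.

from collections import defaultdict

# referrer URL -> set of account IDs whose logged-in sessions visited it
url_visits = defaultdict(set)

def record_visit(account_id, url):
    url_visits[url].add(account_id)

def likely_same_person(new_session_referrer):
    """If the referrer URL was only ever visited by a single old
    account, the new visitor is probably that same person."""
    visitors = url_visits.get(new_session_referrer, set())
    if len(visitors) == 1:
        return next(iter(visitors))
    return None

# The old identity repeatedly visits a URL that encodes their user ID.
record_visit("old_account", "http://example.com/?user=3834")

# A "fresh" account later loads a Like button embedded on that page.
print(likely_same_person("http://example.com/?user=3834"))  # old_account
```

The point is that the match needs no cookies at all; a sufficiently unique URL in the Referer header is identifying on its own.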


You either logged your new account into a mobile device and it pulled your contacts, or you gave facebook your phone number for account recovery / two-factor and they already had your contacts either from your old account, or if you use whatsapp or instagram.


I don't use whatsapp or instagram and have not used the new account on any other device.

I did use my real number for the old FB account, but used 555-867-5309 for the new one.


You can't provide a fake phone number for account recovery / two-factor authentication, as they verify that you control the phone number before accepting it.


Fully block all the Facebook network, they have trackers on virtually everything. From ad network (including "free" apps) to all the sites/services in their network (WhatsApp, Instagram)

https://github.com/jmdugan/blocklists/blob/master/corporatio...
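If you go the hosts-file route, here's a small sketch (example domains only; the linked list is far more complete) of turning such a blocklist into null-routed /etc/hosts entries:

```python
# Convert a list of tracker domains into /etc/hosts lines that resolve
# them to 0.0.0.0, so nothing on the machine can reach them.

sample_blocklist = [
    "facebook.com",
    "fbcdn.net",
    "instagram.com",
    "whatsapp.com",
]

def to_hosts_lines(domains, sink="0.0.0.0"):
    # One line per domain, plus the bare "www." variant, since the
    # hosts file does not support wildcards.
    lines = []
    for d in domains:
        lines.append(f"{sink} {d}")
        lines.append(f"{sink} www.{d}")
    return "\n".join(lines)

print(to_hosts_lines(sample_blocklist))
```

You'd append that output to /etc/hosts (or your router's equivalent); the real blocklist covers subdomains and CDN hosts that this sketch ignores.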


You're not using some kind of mobile router/dongle thing for network access are you?

Otherwise all that's left is you - how you type, click & otherwise use FB.

Edit: and the 1st thing the goddamn site shows me is a pop-up begging for a Like :\


I'm not, in fact I'm using a different ISP entirely. You really think my behavior patterns are uniquely identifiable at FB's scale? I can't imagine how many other users must have similar typing/clicking/resting/etc. patterns.


Interestingly, my comment's getting down-voted without refutation. C'mon, let's talk :)

Anyway, no, I don't know if FB can do that at scale.

What I do know is that sites, especially complex ones like FB, like to track user interaction to evaluate UX (hover targets, click targets, time it takes to find call-out etc).

If I were a data science type at FB, and knew FB was collecting that stuff, I think I'd like to find out what other questions I could answer with it.

Or more banal - did you use your laptop as a wifi endpoint and connect your phone & WhatsApp to it?


> Otherwise all that's left is you - how you type, click & otherwise use FB.

Wow, that makes a lot of sense.

Welcome to the post-privacy era, I guess.


The phonebook hypothesis seems most plausible to me (especially considering that WhatsApp is owned by facebook). All those apps gaining access to a phonebook is a privacy disaster.


You would be surprised just how few people know Facebook owns WhatsApp. When I mention it to my non-tech friends, they are first surprised, then nod their heads like it's no big deal, and then a few weeks later exclaim utter surprise at some new privacy intrusion.

Someone should start a project with the sole purpose of mining all kinds of personal data about FB employees from Facebook/Google and publish it as a Kaggle dataset for mining. Wonder how they would feel about that?


I'm not sure what you see that helping.


The main issue, which you and I both see, is the sheer asymmetry of the whole thing. We are in this weird situation where the individual, the typical cognitive miser who even on his/her best day cannot possibly take all the preventive actions, is up against tireless machines with perfect memory and extraordinary pattern-recognition ability, working all day to mine just that little bit more information to hand out to the advertisers.

But I see your point, and certainly would like to see more constructive suggestions than mine.


I see so many potential ways of aggregating this kind of information in massively privacy-intrusive ways on a day-to-day basis. And it's terrifying how many of them are stopped only by my unwillingness to sacrifice my morals.

Because I know very well how easy it is for people to think "oh, well, but that one little thing isn't so bad" when faced with bills to pay, or a raging boss. Many of those things really aren't all that bad in isolation. Except it doesn't take all that many "one little things" before you have a total privacy disaster.


If employees are having trouble saying "no" to unsafe, unethical, or unlawful projects, then a professional association or union is needed. A professional association can create duty requirements external to a company; it's easier to say no to your boss if you have the excuse that "as a member of $ORG, I have to follow $ETHICS_RULE".

Alternatively, a union can put pressure on companies never to ask for certain things, or to meet a standard on any privacy issue. Unions are usually seen with hostility in the tech industry, but they are just another tool; a union can be made for specific purposes and ignore e.g. wages or anything else.


If there is one thing our world does not lack, it's amoral people. Would you prefer that Facebook only hire them?


How often do you see doctors being hired who are not members of the AMA (or similar professional associations)? Their Code of Medical Ethics[1] isn't perfect, and certainly there are individuals who have ignored it for $REASONS, but at least it has created a culture where doctors are expected to at least try to avoid unethical behavior.

> only hire them?

I suspect this is the knee-jerk hostility toward unions I was referring to. If a strong union was created that only addressed ethical behavior, how long would Facebook be able to hire from a dwindling pool of non-members? The entire point of a union is that it's a way to put pressure against specific business practices.


I see a few problems.

Dwindling pool of non-members: Facebook is an especially bad example here, because they have enough money and clout to get around this.

How often do you see doctors being hired that are not members of the AMA: Doctors need to be on location, but this restriction doesn't apply to software. Facebook can always find talent in a country that doesn't have an 'AMA.'


More importantly: you don't need to hire only amoral people. You just need enough people with "flexible" enough morals, in suitable positions, willing to justify actions that in themselves may not even seem particularly amoral, and you can get certain types of functionality built without having to hand it to the staunch defenders of morality...

In most organisations "everyone" will know who is "difficult" when it comes to dealing with privacy and other issues. Sometimes that means they are the ones you go to when you e.g. want to be certain everything is right. But if you have something you think is ok, yet suspect they will raise issues with, you just go to someone more "flexible" in the organisation instead.

Unless the organisational culture itself strictly punishes this kind of behaviour and rewards protecting privacy even in instances where doing so might hurt revenue, there will be plenty of room for amoral people to find each other and "work around" safeguards.


Day 1: "Here at Facebook, we're only hiring amoral people from now on!"

Day 87: Facebook declares bankruptcy. None of the money can be found. The servers have already been stolen by the surviving employees. Administrators arrive at HQ to find only a few broken chairs and a vast pile of shredded paper.

(to explain the joke, there is a downside to hiring amoral people)


Amoral people aren't always stupid. Amoral cops refuse bribes when the (probability of being caught) * (cost of losing their job) is above the bribe amount, and parasitic employees know they will earn more long term if they don't kill the host company.

If Facebook takes care to only hire smart amoral people they will last much more than 87 days.


> If there is one thing our world does not lack, it's amoral people. Would you prefer that Facebook only hire them?

If they are currently hiring only amoral people plus people who are afraid of expressing their moral outrage, then there is absolutely no difference from just hiring amoral people to start with.


Isn't there? Do you see no possibility that the latter group might find cause to overcome their fear?

I mean, to be clear, I still think this whole line of discussion around the imaginary (im/a)morality of Facebook employees is pretty far off base. But the question bears asking all the same.


The problem is that it is not black and white. People will often get presented with some hair-raising proposition, turn it down, and later get presented with something slightly bad and go "well that's much better" and consider it acceptable even if perhaps it's pushing boundaries.

I agree with you, and e.g. in the UK we have the BCS, which does have ethical rules you are expected to know and apply (their membership is just a small proportion of the UK tech industry, though; in part because it is not prestigious enough for e.g. employers to ask for, while requirements for membership makes it a hassle to join for a lot of people), but at the same time it is not sufficient.

Especially given that a lot of things first become truly problematic in aggregate.

E.g. Developer #1 gets asked to ensure the app pulls in the phone contact list to tie your local contacts to your Facebook friends, to enable extra functionality (let's say a "call" button when you view their profile) that seems entirely benign.

Then developer #2 gets asked to match on phone numbers that have already been pulled in, possibly without even being aware that the phone numbers he is working on are not necessarily just phone numbers of Facebook friends but also unrelated contacts.

You can say that they should have verified, but often it is very easy to assume that it's fine and not think about consequences. E.g. it doesn't seem so unreasonable to suggest a friend-of-a-friend. The problem in the article is that it is not suggesting friend-of-a-friend but contact-of-a-contact, which is an entirely different relationship. But if you're told "here you can find a bunch of phone numbers for each user" and asked to build a "friend-of-a-friend" recommendation feature, it is not that strange if people assume it really is friend-of-a-friend - people like to assume the best.
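To make the distinction concrete, here's a toy sketch (entirely hypothetical data and function names) of how the same "people two hops away" idea diverges depending on which graph you walk:

```python
# Friend-of-a-friend walks declared, mutual friendships.
# Contact-of-a-contact walks uploaded phone books, where sharing a
# single contact (e.g. the same doctor) links two strangers.

friends = {          # declared, mutual friendships
    "alice": {"bob"},
    "bob": {"alice", "carol"},
    "carol": {"bob"},
}

phonebooks = {       # uploaded contact lists; not mutual, not "friends"
    "patient1": {"dr_smith"},
    "patient2": {"dr_smith"},
    "dr_smith": set(),
}

def friend_of_friend(graph, person):
    """People exactly two hops away in the friendship graph."""
    out = set()
    for f in graph.get(person, ()):
        out |= graph.get(f, set())
    out.discard(person)
    out -= graph.get(person, set())
    return out

def shares_a_contact(books, person):
    """Contact-of-a-contact: anyone whose uploaded phone book
    overlaps with yours, however unrelated you actually are."""
    mine = books.get(person, set())
    return {other for other, contacts in books.items()
            if other != person and contacts & mine}

print(friend_of_friend(friends, "alice"))       # {'carol'}
print(shares_a_contact(phonebooks, "patient1")) # {'patient2'}
```

The second function is what the article describes: patient1 and patient2 have never met, yet they get recommended to each other purely because both have dr_smith's number in their phones.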

Here's an example from my own past, that I did stop, but only at the last minute, when I realised what was about to happen:

An old boss asks me for a database dump from a "sort-of-still-client" that was leaving us. Nothing odd with that - they kept asking for more up-to-date copies to make their migration easier, and kept paying us for a year after they'd migrated their site in order to be able to continue to use their old reporting facilities.

So I prepared the database dump. Then I asked him how to deliver it, and he asked me to pass it to X. X was not the client, but someone in a new corporate parent. If my boss had instead asked me to deliver it to him instead of X, I'd have done it without further questions, and he would have passed it to X and the damage would have been done.

What X wanted to do was to mine it for potential customers. The almost-ex-client was not in any way competing with the new corporate parent, so it would not harm them, but apart from likely violating our contracts with them, it was also a blatant Data Protection Act violation (UK).

My former boss thought this wasn't a problem because we were passing the data internally in the same company and we held the data in our system legally anyway. But the point is the data had been provided by the customers of our client for a specific purpose, and was handed to us for a specific purpose, and that purpose no longer existed. We certainly had not been given permission to use the data for sales. It was hair-raising when I realised what he wanted to do.

He accepted it when I explained why, but it was rather shocking that it took an explanation for him to realise it in the first place.

He was stupid to think his suggested use was remotely ethical, and that's the only reason I caught it: If he'd realised how unethical (and illegal) it was, and he still wanted to do it, he'd have asked me to provide the data to him, which I would have - that'd have been routine. If he'd asked me to put it up for download and provide a username and password, I also would have - assuming reasonably enough he was intending to pass that info to the client. Though after that incident I started being more sceptical about providing him with data without knowing the purpose first, and making sure the client had actually requested it.


It is easy to imagine that the people who build and maintain Facebook conceive of their behavior as immoral. I'm not sure it is accurate or helpful.


I would like to have more constructive suggestions to offer, too. It's not a simple problem, though, and it will not be quickly solved. Threatening Facebook employees (doxing people is a threat) does not seem likely to make anything better.


Well, Facebook is "doxing" non-members by virtue of shadow profiles and by encouraging users to tag everybody in pictures. Counterintelligence could be a valid way to keep a democratic society.


The cases aren't parallel. A shadow Facebook profile exists that describes me, but it would be absurd to imagine that Facebook will use this information to, for example, send a SWAT team to my house to perform a forced entry - something which has been known to result from the kind of action here discussed. If you make available the necessary information for 4chan and like ilk to do such things, 4chan and like ilk may very well then do so, simply because to do so will briefly amuse them. Is that something for which you're comfortable with the idea of being responsible?

Don't get me wrong. I have no love whatsoever for Facebook, and I would very much like to see a world where no Facebook does or even can exist. But there's a difference between recognizing the problems that result from Facebook's existence, and imagining Facebook and its employees to be deliberately inflicting such problems on people and thus deserving of threatening, even violent, action in imagined response.


Your aversion to threatening employees reminds me a bit of the old "just following orders" canard.

Developers are not sweatshop workers beholden to the company store. They have a plethora of employment options. If they willingly choose to work for such a company, the case could be made that they have made themselves legitimate targets for having made this choice.


That case could indeed be made. It has been in the past, many times, with results whose nature I do not find an endorsement. But perhaps you feel differently. If so, I would urge you to consider the possibility that immoral actions, in response to immoral actions, do not themselves become more moral. There's also the more utilitarian concern that to threaten people in this fashion is not likely to engender sympathy among the undecided, or those who have simply not considered the question, and it most certainly will not engender sympathy among those whom you choose to target.

I might also counsel a certain restraint in your rhetoric, such that you fight shy of hyperbole such as likening Facebook to the NSDAP; ideally that would be your lookout and no one else's, but since we're arguing at least nominally on the same side of the issue, your statements reflect somewhat on mine, and I would prefer they not do so negatively.


Your aversion to threatening employees reminds me a bit of the old "just following orders" canard.

It's not that. It's that in this very short life we have, it's not only not helpful (in the longer run) to pursue actions which knowingly hurt people for the sake of some perceived greater good (unless absolutely necessary) -- it leads one down a very dark path.

My solution? I'd prefer to educate people about the simple fact that most of these social media sites just don't do very much to improve our lives, are a huge soul-suck and time sink generally, and basically not worth the gargantuan amounts of time and emotional energy we invest in them.

So that eventually FB, WhatsApp and all the others will hopefully just die of starvation without a single shot fired (or employee being threatened or doxxed).


It's a moral hazard¹, or possibly an externality; the people writing the algorithms that violate people's privacy are not themselves the victims.

Normally in a market system you want to keep the chain between cause and damage short enough to be comprehensible for the people causing it; otherwise, there's no good way to make them avoid it.

¹ https://en.wikipedia.org/wiki/Moral_hazard


Aren't they? How common do you imagine it to be, among Facebook employees, not to have a Facebook account?


Of course they have FB accounts. That isn't the point. The authors of these algorithms introduce - often without conscious intent - their own biases. They bring their own background, morals, etc when they design an algorithm.

This is a general problem with creating an algorithm to supplement or replace anything previously done by humans. Even if the algorithm is given accurate and unbiased data (which is rare), the choice itself to use an algorithm in the first place and the design of the algorithm also contain bias.

Sometimes this bias is intentional such as "redlining" where housing loans were denied to blacks using various proxies for race. I suspect that in most cases the bias is accidental, which is why it is very important to check the results carefully for any unintended bias. In situations like Facebook, simply asking their users first (opt-in) if they would like to participate in "local friend discovery" would be a great start.


You're not wrong. Does it seem likely, though?

I mean, at this point you're asking Facebook to do something which is directly inimical to its interests, in that people opting out of "local friend discovery" truncates its social graph, or at least reduces the weights it can put on some edges, and thus makes its information less valuable for targeted advertising.

It would be nice to imagine that the people who make such decisions would make that one out of the goodness of their hearts. I do not think this likely. In the absence of a strong financial incentive to do otherwise, I would expect to see things go on pretty much as they have been, i.e., getting gradually worse over time. Threatening Facebook employees with physical harm seems like a severely counterproductive strategy toward applying such an incentive, but I'm not sure what to suggest in its place, because I've tended more in the direction of finding ways to convince people the problem actually exists - itself a regrettable necessity.


> people opting out ... makes its information less valuable for targeted advertising

I have very little sympathy for a business model based on surveillance and manipulation. Figuring out how to generate revenue is Facebook's problem. Lots of unethical behavior would be valuable in various business models. Facebook can police themselves, or they will eventually invite (probably less desirable) legislation crafted by pissed-off people.

> Threatening Facebook employees with physical harm

For the record, in almost all circumstances I am a pacifist. I would never advocate physical harm. That said, I have nothing against revealing the private information of the people who insist on doing the same as a business model.

> directly inimical to its interests

> I'm not sure what to suggest in its place

That's easy; you arrange it so they want to do the necessary due diligence of making sure any new algorithm is both necessary and safe. We accomplish this with liability. Data needs to be toxic. If you collect data and store it for long periods of time, or aggregate it with other types of data, then you are responsible for problems that arise from your databases. In the case of this doctor, if any problems happen to her patients from Facebook's disclosures, then Facebook is the liable party.

They can decide the level of safety required. Either transmit the data for the users blindly and enjoy immunity like a common carrier, or inspect the data and pay for the problems that derive from that inspection.

> finding ways to convince people the problem actually exists

That's always a good idea, but in the meantime it is not the responsibility of the user to understand information theory before critiquing Facebook's claims. Blaming the victim is never the right answer.


> I have very little sympathy for a business model based on surveillance and manipulation.

I have none whatsoever. But Facebook, as it is today, is a thing that is. I don't see that imagining the current state of affairs to be other than it is helps anything. I'm also not hugely in favor of looking to government for a solution to this problem, because the United States government, for all its many and various qualities, has an extremely poor track record on legislation related to technology, and I do not see any reason to imagine their response to Facebook would buck the trend. At best, it'll be ineffective in its stated aim. At worst, it will be that and also inimical to a lot of other businesses which don't actually belong in its crosshairs to begin with.

> I have nothing against revealing the private information of the people who insist on doing the same as a business model

This implies an inaccurate conception of Facebook's business model, which has really nothing to do with revealing private information in the way you describe. I don't think Facebook lies when it says that such disclosures are accidental. I don't think that honesty is any excuse here, but you seem to be imputing evil where there's no reason to believe any exists; the problem is not that Facebook schemes at inflicting misery, but that its financial drive to monopolize an ever larger swath of human interaction increasingly creates misery as a side effect. We can acknowledge this, and work to put an end to it, without erroneously painting anyone as a monster.

You claim, too, not to advocate physical harm, and to be in general a pacifist. Those are nice claims to make. I hope you don't find yourself in the position of having to defend them after a release of Facebook employees' personal information results in someone being SWATted, or driven to suicide, or otherwise assaulted, battered, murdered, or likewise mistreated, as a direct result of an action with which you say you see nothing wrong. You might protest at that time that your rhetoric is unrelated, and your responsibility nonexistent. After all, I'm sure you yourself would never actually dox anyone, even if you do say it's fine to do so. Such protestations are not likely to find many sympathetic interlocutors.

> We accomplish this with liability. Data needs to be toxic.

This is an excellent point! It deserves to be found in better company than you have given it here.

> Blaming the victim is never the right answer.

I invite you, quite seriously, to review my HN comments on the subject of Facebook - they are quite plentiful, you'll have no trouble finding them - and identify any case in which I may accurately be said to have blamed the victim. My entire perspective on this matter is what it is because I am a victim! How do you suggest anyone go about making any kind of beneficial change more likely, if no one recognizes the need for it? How do you suggest such recognition come into existence, if not by finding ways to explain to people that there is a problem? Would you rather just sit back and wait until there's enough of a critical mass, of people who've been chewed up and spat out by the gears of Facebook's advertising data generation machine, for a groundswell of public opinion to arise organically? That seems a bit cruel to me.


We probably agree on quite a bit. I'm not trying to accuse you of victim blaming - or anything else - so if I have implied otherwise I apologize; that was not my intention. It wouldn't be my first miscommunication.

My reference to victim blaming was targeted at the ideas in the thread - often stated by Facebook and others in the surveillance industry - that people should know not to use Facebook when they have not had an opportunity to learn how modern technology works. Education is a great idea, but it takes time. (I've spent 20+ years trying to educate people about the internet, encryption, and privacy in the modern age.)

> to be in general a pacifist. Those are nice claims to make.

I have the scars and hospital bill to prove it. Fortunately I was lucky and the (tool assisted) beating didn't do a lot of permanent damage.

> an action with which you say you see nothing wrong.

I never said I saw nothing wrong with it, only that Facebook should have to accept what they do to others.

I do understand Facebook's business model. I also understand some of the VP-level people involved, because I taught some of them how to program. These are people who are perfect examples of being born into "privilege", and who need some real experience of how the rest of the world actually lives. I don't wish them harm, but I won't shed a tear if they get a harsh dose of reality.

(I've probably not worded this optimally; I'm trying to restrain my language because these people piss me off)

> I don't think Facebook lies when it says that such disclosures are accidental.

I'm sure they're telling the truth. I'm suggesting that they are being negligent in their use of automation. If they had any experience of the problems most people face in the real world, they should have known that problems like the one at this doctor's office would happen.


I suspect you're right about the extent to which we probably agree. I also don't think it's so much that you implied I was victim blaming, as that I'm a bit more raw on this topic than I had suspected, and that made it easy for me to find cause for indignation where none in fact exists. I'll keep an eye on that in future; thanks for taking it so equably.

> I never said I saw nothing wrong with it

You said you have nothing against it. If there's a substantive difference between the two, I fail to see it. And while I can only consider it honorable, if admittedly also incomprehensible on a personal level, to choose to submit to a beating rather than betray a personal conviction on the subject of pacifism, it still seems at odds with such a conviction to advocate action which is well known often to result in the infliction of serious harm upon those who are its maleficiaries. I suppose it's possible there is a way to reconcile those, but if so, that's something else I currently fail to see.

On the other hand, it's clear that your perspective on at least some of the people we're discussing is vastly better informed than mine, and intellectual honesty would require that I respect that fact even were I otherwise disinclined to do so. The impression I've gathered in general is that most people who work for Facebook genuinely believe they're improving the world by doing so. Would it be accurate to say that that's especially true for the VP-level people you describe? And in general, it would be interesting to hear whatever else you'd like to describe about Facebook's internal culture and the effect it has on people who partake of it.

> I'm suggesting that they are being negligent in their use of automation

Another point on which we agree. I don't know that it merits the kind of punishment you seem willing to countenance. But I gather also that you're angry about this, in a way that I'm not, and that can easily produce a certain clarity of perspective.


"Aren't they?"

No, they are not. For example, it is now common knowledge that Mark Z bought up all the nearby houses in every direction to get more privacy. [1] Do you and I have similar access to resources?

Suppose your identity is stolen and you find yourself penniless because someone hacked into Facebook which also affected your friend who works at Facebook. Who is more likely to be in great financial distress the next day? Who is more likely to know the full impact of the situation?

Also, if someone in Facebook were to be negatively affected in some way, they probably have friends inside who can help them out. Do you and I have a direct line to a similar friend? In fact, we are likely to be the very last people to know of any such exploitation.

Besides, the closer you are to the algorithm, the more likely that you know how to circumvent it, even exploiting some simple bugs that others are not aware of.

And how about opting out? As a technologist, how hard do you think it would be for an insider to add himself/herself to the opt-out database, and also make sure that there were no hiccups in the process? Contrast that to something as simple as opting out of junk mail - have you been 100% successful?

I just made four observations about how you and I do not possess the same advantages as an insider at Facebook. What are the odds that something could slip through four different test cases you set up and still turn into a bug in production? Minimal, don't you think?

You make really good points about not countering immoral action with more immoral action. But your notion that FB employees could somehow become unwitting victims of their own technology sounds seriously far-fetched to me.

[1] http://www.businessinsider.in/Mark-Zuckerberg-Just-Spent-Mor...


I don't know that Mark Zuckerberg's access to resources typifies that of Facebook employees in general, but I see what you're saying, and you make good points here which I'll have to consider at leisure.


Air pollution is an externality even when caused by people who breathe.


I recommend that we all add the office number of a health care professional we don't need to our phone book. It will muddy the water just a little bit.


It would be better if Facebook didn't hoover up data not explicitly entered into their app or website.

In fact, I think that should be the basis of privacy laws everywhere: you can only use data that the user personally entered into your application or website. Data should only be available across your different "properties" if they are branded as being part of a single platform.

It would be much more in tune with the average person's understanding of something like Facebook.


Your suggestion might actually lead to something very interesting.

Once a day when someone logs into FB, they should be presented with a word problem asking if the data they have thus far submitted to Facebook can be used to mine such-and-such fact about them.

If they cannot answer correctly, FB should not do said type of mining. As their understanding of the potential for mining info increases, FB is also allowed to add that type of mining.

This would be a win-win. People would actually understand what is going on, and FB itself has something to fall back on when the day comes when people turn this into an inquest (more a question of when than if in my view).

And I wish all the big tech companies would do something like that.


>People would actually understand what is going on

I don't think that's in the interest of companies like Facebook or Google. If people understood how their data can be used, many would close their accounts immediately. Data mining companies and their customers are best served by keeping the public in the dark as much as possible. Revealing how much they actually know about us would cause trouble, if nothing else simply because it's creepy as hell to many of us.

The funny thing is that while I think all this data mining is creepy, I also believe it's useless in most cases. The only thing I've seen work well over time is Amazon's book recommendations.


It's also not in my interest to reduce my net worth by paying taxes, but that happens promptly each year. Maybe it's time we demanded this from the companies.

Also, I would argue it is indirectly in the interest of said companies and their employees, if they would prefer their legacy avoid being mentioned in the same breath as the Enrons and Arthur Andersens of the world.

The trouble is, they are also too big to fail now. The thing that petrifies me more than a thriving Facebook is a Facebook on the brink of collapse and which has nothing to lose.


> I recommend that we all add the office number of a health care professional we don't need to our phone book.

Also the contacts of lots of recruiters.


The recruiters I've been in communication with seemed to use burner Google Voice accounts.


This is almost certainly the "phonebook" hypothesis.

If Lisa has her phone number associated with her Facebook account, and either Lisa or the client has the other's phone number in their smartphone's contacts with the Facebook app installed, that relationship can pop up in People You May Know. If there aren't good "people you may know" suggestions, the ones you get can end up being "people who may be known to people you know".

The reason I think this is that a therapist friend of mine had this exact problem, and deleting her cell phone number from her Facebook profile made it stop.

What Lisa (and anyone else with a professional responsibility to protect client privacy) needs to do is stop associating the phone number they give to clients with Facebook or other social media.


> What Lisa (and anyone else with a professional responsibility to protect client privacy) needs to do is stop associating the phone number they give to clients with Facebook or other social media.

Understanding how Facebook connects different people might help prevent this from happening, but as Facebook's tech becomes more advanced/pervasive Facebook will need to provide an explicit feature to protect user privacy for situations like this. As it stands, the implications of sharing your phone number, location, etc are already far from explicit.


That might help, but facebook is still fully capable of seeing that the clients have the same number in both of their phone books.


And at the point that they start using that, it's 100% a Facebook problem, but I don't believe they are at that point yet.


I see both methods of matching people to friends-of-friends as equally invasive, to be honest.


I absolutely detest Facebook for:

1. Sharing my mobile number via the Facebook app without my explicit consent or knowledge

2. Using my Whatsapp contact list to recommend people I might know

And now, I've recently started getting all sorts of arbitrary notifications even though I've stated several times I don't want to be notified of anything.

The only reason I still have a facebook account is so that I don't have to share stuff like my email address and phone number with people. But at this point it doesn't seem worth it any more.


> I absolutely detest Facebook ...

https://www.facebook.com/help/224562897555674/


Note that everyone's favourite privacy-respecting app (mine too!), Signal, also does contacts-sharing, although it doesn't do friends discovery (so the server knows one's contacts, but one's contacts don't). If Open Whisper Systems wanted to be evil, though, they could do this form of analysis.

Back in March I laid out how they could use a private set intersection protocol to enable any pair of users to privately share their contacts: https://news.ycombinator.com/item?id=11289223 (I'm not posting this to shame them or something: March wasn't that long ago for developing a feature like this, and of course it's open source; I could develop it myself and submit it to them).

I think it's something they care about; they've just not found a solution they're comfortable with yet.
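To make the idea concrete, here is a toy sketch of a Diffie-Hellman-style private set intersection (PSI), in the spirit of the protocol the linked comment proposes. This is not OWS's design or code: the parameters are deliberately tiny and insecure (a real implementation would use an elliptic-curve group, blinded hashing, and constant-time comparison), and all the names and numbers are invented.

```python
# Toy DH-based private set intersection over phone numbers.
# Each party blinds the hash of each contact with its own secret
# exponent; exponentiation commutes, so H(x)^(a*b) matches only
# when both sides hold the same contact x.
import hashlib
import secrets

P = 2 ** 127 - 1  # Mersenne prime; fine for a demo, far too small for real use

def h2g(item: str) -> int:
    """Hash an item into the multiplicative group mod P."""
    d = hashlib.sha256(item.encode()).digest()
    return pow(int.from_bytes(d, "big") % (P - 2) + 2, 2, P)

class Party:
    def __init__(self, contacts):
        self.key = secrets.randbelow(P - 2) + 1  # private exponent
        self.contacts = list(contacts)

    def blind_own(self):
        # Send H(x)^key for each of our own contacts x.
        return [pow(h2g(c), self.key, P) for c in self.contacts]

    def blind_peer(self, blinded):
        # Raise the peer's blinded items to our key: H(x)^(a*b).
        return [pow(v, self.key, P) for v in blinded]

def intersect(alice: "Party", bob: "Party"):
    # Both sides compare doubly-blinded values; matches reveal only
    # the shared contacts, never either party's full list.
    a_double = set(bob.blind_peer(alice.blind_own()))
    b_double = alice.blind_peer(bob.blind_own())
    return [c for c, v in zip(bob.contacts, b_double) if v in a_double]

alice = Party(["+15551234", "+15559999", "+15550000"])
bob = Party(["+15559999", "+15551111", "+15550000"])
print(sorted(intersect(alice, bob)))  # only the two shared numbers
```

The server never sees raw contact lists in this scheme, only blinded group elements, which is the property the parent comment is after.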


Yet another daunting issue in our modern world:

No matter how good a given company or product is at privacy-respecting, what happens to all that data if they are bought out by someone else?


Well, that's the good thing about PSI: with the protocol I discussed, OWS wouldn't have access to one's list of contacts. They'd still know with whom one spoke (anonymising that is a hard problem), but at least they wouldn't know everyone one knows.


> I think it's something they care about

so did you get a reply?


> so did you get a reply?

No, I've not. If I wanted to brush up on my Java, I'd take a look at submitting a PR. But Java is the opposite of fun.


I uninstalled the Facebook app from my phone when it kept trying to push Messenger on me. I only use the webclient these days.

This bolsters my resolve to keep that app off my phone. You know, it doesn't bother me too much to have companies like Google analyzing my email to send targeted ads because I assume that information is not going to get out to the public. Facebook is a different case because there's a bidirectional flow of private information. It is a HUGE privacy concern (especially as someone that will be a physician in a few years).


I succumbed to the Messenger app and the first thing it did was message a random handful of my friends to say I've joined Messenger... instant uninstall. Now I also just use the web client and if I want to read my messages on my phone I can do that by requesting the desktop site.


You can also request the mobile site (m.facebook.com), but with a desktop user interface. This seems to be the most practical way of reading Facebook messages on mobile at this time.


Yup, this is exactly what I do.


Amazed that this 'feature' hasn't been killed yet. At this stage of Facebook's maturity, everybody finished adding their real friends about five years ago, and suggesting non-friends with tenuous connections to the user serves only to remind everyone what a privacy disaster Facebook is and generate bad press.


The other side of that argument is: You don't stop making friends & meeting people, why should Facebook stop suggesting people you might know?

I moved out of state three years ago; most of the people I see and spend my time with are completely different from the people I knew five years ago.


> You don't stop making friends & meeting people, why should Facebook stop suggesting people you might know?

If we've agreed to become Facebook friends then we've done it outside Facebook. If I use the "People you may know" feature I look like a stalker.


There are many younger people for whom this isn't the norm.


It's true - "people who may not be your friend any more" would be more useful at this point.


I feel like most users can perform a basic search and add or can exchange contact information elsewhere. When the ratio of weird/unsettling/awkward suggestions to helpful suggestions is high then the feature is not working as intended.


> everybody finished adding their real friends about five years ago

Quite a lot of people were just old enough to get an account today.


"If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place."

Granted, that's Schmidt, rather than Zuckerberg. The attitude seems to be the same, though.


I normally reply to people who trot this argument out: "so you're OK with someone following you around with a camera, videoing you? At work... in the toilet... in the bedroom...?"

Privacy != doing something wrong.


A much clearer example is this article: "Going to a doctor for a mental health issue".


There's still a stigma, most places. It's a shame, and I wish there weren't, but there is. So it's not hard to see how that argument can have a hard time getting traction.


Oh, I just mean it as an example of something that you should absolutely do if you need it, but something that most people wouldn't want publicised (because of that stigma).


Until, of course, some journalist started digging deep into Schmidt; then the tune was completely different.


Well, of course. He did say "you", after all.


Classic.


By the way, does anyone have a link to the full transcript/video? I can only find the cut version (and IIRC there was something else between the trusted-friend question and the quote, but all videos online have a cut/voiceover in between, which is kind of suspicious).

(I seem to remember the "it" in the quote meant putting information online / making it publicly available, not the "something" that you don't want anyone to know.)


The "it" in the quote meant, "if you're going to commit a crime, don't tell Google about it. If you tell a big company all about your plans to commit crimes, don't be surprised when the cops come knocking."

Schmidt has a way of saying reasonable things using the most offensive and misinterpreted language ever.


You never know; things you consider normal today might become retroactively punishable tomorrow. Maybe you "offend" someone online who later becomes a powerful figure, and they bring vengeance on you or your family.


WhatsApp (now) shares data with Facebook. Now imagine if Facebook, Google, LinkedIn were also to share data with each other.

Imagine the possibilities [0]. What a wonderful world!

[0] If this were to come true, then the word "possibilities" would be replaced by "synergies" :)


This is a real problem. My sister is a legal clinic domestic violence attorney, and apparently there are concerns about DV clients unwittingly friending their legal clinic advisors, not realizing that by doing so they're outing themselves to their abusive partners.


> Facebook and the other companies in the Facebook family also may use information from us to improve your experiences within their services such as making product suggestions (for example, of friends or connections, or of interesting content) and showing relevant offers and ads. [WhatsApp privacy policy]

Many possibilities here:

1 - whatsapp connection with messages exchanged

2 - contact list loaded by whatsapp

3 - psychiatrist secretary number in whatsapp

4 - friends in common

5 - places in common


Or 10 people checked the psychiatrist's Facebook profile, Facebook found a common interest, saw that none of these people are FB friends, and suggested they become friends, because hey, you all have something in common: you are all interested in this person.


Yes, I believe this is what happened

Fb will suggest you know person X if a) you looked at person X's profile or b) person X looked at your profile


I think it's also quite likely that the psychiatrist's patients were searching for her profile just to check out her personal life on FB, which might give FB a clue that these people know each other, hence a friend suggestion. I do that sometimes with some of my not-so-close friends.


I have a number of psychologist FB friends and every one that I know of has changed their name on FB to make it harder for patients to find them. Many go with FirstName MiddleName, some make up a last name, etc. That's not to say that it's impossible to find your therapist on FB, but I'd be really surprised if it's easy.


Interesting.


There should be a way to turn off "People You May Know". I actually hate this feature.


If it has a consistent ID, you could add a custom stylesheet to your browser (or use an element hiding ad blocker) to hide it.


That's why I am getting more and more reluctant to share anything. It's starting to be impossible to predict how your data will be used and what is private and what isn't.


I wonder if there is an open WiFi access point in the vicinity. I noticed that I had several coworkers suggested as friends shortly after I connected my phone to the office WiFi.

It makes sense that people using the same access point or connecting to Facebook from the same external IP would likely know each other.
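A toy sketch of that hypothesis (not Facebook's actual algorithm; the events and accounts below are invented): group login events by external IP and flag pairs of accounts that appear behind the same one.

```python
# Group login events by external IP and propose account pairs that
# share one, a naive model of "same WiFi, probably know each other".
from collections import defaultdict

logins = [
    ("alice", "203.0.113.7"), ("bob", "203.0.113.7"),
    ("alice", "203.0.113.7"), ("carol", "198.51.100.2"),
]

def same_ip_candidates(logins, min_shared_ips=1):
    seen = defaultdict(set)  # ip -> set of accounts seen behind it
    for user, ip in logins:
        seen[ip].add(user)
    pairs = defaultdict(int)  # pair -> number of distinct shared IPs
    for users in seen.values():
        for a in users:
            for b in users:
                if a < b:
                    pairs[(a, b)] += 1
    return {p for p, n in pairs.items() if n >= min_shared_ips}

print(same_ip_candidates(logins))  # {('alice', 'bob')}
```

A single shared IP is obviously weak evidence (think coffee shops or carrier NAT), which is presumably why repeated co-occurrence over time would matter more than any one event.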


It's quite an assumption that they would want to 'friend' each other on facebook or even be presented with another's facebook profile.


Sure, I'm just tossing out another hypothesis beyond tracking locations or everyone importing their contact lists containing the doctor.


Actually, wayyy before WhatsApp announced [0] that they were going to share data with Facebook, Facebook had already started suggesting friends for me to add. These are people I have no mutual friends with, but after more suggestions popped up, I realized they were all people I had added to my address book and contacted on WhatsApp.

I definitely did not consent to sharing my address book contacts with Facebook, and frankly nor would I want to. Now WhatsApp is offering an "opt-out" option, but I'm not sure how that will help. Isn't it a little too late for that now?

[0]: https://blog.whatsapp.com/10000627/Looking-ahead-for-WhatsAp...


The funny thing is that this would be very easy for Facebook to fix - just a line of text under each friend request explaining the suggestion:

  * "You're both friends of Duffman McPartyDude"
  * "We found Psycho Ex Boss's phone number in your contacts"
  * "Location Services confirms you were both frequenting a dubious drinking establishment at 4am three Saturdays ago"
Would they do it though? Of course not. It would scare the hell out of their users if they knew how this algo actually worked.


"People You May Know is based on a variety of factors, including mutual friends, work and education information, networks you’re part of, contacts you’ve imported and many other factors,” said the spokesperson by email. “Without additional information from the people involved, we’re not able to explain why one person was recommended as a friend to another."

Facebook is full of shit. Of course they are using locations. Why else would I get a suggestion to friend the guy who cuts my mother-in-law's yard? He stops by for a check from my wife.


Isn't it more likely he has your mother in law's/your wife's/your phone number programmed into his phone and shared his contacts with Facebook?

It seems like that is the source of 99% of 'creepy' Facebook recommendations: Facebook doesn't realize that while 'has phone number' is a great indicator of 'knowing somebody' it has poor transitive properties.


...because the gardener has MIL's number in his phonebook, you have your MIL's number in your phonebook, and MIL has both numbers in her phonebook. See the network?
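That "shared number" hypothesis is easy to sketch (all names and numbers below are made up; this is a guess at the behavior, not Facebook's real code): treat every pair of users who store the same phone number as candidate friends, which is exactly how a doctor's or gardener's number ends up linking strangers.

```python
# Naive "shared contact" recommender: suggest any two users who have
# the same phone number saved, treating "has X's number" as if it
# implied the two savers know each other (it doesn't).
from collections import defaultdict
from itertools import combinations

phonebooks = {
    "you":      {"+1-555-0100"},                 # mother-in-law's number
    "wife":     {"+1-555-0100", "+1-555-0199"},
    "gardener": {"+1-555-0100"},                 # also has her number
}

def suggestions(phonebooks):
    by_number = defaultdict(set)
    for user, numbers in phonebooks.items():
        for n in numbers:
            by_number[n].add(user)
    pairs = set()
    for users in by_number.values():
        for a, b in combinations(sorted(users), 2):
            pairs.add((a, b))
    return sorted(pairs)

print(suggestions(phonebooks))
# 'gardener' and 'you' get paired via a third party's number alone.
```

This is the poor transitivity the comment above describes: the shared number belongs to someone neither party is, yet it still links them.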


Yeah, I don't buy it either. I only use FB in a browser (Android) but have had multiple friend suggestions of people I do not know but have seen out at a bar or party a couple of days afterwards. They aren't in my contact list, pretty damn sure I'm not in their contacts either. I wasn't tagged and didn't check in, but probably did log in to FB at some point while at the location.


And that's one of the reasons I stopped using Facebook. Fuck'em.


> It’s a massive privacy fail,

I can't believe "fail" has become the standard noun instead of failure. It started as a lolcatism and now is standard.


It should be a lot easier for everyone to collectively sue Facebook and other social networks for violation of privacy.

This is just one of the economic asymmetries where small annoyances to everyone, but not enough to individually do anything about it, aggregate to billions for a few in power.

The only social network we need is a collective legal one.


I know for a fact that Facebook uses my phone contacts to suggest friends. When I started at a new job and was exchanging numbers with coworkers, they would appear as a suggested friend within 24 hours.

My doctor also showed up as a suggestion. I figured either the office phone number was linked to his FB page, or FB was scanning my calendar events and linked me to him that way.


I regularly have people show up on my "People You May Know" that have no mutual friends with me, and I don't know them so they certainly don't have my email address or phone number. Oftentimes it's people who went to the same university as me, so I wonder if they base it on friends of friends of friends and other less direct connections.


Facebook is probably using geo-location to determine if two people are in the same vicinity for extended periods of time over time.


This happened to me after attending NA.

I got friend recommendations from FB for other members of the support group.


I assume the connector is the doctor. Why doesn't she have a work phone, holding the patients' numbers, that she doesn't use Facebook on? Then the chance of patients being connected to one another is dramatically lower.


But will that actually help? I could easily imagine Facebook matching people who have shared contact numbers, even if the contact number shared is not associated with an account. Possibly they don't do that to try to avoid this situation.

I think this issue requires action from Facebook. The minimum they should do is allow numbers to be registered to be not used for making connections. Much better would be for them to be more explicit about what information they are collecting (with sufficient guidance that the user understands that medical privacy can be affected) and allow users to not send them that information in the first place. I can't imagine them doing that voluntarily, though.


If she is allowing Facebook to view patients' phone numbers in her own phone, this may be a punishable HIPAA violation, and is obviously completely inappropriate.

What's also just as likely is that patients are allowing Facebook to view the contents of their own phonebooks (which they are certainly free to do, unless of course, they're medical professionals with patient information as well...). Facebook sees that these dozen people have the same contact number, and recommends that they all friend each other.


tl;dr

"When Lisa looked at her Facebook profile, she was surprised to see that she had, at some point, given Facebook her cell phone number. It’s a number that her patients could also have in their phones."


Ironically, before it lets me read this story the site pops up a "LIKE US ON FACEBOOK!" prompt. I'm pretty sure once I do that all you fellow article-readers will be my next friends.


"Unfortunately, due to health privacy reasons, Lisa was not able to put me in touch with her patients directly"

You mean: "Fortunately..."


After three accidental ad-clicks and a scrolling ad on mobile, I gave up on reading the article.


She lives in a small town, she specializes in treating a small subset of that population. It is quite possible the patients were recommended as friends as coincidence, not having anything to do with her.


"Most of her patients are senior citizens or people with serious health or developmental issues, but she has one outlier: a 30-something snowboarder. Usually, Facebook would recommend he friend people his own age, who snowboard and jump out of planes. But Lisa told me that he had started seeing older and infirm people, such as a 70-year-old gentleman with a walker and someone with cerebral palsy."


One outlier hardly establishes a pattern; it is still plausible that the connection to Lisa had nothing to do with these suggestions and that something else was in play.


An outlier like that destroys the pattern you're arguing must exist.


I'm not arguing that any pattern exists. I am merely arguing that we don't have enough information to demonstrate conclusively that Facebook is recommending these people as friends simply because they are all Lisa's patients. Such a pattern may be probable, but there simply isn't enough information to come to a conclusion.


I don't suppose I feel it necessary to reach absolute incontrovertibility on this matter, given Facebook's longstanding history of doing things very like it, but find mere strong preponderance of likelihood to suffice. But I understand that some may feel otherwise.


> “Without additional information from the people involved, we’re not able to explain why one person was recommended as a friend to another.”

Such a terrible excuse. FB you only have one job! Fail.


Actually it is the most reasonable thing for them to say. Anything more specific would be a privacy violation in itself.


Facebook must understand how weird that sounds. How can they not know why people are recommended to each other?

They really do need to dig into the issue, if in fact they don't know, because something seriously needs to be excluded from their recommendation algorithm if the article is true.


She can't reveal the patients to them, so Facebook wouldn't be in a position to give a specific reason -- only generically how the algorithm is calculated.


In fact, they can expect very serious investigations throughout Europe if someone complains about it to their local information commissioners or equivalent here.

If they are in fact sucking in contact details from users phones and using them for matching and recommendations, that would seem to be something that would be serious enough to likely require express consent (in other words: users taking an explicit action, rather than being "opted in" implicitly by agreeing to a TOS or similar) under EU data protection regulations.

Not a lawyer, but I'd be surprised if there isn't one or more data protection violations lurking in there somewhere.


Not surprising, really. I have often written software, returned a decade later, and had to unravel why it does what it does, even though I am one person and it's just a few thousand lines of code. I can well imagine the FB system is too complex for any individual to inspect, and if the record keeping and documentation on the whys as well as the hows is incomplete, then here we are...


I wouldn't be surprised if they didn't know. They also don't know how timelines are generated.

Strange place.


Technical question: does anyone know if FB can easily answer why X was recommended to Y? That is, do they have to check manually, or can they query this easily and get the precise reason a recommendation was presented? (For example, the reason X was recommended to Y on date d cannot be that X is connected to Z and Z to Y, if X connected to Z on d+1; I assume bad inferences like this will happen often if such questions are answered manually.)


Hopefully it was not Tinder.


Sounds like solid grounds for a class action suit.


LinkedIn has had similar issues. Not news.


But I'm still a paranoid lunatic because I don't want to smear my picture all over the web and give my every scrap of data away for the dubious benefits of Facebook or Twitter...


Hear, hear! I'd suggest some sort of paranoid lunatics' support group, but none of us would show up.


Talk about blowing something simple out of proportion.

All these people have one friend in common with this person, maybe they know each other as well? Being a psychiatrist or whatever has nothing to do with it.

EDIT: I stand corrected. Not so simple regarding where they get the "potential friendship" data from. Diagonal reading mistake on my part.


You might want to take another look at the article; your understanding appears to include a severe oversimplification.


You're right, I stand corrected. The technique is still simple, the way they get the data is not so transparent to the user.


I'm not sure how they get it at all, except via contact list mining. It makes sense that the doctor's phone number is part of her Facebook account. It does not make sense that her patients, who have not added her as a "friend" there, would nonetheless have explicitly told Facebook her number from their end. (I don't even think that's a thing you can do.)


Facebook will happily slurp your entire contact list even if you're not FB friends with all of them. And the algorithms almost certainly know and take into account having someone in your contacts as a way to build a graph of relationships.


Exactly. And, of course, there is not even a way to flag some contacts as private, or otherwise to be excluded from social graph analysis.

I mean, I guess you could keep those contacts in a note or some other record outside your contacts list, and just tap to call or email or whatever. But that only works as long as the Facebook app doesn't decide it needs access to that kind of record, too. And when you find yourself going that far out of your way to circumvent something that's installed on your phone, maybe it's time to think about whether that thing is more trouble than it's worth.


For several official numbers (local taxi company, local police station, local pizza place) I've seen random people's names and Facebook profile pictures added to the phone contact, where the app has been installed and asked to sync "profile pictures to contact list". These must be people on Facebook who jokingly put one of these well-known phone numbers in their profile, but these strangers then start to appear and take over otherwise reasonable "local pizza" type phonebook entries on completely unrelated people's phones...



