From the article it sounds like the result of the lawsuit may be just that Facebook adds a disclaimer to their ToS saying that by using their service you accept them using your biometric data. No surprise there, if you don't want any service collecting any of your data, don't use their service.
> No surprise there, if you don't want any service collecting any of your data, don't use their service.
Unfortunately, that doesn't go far enough. I don't use FB, but that doesn't mean it doesn't collect my data. It does, because my husband and friends use FB, and I do not control the information they share about me with FB. Currently it has to be, "if you don't want any service collecting any of your data, don't use their service, and don't interact with other people or companies who use their service." As an example of where your original statement fails, I don't want to be tracked in the mall by companies tracking my phone.
>Currently it has to be, "if you don't want any service collecting any of your data, don't use their service, and don't interact with other people or companies who use their service."
Not sure you have any other option. Most of the time, you can't restrain what other people choose to do with images they've taken that contain your likeness, and you can't stop them from posting content that mentions their spouses. That person is willfully disclosing this data under the terms they've accepted. If the person is disclosing data about you that you don't want disclosed, that's an issue between you and that person, not that person and the entity who receives their willful, free, and unprompted disclosures.
Facebook can't know a priori that the user intends to disclose information on a person who doesn't want Facebook to know about them, and thus can't do anything about it until it's too late. Allowing people to delete other people's content because they don't want to be mentioned/included in it is an obvious non-starter (and Facebook already includes this to a reasonable extent with untagging features).
The privacy threat could be partially, not totally, mitigated by radically changing the way we approach online services, but I don't see how that could be pushed through. It wouldn't benefit any commercial entity, so no one with private motive would bankroll it to a height that could reasonably challenge FB, and it surely wouldn't be fair to use police/military force to dismantle Facebook et al and force users into a hypothetical somewhat-more-private decentralized, self-hosted, encrypted Facebook replacement, so we're basically stuck.
The reality is that this is the world we live in now. I don't think there's really a way to avoid it. Legal solutions that prevent vendors from referencing some data points may limit some effects here and there, but I think it's going to be difficult to craft something that really accomplishes anything big, and the tech and data is still going to be out there and used by some people regardless of the legal status.
This is literally what EU data protection is for: if you're going to process personal data, people have the right to know what it is, whether it's correct, and to have it deleted.
From the article, the tagging feature is not available in the EU. An interesting question is whether citizens are better/happier/safer without this feature. I don't pretend to know how to measure that.
> The reality is that this is the world we live in now. I don't think there's really a way to avoid it.
I agree. Which is why when the original commenter stopped at "don't use their service", I pointed out that that action isn't enough. It's not good advice, because it doesn't actually work.
> Legal solutions that prevent vendors from referencing some data points may limit some effects here and there, but I think it's going to be difficult to craft something that really accomplishes anything big, and the tech and data is still going to be out there and used by some people regardless of the legal status.
I think you underestimate the effectiveness of laws and punitive damages for breaking said laws. It seems to be working fairly well in reining in companies like AirBnB and Uber; it's a matter of time before we rein in companies using data mining.
I understand that laws can have an immense impact. In fact, I believe that our bad network access and copyright laws have enabled things like Facebook to become scary data-hoarding monopolies in the first place.
Without the CFAA, without improper application of trespass to chattels, without improper application of the Copyright Act to RAM copies of pages that exist for mere microseconds, and without considering exploitative browsewrap "agreements" legally binding, Facebook (and most other "walled gardens" that effectively downgrade the WWW into just another technical curiosity in content delivery instead of an independent, open, and free publishing platform) could relatively easily be harvested and reduced to something like the decentralized protocol described above, which would greatly enhance individual privacy and control moving forward. But I digress.
Uber and AirBnb have physical-world touchpoints that are easy to police using our existing police and justice infrastructure. You're driving a car with revoked registration or renting out a unit in an area not zoned for short-term rentals and/or without the proper licenses. These are straightforward violations that are trivially observable, relatively simple to prove beyond a reasonable doubt in a court of law and reasonably within a jury's comprehension and familiarity, and we can send a real guy with a gun to come and haul you off to jail if you break these rules.
All-digital conduct, like storing too much metadata, is much harder to detect, curb, and assign accountability for. And even if Facebook itself is stopped from making the correlations or compilations outlined by such a law, nothing can stop under-the-radar third-party scrapers or special-privilege actors like intelligence agencies from combining the data exposed to Facebook and ultimately achieving the same end: an uber-effective mass surveillance system capable of identifying anyone, anywhere (potentially even digitally) with nothing more than a semi-decent picture of their face.
> ...changing the way we approach online services, but I don't see how that could be pushed through
I'm a little bit more optimistic than you. As noted in the article, there are important differences under the hood between FB in the US and the EU, and these were "pushed through" by courts and cultural norms, not a violent/benevolent 'privacy junta' as you suggest.
I can imagine a world where the US gov bans sucking entire contact lists out of people's phones for commercial exploitation, for example. Everyone feels it's creepy, and there's zero user benefit. I don't see why this wouldn't be extremely popular, and straightforward to legislate.
This would remove the primary data collection means for FB's shadow profiles in one stroke.
Well, if you came home and I was wearing your partner's dressing-gown and smoking your pipe, I'm sure I could offer a few non-evil research reasons too.
> Not sure you have any other option. Most of the time, you can't restrain what other people choose to do with images they've taken that contain your likeness, and you can't stop them from posting content that mentions their spouses.
But, assuming that FB's face recognition actually works, they can refrain from tracking people who don't have an account.
But your husband and friends are at fault for sharing your information, not FB.
Take Facebook out of the equation and you still have friends showing your picture around, perhaps to people you don't want them to. In a Facebookless world, you would simply tell them to stop. My point is that it isn't Facebook's fault that your friends share too much about you.
Yes, which means the original commenter's position of "don't use their service" isn't strong enough, which is what I said. Note that "their service" isn't limited to FB.
Can it even store biometric data for you if it doesn't know who you are? I was under the assumption that it's only stored if your profile is linked with your face in a photo.
> Can it even store biometric data for you if it doesn't know who you are?
That's a weirdly phrased question, since your biometric data is, by definition, part of "who you are".
> I was under the assumption that it's only stored if your profile is linked with your face in a photo.
My comment has more to do with the base concept of "don't want to be tracked, don't use their service" as a general concept for any service, rather than being FB specific (which is why I gave a non-FB example). That said, I don't know if the rumors of "shadow profiles" are true or not. There is no technical reason why an image with two people in it (one of whom they already have) tagged with a comment of "Me and my wife on our honeymoon" couldn't be parsed given current NLP and face recognition capabilities.
Sure, you can create a set of 'face templates' that may or may not be linked to a profile, and then match against them, as long as you have some meaningful way to use the matching templates without a profile.
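Purely as an illustration of how little a profile is actually needed, here is a minimal sketch of that kind of template matching using the open-source face_recognition library; the file names, caption, and tolerance are made up, and nothing here reflects Facebook's actual pipeline:

    # Hypothetical sketch: match faces in a new upload against stored "templates"
    # that are not necessarily linked to any user profile.
    import face_recognition

    # Templates previously extracted from other uploads; each is a 128-d embedding
    # plus whatever metadata happened to accompany the photo.
    templates = [
        {
            "embedding": face_recognition.face_encodings(
                face_recognition.load_image_file("honeymoon.jpg"))[0],
            "caption": "Me and my wife on our honeymoon",
            "linked_profile": None,  # no account required
        },
    ]

    def match_new_photo(path, tolerance=0.6):
        """Return every stored template that matches a face in the new photo."""
        image = face_recognition.load_image_file(path)
        matches = []
        for embedding in face_recognition.face_encodings(image):
            for t in templates:
                if face_recognition.compare_faces(
                        [t["embedding"]], embedding, tolerance=tolerance)[0]:
                    matches.append(t)
        return matches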
I would like FB declared a monopoly and broken up or have some sort of data sharing forced on them so that other social networks can pop up.
Network effects cause monopolies. There should be regulation to undo the effects of this sort of monopoly and increase competition in this space, namely by forcing an open API and data sharing, so you can take your data to another network if you want to.
Another way to see it is that things are not black or white. It's not the company on one side dictating what it wants to the user. This is an example of someone saying "I don't agree with that" and taking action. People taking action in accordance with their beliefs should be celebrated.
“We could soon have security cameras in stores that identify people as they shop,”
Already deployed. Here's a report by a retailer who installed a face recognition system.[1] Names anonymized.
“We now know within seconds of a person walking in the store if they’ve previously been caught stealing from us. ... Suppose Johnny Johnson is caught shoplifting at a Store-Mart branch. He’s detained, photographed, and given a barring notice. Johnson’s photo is entered into Store-Mart’s database of known shoplifters, becoming an “enrollee” in the system. Three weeks later, Johnson walks into Store-Mart again. Within five seconds, the system has captured his image from the store security cameras, compared it against every photo enrolled in Store-Mart’s database, found a match, and sent an alert to the in-store loss prevention associate’s (LPA) smartphone. The LPA looks at his phone, and Johnson’s name, photo, and detention history with Store-Mart pops up. The LPA verifies that the photo in the alert actually matches the person who just walked in. Then the associate approaches and says, “Mr. Johnson, you’ve previously been given a barring notice from Store-Mart. You’re not allowed to be here. Please leave.” And Johnson walks back out. So, within a minute or so of walking in, a known shoplifter has left the store, empty-handed."
Coming next, the sharing model:
"A national shoplifter database similar to the Stores Mutual Association model is already in the works. This would mean that each additional retailer who adopts the security camera technology and starts sharing will incrementally increase the value of the system for all members."
What happens if nearly all retail stores have this and you get caught shoplifting? Are you no longer allowed to shop in public? Is there a time limit, say 1 year, after which they should let you shop again?
> Is there a time limit, say 1 year, after which they should let you shop again?
This implies a functioning government, and regulations.
Exclusion is a decision made by each retailer individually, and the vast majority will not even have a policy for removing people from the list.
You could dynamically raise retail prices based on whether someone has bad credit or is on one or more bad-customer lists. Shouldn't the shoplifter be expected to pay extra for the risk that retailers willing to sell to them bear? Or you could just open stores that don't exclude anyone, with high prices and terrible-quality products.
Yes, that's why Marks & Spencer in the UK operate the scheme and share the system with other major retailers in the UK. They are able to use the UK prevention and detection of crime exception to avoid data protection laws.
I presume that's who Store-Mart are, but there are others doing similar stuff.
Of course. See this marketing video from FaceFirst, which powered the retail application mentioned above.[1] They offer cloud-based facial recognition as a service. "FaceFirst maintains a massive, centrally managed database and server farm at our headquarters."
Since much of their input comes from video, they get lots of pictures of each individual and combine them. This improves accuracy. They use the Cognitec face matching algorithm.
WalMart tried FaceFirst in 2015, and gave up on it.[2] "We were looking for a concrete business rationale ... It didn’t have the ROI."
Facebook has what they call "shadow profiles" that still track you even if you've never signed up (or deleted your account). Not using Facebook doesn't mean they don't have information on you.
Many sites do this. They encourage users to give full access to their phone's contact list, and store all the data they get from it, then start building a network of "people who know this email address/phone number", ostensibly so that when they finally convince someone to sign up they have a ready-made network of recommended "people you may already know".
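A rough sketch of how that kind of contact graph could be assembled from harvested address books, with a made-up schema; this illustrates the general technique, not any particular site's implementation:

    # Hypothetical sketch: build a "people who know this email/phone" index from
    # uploaded contact lists, before the people in those lists ever sign up.
    from collections import defaultdict

    # contact identifier (email or phone) -> set of users who uploaded it
    known_by = defaultdict(set)

    def ingest_contact_list(uploader_id, contacts):
        for contact in contacts:
            for ident in (contact.get("email"), contact.get("phone")):
                if ident:
                    known_by[ident.strip().lower()].add(uploader_id)

    def people_you_may_know(email, phone):
        """When the person finally signs up, a ready-made network is waiting."""
        return known_by.get(email.strip().lower(), set()) | known_by.get(phone, set())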
We can already fingerprint digital devices and, based on purchases and geofencing (leaving the area), make pretty good guesses about which person a given device's fingerprint belongs to (especially over repeated visits). I haven't heard of a company that offers this at scale or cross-references the information, but I can't imagine it'll take long. It's just a matter of some elbow grease for engineering and the right sales team.
You know people who use Facebook. They upload pictures that, more likely than not, include your face. Facebook knows your face even if it doesn't know your name (which people will often readily supply even if you don't have an account, so don't be so sure that they don't know your name too).
You don't want to be "that guy" constantly demanding that your friends pull down any picture that you show up in, a request your friends will probably ignore anyway, and which will greatly diminish your personal likability.
The government has your name, face, address, and other personal information stored away safely in DMV and passport databases, among several others, and this information is surely being used in similar ways behind the scenes. Facebook neatly bundles all of its data and submits it to the NSA via PRISM every day, granting the government full access to both datasets, although to be frank, just Facebook's dataset is going to be good enough to track someone down, even if a little more work is necessary to dig up the exact details like address.
An interested party with such access could almost surely run a program and find the last people you were photographed with, which will almost definitely make it easy to find you personally. Combine this with smart surveillance systems and any time you enter a public place, sans cell phone or any other trackers, your personal location can be recorded.
I don't really think there's any way around it and I am in fact surprised that people haven't already made a public "search by face" engine. Correlate this data with Facebook or some other all-seeing collections and the technology to dynamically identify every individual that enters a building via surveillance footage is already there. The only thing holding it back now is a) political correctness and b) the practical difficulty of extracting all of this data at a large scale into a format that can be easily cross-referenced. Both of these are permeable and temporary restrictions, and access to the data is already a non-issue for some actors.
Expect to see such systems developed and sold in the next several years. Laws may make such software illegal, which will limit distribution to laypeople, but ultimately it will still be out there for interested parties to acquire.
The age of unscannables and cyberpunks is upon us. The only way to retain privacy will be to wear your hair in interesting ways. [0]
Discreetly snap someone's photo and there's an excellent chance that face recognition will find their social media profile. Now you have their name and potentially a lot more (age, school, hometown, etc.).
All it takes is enough centralization and a culture of sharing a lot online, and you're living in the future!
> You don't want to be "that guy" constantly demanding that your friends pull down any picture that you show up in, a request your friends will probably ignore anyway, and which will greatly diminish your personal likability.
Mostly because it doesn't work. I've always been that guy, and people still constantly take your picture. There's a lot of money in convincing people that taking and publicly sharing pictures of oneself and everything that one comes into contact with is unbelievably fulfilling.
> I don't really think there's any way around it and I am in fact surprised that people haven't already made a public "search by face" engine. Correlate this data with Facebook or some other all-seeing collections and the technology to dynamically identify every individual that enters a building via surveillance footage is already there. The only thing holding it back now is a) political correctness and b) the practical difficulty of extracting all of this data at a large scale into a format that can be easily cross-referenced. Both of these are permeable and temporary restrictions, and access to the data is already a non-issue for some actors.
> You don't want to be "that guy" constantly demanding that your friends pull down any picture that you show up in, a request your friends will probably ignore anyway, and which will greatly diminish your personal likability.
I'm not extreme enough to go there. What I meant is more that I don't use it because I don't want to support a company that violates people's privacy so much. I couldn't care less if they can recognize me in particular.
It's good to see that privacy concerns are being voiced publicly. I'm not certain that class-action lawsuits are the most productive approach to take, though.
On the technical side, I was intrigued that the "black box" of multilayer neural networks could provide a legal defense for Facebook in this case.
Often, the fact that neural networks are harder to analyze compared to other AI techniques is cited as a detriment, but in this legal case it could prove useful.
>> I'm not certain that class-action lawsuits are the most productive approach to take, though.
1. I don't understand the hand-wringing. If FB turns off the feature for Europe, why are they allowed to continue in the US? [1]
2. It would be better to simply send the decision makers at these tech megacorps "to the guillotine" (figuratively, of course) [2] without over-analyzing this issue, until they announce a moratorium on these features unless...
3. ... they explicitly ask for permission for each new data point they collect and infer. Sure, opt-in seems high friction in the short run, until these companies finally realize that they are going to be clubbed together with the Enrons and the Arthur Andersons of the world. Character is generally judged at its worst, and having a billion dollars in your back pocket and all the lobbying power in the world, believe it or not, only reinforces suspicion.
[2] Yes, the guillotine was a knee-jerk reaction too. The problem is that these preachy founders seem to strongly believe they are entitled to apply rules differently to themselves, e.g. Mark Z buying up all four houses surrounding his house, or Eric Schmidt getting angry when Google was used to obtain his personal information.
There are several competing philosophies of legal interpretation, but (afaik) only one, strict constructionism, wholly disregards intent as a principle.
Usually, the issue is more about whose intent should be considered, and how should that intent be ascertained, than whether or not "the letter of the law" is supreme.
The thing about it is that people are not governed by an ethereal intention that was in the legislature's mind when they crafted the law, because this is undefined and people may have differing opinions about what it is, and because there isn't necessarily a single unifying intention behind the passage of the law; there is only the law that was written, encapsulating the diverse intents of the various parties sufficiently to achieve passage. Assigning more than this enters the realm of speculation.
People do not vote their reps in or out based on their intentions, but on their work product (the laws that get passed and enforced). We do not want judges to be able to say "Yes, I know the law says this, but I think the law really meant this thing that is, in fact, very different."
That's way too much power for a judge, especially considering they're typically appointed, not elected (local judges are sometimes elected). The power of the people via the legislature can easily be neutralized by such actions, and confidence in the judiciary could be seriously harmed by a string of conflicting and contradictory rulings, making people unsure of what's actually legal/safe.
Thus, textualism considers the meaning of the law as written as a whole, and may consult documentation not related to the historical development of the specific legislation at issue, but related to the usage and understanding of words in context at the time that the law was enacted, to approximate what the law (not necessarily the legislature) really meant and how it would've been interpreted by the people who, through their representatives, approved its installation.
One could say this is all technical self-indulgence and that it misses the forest for the trees, and that would pretty well encapsulate the legal industry. ;)
As someone who had cosmetic surgery, it took a good amount of changes to be tagged as a "different" person by Facebook and the Photos app. One operation that altered the frontal view of the jaw seemed to have finally done the trick (many common operations like rhinoplasty mostly affect your side profile.)
I think part of this is that their facial recognition doesn't have a logic check based on the other context data that they have.
e.g. my brother's wife will post vacation photos of him, him with his kids, and him with her, and yet sometimes it still auto-identifies the pictures of him as me. Facebook knows I'm logged in at an IP 1000 miles away. Facebook knows my brother is with his wife because they both have logged in recently while on vacation. Yet I am tagged instead of him.
I think they only try to recognize the face and don't really care about anything more than that and because it's actually very accurate, that strategy has been working so far.
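A sketch of the kind of context check the parent comment is describing: discount face-match candidates who were recently logged in far away from where the photo was taken. The names, scores, and distances below are made up for illustration:

    # Hypothetical sketch: re-rank face-match candidates with a crude location prior.
    def rerank(candidates, km_from_photo, max_km=500):
        """candidates: list of (user_id, face_score);
        km_from_photo: user_id -> distance of their last login from the photo."""
        rescored = []
        for user_id, score in candidates:
            distance = km_from_photo.get(user_id)
            if distance is not None and distance > max_km:
                score *= 0.1  # logged in ~1000 miles away: heavily discount
            rescored.append((user_id, score))
        return max(rescored, key=lambda pair: pair[1]) if rescored else None

    # The brother (50 km away) now outranks the commenter (1600 km away),
    # even though the raw face scores are nearly tied.
    best = rerank([("me", 0.83), ("brother", 0.81)], {"me": 1600, "brother": 50})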
How soon before this analysis is being used to cross-reference with terrorist and sexual predator watch-lists causing all sorts of problems for people who happen to closely resemble anyone on those lists?
I'm thinking it's time to buy stock in suppliers of masks, cosmetic surgery and other forms of facial obfuscation.
There are already make-up patterns that obfuscate the face, causing face detection to fail, but no one wears them on a daily basis because they look rather weird.
To me, the interesting thing about this is that the "biometric data" is actually the collection of photos that the users uploaded.
The data structures used for facial recognition are just metadata, extracted from the original imagery. It seems to me like data retention laws would apply to the original imagery, but not the derivative data structures. But that's obviously not what a privacy-conscious user would expect...
I'm interested in seeing how this kind of case ends up being decided when it (inevitably) ends up at the Supreme Court.
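To make the photo-vs-derivative distinction concrete, this is roughly what the derived data structure looks like when extracted with an open-source library (face_recognition here, purely as an illustration; Facebook's internal representation isn't public):

    # Hypothetical sketch: the "metadata" derived from a photo is a numeric vector
    # that can still identify a person even after the original photo is deleted.
    import face_recognition

    image = face_recognition.load_image_file("uploaded_photo.jpg")
    encodings = face_recognition.face_encodings(image)  # one 128-d vector per detected face

    # The original image could now be discarded; the vectors alone are enough to
    # recognise the same faces in future uploads.
    for vector in encodings:
        print(vector.shape)  # (128,)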
> It seems to me like data retention laws would apply to the original imagery, but not the derivative data structures.
That's probably not how it works.
If I have a database of personal information of users, and I throw away the user's day of birth (but not birth year), I have still a database with sensitive information.
Also, if I convert a CD to MP3 format, then I'll also throw away 90% of the original information. I might still be liable for illegally copying a song.
I like the face feature personally. Kinda just neat and makes tagging super easy.
However, I'm curious: how does Illinois even have jurisdiction to sue Facebook? As far as I know they don't have any land or employees there, being a California-based company. Many terms of service even specify which court you agree to resolve disputes in. Seems like Illinois is overreaching to me.
> You will resolve any claim, cause of action or dispute (claim) you have with us arising out of or relating to this Statement or Facebook exclusively in the U.S. District Court for the Northern District of California or a state court located in San Mateo County, and you agree to submit to the personal jurisdiction of such courts for the purpose of litigating all such claims. The laws of the State of California will govern this Statement, as well as any claim that might arise between you and us, without regard to conflict of law provisions.
So I guess that's how they can get them... I really hope Facebook wins though. Last time I checked, face recognition creates 3 floats from a face. I'm not sure of Facebook's specifics, but in general that's how it works.
So in a way I'm amazed a face can be turned into 3 numbers, and also think it's silly to sue over 3 numbers in the grand scheme of things.
Suppose Facebook has the ability to look at a photo and determine who is in the photo, and the court determines that amounts to storing biometric data (despite the explicit exclusion of photos from the definition of biometric data). Now Facebook would have to get explicit permission from users and non-users to collect biometric data. How could they possibly not collect data from non-users? If they have the ability to turn the feature on at any moment, any photo of a person would amount to biometric data, whether the person was a user or not. There doesn't seem to be a good way to comply. They could stop hosting photos (no way). They could stop pursuing facial recognition (yeah right). My favorite solution is to have them alter any new upload by blurring out the faces of any person who hasn't consented to them storing biometrics.
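A minimal sketch of that blurring idea, using OpenCV's bundled Haar-cascade face detector; the consent lookup is, of course, the hard and entirely hypothetical part:

    # Hypothetical sketch: blur every detected face for which no consent is recorded.
    import cv2

    def blur_nonconsenting_faces(in_path, out_path, has_consented):
        """has_consented(face_crop) -> bool is left abstract; it is the hard part."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        img = cv2.imread(in_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
            face = img[y:y + h, x:x + w]
            if not has_consented(face):
                img[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 30)
        cv2.imwrite(out_path, img)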
It freaked me out in the recent video about tech in Russia: you could take a picture of someone with your phone and it would find them in social networks. E.g. you could take a picture of someone on the street and then find everything about them.
Facebook et al are only paving the way for the future where more and more people and smaller organizations will have such tools and data at their disposal for whatever means. These walled gardens of data can only maintain this advantage for so long.
They would likely require people to upload multiple photos to build a profile in order to "opt out", and even then there are false positives and look-alikes that make such an opt-out hard to guarantee (e.g. identical twins where one opts in and one opts out).