You'd be surprised / scared / outraged if you knew how common this is. Any time you've been in a public place for the past few years, you've likely been watched, analysed and optimised for. Advertising in the physical world is just as scummy as its online equivalent.
Face tracking, emotional analytics and vision-based demographics analysis are a pretty huge industry. There's an entire spectrum of uses for this tech, from the altruistic (psychology labs, human factors research) to the, well, not.
I've lost track of how many times I've said this on HN:
We need HIPAA for all personal information, not just medical. We have an expectation of privacy in being "lost in the crowd" when we're out and about. Our physical & online whereabouts, who we're physically with, who we're communicating with, our personal contact information, and obviously payment information is private information that can be harmful if not kept private (false positives in automated legal systems, identity theft, and all the usual arguments for securing medical information).
Anybody who chooses to hold such information must treat it with a high level of respect and privacy. Since nobody is doing so, there are no penalties for violating privacy, and this gets into fundamental rights and the proper functioning of society, it seems like a proper subject for federal law.
HIPAA does not make your medical information private, it makes it Portable. Whether it has improved the protection of your digitized medical records is debatable, but it definitely forced almost every industry remotely related to medical care (and some previously unrelated industries) to digitize their records and share them.
Sure, paper medical records suck and aren't inherently more or less secure, but no one breaks into a car and runs away with 500 patients' medical histories when each patient's record fills pages, folders, or filing cabinets rather than bytes on a hard drive (or, even easier, data that slips away through a network connection that no one in the hospital even knew existed, thanks to a back door on a piece of medical equipment).
HIPAA largely means that your medical information has been outsourced to whatever software/network/hardware provider claimed they could do the job (and whoever they outsourced the job to in some cases). If you don't sign whatever HIPAA agreement(s) your provider puts in front of you, chances are they can't treat you, so what choice do you really have?
>with about a billion caveats to allow journalism etc.
Then you'd just see setups like the financial industry has. You get analysts, call them journalists, and have people subscribe to your publication. The journalists go get insider-ish tips and 'publish' them to a select group of followers.
With laws like that you'd just hire a full time business analytics journalist to cover your store.
The cameras retailers use with their surveillance systems are coming with facial recognition built in now. [1]
And lots of retailers, banks, etc, are using systems that track people's visits across multiple locations. [2]
You'll see a lot of these systems being sold as fraud/loss prevention solutions. The reason for this is that it's a relatively easy sell this way - customers can count how many thieves they've caught this way to easily determine the ROI they're getting on the system. Once the systems are in place, it's relatively easy to start using them for marketing related purposes.
Not all uses of systems like these are necessarily unethical. Consider a case where you want to set up a rule like 'if the average lineup length at the checkouts exceeds 5 people, call backup cashiers'. The problem is that once you have something like this in place, it's very tempting for company execs to want to use the data for legal but less than ethical purposes.
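To make that checkout rule concrete, here's a minimal sketch in Python (assuming a hypothetical lineup_lengths() feed from the camera vendor; this isn't any real product's API):

    # Minimal sketch of a 'call backup cashiers' rule, assuming a
    # hypothetical camera API that reports per-checkout queue lengths.
    from statistics import mean

    QUEUE_THRESHOLD = 5  # average people per open checkout

    def lineup_lengths():
        """Hypothetical stand-in for the camera vendor's API."""
        raise NotImplementedError

    def check_queues(page_backup):
        lengths = lineup_lengths()
        if lengths and mean(lengths) > QUEUE_THRESHOLD:
            page_backup("average lineup %.1f exceeds %d; send backup cashiers"
                        % (mean(lengths), QUEUE_THRESHOLD))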
That's always going to be tempting, and the only real tractable solution is for society to have a larger conversation on the ethics so the law can catch up with it.
Note that some ethical consensus is key---without it, companies can just price "Well, some customers think image recognition is creepy" into the risk model and do it anyway. Compare privacy concerns---people talk big about their concerns over privacy, but in practice, we're still in a world where a survey-taker can get very personal information from a random individual at a mall by offering a free candy bar. Until and unless people arrive at a common consensus that their personal information---including their face---has value or they have a proprietary right to that information, even in public, there's no real tractable solution to this problem.
... because there's no real agreement that there's a problem to solve.
the only real tractable solution is for society to have a larger conversation on the ethics so the law can catch up with it
The department of commerce tried to facilitate talks about establishing a voluntary standard. The surveillance industry was so terrified of the idea that they should be held to a principled position that they wouldn't even budge on one of the weakest possible protections: A voluntary-participation standard that said people must opt-in to be identified by name through facial recognition when they are on public property.
Often, the ads are what pay, albeit indirectly, for the content surrounding them.
My local gas station recently upgraded its pumps to play video ads on the screen used for the credit card transaction. I don't doubt it's part of the reason that gas station is still operational when similar non-franchise vendors in town have gone under.
Oftentimes I don't give a damn about the content it sponsors. I'd much rather be able to do my business without being assaulted by ads, which often have little to do with reality and often act as an alienating and dehumanizing force. It's very difficult to see the good.
I would rather have no content than ad-supported content. Of course, nobody will ever offer that! You can't sell ads if people can opt out, and too many big players think ads are the only way.
That gas station should have charged more, or folded, rather than sell you shit you don't want, won't want, and will never spend money on.
If you're expecting people to fold their livelihoods instead of sell ads, you may not understand how attached people are to their livelihoods. ;)
Meanwhile, there are some inroads into financial support alternatives to ads everywhere. Google has a "contributor" product (https://contributor.google.com/v/marketing) where you can basically bid against the ads they'd vend to you; instead of an ad running, you pay a microtransaction to buy the privilege of no ad.
It's an interesting idea, but it only works with Google's ad network.
Frankly, I don't mind Google ads; I mind wasting 20 seconds to load a page with about two paragraphs of content and 3MB worth of ads. But this is all ignoring the broader point: why are we basing our revenue on patterns many recognize as toxic, consumerist, and negative-value? People AT GOOGLE will happily admit this while working to build it.
I do my own part by supporting AdNauseam[0] and actively punishing sites that serve ads, particularly Facebook and Google. It's also decent as a (very shallow, for now) layer of noise for your ad profiles. Offer me a flat fee and convince me to spend; don't trick me into viewing ads.
Anyone have a source for this oft claimed fact? Retail to spot spread is averaging $.50-$.70/gallon for 86 octane with $.20-$.30 added for each premium tier in PHX. Does adding detergent & transport eat up that much margin?
There's no real reason to believe that, though. If someone has a space for an ad, why wouldn't they sell it, even if they don't need it to produce the content? This is one of the problems with profit-maximization: it means every avenue of efficient revenue generation should be exploited whether it's needed or welcomed or not.
Even the pay-for-no ads model doesn't hold up, because if you pay for content, why wouldn't they just double-collect and make you pay for ads served with the content? I purchased my phone and my phone service, but I still get ads in my notifications. Because I didn't pay "enough" to avoid it.
It's like paying off a blackmail ransom. You give them $100 and they come back next week and say "how about another $100?"
"The cameras retailers use with their surveillance systems are coming with facial recognition built in now. [1]"
Your source is the marketing material of an IP camera manufacturer.
We research that space and I can guarantee that less than 0.1% of IP cameras have facial recognition built in or running. Manufacturers like Axis, whom you cite, would love to sell such capabilities, but they are still very uncommon.
>We research that space and I can guarantee that less than 0.1% of IP cameras have facial recognition built-in or running.
While I'm sure this is true (since the majority of IP cameras in the world are cheap things little more than webcams), do you have a number for retail stores specifically? I know many of the larger chains spend a lot of money on their cameras and movement detection and other intelligence has been onboard those for at least 15 years.
Just yesterday I was hearing news of how most of the retail giants and lots of smaller retail stores are going out of business due to competition from ecommerce. If that means the end of practices like this, then good riddance.
But, aren't ecommerce sites collecting this information and more from your browsing? I don't think it's possible to say one is much better than the other, just that we expect tracking online, not in the real world.
There is the point that in the "real world", social norms haven't yet adapted to the requirements of privacy (although you could also view it as societal norms allowing too much tracking). For example, if I wanted to use a mask to conceal my face from trackers, I would be ostracized. There are analogues in the virtual world of course, but it's usually harder in the physical world.
It's likely the traditional retail that falls by the wayside is going to make room for more competitive retail that leverages this information to its advantage in ways ecommerce sites can't.
Consider Amazon Go (https://www.theverge.com/2016/12/5/13842592/amazon-go-new-ca...): after setting up an account with the store, users enter, grab what they want, and leave. The system of cameras and biometric trackers observing the store figures out after the fact what you grabbed and charges it automatically to your account, through sensor fusion including face recognition. That's a level of convenience rivaling ecommerce for things people want to grab by hand (often produce and small items, for example), and it's completely enabled by this category of technology.
Perhaps, but it simply means the survivors will become more desperate to gain an edge. We've seen this exact behavior with online news sources cramming more and more ads and trackers into websites.
It also picked up the colors in my aloha shirt perfectly. (Anyone who knows me knows that I am to aloha shirts as Steve Jobs was to black turtlenecks.)
When I want to feel young and go shopping for shirts, now I know what to do!
Hey! We should make a club. I am a 41 year old male and it recognized me as a 26 year old female. It recognized my wife at her age and gender until she took her glasses off. She lost ten years and stated, "I'm never wearing my glasses again!" Then she proceeded to walk straight into the wall.
I tried the demo. A little sad that it sees me as 12 years older than I am, and that I apparently always look angry and disgusted! I'm going to blame it on my glasses and bushy beard, and try to look at the bright side: apparently face scanning systems aren't quite good enough to get a read on me yet. (And try not to be too sad about looking like a grumpy old man.)
(Anyone else with glasses, a beard, or other non-typical facial features want to comment? I'm curious now how well their system handles these.)
Same experience here. Shows my age 10-15 years more than actual. I tried to smile and it just filled up the "disgust" bar. Neutral expression shows a high amount of "sadness". I wear glasses too, and have a slight beard.
I don't think glasses and beard are non-typical facial features!
With a bald head, beard/mo, and reading glasses it doesn't detect a face. Without the glasses it estimates me as 7 years younger than I am - and reasonably high on anger and sadness...
I did well... it said I was an angry 33 y/o male. Well, I am male... I'm almost 50... and I didn't think smiling at the camera conveyed anger... but who am I to question our AI overlords ;-)
(edit)... on the other hand they probably were just trying to sell product so thought flattery was the right approach...
Says I'm 4 years older than I am (31 / 27) and have high levels of sadness.
Covered up my receding hairline a bit and it said 29. I reckon if I shaved I could get it down to about 22 since that's how old people usually think I am.
The real metric for judging its effectiveness is comparing its accuracy to an average human observer's responses. I doubt a human would do a lot better at estimating someone's age.
Don't think so - I just used the same pair of images - one with my glasses on and one without - AWS Rekognition guesses mostly the same for both of them - the sightcorp.com one doesn't even detect a face when I've got my glasses on.
Rekognition guesses a wider age range - but gets a "correct" answer - the sightcorp one guesses me as 7 years younger than I am.
I was very unhappy to discover that shoes now often have RFID tags built into the soles. This, plus the anti-theft RFID readers already deployed at the entry of most stores, makes it easy to assign unique IDs to shoppers.
Most anti-theft tags are not RFID and the gates are not full RFID readers. At least in Europe, vast majority I see are still based on simple resonators that get disabled on checkout. Effectively, the gates only provide a yes/no signal and can't be used for tracking.
Applied Science has a good video on how they work:
This is super cool. Was there any announcement or documentation for this?
We used a Cisco Meraki router once for a client and rigged it up to know who was in the office (for fun, to be aware that it could be done). It'd be nice to know whether iPhones/iPads scramble their MAC addresses, if possible.
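For context, presence detection like this typically keys off the MAC addresses in Wi-Fi probe requests. A rough sketch of the idea with scapy (the interface name and monitor-mode setup are assumptions, not anything Meraki-specific):

    # Sketch of probe-request-based presence logging, in the spirit of the
    # Meraki experiment above. Assumes a wireless card already in monitor mode.
    from scapy.all import sniff, Dot11ProbeReq

    seen = set()

    def log_probe(pkt):
        if pkt.haslayer(Dot11ProbeReq):
            mac = pkt.addr2  # source MAC; modern iOS/Android may randomize this
            if mac not in seen:
                seen.add(mac)
                print("new device seen:", mac)

    sniff(iface="wlan0mon", prn=log_probe, store=False)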
Only if your phone is locked and it is looking for all open wifi networks. If you unlock it or it is connected to a particular wifi network this is not true.
I've heard that you can disable RFID readers (not tags, readers) with an appropriately-resonant coil and an EMP circuit.
I'm not sure if the same can be done to tags, but considering the size of the tiny electronics, and the fact that they are manufactured under the assumption they'll never need to be touched (aka, no CMOS spike tolerance), it might be trivially...
...wait. I just remembered about RFID alarm barriers in retail stores.
Well this is annoyingly difficult to discuss, then...
I draw the line where actions are taken. A discussion is not an action. Using a device illegally is, like for instance pulling the trigger of a gun with the evil intent of murder, or taking something that isn't yours.
Another trick would be to pay for the shoes in cash; in that case they will not be able to link the RFID chip to your real identity. Cash payment is a very privacy-friendly technology.
That's not the point - the point is being identified as an entity by a unique marker that the RFID tag gives off. It's still an anonymous entity, but it can be deanonymized by correlation... with your face via video or whatever.
Me neither (too lazy for that), but I know they are used and abused by people too. This leads to funny cases I heard of like a company specifying that some shoes are for "walking", not for "running", and refusing to refund them if you admit to running in them.
If it's "not unreasonable" for them to reject warranty claims if you run in shoes "not intended for running", does that mean it is reasonable to make a warranty claim on shoes "intended for fashion" if you're not picked up while wearing them?
"Wore these heels to six bars, didn't get hit on once. Please repair or refund."
Do you have a source for this? All I can find online is the occasional use of RFID for stock management or the odd marketing campaign. But nothing about customer tracking
When I was 18, some lady at the community pancake breakfast in my grandpa's small town told my mom that 13 and younger are free. Humans make mistakes too.
Are you by chance of a specific ethnicity? (no offense). These systems fail spectacularly if the training set includes only certain ethnicities and the test-ee is not one of them.
Perhaps. On the other hand, it's something that good salespeople are doing internally already; there's an argument to be made that this is just automating yet another part of the customer service process.
(There's an old joke that sometimes shows up on HN about augmenting an automated bugtracker to snap a photo when a crash is detected or a bug is reported, so developers can be reminded that bugs tie down to real people who are actually sad / angry that the software failed them ;) ).
Why is this scummy exactly? If a salesperson was to try to sell to you in a store, they would take into account how you appear and act to tailor the sale. There's nothing wrong with that. Why is it suddenly bad if a machine does it?
Because when you talk to a salesperson you know you're being looked at (and reciprocally you're looking at them), and human memory is limited so it's unlikely they will retain any "data" about you when the contact is finished.
Here, instead, there is no indication that you're being watched, analyzed and kept recorded for indefinite amounts of time.
Reminds me of a law here in Sweden and how car surveillance works on the bridge to Denmark. The law forbids the unnecessary registration of people, so in order to avoid breaking the law the police have a live system in place where information on a car on the Danish side gets shown on a screen on the Swedish side, giving border and toll guards enough time to react. The whole thing is legal only because the system operates live and never stores any data, which would otherwise create an illegal register with personal information.
I assume that the data is being used for A/B testing on the display designs (we get 20% more attention from teenagers when the background is orange) - if that's the case, not very scummy.
If you are in public you are being looked at; I do not understand your logic. When you go to a public place, there are already publicly accessible webcams that people use to track this kind of thing; I remember a thesis that used publicly accessible cams to try to track people and build up a database. I have always had the opinion that you lose privacy when you leave your house, since you are in public, and public life is the opposite of private/privacy, so to me it makes sense.
I have always had the opinion you lose privacy when you leave your house
Privacy is not black and white.
There is a world of difference between someone seeing you for a moment as they pass you in the street and forgetting you a moment later, and automated systems that permanently record everything, analyse it, correlate it with other data sets, make it searchable, and ultimately make automated decisions or provide information that will be used by others to make judgements about the affected individuals, all without the knowledge or consent of those individuals and therefore without any sort of reciprocity.
The idea that you have no reasonable expectation of privacy in a public place dates from a time when you could also expect to pass through town in relative anonymity, go about your business without anyone but your neighbours and acquaintances being any the wiser, and would probably change those neighbours and acquaintances from time to time anyway so the only people who really knew much about you would be your chosen long-time friends and colleagues. I think it's safe to say that that boat sailed a while ago, and maybe what privacy means and how much of it we should expect or protect aren't the same in the 21st century.
Just because there is no expectation of privacy does not mean that a reasonable person would assume that their every action is being recorded in precise detail to be stored away forever by a third party.
A lot of things are technologically feasible, and in many cases can't realistically be prevented ahead of time, yet are still considered socially unacceptable or even made illegal. Just because we can do something, that doesn't mean we should. This principle has never been more relevant than in the use of technology.
What's technologically feasible is irrelevant to our moral expectations. It's technologically feasible to brain you with a club and steal your stuff, and has been for millennia.
Preventing the misuse of Blunt Instrument Technologies™ is literally what laws are for. Surveillance is just a club we don't have laws about yet, but should.
Well, your behavior and appearance isn't logged in some computer somewhere available for someone to look at whenever they want. Not to mention, face-to-face interaction means you know someone else is watching. This allows someone to do this without your knowledge.
Your reply is disingenuous. The problem is not that abuse is not possible in a human-driven system. Of course some gifted salespeople have incredible memories, hypnotic powers of persuasion, and so on. However, you must consider the following:
1) These people are rare in the general population, and demand for their time is likely to be incredibly high. Therefore, they cannot be deployed everywhere, unlike machines.
2) When confronted with a human being in a sales scenario, people have a chance to be on guard against potential manipulative behavior. When the sales scenario becomes ubiquitous and invisible, it is much harder for people to avoid being taken advantage of.
3) Ethics are not so absolute. Something that is only mildly bad at an individual level can have terrible results when thousands are doing it. (Littering, for instance, or illegal hunting/fishing.) This is known as a social trap, and it leads to negative outcomes for everyone involved.
>1) These people are rare in the general population, and demand for their time is likely to be incredibly high. Therefore, they cannot be deployed everywhere, unlike machines.
A temporary problem solved by natural selection, technological augmentation, and increasing incentives. Perfect performers in any profession are hard to come by. Ambitious people still strive to get there.
>2) When confronted with a human being in a sales scenario, people have a chance to be on guard against potential manipulative behavior. When the sales scenario becomes ubiquitous and invisible, it is much harder for people to avoid being taken advantage of.
Because people don't understand technology or sales. In your reality, people should be on guard all the time, because sales and marketing were already continuous, even before hidden cameras. In actual reality, most people don't care that much about being sold to as long as the sale itself is not abusive.
> 3) Ethics are not so absolute. Something that is only mildly bad at an individual level can have terrible results when thousands are doing it. (Littering, for instance, or illegal hunting/fishing.) This is known as a social trap, and it leads to negative outcomes for everyone involved.
Sure, but that omits the necessary step of justifying this behavior as being either mildly bad on an individual level or terrible on a mass scale, much less both. It is neither.
Also, I would add item 0: advances in technology mean that surveillance devices will only become smaller, cheaper, and more connected over time. The future you fear so much is, in fact, inevitable.
You are applying binary "all or nothing" logic to the real world, which contains many more shades of grey.
It is true that technology (both social and digital) continues to progress, and that the genie can't be put back in the bottle once it escapes. However, you don't have to put it back in the bottle. Speed limits don't stop speeding, and laws against murder don't stop homicide. The legal and regulatory system exists not to fully prevent undesirable behavior, but rather to reduce it to a manageable level.
In short: I agree with one part of your premise. Technology will continue to evolve and will continue to challenge human society in this area. Unlike you, however, I don't believe that we have to roll over and accept the implications and consequences of unregulated privacy invasions, neuromarketing and whatnot.
I don't think that either, because I correctly recognize that in public, you do not have privacy, either de jure or de facto. Especially if you're not even wearing a burqa, which would today at least give you de jure privacy because it demonstrates intent.
I'm sure that in the future, we will also create cheaply available opaque faraday cages that you can roll around in if you wish. And that most people will not care to do so.
You do have privacy in public. Both the de jure "reasonable expectation of privacy" and the de facto privacies of anonymity, free association, and predictable rules of social engagement.
Well, I seem to have no trouble practicing all of those, so I know they are based on fact. Perhaps you don't actually understand what I'm talking about? Or maybe your experiences differ. Either way, telling me that the things I personally do are not being done is... not an argument.
>>the de jure "reasonable expectation of privacy"
> Does not protect your exposed face
Yeah, that's why it is "reasonable expectations" not "absolute enforcement."
In other words, as long as you are unaware of the surveillance, you are happy to pretend it doesn't exist? So where's the problem? Just don't click on links like the OP.
Eidetic memory and follows you everywhere and can transfer all those memories perfectly to any number of other people? Yeah. It's like super-stalking and it's obviously horrible.
Stalking per se is mostly only illegal because it becomes harassment and bothers the victim. This kind of monitoring is entirely unobtrusive. As the response to the original tweet illustrates, most people aren't even aware that it is happening.
The information is being used to conduct asymmetric psychological warfare. The notion that it's harmless so long as it's never outright abused, where we define abuse as use for other than its intended purpose, is false.
Being subjected to constant sensory input and trickery from dozens of teams of experts on consumer psychology is bad enough when they haven't also been stalking and recording your every move.
Caveat emptor becomes an absurd position when the power imbalance is so great. Massive data collection and mining needs to be reined in. The fact that it's not obvious that people seeking to trick you by any means necessary are recording you everywhere you go does not make it OK, at all. Surveillance capitalism is way, way over the line, has been for some time, and just keeps going farther. That they're good at keeping you from realizing you're under surveillance is no defense whatsoever.
Complaining about warfare that is asymmetrical solely due to the incompetence of one side does not elicit any sympathy from me.
Consumers do try to aggregate data for the equivalent of "massive data collection and mining". Most just don't care to pay for something that is not wholly controlled by a storefront. Generally, producers are more likely to understand the ROI.
I find it funny that the store doesn't trust their salespersons to make such a judgement on their own. Probably they hope to do analytics on what kind of people are visiting and when. Selling the data would only make sense if they are able to link it to an identity, I am not sure that they can legally do that.
Well, you never know into what dystopia you are heading...
Most humans today are prejudiced against nonorganic life due to not growing up interacting with anyone but other meatbags.
There's a huge double standard in place that makes it somehow wrong for computers to do what humans have been doing without objection for decades or millennia.
It's because people view the AI as an infallible machine that records everything, which is much more intimidating than the gut instincts of a salesperson.
Right, that's the manifestation of their prejudice. In reality, there is a spectrum, not a dichotomy, and some humans can have better memories than some computers.
Germany is an outlier; their history with the Nazis makes the country unusually conservative about anything that could be abused for mass-surveillance purposes.
Not that this is a bad thing---being able to think differently like this is one of the positives of having countries!---but relative to the rest of the world, what Germany considers "surveillance" is unusual and sometimes surprising.
That demo is amusing. I did a Google image search for "N year old faces" for N = 5, 9, 10 and 14, and eliminated results where the accompanying text did not confirm the age, and then gave some of the remaining ones to it. It was always at least 10 years too old on its guess for these children. It got the gender right maybe 3/4 of the time.
I also tried it on a few internet porn images. It looks like it is definitely only relying on the face for determining gender, or it thinks that there are a lot of women with hairy flat chests and large penises...
Is it creepy? Sure. But anyone can run it. I was looking at a rewrite to work in CLI with a web interface instead. But the core loop is the magic part that makes everything work nicely.
No one's going to go to jail for their first offence of putting a camera in an advertising sign.
What's the worst that could happen? The local advertising regulator will order you to meet the regulatory standards or remove the cameras, but give you x number of weeks to act per unit installed.
I will be honest, I have no real problems with this. Then again I enjoyed some of the concepts for advertising shown in Minority Report which did feature ads which could identify you.
The idea of collecting who looked at your display is invaluable. It would be beneficial for both government and ad agencies. The ad agency case is obvious, but government could learn whether displays present information people want, and whether it was presented in a manner that catches their attention. The negative aspects of government use could be limited through privacy laws and such.
That job measured the number of keystrokes per hour of each employee. You had to maintain a 10,000 keystrokes per hour minimum data entry rate, and they also spot-checked for accuracy, capturing the data you were supposed to enter (a scan of a piece of mail), what you entered, and what should have been entered.
While there was no question about goofing off... they used commodity hardware, but nothing else was general purpose (no internet, no email, no solitaire, no obviously general-purpose OS), and no phones, talking, etc. They were very much watching for speed and accuracy during the entire time you were clocked in.
Yes, this. I don't know what technique they used. I expect it was the known-value variety, because they would have better automation in error detection. They may also have used a consensus model, showing work product that wasn't really known-value but was shown to enough different data entry people that the error rate was negated (the error rate requirement was pretty low as I recall), since as a group almost everyone would have done the entry correctly. A third alternative would be to send images that had passed OCR intake as the testing sample.
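To illustrate the consensus variant (purely a guess at how it might work, not how that employer actually did it):

    # Illustrative sketch of consensus-based accuracy checking: the same image
    # is keyed by several operators, the majority answer is treated as truth,
    # and each operator is scored against it.
    from collections import Counter, defaultdict

    def error_rates(entries):
        """entries: iterable of (image_id, operator_id, keyed_value) tuples."""
        by_image = defaultdict(list)
        for image_id, operator_id, value in entries:
            by_image[image_id].append((operator_id, value))

        errors, totals = Counter(), Counter()
        for answers in by_image.values():
            consensus, _ = Counter(v for _, v in answers).most_common(1)[0]
            for operator_id, value in answers:
                totals[operator_id] += 1
                errors[operator_id] += value != consensus
        return {op: errors[op] / totals[op] for op in totals}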
It was tedious and not well suited for people needing a certain level of variety or intellectual stimulation in their work. I found it a touch soul-crushing... on the other hand, I knew people there who were completely content doing exactly that work. They'd turn on their radio/books on tape/etc. (some, including me from time to time, would even read a bit) and go to a world of their own thoughts for 8-10 hours and be content. I had a relative that worked at the same place for 20 years and was completely happy with the job and life.
I did the data entry job for (as I recall) about a year and a half: it paid the bills and gave me a better life than I would have had without it, and I didn't have the right qualifications or work experience for anything better. For those reasons I was appreciative of the work, while at the same time I looked to improve my working situation... something I can say of my work today (though what counts as improvement in working situation is way different now).
There are many jobs out there that need doing. Many of them are boring, or dirty, or dangerous. I don't think that necessarily makes for any more or less of a "sad life". I'm completely thankful to those that do those boring, dirty, or dangerous jobs. Some of them, like me, did it as an early first job sort of thing and used the work experience to get a better job: we paid our dues as it were. Others genuinely like what I find unfulfilling or uninteresting. Some want to work outside, some want to work with their hands, some want to work with their minds, and some simply want to be a bit financially better off than they would be without the work.
"There is no such thing as a lousy job - only lousy men who don't care to do it."
We used to do this in a school - it wasn't creepy - hear me out:
We had labs of iMacs and if anything happened to a machine, kids would (more often than not) just yank the power cord. Occasionally this would foobar the machine entirely and create unnecessary work. We couldn't catch the culprits.
So, if the machine managed to boot up successfully, after an unexpected power loss, we would take a photo using the built in camera and send it along with a ticket to the job queue, as well as do a complete re-image of the machine (automatically).
We collected quite a few funny photos of kids just staring at the computer with WTF looks on their faces. But from these photos we at least had the opportunity to educate the individuals about how to look after the computers a bit better.
There was a minor scandal a few years ago when a school was sending laptops home with kids and randomly snapping photos via the webcam for similar reasons
I think this is the original story, you can follow the "related stories" links at the bottom to see how it developed:
It's been a while but I don't think the old iMacs at my elementary school had individual user logins at the system level. For things like the reading test program there was a login I think but not for the whole machine.
Now I kinda want to see what comes out if one gives a neural network a big pile of pre-transition photos of trans people, and another pile of similarly old photos of cis people. Would it be able to reliably predict if someone's likely to transition based on some subtle cues in the images?
Problematic parts: sourcing these images (I know there sure aren't many photos of pre-transition me, and I don't let the ones in my possession go much of anywhere), lots of ethical issues around having a system that can say "I am 97% sure this person is gonna want to transition". Also probably lots of other ethical issues I'm not thinking of.
No idea, I wasn't really being serious, but what inspired me was a real case of a woman who somehow got identified as pregnant by her supermarket long before she was willing to tell anybody (I don't remember the reasons), and they sent her some coupons or something and her family found out. Needless to say, she didn't appreciate it.
Presumably most cashiers would just optimise to pressing the same button on every transaction, since doing it "right" makes no difference observable to them.
Unlikely to be common behavior. The tally would come back at the end of a shift or day and the person doing that would be reprimanded, then fired if it continued. It would be enforced by the manager/s.
Now, would they press a random demographic button after that (instead of the same button every time)? Maybe, however there are numerous other ways to increase compliance in that case as well. If the logs kept coming back sketchy, well the cashiers are on tape - bam, another firing (note from the video tape: cashier intentionally hits male button when it's obviously a female, then repeats the behavior multiple times). Eventually the example gets across to the other workers to at least make an effort.
You're giving the organizations too much credit. A roommate of mine worked at a fast food restaurant in the early 2000s, where they were measuring the speed that workers took orders by timing the transactions on the cash register. The store manager had the brilliant idea to game it by running all cash transactions on pen&paper - not using the register at all! not even the cash drawers! - and then keying in the orders as fast as possible after the fact. The store won an award, the manager got a big bonus.
In a few stores here in Australia, I am asked for my post code (or country of residence if no post code) by the cashier when ringing up a sale. Predominantly at tourist/tour-related points of sale, but I've also had it at electronics and white goods stores.
No idea if the operator is also recording gender and perceived age group etc., but I do know that on most occasions, you can opt not to answer the post code question.
When you go to those weekend open inspections in Sydney, you are almost guaranteed to be asked for your postcode. They record that as well.
I actually did some experiments - for different properties in roughly the same area/price range, I told different real estate agents different postcodes. It is beyond reasonable doubt that the code you tell them plays a huge role in how they rank you as potential buyers. When you tell them a random north shore post code, you are guaranteed to receive a nice & friendly follow-up on the coming Monday; however, if you tell them that you live in the west (when mostly inspecting north shore properties), they will smile and immediately end the whole conversation.
The sample size here is ~50, which I believe is big enough to draw some reasonable conclusions.
Buying data lists. You use your mobile number to get your cinema ticket - or whatever - and the cinema sells that data. Do you get package alerts when you have a parcel due? Now your phone number is joined to that address, plus presumably the credit card companies sell their data (?).
Companies amalgamate that data, then sell lookups of varying degrees.
Screwfix in the UK gather a lot of personal data as part of their sales process, they're the least covert about data gathering I've seen.
I'm not sure if it's true, but a friend once told me the reason a store asked for your zip code was to see if they had a large audience coming from a certain area. This let them know other locations to possibly open other stores.
As far back as 2000, my partner was asked for her zip code as we made a purchase, and I curtly replied "no comment," to the surprise of everyone there. She was also asked for her phone number (which really irritated me, because this was a $5 retail purchase of some type), and I got more curt when I said, "You don't need that!"
I've been annoyed by this stuff long before most people were ready to consider privacy concerns anything more than paranoia.
I like to give fake numbers in these situations. The way I see it, intentionally supplying bogus data is one of the only ways we have left to fight the machines and their algorithms!
I like to give obviously fake numbers. Like 12345 for a zip code or 212-555-1234 for a phone number. Most people don't care enough to have a reaction, now and then you get a laugh, and rarely you'll get someone who calls it out as bogus. My standard retort is somewhere between "Are you saying I don't know my own phone number?" and "Are you calling a liar?!" depending on how surly the response.
I was in an Albertsons back when they wanted a number in Tahoe and went in the 2nd time... the cashier remembered me and said "what's that number again... something something something 5309..." and was dancing a bit... it took me a second, then I said "what's the area code here?" He gladly gave it to me, so for about 12 years I just did $CURRENT_ZIP-867-5309.
My safeway card is in someone else's name... one day they had to pull it for some reason and I got a "Have a nice day.... Mr.... Soprano." and a big smile.
My father had memorized a fake Social Security Number that had come as a sample card in a wallet he got in the 1950s. When anybody except the government asked for his SSN and who wouldn't relent on his pushback, he would give them that number.
Wow, nice! I don't know if he was one of the 12 in 1977, but he would have been if this is the number he used. Woolworth's totally makes sense. If he were alive, he would poop purple Twinkies at that story. Thanks!
He's been dead for 10 years, but he didn't use it for those. As I remember it, in the 80s-90s it was more common for SSNs to be requested for normal consumer things.
I gotta say, it's possible he never actually used it in my lifetime, but he could sure tell that story and rattle off the digits at a moment's prompting.
There's a bit of plausible deniability if you give slightly bogus info, like transposing numbers. You could assert that there was a typo on the company's part.
Sometimes the credit card terminal will ask for my zip code, especially at gas pumps, as an anti-fraud measure, and will reject the transaction if you enter the wrong number.
If a cashier is asking I always use 90210.
I guess a lot of people not from the US will use 90210 when prompted for a US postal code. I can't even remember what the show was about (except that it was set in Beverly Hills, obviously), but the number stuck.
I had the same issue when visiting the US. I tried some fake numbers but it wouldn't accept them (my actual postal code has letters, so using that wasn't possible), so I just ended up paying by cash.
Post codes in Australia are just 4 numbers, so when buying subway tickets in NYC, I just put in 10000 or something (I believe that's close enough to the local code?).
Not necessary, most point of sale systems can provide a unique hashed or tokenized version of the account number for analytics and identification purposes.
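As a sketch of what that can look like (not any particular POS vendor's scheme), a keyed hash gives a stable token, so repeat visits match without the card number itself ever being stored:

    # Sketch of card-number tokenization for repeat-visit analytics.
    # The secret key and truncation length are arbitrary choices here.
    import hashlib, hmac

    SECRET_KEY = b"per-merchant secret, kept off the analytics side"

    def card_token(pan: str) -> str:
        """Derive a stable, non-reversible token from a card number."""
        return hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()[:16]

    # The same card yields the same token on every visit:
    assert card_token("4111111111111111") == card_token("4111111111111111")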
By cross-referencing your name (from your credit/debit card) with your zip/post-code, stores are able to determine specifically who you are with greater probability than without the zip/post-code.
Seriously, I mostly shop in the same neighbourhoods. And when not, there's often something on the counter with their address...or I can give a mate's address and let him get the junk mail....
I volunteer at the Boston Museum of Science on Sundays and we also track how many people we interact with at the various activities. We log by group, so a log might read "1 man, 1 woman, 2 boys, 1 girl (family)" or "3 women, 10 girls, 12 boys (school group)".
It's really handy to see how many people the activities attract, and who they appeal to most. You're tracked everywhere!
Sorry, I wanted you to expand on this: "cashiers recorded buyer demographics by hand".
Give me an example of what they used to record by hand. All I can think of is "male, adult". I am specifically interested in what else you say they used to record.
I wasn't asking about the present status quo, only your historical statement about cashiers recording by hand.
Thank you! Obviously that is far less privacy violating than demographics could be.
Today advertisers that phone home (spyware) often lie and claim only aggregate data is produced - but this manual example really is the kind of data that is okay. It's far less detailed than something like facial recognition. Thanks for adding the link!
You could distinguish people who are married or are parents (with false negatives) by recording people who are at the till with their spouse or children. (Send out demographic-research cards once, to a few of the same stores you've collected this info from, to derive a normalization factor that will make such collected observations useful from then on.)
You could make a note of a person's seeming affect—positive/negative/neutral emotion.
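To make the normalization factor concrete (all numbers invented for illustration):

    # Invented numbers: the till tally flags 30% of groups as 'with children',
    # but the one-off research cards say 45% of customers are parents. The
    # ratio corrects every later till observation for the false negatives.
    observed_rate = 0.30                 # share of till groups seen with children
    surveyed_rate = 0.45                 # ground truth from the research cards
    normalization = surveyed_rate / observed_rate   # = 1.5

    week_tally = 120                     # groups flagged 'with children' this week
    estimated_parent_groups = week_tally * normalization  # ~180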
I'm asking historically what was actually done, not what could be done. (For, "what could be done" you could ask if the cashier had seen this person before? Are they a regular shopper here, as far as the cashier notices?) I am asking what information cashiers in Japan actually in fact recorded by hand. What did these cards look like for each customer. Etc.
Generally what has been "accepted" as done is age and male/female. I'm sure some places have done more but that's all I've heard of (lived in Japan for about 6 years now).
During 2010-2012, I was part of a startup called Clownfish Media. We basically created something very similar to this and got scarily accurate results even then. Given how accessible computer vision has become, the image in the tweet comes as no surprise to me.
Best part - we got a first-gen Raspberry Pi to crunch all the data locally at 2-5fps. Gender, age group (child, youth, teen, young adult, middle age, senior), and approximate ethnicity were all recorded and logged. Everyone had a unique profile, and we could track people between cameras and days (underlying facial features do not change).
Next time you look at digital signage, just be aware that it is probably looking back at you.
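Mechanically, the "track people between cameras and days" part comes down to matching face embeddings against a gallery of anonymous profiles. A rough sketch of the idea (not our actual code; embed_face stands in for any face-embedding model, and the threshold is an invented tuning parameter):

    # Sketch of cross-camera anonymous re-identification via face embeddings.
    import hashlib
    import numpy as np

    MATCH_THRESHOLD = 0.6
    profiles = {}  # anonymous_id -> unit-length embedding vector

    def embed_face(face_crop) -> np.ndarray:
        raise NotImplementedError("stand-in for a real face-embedding model")

    def assign_anonymous_id(face_crop) -> str:
        emb = embed_face(face_crop)
        emb = emb / np.linalg.norm(emb)
        for pid, ref in profiles.items():
            if float(np.dot(emb, ref)) > MATCH_THRESHOLD:  # cosine similarity
                return pid  # same face seen before, on any camera or day
        pid = hashlib.sha1(emb.tobytes()).hexdigest()[:12]  # opaque new ID
        profiles[pid] = emb
        return pid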
For me, I knew how our data was anonymized. So while our system would be able to say "I have seen person 1234 at locations 4,7,9,11 on dates x,y,z", we had absolutely no way of knowing who 1234 was or anything about them; even the unique identifier was just a hash.
Obviously it depends on how much data you collect/store, personally I don't think the things shown in OP are all that onerous (sex, age group, gender, rage, time spent looking at ad).
> So while our system would be able to say "I have seen person 1234 at locations 4,7,9,11 on dates x,y,z" we had absolutely no way of knowing who 1234 was or anything about them...
Minor nitpick, but giving someone a nickname isn't the same as anonymization.
"Hey Bob, thanks for logging on. Did you know we've been calling you 1234 these past five years!"
When a passive recognition system _uniquely_ tracks & identifies a person, it just takes time before that gets cross-referenced.
(different story if the data gets aggregated, or you scrub the uid completely after some window)
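The point is easy to demonstrate: a deterministic hash of an identifier is a pseudonym, not anonymity, because anyone holding a list of candidate identifiers can reverse it (identifiers invented for illustration):

    # Why 'the unique identifier was just a hash' is pseudonymization:
    # deterministic hashes of guessable identifiers fall to a dictionary attack.
    import hashlib

    def uid(identifier: str) -> str:
        return hashlib.sha256(identifier.encode()).hexdigest()

    logged_uid = uid("bob@example.com")      # what the tracking system stores

    candidates = ["alice@example.com", "bob@example.com"]  # any leaked list
    lookup = {uid(c): c for c in candidates}
    print(lookup[logged_uid])                # -> bob@example.com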
Under this strong definition, anonymization doesn't exist in practice at all. Strong anonymization requires serious destruction of information (e.g. reducing all samples to a single average number). It's not what people in ad industry do.
I work on digital signage, our product isn't using facial expression recognition yet, but it has been asked and will eventually be a part of the system.
What's the difference between this and an anonymised dataset? No PII is tracked, it's just looking at you and calculating what emotion you're likely feeling to show more targeted advertising.
I mean, I'm personally against it but we've got to prove a higher and higher ROI to justify the cost of digital signage, this leads to just that.
You going to offer me another job with comparable pay and work-time flexibility? I'll take it if one's going, but right now this is my gig.
If you want to start a war, have a quick Google for the big players, they'll have this tech in and will be proudly advertising it on their site.
The thing is: People don't care. Not your HN reader (evidently), but your Joe Bloggs. Hell, Snowden told them the Five Eyes are reading their email and they barely gave a damn.
You can care and be opposed to something and still not stake your job to stop it. I'm against a whole lot of things that I do not spend all my time fighting because I have things to do, or it would be inconvenient as hell.
And some I sacrifice things for. A person can't die on every single hill they happen to fancy :)
"Hi. I am the original taker of the photo. There is a screen that normally shows peppes pizza advertisements in front of peppes pizza in Oslo S. The advertisements had crashed revealing what was running underneath the ads. As I approached the screen to take a picture, the screen began scrolling with my generic information - That I am young male (sorry my profile picture was misleading, not a woman), wearing glasses, where I was looking, and if I was smiling and how much I was smiling. The intention behind my original post on facebook was merely to point out that people may not know that these sort of demographics are being collected about them merely by approaching and looking at an advertisement. the camera was not, at a glance, evident. It was merely meant as informational, maybe to point out what we all know or suspect anyway, but just to put it out in the open. I believe the only intent behind the data collected is engagement and demographic statistics for better targeted advertisements."
It is still a BIG ethical issue for some people. Myself I see this as just the natural progression we are headed. If we don't have rules about this kind of technology it will very much be "Minority Report" in a decade.
More like strangers taking pictures of you without your consent (and often knowledge) with the intent to increase their profits and not sharing any of that with you.
Ethically, neither your consent nor your knowledge is required for someone to see you in public and remember that image. Why they do it isn't really relevant. If they use that image to do something unethical, like commit fraud, it is the fraud that is unethical, not the imaging.
I think many people would disagree. It might be legal, but that doesn't mean it's not unethical.
If a stranger on the street started following me, taking pictures without permission, and taking notes about my appearance or actions and storing it all in their database, I would say he was behaving unethically.
Ask street photographers - it's a delicate balance. Many people really dislike having their pictures taken without their permission.
How does this compare to the pre-automation practice mentioned above of cashiers manually making a tally of how many men/women of each age group were visiting?
I mean, this is literally the "global village" coming to fruition. The online shopkeeper knows you just as well as a shopkeeper in a real village: he knows who you are, he remembers all your previous visits, he knows your hobbies (even if you didn't tell him about them, someone else in the village did), and he can make suggestions based on that.
When you buy flowers, the village shopkeeper knows not only who's buying them, but also has a good idea for whom these flowers are intended. That's where we're heading.
This is the level of (non)privacy that we historically had, living in much smaller communities than modern cities. The trend of more anonymity brought by urbanization is reversing, but it's not something new or horrible, if anything, the possibility of being just another face in the crowd is an anomaly that existed for a (historically) short time and is slowly coming to an end once more.
That is a lot of words to simply say that some people think it is unethical. Which is an essentially empty statement. Couldn't you at least say most people and make it an argumentum ad populum?
Lawful and ethical are two totally different things. They are related, but neither implies the other.
ethical != opinion
Ethical has weight: you can lose your job, and even go to jail, for being unethical. RMS actually has a very strong academic ethical mind (even though I disagree with him more than I agree). BUT ethics isn't easily defined.
Until relatively recently, unavailability of large stockpiles of consumer data (at least, stockpiles at the scale now possible) was a significant impediment to a large and probably mostly-undiscovered class of potentially unethical behavior. Do you not suppose the removal of that impediment, with no other equally powerful compensatory regulations or oversight, to at least potentially be a serious problem now or at any time in the future?
Until relatively recently, unavailability of large stockpiles of consumer data (at least, stockpiles at the scale now possible) was a significant impediment to a large and probably mostly-undiscovered class of potentially ethical behavior, as well as behavior that actively combats unethical behavior. Data itself is amoral and can be used for either good or bad.
This is a gross mischaracterization of the issue. You aren't looking at the bigger picture.
Do we really want to commoditize the simple act of walking down the boulevard? Make every moment in public space (and private digital space!) sliced, diced, and scrutinized by God knows how many data munchers, middlemen, analytics brokers, and ethically challenged people in order to compel as much thoughtless consumer spending as possible, long-term consequences be damned? Allow incredibly detailed profiles to be built up on every person, spanning the decades of their life? And of course, there is always the danger of governments co-opting and abusing this information years or decades in the future, after administrations have come and gone, and laws have been overturned, drastically altered, or ignored. As the tech and richness of the data increase, the temptations will as well. Well-meaning people can do nefarious things in certain contexts.
I believe our societal institutions and corporate entities are not mature enough to safely handle the power granted by unrestrained, high resolution data on the entire populace.
Granted, I don't think things would get too terrible without overwhelming protest, but I don't see why we should bet on that.
"You aren't looking at the bigger picture" is just an arrogant way to say "I think you're wrong and I'm right". It can be safely omitted in favor of actual arguments.
>Do we really want to commoditize the simple act of walking down the boulevard?
It's not a boulevard you are walking down, but a bazaar. The only difference is that modern technology allows you to visit the bazaar to be "sliced, diced, and scrutinized by God knows how many data munchers, middlemen, analytics brokers, and ethically challenged people in order to compel as much thoughtless consumer spending as possible" without physically travelling there.
>Allow incredibly detailed profiles to be built up on every person, spanning the decades of their life?
Sure. It's called a relationship. Or a memory.
>And of course, there is always the danger of governments co-opting and abusing this information years or decades in the future, after administrations have come and gone, and laws have been overturned, drastically altered, or ignored.
You can safely replace "this information" with virtually anything useful and get the same effect. Do you feel the same about, say, nuclear weapons? Or legal authority to lock people in cages? I would say either of those is far more dangerous than data. Yet we recognize that the power exists regardless, and the government can at least put it to good use.
>I believe our societal institutions and corporate entities are not mature enough to safely handle the power granted by unrestrained, high resolution data on the entire populace
Then the obvious answer is to improve societal institutions and corporate entities, which is useful in and of itself, rather than futilely trying to impede the progress of technology.
> It can be safely omitted in favor of actual arguments.
Fair point, I could have dropped that sentence. I stand by my gross mischaracterization statement, though. Programmatic surveillance is very different from a stranger looking at someone.
> Sure. It's called a relationship. Or a memory.
The profile built up on people by ad brokers and spy agencies is a relationship? I don't think that's how most people would describe it.
> You can safely replace "this information" with virtually anything useful and get the same effect. Do you feel the same about, say, nuclear weapons? Or legal authority to lock people in cages?
Uh, a core part of the problem is this information being coupled with the ability to lock people in cages (or exert power in other ways). Obviously the data by itself is inert and useless. It's what people might do with it that matters.
Important examples would be restrictions on free speech and suppression of dissent. Imagine something like a credit score 2.0, created by analyzing a lifetime of private communication, online activity, and transactional data.
Those websites you visited 12 years ago? It's gonna cost you on your next car loan. And don't even think of running for city council -- the dirt will really come out then. Etc etc.
Obviously, technology brings a lot of great benefits. I'm all for that. I think we should just be aware of new pitfalls it brings as well, and try to account for them.
>The profile built up on people by ad brokers and spy agencies is a relationship? I don't think that's how most people would describe it.
Most people use language woefully imprecisely. The relationship I have with the barista at the cafe near my office isn't the same as the relationship I have with my sister but it is a relationship of the kind that's relevant here. Knowing what I order and when, recognizing me, etc.
>Uh, a core part of the problem is this information being coupled with the ability to lock people in cages (or exert power in other ways). Obviously the data by itself is inert and useless. It's what people might do with it that matters.
A nice thought, but in practice, when we try to fragment this power by privatizing police, prisons, military, firefighting, etc, all of which have many modern examples, things do not turn out well. As unreasonable as it may sound, the evidence suggests it's better to put all the eggs into one poorly run basket.
>Imagine something like a credit score 2.0, created by analyzing a lifetime of private communication, online activity, and transactional data....
> Most people use language woefully imprecisely. The relationship I have with the barista at the cafe near my office isn't the same as the relationship I have with my sister but it is a relationship of the kind that's relevant here. Knowing what I order and when, recognizing me, etc.
Yes, but that is a very different type of relationship with quite different characteristics. I hope it isn't too difficult to infer that I'm arguing not everyone wants these types of relationships. To call it "just another relationship" is not very helpful for the discussion.
This type of relationship may have significant extended and unforeseen side effects. It's not well constrained, and the preserved artifacts could easily be hijacked for countless unknown purposes decades in the future. It's a fundamentally new paradigm that we don't fully understand yet, and given humanity's historical tendency to abuse new mechanisms of power as they become available, I think some caution is very reasonable.
Perhaps to make my position a little more clear, a key point on why detailed data profiles could be quite dangerous is their scalable and programmatic nature. Never before could a single click of a button identify every individual who has been discussing topic X in the last year, or spit out a list of everyone with 2 degrees of connection to some targeted individual. The same unlimited possibilities that make this stuff exciting to technologists are also why it may be quite dangerous.
These powers are unprecedented. You would need a rotating team of investigators inside every home and every place of business in order to gather this data in previous eras, not to mention even trying to collate and process it. It's equivalent to someone in previous eras standing over your shoulder and writing down every newspaper article you read, taking notes on every conversation you have, etc. Because it is invisible, it doesn't feel this way, but that is what's happening.
To be honest, I am fairly surprised at the reaction here on HN. It's not really surprising to see such a system; it would be more surprising if such a system did not exist, because offline advertising is a huge business and the technology is here. This goes together with conversion tracking at physical shops, etc.
I am equally surprised by the comments asking how engineers can implement such systems, how they find it ethical, etc. I'm sorry, but it sounds just a bit out of touch with the real world, or at least outside of the HN bubble. Given the things that money motivates people to do, it's probably one of the least unethical things that has been done.
I am not judging whether this is right or wrong, I am simply stating that nothing about this should be surprising. Yes, it is slightly sad, but that's simply the reality of technological advancement. It's not really possible to expect the rest of the world to use technology only for things considered 'right', etc.
Well, nothing here is surprising beyond maybe the scale of things (if a random pizza joint now uses facial recognition in ads, who else is using them?). But those things still need to be called out and opposed, because peer pressure is an important part of morality in society. People are social animals, and are less likely to do things that are disliked by their friends.
Looking at things from a little distance, the whole thing is abhorrent, and paints a really sad picture of the state of our society. I've written this many times, and will keep writing it: if you personally did the same things to your friend that people in the advertising industry do to everyone, you'd most likely get punched in the face. And yet somehow marketing became a respectable occupation.
There isn't really much consensus, even on HN, that passive demographic data collection alone is a bad thing. People claim it is, and I believe they feel it is, but then they turn around and do things that compromise their stated beliefs because it's convenient.
I liken it to the gap between the rhetoric around open source and free software and the reality that Windows and Mac OS make up approximately 90% of OS market share. You can believe what you want to believe, but from a business standpoint you'd be putting yourself at a disadvantage if you structured your business around FOSS operating systems climbing to even 25% of market share; there's probably a similar situation for customer data tracking and advertising preference tracking.
Lots of Black Mirror is commentary on where we are now, not where we're going, even if the episode itself is set in the future (15 Million Merits is an easy example).
I think such a reaction is just because the article is less "techy" than it could be, and more about the "moral" aspect.
OTOH, what is so interesting about simple face recognition, innit? That future became the past quite fast; meanwhile, human rights never get old. (smile.jpg)
As someone working on a similar project (specifically, emotion recognition), I'm highly interested to hear what such a product should look like in order not to be considered unethical. So far from the comments I see that:
- it should be made clear that you are being analyzed, e.g. by a big yellow sticker near the camera
- no raw data should be stored
- it should be used to collect statistics, not identify individuals (?)
Is that sufficient for such software to be considered fair use? What else would you add to the list to make it reasonable?
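For concreteness, here's a minimal sketch (in Python, with a hypothetical `classify` stub standing in for whatever model is used) of what "statistics, not individuals" could mean in practice: every frame is reduced to anonymous counters and then discarded, so there is nothing identifying to store or leak:

    # Aggregate-only pipeline sketch: frames in, counters out, nothing stored.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Detection:
        age_bucket: str   # e.g. "25-34" -- a coarse bucket, never an identity
        gender: str
        emotion: str      # e.g. one of the 6 basic emotions

    stats = Counter()

    def classify(frame_bytes):
        """Stub for a real on-device detector/classifier (hypothetical).
        Returns zero or more anonymous Detections for the frame."""
        return []

    def ingest_frame(frame_bytes):
        for d in classify(frame_bytes):
            stats[(d.age_bucket, d.gender, d.emotion)] += 1
        # frame_bytes goes out of scope here: no raw images, embeddings,
        # or per-person records are ever written anywhere.

Whether a design like this answers the ethical objections is exactly what's being debated below, but at least it makes "no raw data stored" checkable.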
The ethics are simple: If you don't get opt-in consent, many subjects are going to feel violated. Even if you assure them you anonymize the data.
It's not enough to put a warning next to the camera because you've already captured them at that point and it's too late. If anywhere it would need to be at the entrance to the store.
If a store has a warning at the door that this happens inside, it's good because now I can avoid getting inside the store and silently hate and boycott the brand.
If a store has a warning label on the device engaging in this, it's bad because it's too late for not entering the store. I'm gonna complain right then and there to the store manager, maybe call the cops or sue. I'll be vocal about actively hating the store, the brand, the manager, the employees.
If I went to a store engaging in this without telling and I later learn about it, then I'm calling Keyser Söze and it's pitchforks and beheading time.
I suppose it will take a couple more generations of brainwashing to have the population ready to accept this kind of highly invasive technology. IIRC, about 10 years ago the Big Brother Awards were given to a French industry group for their blue book describing how to condition a population to accept surveillance and control technology over a few generations.
In some countries, like Sweden [1], this type of camera deployment is strictly regulated. A quick reading of the rules in Sweden tells me that you are unlikely to get permission for this easily.
Sure, but I would bet money that GP isn't in a country that has those kinds of regulations. I'm addressing their emotional overreaction to something that requires rational action (such as the law you mention).
Thanks for the detailed comment. A couple of clarifying questions, if you don't mind:
- do you know that loyalty cards are often used in stores to collect customer data (a kind of offline cookie)? do you consider that bad/dangerous/unethical, or does it sound OK to you?
- if instead of a camera there was a person looking at customers and recording his observations, would you feel bad about it?
> - do you know that loyalty cards are often used in stores to collect customer data (a kind of offline cookie)? do you consider that bad/dangerous/unethical, or does it sound OK to you?
Yes, I do know that loyalty cards are used to collect data; I think most people do. I don't take loyalty cards for this reason, and I'm glad that they are opt-in, although there is some financial pressure to take them.
> - if instead of a camera there was a person looking at customers and recording his observations, would you feel bad about it?
I would feel bad about it and I think the person should ask my permission first.
> I would feel bad about it and I think the person should ask my permission first.
But if that person just memorizes customer reactions to understand how people on average react to particular products or actions, that's OK, right? Because that is what sellers and business owners do to improve their product. So is it about human-to-human interaction, or some more subtle detail? I'm biased here, so sorry if I'm missing something obvious in this situation.
It's not subtle. If there was an employee standing next to you or following you around the store with a clipboard taking notes on you and your facial expressions, only then would you have something approaching an apples to apples comparison. Stop pretending that's normally "what sellers and owners do" and you're just automating it. Customers Do Not Want.
Loyalty cards are opt in. Security surveillance can be unsettling but customers understand its purpose and limited scope. What you're proposing is more invasive, and most people would not appreciate it if they knew about it.
Look, give up trying to justify it. Customers don't want it. You should find another application for this technology.
Well, I definitely do have other applications for it. For example, I know that similar software has been used in labs to estimate people's reactions to videos and game features, in mobile applications to improve interaction with the user, etc.
My interest in offline applications comes from personal experience: recently we demonstrated our product (not emotion recognition, but also capturing the user's face) at an exhibition. People came to our stand, used the product (so they clearly opted in), asked questions, etc. After 2 days, we asked a girl at the stand, "What do people think about the product?" "Well, in general, they are interested," she answered. Not much info, right? Definitely less informative than "65% expressed mild interest, 20% had no reaction, and 5% found it disgusting, especially this feature".
So I'm not trying to justify this use case - my life doesn't depend on it - but I find it foolish not to try to understand your clients better when doing so doesn't introduce a moral conflict.
Loyalty cards are opt-in, and it's common knowledge that their explicit purpose is to track information about you -- so I think a lot more people find them (or at least their existence) acceptable.
This is true; the vast, vast majority of stores have surveillance of some kind. Advertising's impact on the human psyche can't be overstated. In the last ten years alone this has become increasingly apparent, whether it be photoshopped images that manipulate our conception of beauty or dating apps that make us feel lonely enough to install them.
I don't mind being recorded at a checkout.
Recording me to decipher my thoughts instead of my actions crosses a line.
> If a store has a warning at the door that this happens inside, it's good because now I can avoid getting inside the store and silently hate and boycott the brand.
Given how widespread this kind of monitoring is, this approach is basically "I will punish the honest stores and reward a sneaky store by spending my money there instead"
It's actually pretty simple - don't use it on people.
Advertising? No. Sales? Definitely no.
Augmenting that single-player video game so that it adjusts content depending on emotions and gaze of the player? Ok. Better if the player is explicitly told the game will track their reactions though.
EDIT:
Also, another angle. Even for advertisers / "sales optimization", I'd forgive you if it was a local, on-site system. But if it's meant as a SaaS, with deployments connected to the vendor's cloud, then I am gonna actively try to screw with it if I learn there's one installed anywhere I frequent. Hopefully the new EU laws will curb that, though.
(Re: "cloud" -- I originally typed "butt" there: I had the cloud-to-butt extension on for so long that, for my brain, the two words are basically the same now :). I keep forgetting about it when I edit a post, since the substitution happens on display, not on submit.)
The only ethical possibility in my view is for it not to exist. I don't like having my emotions manipulated to make me buy more stuff, regardless of whether I am anonymized or not. But then again, I think similarly of a lot of non-targeted advertising; the recognition just adds a whole new level of disgusting.
What about collecting statistics to make better decisions? Let's say you go to your favorite jeans store but find the current collection disgusting. Does it sound OK to you if some sort of system analyzed your attitude toward the product to improve it in later versions?
You can do controlled user-testing sessions with that system, with specific people who have consented and are potentially compensated in some way. You will most probably also get more useful information out of that.
But being recorded "en masse" in a shop for that purpose I would think is invasive. I would totally avoid that shop if I knew that system was in place.
Also, I am not convinced that statistics lead to better design, so that would most probably be just wasteful, but that is another discussion :p
But isn't A/B testing doing the same thing? If it's different, what's the key difference between analyzing facial expression in a shop and analyzing user interaction on a site (given that both have a warning about data collection)?
The former has very weak (if any) consent, is indiscriminate, easy to abuse, and creates unnecessary conflict (e.g. "I really like those pants sold in that shop, but I don't like being tracked... OK, I'll go in just this time").
In A/B testing there is a clear context and purpose, and it is normally negotiated between actual humans.
There might be middle grounds (A/B testing can be done online, use facial recognition, and run at a relatively large scale), but for me it has to be opt-in (as in, you have to fill in a form to join), not opt-out (as in, leave this webpage/shop if you don't want my tracking). This is more challenging for the organization proposing the tracking, because they need to provide some value in exchange so people actually sign up. But in the long term, being founded on the principle of consensual, mutually beneficial relationships can only be good for your organization/brand, right? As in: at last, a company that treats people like humans!
I've watched the same hysteria & concerns about all kinds of privacy-invading systems. Social Security numbers, credit cards, computer IDs, camera GPS, search queries, and piles of other tech all started their popularization with "OMG evil people can do evil things with that data to hurt you!" Save for a few holdouts (usually much older folks), society at large has completely accepted all that tech as normal. It just takes about a decade of the convenience overwhelming the fear. I despise SSNs, but cutting my taxes by $1500/yr (child tax credit) is motivating; credit cards suck for a zillion reasons, but swipe-and-done is so damn convenient; no question Google has an impressive model of me, but those search results are enormously useful; etc.
I have a question: Do you do trials in a controlled environment, where you actually have proper feedback and a distinct comparison between self-described state and machine analysis? Because in my opinion, systems like these are a modern-day version of astrology (at least when they are based only on vision and not on things like fMRI imaging or proper psychological analysis). I know seriously depressed people who always had a smile on their face (maybe a social coping mechanism), as well as "angry"-looking coworkers who were in a very good mood most of the time. It is very easy to misinterpret a person's mood when the only "interaction" is looking at them and analyzing their facial features.
When these things are used outside a controlled environment, things get even more complicated: weird beards, squinting because of excessive sunshine, reflective glasses, etc.
1. Accurate collection of facial features. Illumination, occlusions, head rotation, etc. may seriously affect accuracy, but this is exactly our main focus right now. We are at the very start of the process, yet early experiments and some recent papers show that it should be doable.
2. Correlation between real and detected emotional state. At the moment we concentrate on the 6 basic emotions and don't detect less common expressions like depression behind a smiling face. This topic is definitely interesting, and I'm pretty sure it's possible to implement given enough training data, but right now we are concentrating on other things.
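For what it's worth, here's a sketch of the generic shape such a 6-way classifier takes (a minimal illustration, not our actual model; the 68-landmark input and the layer sizes are placeholder assumptions):

    # Facial features in, a distribution over Ekman's 6 basic emotions out.
    import torch
    import torch.nn as nn

    EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

    model = nn.Sequential(               # input: e.g. 68 (x, y) facial landmarks
        nn.Linear(68 * 2, 128),
        nn.ReLU(),
        nn.Linear(128, len(EMOTIONS)),
    )

    landmarks = torch.randn(1, 68 * 2)   # stand-in for real detected landmarks
    probs = model(landmarks).softmax(dim=-1)
    print(dict(zip(EMOTIONS, probs[0].tolist())))

The hard parts mentioned in point 1 (illumination, occlusion, head rotation) all live upstream of a head like this, in producing reliable features at all.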
> I'm pretty sure it's possible to implement given enough training data
No, the point of the comment you are replying to is that there are emotions that are impossible to detect using external information. We can hide our emotions very well. The question is to what extent does external emotional information provide monetizable value?
> there are emotions that are impossible to detect using external information. We can hide our emotions very well.
This is an assumption which I'm not convinced holds true. Just because we can hide our emotions well enough to fool other people doesn't necessarily imply that it's impossible to detect them using external information.
I've seen some pretty convincing expressions of emotion from actors who were obviously not, at the time, in love, in pain, in anger, etc. I'm pretty certain that any system that takes your facial appearance and no other information (e.g. that you are an actor currently on a movie set) would have no way to distinguish genuine from false emotion.
If we are talking about professional actors trying to trick the tracker, then yes, it would be pretty hard to design software to overcome that. But most people aren't that good, and although they can mislead their friends or colleagues, they still leave clues that reveal a fake emotion. If you are interested, Paul Ekman has quite a lot of literature on the topic, e.g. see [1].
But humans are notoriously bad at picking up on details, and things like music and scenery can have a big impact on our perceptions. I'm not saying that you're wrong, I'm just saying that in the absence of any evidence to the contrary I don't think we can just assume that you're right.
The fact that you are already working on this says something about your willingness to do something distasteful to earn a paycheck. A slightly bigger paycheck would probably mean you would relax your morals even further. Even if your product starts out with stickers and no logging, I bet it doesn't stay that way for long. Not if the paycheck can be bigger.
To me, the only way this could be ethical is if the project were limited to private space (a lab, a room in your house). No data is to be recorded, ever; it runs on an air-gapped computer; it doesn't try to identify people; and everyone subjected to it has to be fully aware of what it is about and the implications it can have.
The opinion of most people here is that facial recognition technology is for the most part creepy if used in a commercial setting. Mine is slightly different. I think it's fine if you want to show me a different advertisement or sign based on an interpretation of my expression. I also think it's ok if you track my position within a mall and see which shops I visit and when. I would draw the line at attaching personally identifiable information to that data such as a name or a photo of my face. Anyone who decides to do that is probably going to cause harm/inconvenience to me (I don't want junk email from shops I happened to visit but didn't buy in).
I should also state that I think the first use of my data is ultimately unprofitable. Will the extra cents you make by advertising Cinnabon to depressed-looking people or hairdressers to long-haired people really offset the cost of developing such a system? If applied to a broad population, any customisation effects will be marginal.
I also believe that the non-anonymous tracking system is much more likely to produce value for companies, and it would be very tempting when gathering anonymous data to cross-reference it with actual individual information. My concern about any tracking system is that, motivated by profit, it could easily shift from an ethical to an unethical space.
I'm kind of surprised that it didn't have some sort of Data Protection warning near it already, but I'm not sure if the EU data protection directive covers Norway as well.
We have pretty strong laws regarding this. It has generated several news articles over the past week, and the Norwegian Data Protection Authority has already commented and said they don't believe this is legal. Stickers were added after the initial discovery.
I'm curious where the boundary is between ethical and unethical. People constantly analyze each other's moods, and it's perceived positively. But doing the same thing massively, using automated tools, is often considered inappropriate. So is it because of the use of technology, the massiveness, the purposes? I hope there's a way to make such things both efficient and not unethical.
It is unethical because there's no "opt-out" option. You have taken a photo of me without my consent and used it with the intent to increase profit, again without my consent; and furthermore, attaching personal data to it is Orwellian and a complete invasion of my privacy. I can go out in public and have nobody know who I am. A retailer should not have access to my identity (since they can cross-reference other data sets to deanonymize me) unless I interact with them and hand over information of my own volition.
As far as I know, storing personal data - including photos, names, emails, and sometimes even IP addresses - without explicit and clear consent is strictly forbidden in most countries, at least in the EU.
The only way this could possibly be considered ethical is if you get informed consent from every single person the system is analysing. If you provided each person with a detailed explanation of what the data would be used for, and required them to opt-in before collecting it, that would be fine.
That's at the heart of it - is examining a person in public with automated tools unethical? Just saying it is isn't a compelling argument.
The FBI can use automated tools for surveillance - which doesn't speak to ethics directly but indirectly, as we hope ethics drove those rules.
I can sit in my private store and observe people out the window all day, even take notes. That's not unethical; that's a sociological experiment or some such, and it's done millions of times a day.
It may be jarring or creepy to imagine an advertisement is sizing me up. Again, ethics is more than 'does it make people uncomfortable'.
Manipulating people on a mass scale without their informed consent has always been considered unethical; it's on you if you're trying to argue that it's not.
And your "but what if a person does it" arguments are irrelevant - there's a clear difference of scale between the massively automated systems we're discussing and a single person with a pencil and paper.
It was one billboard ad - not really 'massively automated'. Would have been cheap to hire an intern to stand behind the billboard and make notes. Probably cheaper.
I think terms of service have something to do with it. In most profiling scenarios, the average consumer has no idea what's going to happen to the data collected on them. I'd be much more comfortable participating in a value-exchange involving your product if I knew precisely what information would be collected, how long it would be stored, who would have access to it during that time frame, what would happen to it at the end of that term, and precisely how that information would be capitalized upon. That probably seems ridiculous to you, but from my perspective, it represents a precise definition of the value I'm yielding to you, and a reasonably precise definition of the risk I'm incurring by doing so.
Uh, I got sidetracked and brain-hammered by the devolving discussion on that Twitter thread, and thus couldn't find the context for this pizza shop kiosk - is it a customer service portal that attempts to identify the person in front of it to try to match them up with an order, or a plain advertising display that is trying to capture the demographics of the people who happen to stop in front of it and look at it?
To summarize, it's an experimental project, there is so far only one such screen at the train station. If someone stands within 5 meters of the screen it will try to classify their age and gender and show a targeted ad, and record how long they looked at it. The raw images are not saved.
The screen uses software called Kairos to analyze faces. It can estimate age, gender, and whether you are "white, black, hispanic, asian, or other".
According to the marketing manager at Peppe's Pizza, he thought there would be a label on the screen saying what's going on, but in fact there is just a small sticker on the back of it, which is quite hard to see.
The company making the screen, ProtoTV, says that people should be okay with this because ads on the internet are even more targeted. A government representative says that the system might violate laws about surveillance cameras.
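To give a feel for how little is involved, here is a rough sketch of the detection-and-dwell-time part of such a pipeline using plain OpenCV (an illustration only, not ProtoTV's or Kairos' actual code; their age/gender/ethnicity estimation would be an extra model on top of the detection step):

    # Detect a face, time how long it stays in view, emit only the duration.
    import time
    import cv2  # pip install opencv-python

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)            # the sign's camera
    look_started = None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) and look_started is None:
            look_started = time.time()   # someone stepped into view
        elif not len(faces) and look_started is not None:
            print(f"impression lasted {time.time() - look_started:.1f}s")
            look_started = None          # the frames themselves are discarded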
Thank you, this was the first comment actually telling more on the matter after all the "it said I'm X years old!" comments :)
I guess using such a system just to analyze people in real time might already violate some surveillance laws like you said (in my country there must be a warning about all such cameras, even traffic cameras). But do you have any idea if those things also record data on customers? I could see how keeping such a database on customers might be dancing on the fine line of creating a "registry", which is pretty heavily regulated by law at least here. Even more so if there is a possibility of identifying real persons from that data.
So what am I supposed to do if I find this invading my privacy? Not walk within 10m of such a billboard? It's not like there's any other active way to opt out of this.
I suppose a banner saying "you are being tracked" on top of such a billboard could have quite an interesting effect. Come to think of it, I've seen "Smile, you're being recorded!" in some shops.
"So what am I supposed to do if I find this invading my privacy?"
There are laws and customs around public places and what may be done there. E.g. depending on your local laws, if you're in public, people can usually take pictures of you and there's also nothing you can do about that.
Don't like it? Petition to have the laws changed. This is how we deal with such things in a democracy.
Trying to guilt the engineers who build this system is IMO both wrong and completely pointless in terms of real-world effect.
> if you're in public, people can usually take pictures of you and there's also nothing you can do about that.
Quite the opposite: in France, "le droit à l'image" (the right to one's image) is a privacy right that allows anyone to request that any picture taken of them be deleted.
Photography is legal in public. This is just taking that to the extreme. Address that law. What bothers me is ALPR (automated license plate recognition). Taken to an extreme, you can just put a camera on every intersection and effectively track all vehicles without a warrant.
You'd have a point if this was a general statement, but in the case of automated facial recognition it's almost universally despised across the world.
If you don't share this position, it could be that you are younger, or that you have been subjected to the conditioning by which industries have been working to make intrusive surveillance technology acceptable to the population for at least 15 years, afaik.
>If you don't share this position, it could be that you are younger, or that you have been subjected to the conditioning by which industries have been working to make intrusive surveillance technology acceptable to the population for at least 15 years, afaik.
"if you don't agree with me you're either a kid or brainwashed" - nice.
From what I can tell this is anonymous analysis and classification - this kind of info is useful and I don't mind one bit that it's being collected - in fact if the data is accurate then I like it - I can provide feedback without effort. I prefer it much more than being spammed by pollsters or a service tracking and associating behavior with my profile.
"[...] in the case of automated facial recognition it's almost universally despised across the world."
I don't think that's true. Most of us tech-geeks are worried about privacy way more than the average person. I personally don't see this particular use case as too problematic, depending on what is done with the data - as others in the thread have pointed out, you're in a public place, other people could be taking pictures of you or writing down information about you, and I don't think most people are worried about that either.
My view is that as long as the technology exists or can exist, it will be developed and used, so complaining about the people building it is completely fruitless. If you really dislike how it's used, help pass laws against it! Don't go around guilting people for building this stuff.
> you're in a public place, other people could be taking pictures of you or writing down information about you, and I don't think most people are worried about that either.
The difference is scale. It would be prohibitively expensive for every pizza shop to hire someone to collect demographics of passerby. These systems can run on a Raspberry Pi.
The point of the person you are replying to is that there isn't a clear consensus that the kind of facial recognition done by the pizza shop sign is unethical. Your argument is that it must be unethical because there is a clear consensus. Where are you getting your data from?
It's fine to speculate about an individual's reasoning for why they believe what they believe, but it's entirely useless for determining what the majority believe.
I could totally get interested in building this and have a half-decent working prototype, all before I'd even considered that someone else might use it for evil...
The technology interests me and I would gladly work on a system that implemented such features.
As an ethical programmer, I'd be sure to incorporate security, anonymization, and be able to draw the line so that I can help businesses make more money (since that's what they pay me for), and advance technology at the same time.
This is only unethical when it's used in a system that infers more information beyond general demographics (which, BTW, nearly everywhere collects) and makes people vulnerable to interception outside of the pizza company.
It saddens me that people are complaining about this when others in our profession are doing far worse things, like extorting businesses for money through ransomware, stealing personal information and bank accounts, or building robots that kill people. But no, people are worried that while walking around in a public place a picture is taken of them and an ad is changed to target them.
Besides the movie's premise of psychic surveillance using disabled people and indefinite detention of pre-perpetrators, there were plenty of dystopian elements in that movie, even if the future wasn't specifically a dystopia. The spider bots, the vomit sticks, and the ads:
Spielberg: "The Internet is watching us now. If they want to, they can see what sites you visit. In the future, television will be watching us, and customizing itself to what it knows about us. The thrilling thing is, that will make us feel we're part of the medium. The scary thing is, we'll lose our right to privacy. An ad will appear in the air around us, talking directly to us."
It wasn't that prescient; public-location commercial facial recognition systems had been deployed for several years, and companies were, IIRC, already actively promoting customer tracking and advertising applications at the time.
I've sifted through 80% of the comments here, and I couldn't find a mention about the unintended consequences of this technology.
Ethical vs. Unethical, Pro-Privacy vs. Against Privacy are the two common discussion points. I, however, think the bigger problem here is that there's a very non-zero probability that this technology may cause unintended consequences simply by relying on false/inaccurate data.
For one, I work in analytics (a loaded catch-all occupation), and I work with people who would marry their "data skills" if they could. In my industry, a false-positive rate of 80% is acceptable, and openly admitted errors in "machine learning" logic (quoted to highlight my company's buzzword usage; it's practically non-existent) are made daily. People create algorithms, and people make errors.
Let's let our imagination run wild here for a second: It's 2030, and this technology has become ubiquitous to the point where no one objects. Businesses take all the data on sentiment, gender, age, etc. to optimize for their target demographic, and price accordingly. In other words, let's assume this tech is used for perfect price discrimination. Economic theory dictates this is a win-win for everyone, since everyone starts paying their willingness to pay. But let's assume there's a catastrophe and medicine is in dire shortage. Price discrimination works fine assuming perfect competition and is a useful framework, but it breaks down empirically because we live in a society that doesn't behave so rationally. Who survives? Those willing to pay the most, and the algorithm worked flawlessly here. But it was not intended to dictate who survives.
What I'm trying to say is that we should be cognizant of the fact that we don't live in a perfect bubble, and technology like this should be scrutinized exhaustively for its effects, including any unintended consequences. We live in a society (duh), and as a society it is up to us, with the help of policy makers, to determine the fate of this technology.
IMHO, perfect price discrimination is not usually a good thing. It's often discussed more in the context of monopolies. Under simple market models (e.g. no externalities, downward-sloping demand curve, etc.), a profit-maximising monopolist will set prices above the market-clearing price, resulting in deadweight loss (market inefficiency).
However, there is one situation where monopolies achieve market-efficiency: when the monopolist can perfectly price-discriminate. This eliminates the dead-weight loss. But, crucially, this also means that all of the gains from trade accrue entirely to the monopolist, as consumers are all paying their own individual 'indifference' prices.
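The standard linear-demand textbook case makes this concrete (the algebra here is mine, not from any particular source):

    \[
    P(Q) = a - bQ, \qquad \text{constant marginal cost } c < a
    \]
    \[
    \text{Uniform-price monopoly: } \quad
    Q_m = \frac{a-c}{2b}, \qquad
    P_m = \frac{a+c}{2}, \qquad
    \mathrm{DWL} = \frac{(a-c)^2}{8b}
    \]
    \[
    \text{Perfect discrimination: } \quad
    Q = Q_c = \frac{a-c}{b}, \qquad
    \mathrm{DWL} = 0, \qquad
    \text{producer surplus} = \frac{(a-c)^2}{2b}, \qquad
    \text{consumer surplus} = 0
    \]

Output expands to the efficient level and the deadweight-loss triangle vanishes, but every last bit of surplus moves to the seller.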
It's a value judgement, but I don't see this as a socially optimal outcome even if it is the market efficient one.
Yes, I agree that it's not the optimal outcome. (I should not have used "win-win" and instead used "win-win for some consumers.") I was trying to emphasize the individual consumer only having to pay what they were willing to pay, not making a judgement call on what is optimal.
Furthermore, the scenario you highlight is price discrimination of the first degree, where the monopolist captures all the "surplus." Economists generally claim this outcome is "unrealistic", but it helps us understand the more traditional outcome: https://courses.byui.edu/econ_150/econ_150_old_site/images/8...
As you can see, there is still consumer surplus from a monopoly price discriminating, but at the cost of a deadweight loss.
Fair enough, I agree that we could be here all day if we opened the value-judgement can of worms (though I do wish we did this when discussing public economic policy).
> Economists generally claim this outcome is "unrealistic", but it helps us understand the more traditional outcome
I agree with this. Just to add, pretty much every simplified market model you would find in an undergraduate-level textbook won't correspond to any market in reality. As you suggest, they're just very simplified models designed to 'kinda point you in the right direction', rather than be taken as a description of reality. The most dangerous people tend to be those who took micro 101 but were never told the latter :)
"Hello Mr. Yakamoto and welcome back to the GAP!" - Minority Report, after protagonist (Tom Cruise, clearly not Asian) buys black-market replacement eyeballs to avoid retina-based security
"It's that guy in the ski mask and santa claus outfit again" sadly makes you MORE identifiable (unless you can talk a ton of other people into doing the same).
from: "(..) What’s interesting with regards to the book while thinking of this stuff is that the garment that sort of helps to save the day is a t-shirt. You call it the ugliest t-shirt in the world."
If you're talking about shirts with faces on them, those won't work whatsoever. Even if there is evidence that they appear to work right now, it's an entirely solvable problem today, with very little effort required.
a) You're assuming that the AI is looking to find the first face it sees, rather than all faces in view - both your shirt and your real face will be picked up as two separate individuals. Even if it's "one face at a time", why would you assume the shirt gets picked up instead of your face?
b) It really would not be difficult to teach a neural net to detect a real face located above a face on a shirt and ignore the lower one. The only potential false positives would happen with a taller individual walking with a shorter person in front of them, whereby the shorter person may be filtered out as a shirt.
So no, shirts with faces on them are not a countermeasure. You're adding additional noise, but you're not preventing your own face from being picked up as well.
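To illustrate point (b): a toy filter over detected face bounding boxes is all it would take; no retraining needed. (The geometry thresholds here are made up for illustration.)

    # Drop any detected face that sits directly below another face in the
    # same horizontal band -- the "face printed on a shirt" heuristic.
    # Boxes are (x, y, w, h) with y measured from the top of the image.
    def drop_shirt_faces(boxes):
        real = []
        for i, (x, y, w, h) in enumerate(boxes):
            is_shirt = any(
                abs((x + w / 2) - (x2 + w2 / 2)) < w   # roughly same column
                and y > y2 + h2                        # strictly below it
                for j, (x2, y2, w2, h2) in enumerate(boxes) if j != i
            )
            if not is_shirt:
                real.append((x, y, w, h))
        return real

    print(drop_shirt_faces([(100, 40, 50, 50), (105, 160, 50, 50)]))
    # keeps only the upper box; the lower one is treated as a shirt print

And exactly as noted above, the tall-person-behind-short-person case is the false positive this heuristic would produce.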
No. The idea is that, in a world filled with automated camera systems, the need to conduct covert ops means there'd be a back door built in, such that whenever one tried to retrieve footage containing an "ugly shirt" (a special, machine-readable pattern), that footage would be deleted.
That way, a camera blackout or missing footage wouldn't signal a covert action, while at the same time one could go on defending democracy(tm) without worrying about creating a media storm over the tyrannical methods used to ensure Freedom(tm).
I'm a little surprised about the HN reaction on this one. You guys didn't seem to care about collection of passive biometrics a year ago: https://news.ycombinator.com/item?id=11172652 . What's changed?
I suspect you're right; I've noticed we also like the 'news' media again. I guess it's only a problem when it's not "your side" invading people's privacy, spreading false propaganda, destroying liberal democracy etc. etc.
This is in itself not scary compared to what a random website does when I visit.
That is, given that what we see in this log is actually all it does. What's scary is what we don't see (does it store this? does it cross-reference anything? does it target ads based on it?).
I don't really think that's the case (here, yet), but I do think it's scary that it's so easy to do that it's not just done as a proof of concept but actually used in production in a low-tech industry.
Gathering demographic or sentiment data without storing it, cross-referencing it (has this person been here before, etc.), or otherwise using it for anything such as targeting ads is kind of acceptable. I mean, it wouldn't be hard to do that manually via a camera if you wanted to test the engagement of an ad. I'm sort of hoping this is just some tech project from a university or something, and not an actual product you can buy and hook into some adtech service.
And just today I got an ad (in the paper mail) from an electronics distributor notifying me of new parts they stock. Among them was an embedded face and expression recognition engine that would emit pretty much this data, in a convenient text output you can read into any little microcontroller and act on (Omron B5T-007001-010, if anyone is interested). This is no longer exciting cutting-edge technology; it's off the shelf. And terrifying.
If you visit my website and fail to complete a purchase, you may find a physical postcard in the mail from us in a few days with a surreptitious coupon code. No, you did not actually tell us your mailing address at any point; knowing your e-mail is sufficient.
But for every you, there are ten people who complete their purchase using the code they received, and my boss makes more money and I keep my job.
Until and unless those ratios reverse, it's going to be that way and you'll have fewer and fewer places to shop. (I'd happily make the case to my boss that he makes more money without retargeting tactics like this if such were the truth.)
I'm okay with that. I find your practices disgusting but I'm not in a position to be telling your customers what they should be choosing. So all I can do is vote against it with my money and reward shops that don't do evil. There's very few things that I need badly enough to put up with that crap.
You classify sending un-asked-for mail as evil? (I admit the company that provides this service is not doing God's work by combining/selling all their customers' databases to make this possible, and so by enabling them neither are we. I'm not seeing how it's exactly evil though.)
Ghostery was this tool aimed at privacy-minded people that collected their data and provided it to advertisers, to help defeat the measures taken by privacy-minded people. No way I will ever trust this.
With this much personal care to really know their customers by face I'm sure they put just as much personal care into the quality and craft of the product /s
Despicable. Any authors of this work should be publicly shamed and punished. And don't get me started on what should befall the owners of capital that drove this.
I saw a pitch for this tech 5 years ago. Not sure the name of the company. The idea is they can measure engagement (how long you looked), approximate age and sex.
Five years ago it didn't seem so sinister. A lot has happened since then, I guess.
So they show erratic data on screen, preferably flattering (e.g. younger, happier) or just obviously wrong, and record the real data on the back end. Instant public acceptance without even writing big numbers on a red bus.
It was sinister 20 years ago (probably even before that, but I'm not aware of it), and it was enough of a problem for the industry that they looked for ways to condition the population to accept the technology over a few generations.
I think this becomes sinister when it makes the move to being free for the business, with the service provider recording and collecting the data to sell to third parties.
This has been happening for at least a decade. I had to do some updates on a system in 2008 that had this same functionality built in, and they were far from the first company to do it.
I don't see any evidence that this is a "facial recognition system."
It's likely hard to legislate against software that attempts to detect if there is a person, what their expression is, and guesses at their gender.
You could imagine that job being done by a person (just noting how many people stopped at the advertisement, and what their expression was). I don't think there's really a way to make that illegal.
I suppose I think it's something that people should be aware of, though.
If you enter most of our stores with a phone in your pocket, you're being tracked. They track where you went, in front of which shelves you stopped and for how long, whether you went to the cashiers or just left...
And if we track people here in the third world, you can be sure you are being much more tracked in first world stores.
There are many solutions that do this, both proprietary and open source. Accuracy is influenced by lots of factors, some to do with the setup (camera angle, lighting), the hardware (camera quality, computation speed), and the subjects (race, facial hair, glasses). We used this in research projects involving elderly mood assessment and television viewers' empathic responses. The package we used was marketed by Noldus ( http://noldus.com ) and developed by Vicarvision ( http://vicarvision.nl ), but most of these packages perform at about the same level.
This is nothing new; ad companies are actively marketing these features. See e.g. http://livedooh.com
Quote from their website:
"Audience Measurement included
The information and statistics needed in order to realize audience targeting in DOOH is gathered through livedooh’s integrated anonymous video analysis, which collects information about gender, age and length of view. Audience metrics are used by the ad server’s decision engine to optimize advertisement delivery and increase performance."
It just makes too much sense to show ads based on demographics. They now have robots in malls too. They are just recording everything, processing, logging, extracting, selling, and upselling. There is no privacy. The problem is that not only do these robots make economic sense on their own, the added intelligence from the data mining makes them even more attractive.
The Island Airport (Billy Bishop) in Toronto is littered with adverts that are camera-connected. Other than maybe power management, looking at what is possible in OpenCV gives a good indication of what can be, and probably is, being done: from tracking where you look to matching faces...
The problem is that this is a facility owned by a government agency.
At Vitensenteret (science center) in Trondheim they have a web cam hooked up to a TV screen where they show a live view of some face scanner software (multiple faces simultaneously). It estimates your gender, age, and mood (happy, sad, angry, surprised).
Every time I've visited it's been quite accurate on me, my friends, and on the other visitors.
20 years ago these systems were adequately accurate in university lab experiments, 10 years ago they were very accurate, and nowadays it is scary how accurate and fast they are.
This has been a feature of these digital sign products for a few years; generally they aren't interested in specific faces, just whether faces are seen looking at the sign and for how long. It's all just simple OpenCV stuff.
For sure. And when you consider that research keeps improving on guessing someone's race, age, gender, and emotion via facial video/stills, as well as identifying other quasi-unique features such as gestures, gait, and facial expressions, well... it seems like it'd be hard to evade advanced ad or government tracking.
It really annoys me that these are deployed in public spaces where you essentially can't opt out or fight the tracking like you can online (ad blockers, script execution control, etc.).
Might be time to pull out the regulations (a.k.a. protections). You could destroy the market for this sort of thing if executives went to jail or were fined out of business.
As much as I can't stand the guy, I think David Brin is right here. The government does this to you anyway; the solution might not be regulation (which they won't care about anyway - see the NSA) but reciprocation. Let's watch everyone (in public), and make that data accessible to everyone.
IIRC this was something Cory Doctorow addressed in a talk or book: right now we can do something about it in the digital online realm, but as soon as this hits public space we're fucked, and it had already started at the time.
Can you imagine a world where automatic identification/classification, tracking, and targeting would decrease?
It seems like you would either need a collapse of the economy to such a degree that cameras and computers aren't affordable... or some kind of extremely aggressive regulation.
I can't see how the latter would ever come about or be effective.
> I can't see how the latter would ever come about or be effective.
Try picturing a future where having access to clean drinking water is a privilege that only some have, with no cheap energy available and an unstable climate. This is the future we built for ourselves, and admittedly the internet and computers are useless when you don't have electricity.
How is this disturbing? It's just like a public webcam that gets used to identify people. If you are in public, you have no privacy. If you want privacy, go home; as soon as you are out of your house, you lose your privacy. I am a big supporter of privacy in your home and in private, but out in public you are not private anymore, and therefore privacy falls away. Privacy and private kind of go hand in hand. You can't have privacy in a public space; it is impossible.
Assuming there is no privacy in public space, please tell us the name, age, address, SSN, CC numbers, health records, gender, buying history, sexual orientation, and facial features of everyone you walked by in public over the course of one day last week.
Now that you have failed to do that, try doing it for one day 5 years ago, then a whole week, a whole month, every day since you were born.
A public surveillance apparatus such as the one featured here can: it records everyone, every day, not forgetting a single thing or person.
Try thinking a bit, and free yourself from the backfire effect, before making claims that make you look bad.
Just because I am not in my house does not mean I consent to being tracked everywhere I go. We make the rules in our society, and with enough political will we can restrict this stuff.
Why are people so surprised by this? Imagine you're a company building digital signage/advertising products. Wouldn't this be one of the first ideas that pops into your head? The technology is out there for free...
Windows.. TeamViewer.. using the primary screen to display the ad.. and the camera is not even hidden..
Amateurs.
I wouldn't be alarmed by this; they probably don't even know the accuracy of the algorithm they are using or how to interpret the collected data correctly.
The subsequent Twitter thread featuring @justkelly_ok et al. is probably the worst things about Twitter all bundled up in one. It's a pure cringefest.
It's absurd that people are outraged about something like this, which is relatively harmless, while at the same time using Facebook. The social network has your face, all your life, moods, expressions, interests, personal conversations, etc. Now THAT is worrying, not some pizza shop which gathers stats to know what type of customer is their frequent visitor.
> It's absurd that people are outraged about something like this, which is relatively harmless, while at the same time using Facebook.
What is absurd is there is someone like you in every privacy-related thread claiming that everyone who is outraged is also a Facebook user, or somehow is fine with what Facebook does.
Though I disagree about this being harmless, I get your point about Facebook, which is way more intrusive and has its own controversial facial recognition running in the background on all pictures.
But here's the thing: you can choose not to use Facebook, or deploy some kind of mitigating strategy against online surveillance[1]. On the other hand, uBlock Origin is useless to protect you from meatspace surveillance and tracking.
Understand that surveillance in physical public space is different from its online counterpart, that it is way harder to detect and counter, and that as such it ought to draw a bigger outrage before it becomes generalized.
I'm pretty sure my 300 friends have given FB more data about me than I have. [ed: and that doesn't even include the tracking from "like" buttons etc]
Which means that choosing not to be on Facebook is not enough to prevent Facebook from profiling you.
I've chosen not to be on Facebook, and I've been shown an account in my name made by someone else, pictures where I'm tagged, public posts and comments mentioning me, and private messages mentioning me.
This is the tip of the iceberg: I have not been shown the facial features Facebook has associated with me, the "social graph" they have linked to me, and countless other internal Facebook things the general public is not supposed to know about.
How does that help? FB will use facial recognition and track you in pictures, build up a friend graph, and, if they can, associate that profile with tracking data from "like" buttons etc.
You can choose not to use the Internet, or only use hardened devices over Tor - but it's not exactly the same as "you can choose not to drink Coca-Cola" (incidentally, it might be difficult in places to completely avoid products by the Coca-Cola Company, as opposed to just Coke, the Coca-Cola soft drink).
Added to this, the network effects make FB hard to avoid - people use FB/Messenger for a lot of communication, volunteer groups, political groups, education... You are free to argue that it's ill-advised (and I agree). But wishful thinking alone doesn't change the fact that "choosing not to use Facebook" might mean missing social, educational, and work opportunities.
I wonder where you get this idea; pretty much everything points to Facebook being in control. Even when you have never registered with them, you are profiled and tracked.
Check out the video here http://sightcorp.com/ for an ultra creepy overview. You can even try their live demo: https://face-api.sightcorp.com/demo_basic/.