A crashed advertisement reveals logs of a facial recognition system (twitter.com/gamblelee)
1450 points by dmit on May 10, 2017 | 522 comments



You'd be surprised / scared / outraged if you knew how common this is. Any time you've been in a public place for the past few years, you've likely been watched, analysed and optimised for. Advertising in the physical world is just as scummy as its online equivalent.

Check out the video here http://sightcorp.com/ for an ultra creepy overview. You can even try their live demo: https://face-api.sightcorp.com/demo_basic/.


Woah, so this got a bit of interest. To be honest I'm a little surprised this seems to be news to the HN crowd.

I feel a little bad about calling out one API provider specifically, so here's a bunch more: https://www.kairos.com/ https://skybiometry.com/ https://azure.microsoft.com/en-us/services/cognitive-service... http://www.affectiva.com/ http://www.crowdemotion.co.uk/ http://emovu.com/e/ https://www.faceplusplus.com/

Face tracking, emotional analytics and vision-based demographics analysis make up a pretty huge industry. There's an entire spectrum of uses for this tech, from the altruistic (psychology labs, human factors research), to the, well, not.


I've lost track of how many times I've said this on HN:

We need HIPPA for all personal information, not just medical. We have an expectation of privacy in being "lost in the crowd" when we're out and about. Our physical & online whereabouts, who we're physically with, who we're communicating with, our personal contact information, and obviously payment information are private data that can be harmful if not kept private (false positives in automated legal systems, identity theft, and all the same reasons we defend medical information).

Anybody who chooses to hold such information must treat it with a high level of respect and care. Since nobody is doing so, and there are no penalties for violating privacy, and this gets into fundamental rights and the proper functioning of society, it seems like a matter for federal law.


HIPAA does not make your medical information private, it makes it Portable. Whether it has improved the protection of your digitized medical records is debatable, but it definitely forced almost every industry remotely related to medical care (and some previously unrelated industries) to digitize their records and share them.

Sure, paper medical records suck and aren't inherently more or less secure, but no one breaks into a car and runs away with 500 patients' medical histories when each patient's record fills pages, folders, or filing cabinets, rather than bytes on a hard drive (or even better, it slips away through a network connection that no one in the hospital even knew existed thanks to a back door on a piece of medical equipment).

HIPAA largely means that your medical information has been outsourced to whatever software/network/hardware provider claimed they could do the job (and whoever they outsourced the job to in some cases). If you don't sign whatever HIPAA agreement(s) your provider puts in front of you, chances are they can't treat you, so what choice do you really have?


Do you really think HIPAA is the only reason medical providers are going digital?


>We need HIPPA for all personal information, not just medical.

The UK has the Data Protection Act which does some of this.

One radical option would be to grant people copyright over their own PII (with about a billion caveats to allow journalism etc.)


>with about a billion caveats to allow journalism etc.

Then you'd just see setups like the financial industry has. You get analysts, call them journalists, and have people subscribe to your publication. The journalists go get insider-ish tips and 'publish' them to a select group of followers.

With laws like that you'd just hire a full time business analytics journalist to cover your store.


Who is this we? America opted out of the OECD privacy framework.


To be honest this is news to me and I lurk on hn every day :-/

I thought Minority Report was still a few years away...

Thanks for the links, this stuff is both fascinating and scary.


Amazon too:

https://aws.amazon.com/rekognition/

Sentiment analysis for everyone!


Yep, Microsoft also offers much the same services. Works surprisingly well too. https://azure.microsoft.com/en-us/services/cognitive-service...


Indeed. This isn't new. It's everywhere.

The cameras retailers use with their surveillance systems are coming with facial recognition built in now. [1]

And lots of retailers, banks, etc, are using systems that track people's visits across multiple locations. [2]

You'll see a lot of these systems being sold as fraud/loss prevention solutions. The reason for this is that it's a relatively easy sell - customers can count how many thieves they've caught to easily determine the ROI they're getting on the system. Once the systems are in place, it's relatively easy to start using them for marketing related purposes.

Not all uses of systems like these are necessarily unethical. Consider a case where you want to set up a rule like 'if the average lineup length at the checkouts exceeds 5 people, call backup cashiers' (see the sketch after the links below). The problem is that once you have something like this in place, it's very tempting for company execs to want to use the data for legal but less than ethical purposes.

[1] https://www.axis.com/ca/en/solutions-by-application/facial-r... [2] https://www.facefirst.com/solutions/face-recognition-predict...
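
A minimal sketch of that kind of rule, assuming the camera system already produces per-checkout queue counts (the names here are hypothetical, not any vendor's API):

  # Hypothetical rule: page backup cashiers when the average lineup
  # across the open checkouts exceeds a threshold.
  def should_call_backup(queue_lengths, threshold=5):
      if not queue_lengths:
          return False
      return sum(queue_lengths) / len(queue_lengths) > threshold

  should_call_backup([7, 6, 8, 5])  # True  -> call backup cashiers
  should_call_backup([2, 1, 3, 2])  # False -> do nothing

The rule itself is benign; the privacy question is what else gets done with the per-person data that feeds it.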


That's always going to be tempting, and the only real tractable solution is for society to have a larger conversation on the ethics so the law can catch up with it.

Note that some ethical consensus is key---without it, companies can just price "Well, some customers think image recognition is creepy" into the risk model and do it anyway. Compare privacy concerns---people talk big about their concerns over privacy, but in practice, we're still in a world where a survey-taker can get very personal information from a random individual at a mall by offering a free candy bar. Until and unless people arrive at a common consensus that their personal information---including their face---has value or they have a proprietary right to that information, even in public, there's no real tractable solution to this problem.

... because there's no real agreement that there's a problem to solve.


the only real tractable solution is for society to have a larger conversation on the ethics so the law can catch up with it

The department of commerce tried to facilitate talks about establishing a voluntary standard. The surveillance industry was so terrified of the idea that they should be held to a principled position that they wouldn't even budge on one of the weakest possible protections: A voluntary-participation standard that said people must opt-in to be identified by name through facial recognition when they are on public property.

https://www.eff.org/document/privacy-advocates-statement-nti... (and previous HN discussion on negotiations falling apart: https://news.ycombinator.com/item?id=9729696 )


Woah, I'd give away a lot of information for a candy bar. I get nothing from ads; they lower the quality of everyone's life.


Often, you're more-or-less getting the content surrounding the ads from the ads, albeit indirectly.

My local gas station upgraded its pumps recently to allow it to play video ads on the screen used to do the credit card transaction. I don't doubt it's partially the reason that gas station is still operational when similar non-franchise vendors in town have gone under.


Oftentimes I don't give a damn about the content it sponsors. I'd much rather be able to do my business without being assaulted by ads, which often have little to do with reality, and often act as an alienating and dehumanizing force. It's very difficult to see the good.

I would rather have no content than ad-supported content. Of course, nobody will ever offer that! You can't sell ads if people can opt out, and too many big players think they're the only way.

That gas station should have charged more or folded rather than sell you shit you don't want, won't want, and will never spend money on.


If you're expecting people to fold their livelihoods instead of sell ads, you may not understand how attached people are to their livelihoods. ;)

Meanwhile, there are some inroads into financial support alternatives to ads everywhere. Google has a "contributor" product (https://contributor.google.com/v/marketing) where you can basically bid against the ads they'd vend to you; instead of an ad running, you pay a microtransaction to buy the privilege of no ad.

It's an interesting idea, but it only works with Google's ad network.


Oh no, I'm not expecting it to go away.

Frankly, I don't mind Google ads; I mind wasting 20 seconds to load a page with about two paragraphs of content and 3MB worth of ads. But this is all ignoring the broader point: why are we basing our revenue off of patterns many recognize as toxic, consumerist, and negative-value? People AT GOOGLE will happily admit this while working to build it.

I do my own part by supporting AdNauseam[0] and actively punishing sites that serve ads, particularly Facebook and Google. It's also decent for a (very shallow, for now) layer of noise for your ad profiles. Offer me a flat fee and convince me to spend; don't trick me into viewing ads.

0: https://adnauseam.io


Gas stations in the US make very little, possibly zero, from the sale of gasoline.


Anyone have a source for this oft-claimed fact? Retail-to-spot spread is averaging $.50-$.70/gallon for 86 octane with $.20-$.30 added for each premium tier in PHX. Does adding detergent & transport eat up that much margin?


There's no real reason to believe that, though. If someone has a space for an ad, why wouldn't they sell it, even if they don't need it to produce the content? This is one of the problems with profit-maximization: it means every avenue of efficient revenue generation should be exploited whether it's needed or welcomed or not.

Even the pay-for-no ads model doesn't hold up, because if you pay for content, why wouldn't they just double-collect and make you pay for ads served with the content? I purchased my phone and my phone service, but I still get ads in my notifications. Because I didn't pay "enough" to avoid it.

It's like paying off a blackmail ransom. You give them $100 and they come back next week and say "how about another $100?"


"The cameras retailers use with their surveillance systems are coming with facial recognition built in now. [1]"

Your source is the marketing material of an IP camera manufacturer.

We research that space and I can guarantee that less than 0.1% of IP cameras have facial recognition built-in or running. These manufacturers, like Axis, whom you cite, would love to offer such capabilities, but they are still very uncommon.


Can't they still use the feed from a regular camera and have another system do facial recognition on that feed? Something like the sketch below seems like all it would take.
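
A minimal illustration of how little that takes, using OpenCV's bundled Haar cascade (the RTSP URL is a placeholder; a real system would feed the face crops into tracking and demographics models):

  import cv2

  # Face detector shipped with OpenCV; crude, but enough to show the idea.
  cascade = cv2.CascadeClassifier(
      cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
  cap = cv2.VideoCapture("rtsp://camera.example/stream")  # placeholder URL

  while True:
      ok, frame = cap.read()
      if not ok:
          break
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
      for (x, y, w, h) in faces:
          face_crop = frame[y:y + h, x:x + w]  # hand off to any analysis model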


>We research that space and I can guarantee that less than 0.1% of IP cameras have facial recognition built-in or running.

While I'm sure this is true (since the majority of IP cameras in the world are cheap things little more than webcams), do you have a number for retail stores specifically? I know many of the larger chains spend a lot of money on their cameras, and movement detection and other intelligence have been onboard those for at least 15 years.


Just yesterday I was hearing news of how most of the retail giants and lots of smaller retail stores are going out of business due to competition from ecommerce. If that means the end of practices like this, then good riddance.


But, aren't ecommerce sites collecting this information and more from your browsing? I don't think it's possible to say one is much better than the other, just that we expect tracking online, not in the real world.


There is the point that in the "real world", social norms haven't yet adapted to the requirements of privacy (although you could also view it as societal norms allowing too much tracking). For example, if I wanted to use a mask to conceal my face from trackers, I would be ostracized. There are analogues in the virtual world of course, but it's usually harder in the physical world.



some modern cams (at least traffic ones) no longer use AGC and will not be fooled by this


They're even more accurate than facial recognition at building a profile of what demographic you fit in


It's likely traditional retail that falls by the wayside is going to make room for more competitive retail that leverages this information to its advantage in a way ecommerce sites can't.

Consider Amazon Go (https://www.theverge.com/2016/12/5/13842592/amazon-go-new-ca...): after setting up an account with the store, users enter, grab what they want, and leave. The system of cameras and biometric trackers observing the store figures out after-the-fact what you grabbed and charges it automatically to your account, through a sensor fusion including face recognition. That's a level of convenience rivaling ecommerce for things people want to grab by hand (often produce and small items, for example), and it's completely enabled by this category of technology.


Perhaps, but it simply means the survivors will become more desperate to gain an edge. We've seen this exact behavior with online news sources cramming more and more ads and trackers into websites.


[deleted]


It is HIPAA, not HIPPA. It stands for "Health Insurance Portability and Accountability Act."

I get your basic point and I don't disagree that we need more privacy protection. But, no, we do not "need HIPPA" for all personal information.


I'm 65 and it says I am 39.

This is a wonderful app. I will use it every day!

It also picked up the colors in my aloha shirt perfectly. (Anyone who knows me knows that I am to aloha shirts as Steve Jobs was to black turtlenecks.)

When I want to feel young and go shopping for shirts, now I know what to do!


I'm a man and it thinks I'm a woman.

But it also scores me high for anger and sadness, despite (what I thought to be!) a rather neutral expression. Perhaps it knows more than we think :)


Hey! We should make a club. I am a 41 year old male and it recognized me as a 26 year old female. It recognized my wife at her age and gender until she took her glasses off. She lost ten years and stated, "I'm never wearing my glasses again!" Then she proceeded to walk straight into the wall.


As a 30 year old male who apparently looks 42, I hate all of you.


Take your glasses off (but check for walls before you do).


I am a 22 year old male who looks like a 39 year old.


You can mess with it quite a bit by making different expressions and looking different directions.

Though I'm guessing it gets a lot more accurate when it can take and average multiple shots.


I tried the demo. A little sad that it sees me as 12 years older than I am, and that I apparently always look angry and disgusted! I'm going to blame it on my glasses and bushy beard, and try to look at the bright side - apparently face scanning systems aren't quite good enough to get a read on me yet. (And try not to be too sad about looking like a grumpy old man.)

(Anyone else with glasses, a beard, or other non-typical facial features want to comment? I'm curious now how well their system handles these.)


I uploaded Comrade Putin. [1]

Thinks he is 45 years old. He is 64! Not calibrated for the superior Russian genetics.

[1] https://pbs.twimg.com/media/CuV5wciUAAA0aBz.jpg


It's hard to tell, with all the plastic surgeries.


Same experience here. Shows my age 10-15 years more than actual. I tried to smile and it just filled up the "disgust" bar. Neutral expression shows a high amount of "sadness". I wear glasses too, and have a slight beard.

I don't think glasses and beard are non-typical facial features!


I'm a 25 year old male, bald with a full beard, and it thinks I'm a 33 year old woman. At least it could tell I was happy?


and profession = circus sideshow?


A 33-year-old Russian woman?


Of course you look angry and disgusted. That's only natural, considering that you are aware of what's going on.


Bearded male here, and it got me exactly right the first time - 33 yr old male, 100% happiness.

The second time it thought I was 28, which increased my happiness even more.


Pretty good with me too. Guessed 51. I'm 53. My happiness was inscrutable.


MS did this one a while back to estimate age. I loaded some family members and it was quite accurate for the majority.

https://how-old.net/


With a bald head, beard/mo, and reading glasses it doesn't detect a face. Without the glasses it estimates me as 7 years younger than I am - and reasonably high on anger and sadness...


It added 5 years to my age (42) but got everything else right.

My partner tried it and it took 10 years off her age, and found an angry 31 year old man hiding in the folds of her clothing!


I did well... it said I was an angry 33 y/o male. Well, I am male... I'm almost 50... and I didn't think smiling at the camera conveyed anger... but who am I to question our AI overlords ;-)

(edit)... on the other hand they probably were just trying to sell product so thought flattery was the right approach...


Says I'm 4 years older than I am (31 / 27) and have high levels of sadness.

Covered up my receding hairline a bit and it said 29. I reckon if I shaved I could get it down to about 22 since that's how old people usually think I am.

Pulled a disgusted face and it said 47. Hmm.


It thinks I'm 15-20 years older than I am, and even if I smile it thinks I'm angry.


The real metric to judge its effectiveness is comparing its accuracy to an average human observer's responses. I doubt a human would do a lot better at estimating someone's age.


Most people think I'm around 30 and I'm 42. The software said I was 40 the first time and 44 on the second try.


From the responses it feels like they are using AWS Rekognition.


Don't think so - I just used the same pair of images - one with my glasses on and one without - AWS Rekognition guesses mostly the same for both of them - the sightcorp.com one doesn't even detect a face when I've got my glasses on.

Rekognition guesses a wider age range - but gets a "correct" answer - the sightcorp one guesses me as 7 years younger than I am.


I wonder if the age error is symmetrical, or if it tends towards guessing lower or higher? It would make for an interesting study.


Glasses and a beard. Pegged me as an angry white guy about five years younger than I am.


Apparently I'm a 40+ yr-old male. At least it got the "male" part right.


I was very unhappy to discover that shoes now often have RFID tags built into the soles. This, plus the anti-theft RFID readers already deployed by the entry of most stores, makes it easy to assign unique IDs to shoppers.


Most anti-theft tags are not RFID and the gates are not full RFID readers. At least in Europe, vast majority I see are still based on simple resonators that get disabled on checkout. Effectively, the gates only provide a yes/no signal and can't be used for tracking.

Applied Science has a good video on how they work:

https://benkrasnow.blogspot.si/2015/11/how-anti-theft-tags-w...


Retailers also use your phone's MAC address, which is constantly being broadcast unless you take precautions.

http://lifehacker.com/how-retail-stores-track-you-using-your...
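
For a sense of how simple the underlying mechanism is: a minimal sketch of passively logging Wi-Fi probe requests with Python's scapy. It needs root and a wireless interface in monitor mode; "wlan0mon" is a placeholder name:

  from scapy.all import sniff, Dot11ProbeReq

  def log_probe(pkt):
      if pkt.haslayer(Dot11ProbeReq):
          mac = pkt.addr2                           # MAC of the probing phone
          ssid = pkt.info.decode(errors="replace")  # network it's looking for
          print(mac, ssid)

  sniff(iface="wlan0mon", prn=log_probe, store=False)

A retail tracker is essentially a few of these sensors plus a timestamped database.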


Not on iOS anymore, that value is scrambled on a regular basis.


This is super cool. Was there any announcement or documentation for this?

We used a Cisco Meraki router once for a client and rigged it up to know who was in the office (for fun, to be aware that it could be done). It'd be nice to know whether the iPhone/iPad scramble theirs, if possible.


Apple made this change in 2014, it was widely reported. [1] Apparently it exists on Android now also, though I don't follow that platform closely.

1. https://arstechnica.com/apple/2014/06/ios8-to-stymie-tracker...


Android does the same thing, both announced a while ago


Only if your phone is locked and it is looking for all open wifi networks. If you unlock it or it is connected to a particular wifi network this is not true.


Not necessarily. If you are connected to some wifi and sending/receiving data, your MAC is still visible in the air.


Would putting my newly bought shoes into a microwave be a good countermeasure?


I've heard that you can disable RFID readers (not tags, readers) with an appropriately-resonant coil and an EMP circuit.

I'm not sure if the same can be done to tags, but considering the size of the tiny electronics, and the fact that they are manufactured under the assumption they'll never need to be touched (aka, no CMOS spike tolerance), it might be trivially...

...wait. I just remembered about RFID alarm barriers in retail stores.

Well this is annoyingly difficult to discuss, then...


Indeed you can, from a disposable camera flash circuit[1]!

Though I concur it might be used for evil purposes, I couldn't resist posting this. I just love disposable camera hacks.

[1]:https://events.ccc.de/congress/2005/static/r/f/i/RFID-Zapper...


Ah, very interesting. Thanks for the link.


>> Well this is annoyingly difficult to discuss, then...

Why? Is there a law against public discussion about how to disable an anti-theft device or something?


Okay, not really - but it can be tricky to know where to draw the line. I guess I was uncertain.


I draw the line where actions are taken. A discussion is not an action. Using a device illegally is, like for instance pulling the trigger of a gun with the evil intent of murder, or taking something that isn't yours.


I heard that the "is there a pot on"-impulse of induction cooktops is strong enough to kill RFID-chips without burning them. Have not tried it though.


Another trick would be to pay for the shoes in cash; in this case they will not be able to link the RFID chip to your real identity. Cash payment is a very privacy-friendly technology.


That's not the point - the point is being identified as an entity by a unique marker that the RFID tag gives off. It's still an anonymous entity, but it can be deanonymized by correlation... with your face via video or whatever.


Next month: new shoe regulations require the use of materials that melt or burn when microwaved.


Nah, they won't need to push that hard. "Warranty void when microwaved" will most likely be enough.


I've never made a warranty claim for shoes, so that shouldn't be a problem.


Me neither (too lazy for that), but I know they are used and abused by people too. This leads to funny cases I heard of like a company specifying that some shoes are for "walking", not for "running", and refusing to refund them if you admit to running in them.


You guys need Norwegian consumer protection...


As long as it comes bundled with Norwegian famously cruel child "protection" services, thanks but no thanks.


That doesn't sound too unreasonable. Some shoes like heels are made for fashion, not function.


If it's "not unreasonable" for them to reject warranty claims if you run in shoes "not intended for running", does that mean it is reasonable to make a warranty claim on shoes "intended for fashion" if you're not picked up while wearing them?

"Wore these heels to six bars, didn't get hit on once. Please repair or refund."


Do you have a source for this? All I can find online is the occasional use of RFID for stock management or the odd marketing campaign. But nothing about customer tracking



Is there something like this available commercially, or at least a guide on making one with a Pi?

I suppose reading the paper is one option.

EDIT: Link to the paper seems to be broken. Here's the PDF: http://www.cs.vu.nl/~ast/Publications/Papers/lisa-2006.pdf


Pretty sure they already are built into credit cards for this exact purpose.


Any sources for this?


By grinning like an idiot I was classified as a very happy 33 year old female. I'm a 25 year old male.

I tried variations of the standard expressions and pulled off sad, disgust, anger quite easily.

I knew binge watching Lie to Me before my psychology midterm would come in handy at some point!


If you are ever homeless you could be like the guy from The Imposter (2012 film) and trick a family into believing you are their long lost son.


I was blown away by the accuracy of Microsoft's https://how-old.net/ This just kicked it up a notch.


Counterpoint: this tool pegged my 8-year-old as 13 (and she looks younger than her age) and me as 56... well into the double-digit error zone.

Someone recently thought I was 30. People aren't any better than computers.


When I was 18, some lady at the community pancake breakfast in my grandpa's small town told my mom that 13 and younger are free. Humans make mistakes too.


Are you by chance of a specific ethnicity? (no offense). These systems fail spectacularly if the training set includes only certain ethnicities and the test-ee is not one of them.


If it fails, it fails spectacularly, worse than any human ever would.


When I was 18, someone thought I was 35. You apparently don't appreciate how failure-prone humans are.


It thinks I'm 91.

I am not 91.


"Emotion Recognition

Understand how your customers feel. Detect and measure facial expressions like happiness, surprise, sadness, disgust, anger and fear."

Creepy, indeed.


Perhaps. On the other hand, it's something that good salespeople are doing internally already; there's an argument to be made that this is just automating yet another part of the customer service process.

(There's an old joke that sometimes shows up on HN about augmenting an automated bugtracker to snap a photo when a crash is detected or a bug is reported, so developers can be reminded that bugs tie back to real people who are actually sad / angry that the software failed them ;) ).


>You can even try their live demo

I'd rather not give them my facial image so they can optimize for me.


Matched to your IP, no less. I wonder why Microsoft made that "how old are you?" web app...


To collect training data.



Why is this scummy exactly? If a salesperson was to try to sell to you in a store, they would take into account how you appear and act to tailor the sale. There's nothing wrong with that. Why is it suddenly bad if a machine does it?


Because when you talk to a salesperson you know you're being looked at (and reciprocally you're looking at them), and human memory is limited so it's unlikely they will retain any "data" about you when the contact is finished.

Here, instead, there is no indication that you're being watched, analyzed and kept recorded for indefinite amounts of time.


Reminds me of a law here in Sweden and how car surveillance works on the bridge to Denmark. The law forbids the unnecessary registration of people, so in order to avoid breaking the law the police have a live system in place where information on a car from the Danish side gets shown on a screen on the Swedish side, giving border and toll guards enough time to react. The whole thing is legal only because the system operates live and never stores any data, which otherwise would create an illegal register with personal information.


I assume that the data is being used for A/B testing on the display designs (we get 20% more attention from teenagers when the background is orange) - if that's the case, not very scummy.


If you are in public you are being looked at, so I do not understand your logic. When you go to a public place, there are already publicly accessible webcams that people use to track this kind of thing; I remember a thesis that used publicly accessible cams to try to track people and build up a database. I have always had the opinion that you lose privacy when you leave your house, since you are in public, and public life is the opposite of private/privacy, so to me it makes sense.


I have always had the opinion you lose privacy when you leave your house

Privacy is not black and white.

There is a world of difference between someone seeing you for a moment as they pass you in the street and forgetting you a moment later, and automated systems that permanently record everything, analyse it, correlate it with other data sets, make it searchable, and ultimately make automated decisions or provide information that will be used by others to make judgements about the affected individuals, all without the knowledge or consent of those individuals and therefore without any sort of reciprocity.

The idea that you have no reasonable expectation of privacy in a public place dates from a time when you could also expect to pass through town in relative anonymity, go about your business without anyone but your neighbours and acquaintances being any the wiser, and would probably change those neighbours and acquaintances from time to time anyway so the only people who really knew much about you would be your chosen long-time friends and colleagues. I think it's safe to say that that boat sailed a while ago, and maybe what privacy means and how much of it we should expect or protect aren't the same in the 21st century.


Just because there is no expectation of privacy does not mean that a reasonable person would assume that their every action is being recorded in precise detail to be stored away forever by a third party.


... but mostly because reasonable people haven't been brought up to speed on what is technologically feasible now.


A lot of things are technologically feasible, and in many cases can't realistically be prevented ahead of time, yet are still considered socially unacceptable or even made illegal. Just because we can do something, that doesn't mean we should. This principle has never been more relevant than in the use of technology.


What's technologically feasible is irrelevant to our moral expectations. It's technologically feasible to brain you with a club and steal your stuff, and has been for millennia.

Preventing the misuse of Blunt Instrument Technologies™ is literally what laws are for. Surveillance is just a club we don't have laws about yet, but should.


Well, your behavior and appearance isn't logged in some computer somewhere available for someone to look at whenever they want. Not to mention, face-to-face interaction means you know someone else is watching. This allows someone to do this without your knowledge.

It's just creepy.


Because a machine has far more capability than one salesperson.

The salesperson doesn't know what shops you've been in before.

The salesperson might also not know you talked to his colleague the day before.

This is about trust and privacy. You can't trust what they do with your data.


So if the salesman has an eidetic memory it becomes unethical? How about just an above-average one?


Your reply is disingenuous. The problem is not that abuse is not possible in a human-driven system. Of course some gifted salespeople have incredible memories, hypnotic powers of persuasion, and so on. However, you must consider the following:

1) These people are rare in the general population, and demand for their time is likely to be incredibly high. Therefore, they cannot be deployed everywhere, unlike machines.

2) When confronted with a human being in a sales scenario, people have a chance to be on guard against potential manipulative behavior. When the sales scenario becomes ubiquitous and invisible, it is much harder for people to avoid being taken advantage of.

3) Ethics are not so absolute. Something that is only mildly bad at an individual level can have terrible results when thousands are doing it. (Littering, for instance, or illegal hunting/fishing.) This is known as a social trap, and it leads to negative outcomes for everyone involved.


>1) These people are rare in the general population, and demand for their time is likely to be incredibly high. Therefore, they cannot be deployed everywhere, unlike machines.

A temporary problem solved by natural selection, technological augmentation, and increasing incentives. Perfect performers in any profession are hard to come by. Ambitious people still strive to get there.

>2) When confronted with a human being in a sales scenario, people have a chance to be on guard against potential manipulative behavior. When the sales scenario becomes ubiquitous and invisible, it is much harder for people to avoid being taken advantage of.

Because people don't understand technology or sales. In your reality, people should be on guard all the time, because sales and marketing were already continuous, even before hidden cameras. In actual reality, most people don't care that much about being sold to as long as the sale itself is not abusive.

> 3) Ethics are not so absolute. Something that is only mildly bad at an individual level can have terrible results when thousands are doing it. (Littering, for instance, or illegal hunting/fishing.) This is known as a social trap, and it leads to negative outcomes for everyone involved.

Sure, but that omits the necessary step of justifying this behavior as being either mildly bad on an individual level or terrible on a mass scale, much less both. It is neither.

Also, I would add item 0: advances in technology mean that surveillance devices will only become smaller, cheaper, and more connected over time. The future you fear so much is, in fact, inevitable.


You are applying binary "all or nothing" logic to the real world, which contains many more shades of grey.

It is true that technology (both social and digital) continues to progress, and that the genie can't be put back in the bottle once it escapes. However, you don't have to put it back in the bottle. Speed limits don't stop speeding, and laws against murder don't stop homicide. The legal and regulatory system exists not to fully prevent undesired behavior, but rather to reduce it to a manageable level.

In short: I agree with one part of your premise. Technology will continue to evolve and will continue to challenge human society in this area. Unlike you, however, I don't believe that we have to roll over and accept the implications and consequences of unregulated privacy invasions, neuromarketing and whatnot.


I don't think that either, because I correctly recognize that in public, you do not have privacy, either de jure or de facto. Especially if you're not even wearing a burqa, which would today at least give you de jure privacy because it demonstrates intent.

I'm sure that in the future, we will also create cheaply available opaque faraday cages that you can roll around in if you wish. And that most people will not care to do so.


You do have privacy in public. Both the de jure "reasonable expectation of privacy" and the de facto privacies of anonymity, free association, and predictable rules of social engagement.


>the de jure "reasonable expectation of privacy"

Does not protect your exposed face

>the de facto privacies of anonymity, free association, and predictable rules of social engagement

Are outdated illusions with no basis in fact


Well, I seem to have no trouble practicing all of those, so I know they are based on fact. Perhaps you don't actually understand what I'm talking about? Or maybe your experiences differ. Either way, telling me the things that I personally do are not being done is... not an argument.

>> the de jure "reasonable expectation of privacy"

> Does not protect your exposed face

Yeah, that's why it is "reasonable expectations" not "absolute enforcement."


In other words, as long as you are unaware of the surveillance, you are happy to pretend it doesn't exist? So where's the problem? Just don't click on links like the OP.


Eidetic memory and follows you everywhere and can transfer all those memories perfectly to any number of other people? Yeah. It's like super-stalking and it's obviously horrible.


Humans will get there too.

Stalking per se is mostly only illegal because it becomes harassment and bothers the victim. This kind of monitoring is entirely unobtrusive. As the response to the original tweet illustrates, most people aren't even aware that it is happening.


The information is being used to conduct asymmetric psychological warfare. The notion that it's harmless even if never outright abused (where we define abuse as use for other than its intended purpose) is false.

Being subjected to constant sensory input and trickery from dozens of teams of experts on consumer psychology is bad enough when they haven't also been stalking and recording your every move.

Caveat emptor becomes an absurd position when the power imbalance is so great. Massive data collection and mining needs to be reined in. The fact that it's not obvious that people seeking to trick you by any means necessary are recording you everywhere you go does not make it OK, at all. Surveillance capitalism is way, way over the line, has been for some time, and just keeps going farther. That they're good at keeping you from realizing you're under surveillance is no defense whatsoever.


Complaining about warfare that is asymmetrical solely due to the incompetence of one side does not elicit any sympathy from me.

Consumers do try to aggregate data for the equivalent of "massive data collection and mining". Most just don't care to pay for something that is not wholly controlled by a storefront. Generally, producers are more likely to understand the ROI.


Also, if it records images, it would fall under data protection legislation in the EU.


I find it funny that the store doesn't trust its salespersons to make such a judgement on their own. Probably they hope to do analytics on what kind of people are visiting and when. Selling the data would only make sense if they are able to link it to an identity, and I am not sure that they can legally do that.

Well, you never know what dystopia you are heading into...


Most humans today are prejudiced against nonorganic life due to not growing up interacting with anyone but other meatbags.

There's a huge double standard in place that makes it somehow wrong for computers to do what humans have been doing without objection for decades or millennia.


It's because people view the AI as an infallible machine that records everything, which is much more intimidating than the gut instincts of a salesperson.


Right, that's the manifestation of their prejudice. In reality, there is a spectrum, not a dichotomy, and some humans can have better memories than some computers.


Every time I eat at Chipotle I smile at the sign on the door that proclaims the property is protected by Envysion.


> Any time you've been in a public place for the past few years, you've likely been watched, analysed and optimised for

I don't believe this. This kind of advertisement in public would be illegal in Germany, as it is mass surveillance.


Germany is an outlier; their history with the Nazis makes the country unusually conservative about anything that could be abused for mass-surveillance purposes.

Not that this is a bad thing---being able to think differently like this is one of the positives of having countries!---but relative to the rest of the world, what Germany considers "surveillance" is unusual and sometimes surprising.


I think East Germany is probably the source of the anti-surveillance sentiment, more than 70-year-old history.


That demo is amusing. I did a Google image search for "N year old faces" for N = 5, 9, 10 and 14, and eliminated results where the accompanying text did not confirm the age, and then gave some of the remaining ones to it. It was always at least 10 years too old on its guess for these children. It got the gender right maybe 3/4 of the time.

I also tried it on a few internet porn images. It looks like it is definitely only relying on the face for determining gender, or it thinks that there are a lot of women with hairy flat chests and large penises...


Really neat, except, it's about... 22 years and 5 months off. Plus a gender. http://i.imgur.com/wVmPdDj.png


I have my own facial recognition system open sourced as well.

https://gitlab.com/crankylinuxuser/uWho

Runs on CPU only, realtime 1280x720 @ 15 fps.

Is it creepy? Sure. But anyone can run it. I was looking at a rewrite to work as a CLI with a web interface instead. But the core loop is the magic part that makes everything work nicely.
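
For anyone curious, a minimal sketch of that kind of CPU-only loop (not uWho's actual code; this uses the dlib-based face_recognition library): detect faces per frame, encode them, and match against identities seen so far.

  import cv2
  import face_recognition

  known_encodings = []          # grows as new faces are seen
  cap = cv2.VideoCapture(0)     # local webcam

  while True:
      ok, frame = cap.read()
      if not ok:
          break
      rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
      locations = face_recognition.face_locations(rgb)  # CPU HOG detector
      for encoding in face_recognition.face_encodings(rgb, locations):
          matches = face_recognition.compare_faces(known_encodings, encoding)
          if not any(matches):
              known_encodings.append(encoding)          # new identity seen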


Isn't it illegal in most countries to put cameras in public places? Especially if they don't contain a warning?


No one's going to go to jail for their first offence of putting a camera in an advertising sign.

What's the worst that could happen? The local advertising regulator will order you to meet the regulatory standards or remove the cameras, but give you x number of weeks to act per unit installed.


Haha, I did something like this in college for a project. It was confined to only emotions though, and used images scraped from Google to train it.

Maybe I should try to sell it...


I will be honest, I have no real problems with this. Then again, I enjoyed some of the concepts for advertising shown in Minority Report, which did feature ads that could identify you.

The idea of collecting data on who looked at your display is invaluable. It would be beneficial for both government and ad agencies. The ad agency case is obvious, but a government could learn whether displays present information people want, and whether it was presented in a manner to catch their attention. The negative aspects of government use could be limited through privacy laws and such.


Then a new Hitler (Erdogan) enters office, and starts using said device to deport infidels like it's 1915.


I've tested it a few times and it's indecisive on my gender and age (depends if I'm smiling, angle etc etc).

Microsoft's service is much more consistent and accurate (I've tested the same images...): https://azure.microsoft.com/en-us/services/cognitive-service...


Wearing a baseball cap makes me invisible to their scan...


I tried it, but got nothing from the response... is that normal?


I took my glasses off, but kept exactly the same expression and position: 15 years younger, suddenly a lot more African(?) and very happy.


Fearful with my glasses on and angry with them off - I'm sitting here while my morning coffee is steeping so I was expecting sleepy!


Try it, but first take some skin-tone clay and apply it all over your face... then use that as your face, ha.


I love how many people replied to your comment about how creepy this is with their age, appearance, and gender.


I love how people tried it in the first place. Now the site has faces tied to IP addresses.



hahahah!!! I did try this image and here are the results (removed plenty of px-related lines). My commentary is next to the "<---":

  "error_code": 0, <--so i guess no error

  "img_size": {
    "w": 1200,
    "h": 1609
  },
  "people": [
    {
      "age": 26,     <--- they need to re-calibrate that and make it x3
      "gender": -87,
      "mood": 21,

      "clothingcolors": [
        "#996666",
        "#aa7766",
        "#bb8877"
      ],
      "ethnicity": {
        "african": 17,     <--- wow!!!
        "asian": 8,        <--- wow!!!
        "caucasian": 65,   <--- for real!!!!!
        "hispanic": 7      <--- wow!!!
      },
      "emotions": {
        "happiness": 0,
        "surprise": 4,    <--- even HE can't believe he's POTUS
        "anger": 76,      <--- not surprised at all
        "disgust": 0,
        "fear": 6,
        "sadness": 4      <--- #SAD


> they need to re-calibrate that and make it x3

They guessed 39 for me (I'm 25). The age bit seems not so accurate


Now your employer can monitor your facial expressions to determine if you are actually working or just reading HN.


The future is here:

"WorkSmart can track workers' keystroke activity and take webcam images to ensure they're doing their jobs."

http://www.oregonlive.com/silicon-forest/index.ssf/2017/05/j...


The future was here in 1995....

I worked for the U.S. Postal Service for a time doing data entry (here as a matter of fact... http://www.sltrib.com/news/3445651-155/the-first-and-last-of...).

That job measured the number of keystrokes per hour of each employee. You had to maintain a 10,000 keystrokes per hour minimum data entry rate, and they also spot checked for accuracy, capturing the data you were supposed to enter (a scan of a piece of mail), what you entered, and what should have been entered.

While there was no question about goofing off... they used commodity hardware, but nothing else was general purpose (no internet, no email, no solitaire, no obviously general purpose OS), and no phones, talking, etc... they were very much watching for speed and accuracy during the entire time you were clocked in.


> what you entered, and what should have been entered.

If they knew what should have been entered, why couldn't they just automate the data entry?


They extrapolated your accuracy from auditing a relatively small sampling of it, or fed you known-value items and automatically audited them.


Yes, this. I don't know what technique they used. I expect it was the known-value variety, because they would have better automation in error detection. They may also have used a consensus model, showing work product that really wasn't known value to enough different data entry people that the error rate was negated (the allowed error rate was pretty low as I recall), as in a group almost everyone would have done the entry correctly. A third alternative would be to send images that had passed OCR intake for the testing sample.
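
A minimal sketch of the known-value idea (names here are hypothetical): seed the work stream with items whose correct entry is already known, then estimate each operator's overall error rate from just those seeded items.

  def estimate_error_rate(entries, known_values):
      # entries: {item_id: what the operator typed}
      # known_values: {item_id: ground-truth text for seeded items}
      audited = [i for i in known_values if i in entries]
      errors = sum(1 for i in audited if entries[i] != known_values[i])
      return errors / len(audited) if audited else 0.0

  # e.g. 2 mistakes across 100 seeded items -> 0.02 estimated error rate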


What sort of data entry? Seems like OCR/human combo would have been better.


Data entry of all the addresses that the OCRs failed to read.

OCR was pretty bad back in 1995 (which is why the Palm Pilot had a special handwriting for users to learn).

See this NPR piece: http://www.npr.org/2010/12/28/132393643/Undeliverable-Mail-I...

Also "Act Three" of Episode 70 of "This American Life": https://www.thisamericanlife.org/radio-archives/episode/70/o...


This is a very sad sad life people agree to :(


It was tedious and not well suited for people needing a certain level of variety or intellectual stimulation in their work. I found it a touch soul crushing... on the other hand I knew people there that were completely content doing exactly that work. They'd turn on their radio/books on tape/etc. (some, including me from time to time, would even read a bit) and go to a world of their own thoughts for 8-10 hours and be content. I had a relative that worked at the same place for 20 years and was completely happy with the job and life.

I did the data entry job for (as I recall) about a year and a half: it paid the bills and gave me a better life than I would have without it and I didn't have the right qualifications or work experience for anything better. For those reasons I was appreciative of the work while at the same time I looked to improve my working situation... something I can say of my work today (though what counts as improvement in working situation is way different now).

There are many jobs out there that need doing. Many of them are boring, or dirty, or dangerous. I don't think that necessarily makes for any more or less of a "sad life". I'm completely thankful to those that do those boring, dirty, or dangerous jobs. Some, like me, did it as an early first job sort of thing and used the work experience to get a better job: we paid our dues as it were. Others like what I find unfulfilling or uninteresting. Some want to work outside, some want to work with their hands, some want to work with their minds, and some simply want to be a bit financially better off than they are without the work.

"There is no such thing as a lousy job - only lousy men who don't care to do it."


If only we ran other government agencies the same way!


We used to do this in a school - it wasn't creepy - hear me out:

We had labs of iMacs and if anything happened to a machine, kids would (more often than not) just yank the power cord. Occasionally this would foobar the machine entirely and create unnecessary work. We couldn't catch the culprits.

So, if the machine managed to boot up successfully, after an unexpected power loss, we would take a photo using the built in camera and send it along with a ticket to the job queue, as well as do a complete re-image of the machine (automatically).

We collected quite a number of funny photos of kids just staring at the computer with WTF looks on their faces. But - from these photos we at least had the opportunity to educate the individuals about how to look after the computers a bit better.


There was a minor scandal a few years ago when a school was sending laptops home with kids and randomly snapping photos via the webcam for similar reasons.

I think this is the original story, you can follow the "related stories" links at the bottom to see how it developed:

https://arstechnica.com/tech-policy/2010/02/school-under-fir...


That's creepy.


Couldn't you just identify the user by their logon?


It's been a while but I don't think the old iMacs at my elementary school had individual user logins at the system level. For things like the reading test program there was a login I think but not for the whole machine.


Bingo!


You're assuming there were usernames and passwords. For consumer grade devices in decades past, that was not the norm.


"...

Damn it!"


Fair point.


Haha :D


Amazon uses pretty tight tracking in their product warehouses to verify that floor runners are meeting expected performance goals.


Oh dear it thinks I'm a 35 year old woman...


And I suppose you're not?


It's AI, it knows you are secretly trans.


Now I kinda want to see what comes out if one gives a neural network a big pile of pre-transition photos of trans people, and another pile of similarly old photos of cis people. Would it be able to reliably predict if someone's likely to transition based on some subtle cues in the images?

Problematic parts: sourcing these images (I know there sure aren't many photos of pre-transition me, and I don't let the ones in my possession go much of anywhere), lots of ethical issues around having a system that can say "I am 97% sure this person is gonna want to transition". Also probably lots of other ethical issues I'm not thinking of.


No idea, I wasn't really being serious, but what inspired me was a real case of a woman who somehow got identified as pregnant by her supermarket long before she was willing to tell anybody (I don't remember the reasons), and they sent her some coupons or something and her family found out. Needless to say, she didn't appreciate it.


It was the father who complained. The case as reported by NYT http://www.nytimes.com/2012/02/19/magazine/shopping-habits.h...



Just tried it... 57 yo?! Fucking kids wrote this code. Everyone above 30 is an old man to them.


I'm 40 and it keeps saying I'm a 28 y/o.


It just said I'm 23 on one picture. The other one unfortunately was pretty close to my real age.


It said I was 35 years old, and yet I am 23, so I don't think it is just you.


In Japan at least, before automated facial recognition, cashiers recorded buyer demographics by hand. I would think other places do it too.

Edit: Here is what the buttons look like. Gender and age. https://image.slidesharecdn.com/hvc-c-android-prototype20141...


I've seen this in Canada, at a Dairy Queen.

The cash register had a matrix of buttons

M: [0-9][10-19][20-29][30-54][55+]

F: [0-9][10-19][20-29][30-54][55+]

They'd just push a button as they punched in your order.


Presumably most cashiers would just optimise by pressing the same button on every transaction, since doing it "right" makes no difference observable to them.


Unlikely to be common behavior. The tally would come back at the end of a shift or day and the person doing that would be reprimanded, then fired if it continued. It would be enforced by the manager/s.

Now, would they press a random demographic button after that (instead of the same button every time)? Maybe, however there are numerous other ways to increase compliance in that case as well. If the logs kept coming back sketchy, well the cashiers are on tape - bam, another firing (note from the video tape: cashier intentionally hits male button when it's obviously a female, then repeats the behavior multiple times). Eventually the example gets across to the other workers to at least make an effort.


You're giving the organizations too much credit. A roommate of mine worked at a fast food restaurant in the early 2000s, where they were measuring the speed that workers took orders by timing the transactions on the cash register. The store manager had the brilliant idea to game it by running all cash transactions on pen&paper - not using the register at all! not even the cash drawers! - and then keying in the orders as fast as possible after the fact. The store won an award, the manager got a big bonus.


Hah, that's brilliant.


In a few stores here in Australia, I am asked for my post code (or country of residence if no post code) by the cashier when ringing up a sale. Predominantly at tourist/tour related points of sale, but I've also had it at electronics and white goods stores.

No idea if the operator is also recording gender and perceived age group etc., but I do know that on most occasions, you can opt not to answer the post code question.


When you go to those weekend open inspections in Sydney, you are almost guaranteed to be asked for your postcode. They record that as well.

I actually did some experiments - for different properties in roughly the same area/price range, I told different real estate agents different postcodes. It is beyond reasonable doubt that the code you tell them plays a huge role in how they rank you as potential buyers. When you tell them a random north shore post code, you are guaranteed to receive a nice & friendly follow up on the coming Monday; however, if you tell them that you live in the west (when mostly inspecting north shore properties), they would smile and immediately end the whole conversation.

The sample size here is ~50, which I believe is big enough to draw some reasonable conclusions.


If you give them a mobile phone number (which they all ask for) they will simply use that to track you far more closely than they can with a postcode.


How? As far as I know a commercial company can't get your location or other personal information just by knowing your mobile phone number?


You are fooling yourself here. They probably have a whole data cake already; they want the mobile number as a cherry topping on this cake.

You know about the anonymized and aggregated data that one can buy? Well, it can be easily deanonymized and deaggregated.

The good part is that you don't even have to do it yourself and go directly to data brokers that have done the hard work for you.


Coming from the standpoint of a very curious independent researcher, I'm curious what I might query/search for to learn more about these data brokers.

Not at all out of anything that might be categorized as malice, just to add this datapoint to my mental map.


Buying data lists. You use your mobile number to get your cinema ticket - or whatever - and the cinema sells that data. If you get package alerts when you have a parcel due, now your phone number is joined to that address, plus presumably the credit card companies sell their data (?).

Companies amalgamate that data, then sell lookups of varying degrees.

Screwfix in the UK gather a lot of personal data as part of their sales process, they're the least covert about data gathering I've seen.


I'm not sure if it's true, but a friend once told me the reason a store asked for your zip code was to see if they had a large audience coming from a certain area. This let them know where they might open other stores.


As far back as 2000, my partner was asked for her zip code as we made a purchase and I curtly replied "no comment," to the surprise of everyone there. She was then asked for her phone number (which really irritated me because this was a $5 retail purchase of some type), and I got more curt when I said, "You don't need that!"

I've been annoyed by this stuff long before most people were ready to consider privacy concerns anything more than paranoia.


I like to give fake numbers in these situations. The way I see it, intentionally supplying bogus data is one of the only ways we have left to fight the machines and their algorithms!


I like to give obviously fake numbers. Like 12345 for a zip code or 212-555-1234 for a phone number. Most people don't care enough to have a reaction, now and then you get a laugh, and rarely you'll get someone who calls it out as bogus. My standard retort is somewhere between "Are you saying I don't know my own phone number?" and "Are you calling me a liar?!" depending on how surly the response.


I was in an Albertsons in Tahoe back when they wanted a number, and went in a 2nd time... the cashier remembered me and said "what's that number again... something something something 5309..." and was dancing a bit... it took me a second, then I said "what's the area code here?" He gladly gave it to me, so for about 12 years I just did $CURRENT_AREA_CODE-867-5309.

My Safeway card is in someone else's name... one day they had to pull it for some reason and I got a "Have a nice day.... Mr.... Soprano." and a big smile.


My father had memorized a fake Social Security Number that had come as a sample card in a wallet he got in the 1950s. When anybody except the government asked for his SSN and who wouldn't relent on his pushback, he would give them that number.



Wow, nice! I don't know if he was one of the 12 in 1977, but he would have been if this is the number he used. Woolworth's totally makes sense. If he were alive, he would poop purple Twinkies at that story. Thanks!


How does that work for him for things that do credit checks?


He's been dead for 10 years, but he didn't use it for those. As I remember it, in the 80s-90s it was more common for SSNs to be requested for normal consumer things.


!!!!!!! Can't wait for those numbers to leak out!


I gotta say, it's possible he never actually used it in my lifetime, but he could sure tell that story and rattle off the digits at a moment's prompting.


Some countries have laws making it illegal to give bogus information about yourself, with hefty fines and jail time.


There's a bit of plausible deniability if you give slightly bogus info, like transposing numbers. You could assert that there was a typo on the company's part.


Really? I say "Nope" constantly and they're like "no problem" or "it's way easier to look up refunds that way" which is true depending on the store.


If you purchased with a card, they can usually look up a transaction based on that.


> "it's way easier to look up refunds that way"

"Thanks good to know, from now on I'll go buy at a business where this artificial limitation does not exist."


The GM of the big box that I worked at 25 years ago said that the zip code thing was to measure the "destinationness" of the store.

In our case, big-ticket buyers travelled an average of 30 miles, which was awesome in that it made the end caps more valuable.


Sometimes the credit card terminal, especially at gas pumps, will ask for my zip code as an anti-fraud measure, and will reject the transaction if you enter the wrong number. If a cashier is asking, I always use 90210.


I guess a lot of people not from the US will use 90210 when prompted for a US postal code. I can't even remember what the show was about (except that it was set in Beverly Hills, obviously), but the number stuck.


The show was called 'Beverly Hills 90210'

Just a generic teen drama


I had the same issue when visiting the US. I tried some fake numbers but it wouldn't accept them (my actual postal code has letters, so that wasn't possible), so I just ended up paying by cash.


Did you try your postal code minus the letters? Afaik[1] usually only the numbers in your address are verified anyway.

[1]: https://en.wikipedia.org/wiki/Address_Verification_System


My UK postal code only has two numeric digits, whereas I seem to recall the US gas pumps require five digits. Just means you can't pay at the pump.


Post codes in Australia are just 4 numbers, so when buying subway tickets in NYC, I just put in 10000 or something (I believe that's close enough to the local code?).


I've heard (unsure if true) it was to reduce entropy.

I.e. they keep the last 4 digits of your credit card; combined with a post code, that gives you a near-unique ID.
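
For illustration, a minimal sketch of that join key (Python; the function and field names are mine for the example, not from any real POS system):

    import hashlib

    def shopper_key(card_last4: str, post_code: str) -> str:
        # Neither field alone pins you down (10,000 possible last-4 values,
        # a few thousand post codes in active use), but each one cuts the
        # entropy of "which customer is this?", and together they are close
        # to unique within a store's customer base - and stable across visits.
        return hashlib.sha256(f"{card_last4}|{post_code}".encode()).hexdigest()[:16]

    print(shopper_key("1234", "2000"))  # same key every visit, joinable across stores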


Not necessary, most point of sale systems can provide a unique hashed or tokenized version of the account number for analytics and identification purposes.


By cross-referencing your name (from your credit/debit card) with your zip/post-code, stores are able to determine specifically who you are with greater probability than without the zip/post-code.


With near 100% probability, actually. There was a detailed report on that a few years ago.

Googling for "why stores ask for the zip code" brings up a lot of press posts, e.g. https://www.forbes.com/sites/adamtanner/2013/06/19/theres-a-...


This is the correct answer.


I always give the post code of the shop, if I know it, or one nearby that I do know.


Wait, why do you know the post codes of the places you shop?


Wait, you don't? :-)

Seriously, I mostly shop in the same neighbourhoods. And when not, there's often something on the counter with their address...or I can give a mate's address and let him get the junk mail....


Isn't it a credit card security feature? That's what it's for at gas stations.


Correction: that's what they pretend it is for at gas stations.

99.99% of the time, when you are asked for a piece of personal data, it is for cross-referencing or some other privacy-invading purpose.


I think they mean if a salesperson asks you for it, rather than when you put it in for a credit transaction.


Of course you can refuse, but you can also just give them a false one.


Don't forget loyalty cards are a way to track both demographics and purchasing trends.


I volunteer at the Boston Museum of Science on Sundays, and we also track how many people we interact with at the various activities. We log by group, so a log might read "1 man, 1 woman, 2 boys, 1 girl (family)" or "3 women, 10 girls, 12 boys (school group)".

It's really handy to see how many people the activities attract, and who they appeal to most. You're tracked everywhere!


Could you be more specific? What could they possibly [Edit:] have recorded in a second other than male/female, Japanese/foreigner, or child/adult?

Facial recognition is a unique identifier but cashiers have access to almost nothing they can record... [Edit] What was it?

Edit: clarified that I am asking about the history here, what information was manually collected by cashiers as parent stated


Demographics and sales data are a primary example of big data.

The tweeted advertisement system also looks like it's only recording demographics. Not individual personal IDs.


Sorry, I wanted you to expand on this: "cashiers recorded buyer demographics by hand".

Give me an example of what they used to record by hand. All I can think of is "male, adult". I am specifically interested in what else you say they used to record.

I wasn't asking about the present status quo, only your historical statement about cashiers recording by hand.


Oh, OK. I edited in a link to the buttons the cashiers used, in the original post.


Thank you! Obviously that is far less privacy-violating than detailed demographic tracking could be.

Today advertisers that phone home (spyware) often lie and claim only aggregate data is produced - but this manual example really is the kind of data that is okay. It's far less detailed than something like facial recognition. Thanks for adding the link!


You could distinguish people who are married or are parents (with false negatives) by recording people who are at the till with their spouse or children. (Send out demographic-research cards once, to a few of the same stores you've collected this info from, to derive a normalization factor that will make such collected observations useful from then on.)

You could make a note of a person's seeming affect—positive/negative/neutral emotion.


I'm asking historically what was actually done, not what could be done. (For, "what could be done" you could ask if the cashier had seen this person before? Are they a regular shopper here, as far as the cashier notices?) I am asking what information cashiers in Japan actually in fact recorded by hand. What did these cards look like for each customer. Etc.


Image of actual buttons used added to the original post.

Keep in mind this has to be done while doing usual cashier things, so not much attention can be taken up by it.


Thanks!


Generally what has been "accepted" as done is age and male/female. I'm sure some places have done more but that's all I've heard of (lived in Japan for about 6 years now).


During 2010-2012, I was part of a startup called Clownfish Media. We basically created something very similar to this and got scarily accurate results even then. Given how accessible computer vision has become, the image in the tweet comes as no surprise to me.

Best part - we got a first-gen Raspberry Pi to crunch all the data locally at 2-5fps. Gender, age group (child, youth, teen, young adult, middle age, senior), and approximate ethnicity were all recorded and logged. Everyone had a unique profile, and we could track people between cameras and across days (underlying facial features do not change).
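
For the curious, the detection half of such a pipeline is only a few lines today. A rough sketch with OpenCV (the demographic classifier itself is a trained model, so it's stubbed out here; handle_face is a placeholder of mine, not real product code):

    import cv2

    # Classic Viola-Jones detector; it ships with OpenCV and is light enough
    # to run on a Raspberry Pi at a few frames per second.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def handle_face(face_crop):
        # Stub: a real deployment would feed the crop to trained age/gender/
        # ethnicity models and log the result - that part is the product.
        print("face detected:", face_crop.shape)

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                     minNeighbors=5):
            handle_face(gray[y:y + h, x:x + w])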

Next time you look at digital signage, just be aware that it is probably looking back at you.


Do you feel it was ethical to work on that?


Not GP but I've worked in a similar industry.

For me, I knew how our data was anonymized. So while our system would be able to say "I have seen person 1234 at locations 4,7,9,11 on dates x,y,z", we had absolutely no way of knowing who 1234 was or anything about them; even the unique identifier was just a hash.

Obviously it depends on how much data you collect/store; personally I don't think the things shown in OP are all that onerous (sex, age group, race, time spent looking at the ad).
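
Roughly, the scheme looked like this (a toy sketch, not our production code; real systems first match face embeddings by distance before assigning an ID - hashing just stands in for that step here):

    import hashlib

    def track_id(face_signature: bytes) -> str:
        # No name or photo is stored, only a stable pseudonymous ID.
        return "person_" + hashlib.sha256(face_signature).hexdigest()[:8]

    # "I have seen person 1234 at locations 4,7,9,11": the ID carries no
    # identity, but because it is stable, linking it to a real person even
    # once would retroactively de-anonymize the whole movement history.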


> So while our system would be able to say "I have seen person 1234 at locations 4,7,9,11 on dates x,y,z", we had absolutely no way of knowing who 1234 was or anything about them...

Minor nitpick, but giving someone a nickname isn't the same as anonymization.

"Hey Bob, thanks for logging on. Did you know we've been calling you 1234 these past five years!"

When a passive recognition system _uniquely_ tracks & identifies a person, it just takes time before that gets cross-referenced.

(different story if the data gets aggregated, or you scrub the uid completely after some window)


A friendly reminder that there's no such thing as "anonymized data", there's only "anonymized until combined with other data sets".


By definition anonymization is supposed to be irreversible. What you're describing is de-identification (https://en.wikipedia.org/wiki/De-identification#Anonymizatio...).


Under this strong definition, anonymization doesn't exist in practice at all. Strong anonymization requires serious destruction of information (e.g. reducing all samples to a single average number). It's not what people in the ad industry do.


I work on digital signage; our product isn't using facial expression recognition yet, but it has been requested and will eventually be part of the system.

What's the difference between this and an anonymised dataset? No PII is tracked, it's just looking at you and calculating what emotion you're likely feeling to show more targeted advertising.

I mean, I'm personally against it, but we've got to prove a higher and higher ROI to justify the cost of digital signage, and this delivers just that.


If you were actually against it, you would not take part in making this happen.

Inadvertently spill the name of the company making this, and of those wanting to use it, to the public so they can receive their well-deserved backlash.


You going to offer me another job that has comparable pay and work-time flexibility? I'll take it if one's going, but right now this is my gig.

If you want to start a war, have a quick Google for the big players; they'll have this tech in and will be proudly advertising it on their site.

The thing is: People don't care. Not your HN reader (evidently), but your Joe Bloggs. Hell, Snowden told them the Five Eyes are reading their email and they barely gave a damn.


> You going to offer me another job that has comparable pay and work-time flexibility? I'll take it if one's going, but right now this is my gig.

That's exactly what bigbugbag meant. You don't really care; you're in it for the job. That's okay, but just be honest here.

I could sell useless products to old people and make some money on the side. But I don't because I'm against that.


You can care and be opposed to something and still not stake your job to stop it. I'm against a whole lot of things that I do not spend all my time fighting because I have things to do, or it would be inconvenient as hell.

And some I sacrifice things for. A person can't die on every single hill they happen to fancy :)


it should just be regulated, i.e. banned, to save you the hassle of implementing it


(Supposedly) Lee Gamble's comment on Reddit -

"Hi. I am the original taker of the photo. There is a screen that normally shows peppes pizza advertisements in front of peppes pizza in Oslo S. The advertisements had crashed revealing what was running underneath the ads. As I approached the screen to take a picture, the screen began scrolling with my generic information - That I am young male (sorry my profile picture was misleading, not a woman), wearing glasses, where I was looking, and if I was smiling and how much I was smiling. The intention behind my original post on facebook was merely to point out that people may not know that these sort of demographics are being collected about them merely by approaching and looking at an advertisement. the camera was not, at a glance, evident. It was merely meant as informational, maybe to point out what we all know or suspect anyway, but just to put it out in the open. I believe the only intent behind the data collected is engagement and demographic statistics for better targeted advertisements."

Source: https://www.reddit.com/r/norge/comments/67jox4/denne_kr%C3%A...


Not Lee Gamble. He just shared the photo without any source. See this article: https://translate.google.com/translate?sl=auto&tl=en&js=y&pr.... Photo taken by Jeff Newman.


It is still a BIG ethical issue for some people. Myself, I see this as just the natural progression of where we are headed. If we don't have rules about this kind of technology, it will very much be "Minority Report" in a decade.


How is it an ethical issue for strangers to look at your face when you are out in public?


More like strangers taking pictures of you without your consent (and often knowledge) with the intent to increase their profits and not sharing any of that with you.


And then able to follow you and know your present location whenever you go by a camera.

Minority Personalized Advertising https://www.youtube.com/watch?v=7bXJ_obaiYQ


Ethically, neither your consent nor your knowledge is required for someone to see you in public and remember that image. Why they do it isn't really relevant. If they use that image to do something unethical, like commit fraud, it is the fraud that is unethical, not the imaging.


I think many people would disagree. It might be legal, but that doesn't mean it's ethical.

If a stranger on the street started following me, taking pictures without permission, and taking notes about my appearance or actions and storing it all in their database, I would say he was behaving unethically.

Ask street photographers - it's a delicate balance. Many people really dislike having their pictures taken without their permission.


How does this compare to the pre-automation practice mentioned above of cashiers manually making a tally of how many men/women of each age group were visiting?

I mean, this is literally the "global village" coming to fruition. The online shopkeeper knows you just as well as a shopkeeper in a real village - it knows who you are, it remembers all your previous visits, it knows your hobbies (even if you didn't tell it about them yourself, someone else in the village did), and it can make suggestions based on that.

When you buy flowers, the village shopkeeper knows not only who's buying them, but also has a good idea for whom these flowers are intended. That's where we're heading.

This is the level of (non)privacy that we historically had, living in much smaller communities than modern cities. The trend of more anonymity brought by urbanization is reversing, but it's not something new or horrible, if anything, the possibility of being just another face in the crowd is an anomaly that existed for a (historically) short time and is slowly coming to an end once more.


That is a lot of words to simply say that some people think it is unethical. Which is an essentially empty statement. Couldn't you at least say most people and make it an argumentum ad populum?


Lawful and ethical are two totally different things. They are related but distinct.

ethical != opinion

Ethics has weight: you can lose your job, and even go to jail, for being unethical. RMS actually has a very strong academic ethical mind (even though I disagree with him more than I agree). BUT ethics isn't easily defined.

Here is a decent link to defining Ethics. https://www.scu.edu/ethics/ethics-resources/ethical-decision...


>lawful and ethical are two totally different things.

Mind explaining where you think I implied otherwise? Or why you keep repeating this despite the fact that I haven't?

>Ethical has weight and you can lose your job, and even go to jail for [what your boss thinks is] unethical.

For example, doctors who get fired for performing abortions, something that most of hn doesn't find unethical.

>https://www.scu.edu/ethics/ethics-resources/ethical-decision...

That is itself, merely what Manuel Velasquez, Claire Andre, Thomas Shanks, S.J., and Michael J. Meyer find ethical.


Until relatively recently, unavailability of large stockpiles of consumer data (at least, stockpiles at the scale now possible) was a significant impediment to a large and probably mostly-undiscovered class of potentially unethical behavior. Do you not suppose the removal of that impediment, with no other equally powerful compensatory regulations or oversight, to at least potentially be a serious problem now or at any time in the future?


Until relatively recently, unavailability of large stockpiles of consumer data (at least, stockpiles at the scale now possible) was a significant impediment to a large and probably mostly-undiscovered class of potentially ethical behavior, as well as behavior that actively combats unethical behavior. Data itself is amoral and can be used for either good or bad.


Ethical behavior isn't a problem.


You might even say it's the opposite.


This is a gross mischaracterization of the issue. You aren't looking at the bigger picture.

Do we really want to commoditize the simple act of walking down the boulevard? Make every moment in public space (and private digital space!) sliced, diced, and scrutinized by God knows how many data munchers, middlemen, analytics brokers, and ethically challenged people in order to compel as much thoughtless consumer spending as possible, long-term consequences be damned? Allow incredibly detailed profiles to be built up on every person, spanning the decades of their life? And of course, there is always the danger of governments co-opting and abusing this information years or decades in the future, after administrations have come and gone, and laws have been overturned, drastically altered, or ignored. As the tech and the richness of the data increase, the temptations will as well. Well-meaning people can do nefarious things in certain contexts.

I believe our societal institutions and corporate entities are not mature enough to safely handle the power granted by unrestrained, high-resolution data on the entire populace.

Granted, I don't think things would get too terrible without overwhelming protest, but I don't see why we should bet on that.


"You aren't looking at the bigger picture" is just an arrogant way to say "I think you're wrong and I'm right". It can be safely omitted in favor of actual arguments.

>Do we really want to commoditize the simple act of walking down the boulevard?

It's not a boulevard you are walking down, but a bazaar. The only difference is that modern technology allows you to visit the bazaar to be "sliced, diced, and scrutinized by God knows how many data munchers, middlemen, analytics brokers, and ethically challenged people in order compel as much thoughtless consumer spending as possible" without physically travelling there.

>Allow incredibly detailed profiles to be built up on every person, spanning the decades of their life?

Sure. It's called a relationship. Or a memory.

>And of course, there is always the danger of governments co-opting and abusing this information years or decades in the future, after administrations have come and gone, and laws have been overturned, drastically altered, or ignored.

You can safely replace "this information" with virtually anything useful and get the same effect. Do you feel the same about, say, nuclear weapons? Or legal authority to lock people in cages? I would say either of those is far more dangerous than data. Yet we recognize that the power exists regardless, and the government can at least put it to good use.

>I believe our societal institutions and corporate entities are not mature enough to safely handle the power granted by unrestrained, high resolution data on the entire populace

Then the obvious answer is to improve societal institutions and corporate entities, which is useful in and of itself, rather than futilely trying to impede the progress of technology.


> It can be safely omitted in favor of actual arguments.

Fair point, I could have dropped that sentence. I stand by my gross mischaracterization statement, though. Programmatic surveillance is very different from a stranger looking at someone.

> Sure. It's called a relationship. Or a memory.

The profile built up on people by ad brokers and spy agencies is a relationship? I don't think that's how most people would describe it.

> You can safely replace "this information" with virtually anything useful and get the same effect. Do you feel the same about, say, nuclear weapons? Or legal authority to lock people in cages?

Uh, a core part of the problem is this information being coupled with the ability to lock people in cages (or exert power in other ways). Obviously the data by itself is inert and useless. It's what people might do with it that matters.

Important examples would be restrictions on free speech and suppression of dissent. Imagine something like a credit score 2.0, created by analyzing a lifetime of private communication, online activity, and transactional data.

Those websites you visited 12 years ago? It's gonna cost you on your next car loan. And don't even think of running for city council -- the dirt will really come out then. Etc etc.

Obviously, technology brings a lot of great benefits. I'm all for that. I think we should just be aware of new pitfalls it brings as well, and try to account for them.


>The profile built up on people by ad brokers and spy agencies is a relationship? I don't think that's how most people would describe it.

Most people use language woefully imprecisely. The relationship I have with the barista at the cafe near my office isn't the same as the relationship I have with my sister but it is a relationship of the kind that's relevant here. Knowing what I order and when, recognizing me, etc.

>Uh, a core part of the problem is this information being coupled with the ability to lock people in cages (or exert power in other ways). Obviously the data by itself is inert and useless. It's what people might do with it that matters.

A nice thought, but in practice, when we try to fragment this power by privatizing police, prisons, military, firefighting, etc, all of which have many modern examples, things do not turn out well. As unreasonable as it may sound, the evidence suggests it's better to put all the eggs into one poorly run basket.

>Imagine something like a credit score 2.0, created by analyzing a lifetime of private communication, online activity, and transactional data....

Oh, I imagine.

https://news.ycombinator.com/item?id=12499525


> Most people use language woefully imprecisely. The relationship I have with the barista at the cafe near my office isn't the same as the relationship I have with my sister but it is a relationship of the kind that's relevant here. Knowing what I order and when, recognizing me, etc.

Yes but that is a very different type of relationship with quite different characteristics. I hope it isn't too difficult to infer I'm arguing not everyone wants these types of relationships. To call it "just another relationship" is not very helpful for the discussion.

This type of relationship may have significant extended and unforeseen side effects. It's not well constrained, and the preserved artifacts could easily be hijacked for countless unknown purposes decades in the future. It's a fundamentally new paradigm that we don't fully understand yet, and given humanity's historical tendency to abuse new mechanisms of power as they become available, I think some caution is very reasonable.

Perhaps to make my position a little more clear, a key point on why detailed data profiles could be quite dangerous is their scalable and programmatic nature. Never before could a single click of a button identify every individual who has been discussing topic X in the last year, or spit out a list of everyone with 2 degrees of connection to some targeted individual. The same unlimited possibilities that make this stuff exciting to technologists are also why it may be quite dangerous.

These powers are unprecedented. You would need a rotating team of investigators inside every home and every place of business in order to gather this data in previous eras, not to mention even trying to collate and process it. It's equivalent to someone in previous eras standing over your shoulder and writing down every newspaper article you read, taking notes on every conversation you have, etc. Because it is invisible, it doesn't feel this way, but that is what's happening.

> when we try to fragment this power by privatizing police, prisons, military, firefighting, etc, all of which have many modern examples, things do not turn out well

I never suggested we do that?


For me, a way bigger ethical issue is what millions share on social networks, of their own free will.


To be honest, I am fairly surprised at the reaction here on HN. It's not really surprising to see such a system; it would be more surprising if such a system did not exist, because offline advertising is a huge business and the technology is here. This goes together with conversion tracking at physical shops, etc.

I am equally surprised by the comments about how come engineers implement such systems, how they find it ethical, etc. I'm sorry, but that sounds just a bit out of touch with the real world, or at least outside of the HN bubble. Given the things that money motivates people to do, this is probably one of the least unethical.

I am not judging whether this is right or wrong, I am simply stating that nothing about it should be surprising. Yes, it is slightly sad, but that's simply the reality of technological advancement. It's not really possible to expect the rest of the world to use technology only for things considered 'right', etc.


Well, nothing here is surprising beyond maybe the scale of things (if a random pizza joint now uses facial recognition in ads, who else is using it?). But those things still need to be called out and opposed, because peer pressure is an important part of morality in society. People are social animals, and are less likely to do things that are disliked by their friends.

Looking at things from a little distance, the whole thing is abhorrent, and paints a really sad picture of our society. I've written this many times, and will keep writing it: if you personally did to a friend the same things that people in the advertising industry do to everyone, you'd most likely get punched in the face. And yet somehow marketing became a respectable occupation.


Do they need to be opposed though?

There isn't really much consensus, even on HN, that passive demographic data collection is a bad thing in itself. People claim it is, and I believe they feel it is; then they turn around and do things that compromise their stated beliefs because it's convenient.

I liken it to the gap between the rhetoric around open source and free software and the reality that Windows and Mac OS make up approximately 90% of OS market share. You can believe what you want to believe, but from a business standpoint you'd be putting yourself at a disadvantage if you structured your business on the assumption that FOSS operating systems will climb to even 25% of market share; there's probably a similar situation for customer data tracking and advertising preference tracking.


It's not surprising that it's possible. It's surprising that it's already happening.

It feels straight out of Black Mirror.


Lots of Black Mirror is commentary on where we are now, not where we're going, even if the episode itself is set in future (15 Million Merits is an easy example).


I think such a reaction is just because the article is less "techy" than it could be, and more about the "moral" aspect.

OTOH, what is so interesting about simple face recognition, innit? That future became the past quite fast; meanwhile, human rights never get old. (smile.jpg)


As someone working on a similar project (specifically, emotion recognition), I'm highly interested to hear what such a product should look like in order not to be considered unethical. So far, from the comments, I see that:

- it should be made clear that you are being analyzed e.g. by big yellow sticker near the camera

- no raw data should be stored

- it should be used to collect statistics, not identify individuals (?)

Is it sufficient to consider such software fair use? What else would you add to the list to make it reasonable?
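
To make the second and third points concrete, an aggregate-only design might look something like this (a minimal sketch; classify_emotion is a stand-in for whatever trained model is used):

    from collections import Counter

    emotion_counts = Counter()

    def classify_emotion(frame):
        # Placeholder for a trained classifier returning one of the 6 basics.
        return "neutral"

    def on_frame(frame):
        # Fold each frame into a histogram and discard it: no raw data is
        # stored, and no individual is identified or followed over time.
        emotion_counts[classify_emotion(frame)] += 1

    # Only the aggregate ever leaves the device, e.g.:
    # Counter({'neutral': 412, 'happy': 131, 'surprised': 9})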


The ethics are simple: If you don't get opt-in consent, many subjects are going to feel violated. Even if you assure them you anonymize the data.

It's not enough to put a warning next to the camera because you've already captured them at that point and it's too late. If anywhere it would need to be at the entrance to the store.

But guess what? Customers HATE this stuff. The backlash and lost business is not worth it. See: http://abc7news.com/business/philz-to-stop-tracking-customer...


If a store has a warning at the door that this happens inside, it's good because now I can avoid getting inside the store and silently hate and boycott the brand.

If a store has a warning label on the device engaging in this, it's bad because it's too late for not entering the store. I'm gonna complain right now at the store manager, maybe call the cops or sue. I'll be vocal about actively hating the store, the brand, the manager, the employees.

If I went to a store engaging in this without telling and I later learn about it, then I'm calling Keyser Söze and it's pitchforks and beheading time.

I suppose it will take a couple more generations of brainwashing for the population to be ready to accept this kind of highly invasive technology. IIRC, about 10 years ago the Big Brother Awards were given to a French industry group for their blue book describing how to condition a population, over a few generations, to accept surveillance and control technology.


> maybe call the cops or sue

What exactly are you going to call the cops for let alone sue for?

While I don't agree with this kind of technology you seem to be overly emotional about it which really doesn't help the issue.


In some countries, like Sweden [1], this type of camera deployment is strictly regulated. A quick reading of the rules in Sweden tells me that you are unlikely to get permission for this easily.

[1] http://www.datainspektionen.se/lagar-och-regler/kameraoverva...


Sure, but I would bet money that GP isn't in a country that has those kinds of regulations. I'm addressing their emotional overreaction to something that requires rational action (such as the law you mention).


Thanks for the detailed comment. A couple of clarifying questions, if you don't mind:

- do you know that loyalty cards are often used in stores to collect customer data (a kind of offline cookie)? Do you consider that bad/dangerous/unethical, or does it sound OK to you?

- if instead of a camera there was a person looking at customers and recording their observations, would you feel bad about it?


I'm not the parent commenter but:

- do you know that loyalty cards are often used in stores to collect customer data (a kind of offline cookie)? Do you consider that bad/dangerous/unethical, or does it sound OK to you?

Yes I do know that loyalty cards are used to collect data. I think most people do. I don't take loyalty cards for this reason and I'm glad that they are opt in although there is some financial pressure to take them.

- if instead of a camera there was a person looking at customers and recording their observations, would you feel bad about it?

I would feel bad about it and I think the person should ask my permission first.


> I would feel bad about it and I think the person should ask my permission first.

But if that person just memorizes customer reactions to understand how people on average react to particular products or actions, that's ok, right? Because this is what sellers and business owners do to improve their product. So is it about human-to-human interaction or some more subtle detail? I'm biased here, so sorry if I miss something obvious in this situation.


It's not subtle. If there was an employee standing next to you or following you around the store with a clipboard taking notes on you and your facial expressions, only then would you have something approaching an apples to apples comparison. Stop pretending that's normally "what sellers and owners do" and you're just automating it. Customers Do Not Want.


Loyalty cards are opt in. Security surveillance can be unsettling but customers understand its purpose and limited scope. What you're proposing is more invasive, and most people would not appreciate it if they knew about it.

Look, give up trying to justify it. Customers don't want it. You should find another application for this technology.


Well, I definitely do have other applications for it. For example, I know that similar software has been used in labs to estimate people's reaction to videos and game features, in mobile applications to improve interaction with a user, etc.

My interest in offline applications comes from personal experience: recently we demonstrated our product (not emotion recognition, but it also captures the user's face) at an expo. People came to our stand, used the product (so they clearly opted in), asked questions, etc. After 2 days, we asked the girl at the stand, "What do people think about the product?" "Well, in general, they are interested," she answered. Not much info, right? Definitely less informative than "65% expressed mild interest, 20% had no reaction and 5% found it disgusting, especially this feature".

So I don't try to justify this use case - my life doesn't depend on it - but I find it stupid not to try to understand your clients better when it doesn't introduce a moral conflict.


Loyalty cards are opt-in, and it's common knowledge that its explicit purpose is to track information about yourself -- so I think a lot more people find them (or at least their existence) acceptable.


> If a store has a warning at the door that this happens inside.

Don't many shops have a generic "you are under video surveillance"? Wouldn't that also cover something like this?


This is true, the vast, vast majority of stores have surveillance of some kind. Advertising's impact on the human psyche shouldn't be underestimated. In the last ten years alone this has become increasingly apparent, whether it be photoshopped images that manipulate our conception of beauty or dating apps that make us feel lonely enough to install them.

I don't mind being recorded at a checkout.

Recording me to decipher my thoughts instead of my actions crosses a line.


> If a store has a warning at the door that this happens inside, it's good because now I can avoid getting inside the store and silently hate and boycott the brand.

Given how widespread this kind of monitoring is, this approach is basically "I will punish the honest stores and reward a sneaky store by spending my money there instead"


It's actually pretty simple - don't use it on people.

Advertising? No. Sales? Definitely no.

Augmenting that single-player video game so that it adjusts content depending on emotions and gaze of the player? Ok. Better if the player is explicitly told the game will track their reactions though.

EDIT:

Also, another angle. Even for advertisers / "sales optimization", I'd forgive you if that was a local, on-site system. But if it's meant as a SaaS, with deployments connected to vendor's butt, then I am gonna actively try to screw with it if I learn there's one installed anywhere I frequent. Hopefully new EU laws will curb that, though.


Editing note: unless it was intentional, you appear to have your "cloud to butt" web extension enabled. ;)


I had it on for so long that, for my brain, the two words are basically the same now :). I keep forgetting about it when I edit a post (the substitution happens on display, not on submit).


I'm not sure what you mean by your last paragraph. How would facial recognition apply to SaaS products?


The usual way - many shops buying a "face recognition advertising service" from a single provider, who gets to aggregate all the data.


The only ethical possibility in my view is for it not to exist. I don't like having my emotions manipulated to make me buy more stuff, regardless of whether I am anonymized or not. But then again, I think similarly of a lot of non-targeted advertising; the recognition just adds a whole new level of disgusting.


It feels both like an invasion of privacy and unhelpful beyond a point. I don't see the benefit rising above the societal cost.


> I don't like having my emotions manipulated

What about collecting statistics to make better decisions? Let's say you go to your favorite jeans store, but find the current collection disgusting. Does it sound OK to you if some sort of system analyzed your reaction to the product in order to improve later versions?


Nope.

You can do controlled user-testing sessions with that system with specific people that have consented and are potentially compensated in some way. You will most probably also get more useful information out of that.

But being recorded "en masse" in a shop for that purpose I would think is invasive. I would totally avoid that shop if I knew that system was in place.

Also, I am not convinced that statistics lead to better design, so that would most probably be just wasteful, but that is another discussion :p


But isn't A/B testing doing the same thing? If it's different, what's the key difference between analyzing facial expression in a shop and analyzing user interaction on a site (given that both have a warning about data collection)?


The former has very weak (if any) consent, is indiscriminate, is easy to abuse, and creates unnecessary conflict (e.g. I really like those pants sold in that shop but I don't like to be tracked; OK, I'll go in just this time...).

In A/B testing there is a clear context and purpose, and is normally negotiated between actual humans.

There might be middle grounds (A/B testing can be done online and use facial recognition and in relatively large scale) but for me it has to be opt-in (as in, you have to fill a form to join) not opt-out (as in, leave this webpage/shop if you don't want my tracking). This is more challenging to the organization proposing the tracking, because they need to provide some value in exchange so people actually sign up for that. But in the long term being founded on the principle of consensual mutually benefiting relationships can only be good for your organization/brand, right? As in: at last a company that treats people like humans!


I've watched the same hysteria & concerns about all kinds of privacy-invading systems. Social Security Numbers, credit cards, computer IDs, camera GPS, search queries, and piles of other tech all start popularization with "OMG evil people can do evil things with that data to hurt you!" Save for a few holdouts (usually much older folks), society at large has completely accepted all that tech as normal. Just takes about a decade of the convenience overwhelming the fear. I despise SSNs, but cutting my taxes by $1500/yr (child tax credit) is motivating; credit cards suck for a zillion reasons, but swipe-and-done is so damn convenient; no question Google has an impressive model of me but those search results are enormously useful; etc.


I have a question: do you do trials in a controlled environment, where you actually have proper feedback and a distinct comparison between self-described state and machine analysis? Because in my opinion, systems like these are a modern-day version of astrology (at least when they are based only on vision, and not on things like fMRI imaging or proper psychological analysis). I know seriously depressed people who always had a smile on their face (maybe a social coping mechanism), as well as "angry"-looking coworkers who were in a very good mood most of the time. It is very easy to misinterpret a person's mood when the only "interaction" is looking at them and analyzing their facial features.

When these things are used outside a controlled environment, things get even more complicated: weird beards, squinting because of excessive sunshine, reflective glasses, etc.


I see 2 separate issues here:

1. Accurate collection of facial features. Illumination, occlusions, head rotation, etc. may seriously affect accuracy, but this is exactly our main focus right now. We are at the very start of the process, yet early experiments and some recent papers show that it should be doable.

2. Correlation between real and detected emotional state. At the moment we concentrate on the 6 basic emotions and don't detect less common presentations like depression behind a smiling face. This topic is definitely interesting and I'm fairly sure it's possible to implement given enough training data, but right now we are concentrating on other things.


> I'm pretty much sure it's possible to implement given enough training data

No, the point of the comment you are replying to is that there are emotions that are impossible to detect using external information. We can hide our emotions very well. The question is to what extent does external emotional information provide monetizable value?


> there are emotions that are impossible to detect using external information. We can hide our emotions very well.

This is an assumption which I'm not convinced holds true. Just because we can hide our emotions well enough to fool other people doesn't necessarily imply that it's impossible to detect them using external information.


I've seen some pretty convincing expressions of emotion from actors who were obviously not, at the time, in love, in pain, in anger, etc. I'm pretty certain that any system that takes your facial appearance and no other information (e.g. that you are an actor, that you are currently on a movie set) would have no way to distinguish genuine from false emotion.


If we are talking about professional actors trying to trick the tracker, then yes, it should be pretty hard to design software to overcome that. But most people aren't that good, and although they can mislead their friends or colleagues, they still leave clues that can reveal a fake emotion. If you are interested, Paul Ekman has quite a lot of literature on the topic, e.g. see [1].

[1]: http://www.ekmaninternational.com/paul-ekman-international-p...


But humans are notoriously bad at picking up on details, and things like music and scenery can have a big impact on our perceptions. I'm not saying that you're wrong, I'm just saying that in the absence of any evidence to the contrary I don't think we can just assume that you're right.


The fact that you are already working on this says something about your willingness to do something distasteful to earn a paycheck. A slightly bigger paycheck would probably mean you would relax your morals even further. Even if your product starts out with stickers and no logging, I bet it doesn't stay that way for long. Not if the paycheck can be bigger.


To me, the only way this could be ethical is if the project is limited to private space (a lab, a room in your house). No data is ever recorded, it runs on an airgapped computer, it doesn't try to identify people, and everyone subjected to it is fully aware of what it is about and the implications it can have.

Facial recognition is inherently creepy.


The opinion of most people here is that facial recognition technology is for the most part creepy if used in a commercial setting. Mine is slightly different. I think it's fine if you want to show me a different advertisement or sign based on an interpretation of my expression. I also think it's ok if you track my position within a mall and see which shops I visit and when. I would draw the line at attaching personally identifiable information to that data such as a name or a photo of my face. Anyone who decides to do that is probably going to cause harm/inconvenience to me (I don't want junk email from shops I happened to visit but didn't buy in).

I should also state that I think the first use of my data is ultimately unprofitable. Will the extra cents you make by advertising Cinnabon to depressed-looking people or hairdressers to long-haired people really offset the cost of developing such a system? If applied to a broad population, any customisation effects will be marginal.

I also believe that the non-anonymous tracking system is much more likely to produce value for companies and it would be very tempting when gathering anonymous data to cross reference with actual individual information. My concern about any tracking system is that by the motivation of profit it could easily shift from an ethical to non ethical space.


I'm kind of surprised that it didn't have some sort of Data Protection warning near it already, but I'm not sure if the EU data protection directive covers Norway as well.


We have pretty strong laws regarding this. It has generated several news articles over the past week, and the Norwegian Data Protection Authority has already commented, saying they don't believe this is legal. Stickers were added after the initial discovery.


> What else would you add to the list to make it reasonable?

For it to not be used without opt-in consent. And to not hold any other benefits/privileges/incentives behind the wall of opting in.


It should work without taking pictures of, or building movement profiles of, people who did not give consent in writing or via a qualified digital signature.


I think you're going to have to accept that what you are doing is disliked and considered unethical by the majority.


I'm curious where the boundary between ethical and unethical lies. People constantly analyze each other's moods, and it's perceived positively. But doing the same thing at scale using automated tools is often considered inappropriate. So is it the use of technology, the scale, the purpose? I hope there's a way to make such things both efficient and ethical.


It is unethical because there's no "opt-out" option. You have taken a photo of me without my consent and used it with the intent to increase profit, again without my consent; furthermore, attaching personal data to it is Orwellian and a complete invasion of my privacy. I can go out in public and have nobody know who I am. A retailer should not have access to my identity (since they can cross-reference other data sets to de-anonymize me) unless I interact with them and hand over information of my own volition.


As far as I know, storing personal data - including photo, name, email and sometimes even IP address - without explicit and clear consent is strictly forbidden in most countries, at least in EU.


The only way this could possibly be considered ethical is if you get informed consent from every single person the system is analysing. If you provided each person with a detailed explanation of what the data would be used for, and required them to opt-in before collecting it, that would be fine.

Otherwise, what you're doing is deeply unethical.


That's at the heart of it - is examining a person in public with automated tools unethical? Just saying it is isn't a compelling argument.

The FBI can use automated tools for surveillance - which doesn't speak to ethics directly but indirectly, as we hope ethics drove those rules.

I can sit in my private store and observe people out the window all day, even take notes. That's not unethical; that's a sociological experiment or some such, and it's done millions of times a day.

It may be jarring or creepy to imagine an advertisement is sizing me up. Again, ethics is more than 'does it make people uncomfortable'.


Manipulating people on a mass scale without their informed consent has always been considered unethical; it's on you if you're trying to argue that it's not.

And your 'but what if a person does it' arguments are irrelevant - there's a clear difference of scale between the massively automated systems we're discussing and a single person with a pencil and paper.


It was one billboard ad - not really 'massively automated'. Would have been cheap to hire an intern to stand behind the billboard and make notes. Probably cheaper.


I think terms of service have something to do with it. In most profiling scenarios, the average consumer has no idea what's going to happen to the data collected on them. I'd be much more comfortable participating in a value-exchange involving your product if I knew precisely what information would be collected, how long it would be stored, who would have access to it during that time frame, what would happen to it at the end of that term, and precisely how that information would be capitalized upon. That probably seems ridiculous to you, but from my perspective, it represents a precise definition of the value I'm yielding to you, and a reasonably precise definition of the risk I'm incurring by doing so.


An opportunity to opt out? If I need to be near the unit for whatever reason I would be more comfortable if I could switch it off.


The fact that it's secret is what bothers me the most.


Uh, I got sidetracked and brain-hammered by the devolving discussion on that Twitter thread, so I couldn't find the context for this pizza shop kiosk. Is it a customer service portal that attempts to identify the person in front of it to match them with an order, or a plain advertising display that captures the demographics of the people who happen to stop in front of it and look at it?


It's an ad. May be targeted advertising, or just simple engagement analysis.

http://www.dinside.no/okonomi/reklameskilt-ser-hvem-du-er/67...

Edit: there is more information in the article. Not going to read it, though.


To summarize, it's an experimental project, there is so far only one such screen at the train station. If someone stands within 5 meters of the screen it will try to classify their age and gender and show a targeted ad, and record how long they looked at it. The raw images are not saved.

The screen uses software called Kairos to analyze faces. It can estimate age, gender, and whether you are "white, black, hispanic, asian, or other".

According to the marketing manager at Peppe's Pizza, he thought there would be a label on the screen saying what's going on, but in fact there is just a small sticker on the back of it, which is quite hard to see.

The company making the screen, ProtoTV, says that people should be okay with this because ads on the internet are even more targeted. A government representative says that the system might violate laws about surveillance cameras.
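
For reference, vendors like Kairos expose this kind of analysis as a hosted web API. A rough sketch of what the screen's software might be doing (endpoint, header, and field names are my recollection of the public docs, so treat them as assumptions and check the vendor's documentation):

    import base64
    import requests

    with open("frame.jpg", "rb") as f:
        payload = {"image": base64.b64encode(f.read()).decode()}

    resp = requests.post(
        "https://api.kairos.com/detect",               # assumed endpoint
        json=payload,
        headers={"app_id": "...", "app_key": "..."},   # credentials elided
    )
    # Responses typically include per-face attribute estimates along the
    # lines of age, gender, and ethnicity - matching the log in the tweet.
    print(resp.json())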


Thank you, this was the first comment actually telling more on the matter after all the "it said I'm X years old!" comments :)

I guess using such a system just to analyze people in real-time might already violate some surveillance laws like you said (in my country all such cameras must be warned about, even traffic cameras). But do you have any idea if those things also record data on customers? I could see how keeping such a database on customers might be dancing on the fine line of creating a "registry", which is pretty heavily regulated by laws at least here. Even more so if there is a possibility to identify real persons from that data.


>The company making the screen, ProtoTV, says that people should be okay with this because ads on the internet are even more targeted.

What a terrible argument considering how universally hated internet ads are.


A UK-based company called Amscreen make a similar system: http://www.amscreen.eu/products/ds24-2/

(If the "Am" part of the name seems familiar it's because it's Alan Sugar's son)

They installed one at a local petrol station so they lost my business. Perhaps I should be more vocal about it.


It saddens me that there are people in our profession for whom implementing such a thing presents no moral or ethical dilemma.


It saddens me that most people refuse to accept that not everybody shares their morals, and that it might be them that are wrong.

I'm not saying there isn't an absolute right and wrong, but I certainly don't find this as abhorrent as most people here, apparently.


So what am I supposed to do if I find this invading my privacy? Not walk within 10m of such a billboard? It's not like there's any other active way to opt out of this.

I suppose a banner saying "you are being tracked" on top of such a billboard could have quite an interesting effect. Come to think of it, I've seen "Smile, you're being recorded!" in some shops.


Embrace the cyberpunk future and get a silly haircut/makeup:

https://cvdazzle.com/


This is brilliant.


You're supposed to come out at night with a countermeasure masking your face and destroy the billboard.

IIRC the countermeasure is quite easy and requires something like an LED and a battery, but I may be wrong; it's been 10+ years.


An array of IR LEDs in a hat would probably work.


And tbh a balaclava and featureless black clothing would not hurt.


"So what am I supposed to do if I find this invading my privacy?"

There are laws and customs around public places and what may be done there. E.g. depending on your local laws, if you're in public, people can usually take pictures of you and there's also nothing you can do about that.

Don't like it? Petition to have the laws changed. This is how we deal with such things in a democracy.

Trying to guilt the engineers who build this system is IMO both wrong and completely pointless in terms of real-world effect.


> if you're in public, people can usually take pictures of you and there's also nothing you can do about that.

Quite the opposite: in France, "le droit à l'image" is a privacy right that allows anyone to request that any picture taken of them be deleted.


Photographs are legal in public. This is just taking that to the extreme. Address that law. What bothers me is ALPR (automatic license plate recognition). Taken to an extreme, you could put a camera on every intersection and effectively track all vehicles without a warrant.


Curious: If someone sat at a cafe near by and recorded on pen and paper visible details of people walking by, would that be as bad?


You'd have a point if this was a general statement, but in the case of automated facial recognition it's almost universally despised across the world.

If you're not sharing this position, it could be that you are younger, or that you have been subjected to the conditioning of the population by industries to make intrusive surveillance technology acceptable, which has been going on for at least 15+ years afaik.


>If you're not sharing this position, it could be that you are younger, or that you have been subjected to the conditioning of the population by industries to make intrusive surveillance technology acceptable, which has been going on for at least 15+ years afaik.

"if you don't agree with me you're either a kid or brainwashed" - nice.

From what I can tell this is anonymous analysis and classification - this kind of info is useful and I don't mind one bit that it's being collected - in fact if the data is accurate then I like it - I can provide feedback without effort. I prefer it much more than being spammed by pollsters or a service tracking and associating behavior with my profile.


"[...] in the case of automated facial recognition it's almost universally despised across the world."

I don't think that's true. Most of us tech-geeks are worried about privacy way more than the average person. I personally don't see this particular use case as too problematic, depending on what is done with the data - as others in the thread have pointed out, you're in a public place, other people could be taking pictures of you or writing down information about you, and I don't think most people are worried about that either.

My view is that as long as the technology exists or can exist, it will be developed and used, so complaining about the people building it is completely fruitless. If you really dislike how it's used, help pass laws against it! Don't go around guilting people for building this stuff.


> you're in a public place, other people could be taking pictures of you or writing down information about you, and I don't think most people are worried about that either.

The difference is scale. It would be prohibitively expensive for every pizza shop to hire someone to collect demographics of passersby. These systems can run on a Raspberry Pi.
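
To make the scale point concrete, here's a rough sketch of the detection half (assuming Python with OpenCV and its bundled Haar cascade; a real product would feed each face crop to an age/gender model, which this omits):

    import cv2

    # OpenCV ships a pre-trained frontal-face Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # any cheap USB webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # A deployed system would run age/gender estimation on each
        # crop here; this sketch just counts faces per frame.
        print("faces in view:", len(faces))

That whole loop runs in real time on Pi-class hardware, which is exactly why the economics are so different from hiring a person.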


The point of the person you are replying to is that there isn't a clear consensus that the kind of facial recognition done by the pizza shop sign is unethical. Your argument is that it must be unethical because there is a clear consensus. Where are you getting your data from?

It's fine to speculate about an individual's reasoning for why they believe what they believe, but it's entirely useless for determining what the majority believes.


Agreed. Who is agreeing to build such dystopian technologies?


I could totally get interested in building this and have a half-decent working prototype, all before I'd even considered that someone else might use it for evil...


> someone else might use it for evil...

Correction: Someone will use it for evil. This kind of tech is hot stuff for people engaged in evil.


The technology interests me and I would gladly work on a system that implemented such features.

As an ethical programmer, I'd be sure to incorporate security and anonymization, and to draw the line appropriately, so that I can help businesses make more money (since that's what they pay me for) and advance technology at the same time.

This is only unethical when it's used in a system that infers more than general demographics (which, BTW, nearly everywhere collects), or that leaves the data vulnerable to interception outside of the pizza company.


How is this any different from Google, Facebook, and the rest of the advertising-driven web companies?


It's happening in a space previously devoid of such automated tracking.


Has nothing to do with morals - people gotta eat


The alternative in our industry is not starvation.


"You can’t buy ethics offsets for the terrible things you do at your day job." https://deardesignstudent.com/ethics-cant-be-a-side-hustle-b...

and from the same guy:

"Don’t ask how you’re going to pay your rent working ethically. Ask why you’re open to behaving unethically in the first place." https://deardesignstudent.com/ethics-and-paying-rent-86e972c...


Has everything to do with ethics and morality. You can eat without being a cog in the mega big machine trying to crush us all.


It saddens me that people are complaining about this when others in our profession are doing far worse things, like extorting businesses for money through ransomware, stealing personal information and bank account details, and building robots that kill people. But no, people are worried that while walking around in a public place a picture is taken of them and an ad is changed to target them.


I wouldn't call writing malware 'our profession'. Maybe you are misinterpreting the meaning of 'Hacker' in 'Hacker news'.


Here is the software the sign is using.

http://www.adflownetworks.com/audience-detection/


Though not at the level depicted in the movie, I am nonetheless reminded of Minority Report.


For those that haven't seen it, here's the scene where all the digital signage knows who John Anderton is: https://youtu.be/7bXJ_obaiYQ

The movie was released in 2002, so fairly prescient.


"Your video will play after this ad" oh the irony


A targeted ad nonetheless. Irony indeed.


I thought of the film as well. I think we should remember that for all the cool cars in the setting, it was meant to be dystopian.


Dystopian? I think it was meant to desensitize people to the point of acceptance.


Besides the movie's premise of psychic surveillance using disabled people and indefinite detention of pre-perpetrators, there were plenty of dystopian elements in that movie, even if the future wasn't specifically a dystopia. The spider bots, the vomit sticks, and the ads:

Spielberg: "The Internet is watching us now. If they want to, they can see what sites you visit. In the future, television will be watching us, and customizing itself to what it knows about us. The thrilling thing is, that will make us feel we're part of the medium. The scary thing is, we'll lose our right to privacy. An ad will appear in the air around us, talking directly to us."

http://www.rogerebert.com/interviews/spielberg-and-cruise-an...


It wasn't that prescient; public-location commercial facial recognition systems had been deployed for several years, and companies were, IIRC, already actively promoting customer tracking and advertising applications at the time.


It's worth noting the 2002 movie Minority Report is based on a short story of the same name published by Philip K. Dick in 1956.

1. https://en.wikipedia.org/wiki/The_Minority_Report


I was attributing this bit to the film...I suspect the digital signage scene wasn't from the short story.


Ah, right, yep good point. I've read the story but my memory of the film overwhelms my memory of the story. I believe you are correct.


Yeah, the short story was very little like the film.


It was rare, bulky, expensive, slow, and highly inaccurate at the time.

http://www.enterstageright.com/archive/articles/0102/0102fac...


You might find this article about Coca-Cola showing personalized ads to shoppers based on smartphone data interesting.

http://www.digitalstrategyconsulting.com/intelligence/2017/0...


I've sifted through 80% of the comments here, and I couldn't find a mention of the unintended consequences of this technology.

Ethical vs. unethical, pro-privacy vs. anti-privacy: those are the two common discussion points. I, however, think the bigger problem here is that there's a very non-zero probability that this technology may cause unintended consequences simply by relying on false/inaccurate data.

For one, I work in analytics (a loaded catch-all occupation) and I work with people who would marry their "data skills" if they could. In my industry, a false-positive rate of 80% is considered acceptable, and openly admitted errors in "machine-learning" logic (quoted to highlight my company's buzzword usage; it's practically non-existent) are made daily. People create algorithms, and people make errors.

Let's let our imagination run wild here for a second: it's 2030, and this technology has become ubiquitous to the point where no one objects. Businesses take all the data on sentiment, gender, age, etc. to optimize for their target demographic, and price accordingly. In other words, let's assume this tech is used for perfect price discrimination. Economic theory dictates this is a win-win for everyone, since everyone starts paying their willingness to pay. But let's assume there's a catastrophe and medicine is in dire need. Price discrimination works fine assuming perfect competition, and is a useful framework, but it breaks down empirically because we live in a society that doesn't behave so rationally. Who survives? Those willing to pay the most, and the algorithm worked flawlessly here. But it was not intended to dictate who survives.

What I'm trying to say is that we should be cognizant of the fact that we don't live in a perfect bubble, and technology like this should be scrutinized exhaustively for its effects, including any unintended consequences. We live in a society (duh), and as a society, it is up to us, with the help of policy makers, to determine the fate of this technology.


IMHO, perfect price discrimination is not usually a good thing. It's more often discussed in the context of monopolies. Under simple market models (e.g. no externalities, a downward-sloping demand curve, etc.), a profit-maximising monopolist will set prices above the market-clearing price, resulting in dead-weight loss (market inefficiency).

However, there is one situation where monopolies achieve market-efficiency: when the monopolist can perfectly price-discriminate. This eliminates the dead-weight loss. But, crucially, this also means that all of the gains from trade accrue entirely to the monopolist, as consumers are all paying their own individual 'indifference' prices.

It's a value judgement, but I don't see this as a socially optimal outcome even if it is the market efficient one.
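
To make the dead-weight-loss point concrete, here's a toy calculation (the numbers are mine, purely illustrative: linear demand P = 10 - Q, constant marginal cost MC = 2):

    choke, mc = 10, 2

    # Competitive benchmark: price = MC, so Q = 8 and all surplus
    # (0.5 * 8 * 8 = 32) goes to consumers.
    q_comp = choke - mc
    total_surplus = 0.5 * q_comp * (choke - mc)   # 32

    # Single-price monopolist: MR = 10 - 2Q = MC  ->  Q = 4, P = 6.
    q_m = (choke - mc) / 2
    p_m = choke - q_m
    producer = (p_m - mc) * q_m                   # 16
    consumer = 0.5 * q_m * (choke - p_m)          # 8
    dwl = total_surplus - producer - consumer     # 8, lost outright

    # Perfectly discriminating monopolist: sells out to Q = 8, charging
    # each buyer their reservation price. DWL = 0, but consumer surplus
    # is 0 too; all 32 of surplus accrues to the seller.

So perfect discrimination is "efficient" in the textbook sense while leaving consumers with exactly nothing.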


Yes, I agree that it's not the optimal outcome. (I should have not used "win-win" and instead used "win-win for some consumers.") I was trying to emphasize the individual consumer only having to pay what they were willing to pay- not making a judgement call on what is optimal.

Furthermore, the scenario you highlight is first-degree price discrimination, where the monopolist captures all the "surplus." Economists generally claim this outcome to be "unrealistic" but it helps us understand the more traditional outcome: https://courses.byui.edu/econ_150/econ_150_old_site/images/8...

As you can see, there is still consumer surplus from a monopoly price discriminating, but at the cost of a deadweight loss.


Fair enough, I agree that we could be here all day if we opened the value-judgement can of worms (though I do wish we did this when discussing public economic policy).

"Economists generally claim this outcome to be "unrealistic" but it helps us understand the more traditional outcome"

I agree with this. Just to add, pretty much every simplified market model you would find in an undergraduate-level textbook won't correspond to any market in reality. As you suggest, they're just very simplified models designed to 'kinda point you in the right direction', rather than be taken as a description of reality. The most dangerous people tend to be those who took micro 101 but were never told the latter :)


"Hello Mr. Yakamoto and welcome back to the GAP!" - Minority Report, after protagonist (Tom Cruise, clearly not Asian) buys black-market replacement eyeballs to avoid retina-based security


The future is beginning to look a lot like the worst parts of Ghost in the Shell, Minority Report and Brave New World combined.

But I would love to come back to this post in 2050 and be proven completely wrong!


Okay, time to start wearing ski masks and Santa Claus costumes in public at all times.


A defense using makeup and hair: https://ahprojects.com/projects/cv-dazzle/


Hah, looking through all those looks reminds me of the captcha problem though: computers won't be able to detect your face, but neither will people.


Eh, what's so great about other people anyway.


I have a feeling if this becomes widespread this will become part of training sets.


"It's that guy in the ski mask and santa claus outfit again" sadly makes you MORE identifiable (unless you can talk a ton of other people into doing the same).


Bring out the ugly t-shirts!

(See http://jesse-pearson.com/interviews/william-gibson/ for some expansion on the reference: "(..) What's interesting with regards to the book while thinking of this stuff is that the garment that sort of helps to save the day is a t-shirt. You call it the ugliest t-shirt in the world.")


If you're talking about shirts with faces on them, those won't work whatsoever. Even if there is evidence that it appears to work right now, it's an entirely solvable problem today, with very little effort required.

a) You're assuming that the AI is looking to find the first face it sees, rather than all faces in view - both your shirt and your real face will be picked up as two separate individuals. Even if it's "one face at a time", why would you assume the shirt gets picked up instead of your face?

b) It really would not be difficult to teach a neural net to detect one real face located above a face on a shirt, and ignore the lower one. The only potential false positives would happen with a taller individual walking with a shorter person in front of them, whereby the shorter person might be filtered out as a shirt.

So no, shirts with faces on them are not a countermeasure. You're adding additional noise, but you're not eliminating your own face from being picked up as well.
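
For what it's worth, the filter in (b) doesn't even need a neural net; a crude geometric pass over the detector's bounding boxes gets you most of the way. A sketch, with boxes as OpenCV-style (x, y, w, h) tuples and a distance threshold I made up:

    def filter_shirt_faces(boxes):
        """Drop any face that has another face stacked directly above it,
        on the theory that the lower one is printed on a shirt."""
        keep = []
        for (x, y, w, h) in boxes:
            shirt_like = any(
                ox < x + w and x < ox + ow       # horizontal overlap
                and oy + oh <= y                 # other box sits above
                and y - (oy + oh) < 2 * h        # ...and not far above
                for (ox, oy, ow, oh) in boxes
                if (ox, oy, ow, oh) != (x, y, w, h))
            if not shirt_like:
                keep.append((x, y, w, h))
        return keep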


No. The idea is that in a world filled with automated camera systems, anyone needing to conduct covert ops would want a back door built in, such that whenever one tries to retrieve footage containing an "ugly shirt" (a special, machine-readable pattern), that footage would be deleted.

That way a camera blackout/missing footage wouldn't signal a covert action, while at the same time one could go on defending democracy(tm) without worrying about creating a media storm over the tyrannical methods used to ensure Freedom(tm).


John Cleese took part in a show where he tested different ways to fool facial recognition and large sunglasses were enough most of the time.

Also IR LEDs: http://boingboing.net/2008/02/20/infrared-leds-make-y.html


There should be one of these in front of the ad to give passers-by more privacy:

http://hackaday.com/2010/10/15/window-curtain-moves-to-scree...


I'm a little surprised about the HN reaction on this one. You guys didn't seem to care about collection of passive biometrics a year ago: https://news.ycombinator.com/item?id=11172652 . What's changed?


That's the one benefit of Trump getting elected: half of the US cares about civil liberties and privacy again.


I suspect you're right; I've noticed we also like the 'news' media again. I guess it's only a problem when it's not "your side" invading people's privacy, spreading false propaganda, destroying liberal democracy etc. etc.


This is in itself not scary compared to what a random website does when I visit. That is, given that what we see in this log is actually all it does. What's scary is what we don't see (does it store this? does it cross-reference anything? does it target ads based on it?).

I don't really think that's the case (here, yet), but I do think it's scary that it's so easy to do that it's not just done as a proof of concept but actually used in production in a low-tech industry.

Gathering demographics or sentiment without storing them, cross-referencing (has this person been here before, etc.), or otherwise using the data for anything such as targeting ads is kind of acceptable. I mean, it wouldn't be hard to do that manually via a camera if you wanted to test the engagement of an ad. I'm sort of hoping this is just some tech project from a university or something, and not an actual product you can buy and hook into some adtech service.

Edit: as someone else pointed out - it's not a proof of concept it's an adtech off the shelf product. Because of course :( http://www.adflownetworks.com/audience-detection/


And just today I got an ad (in the paper mail) from an electronics distributor notifying me of new parts they stock. Among them was an embedded face and expression recognition engine that would emit pretty much this data, in a convenient text output you can read into any little microcontroller and act on (omron B5T-007001-010 if anyone is interested). This is no longer exciting cutting edge technology, it's off the shelf. And terrifying.


If you visit my website and fail to complete a purchase, you may find a physical postcard in the mail from us in a few days with a surreptitious coupon code. No, you did not actually tell us your mailing address at any point; knowing your e-mail is sufficient.


That 100% guarantees I'll never complete a purchase with you, and I'll warn others to avoid you too.


But for every you, there's ten people who complete their purchase using the code they received, and my boss makes more money and I keep my job.

Until and unless those ratios reverse, it's going to be that way and you'll have fewer and fewer places to shop. (I'd happily make the case to my boss that he makes more money without retargeting tactics like this if such were the truth.)


I'm okay with that. I find your practices disgusting but I'm not in a position to be telling your customers what they should be choosing. So all I can do is vote against it with my money and reward shops that don't do evil. There's very few things that I need badly enough to put up with that crap.


You classify sending un-asked-for mail as evil? (I admit the company that provides this service is not doing God's work by combining/selling all their customers' databases to make this possible, and so by enabling them neither are we. I'm not seeing how it's exactly evil though.)


For consideration:

How to ZAP a Camera: Using Lasers to Temporarily Neutralize Camera Sensors

http://www.naimark.net/projects/zap/howto.html


For an even more disturbing idea, Tiantan Park in Beijing has machines in its toilets which scan people's faces before dispensing toilet paper.

https://nakedsecurity.sophos.com/2017/03/21/park-uses-facial...


Oh look, Ghostery's product pitch comes in handy now https://www.youtube.com/watch?v=EKzyifAvC_U


Ghostery was this tool aimed at privacy-minded people that collected their data and provided it to advertisers to help defeat the measures taken by privacy-minded people. No way will I ever trust this.


With this much personal care to really know their customers by face I'm sure they put just as much personal care into the quality and craft of the product /s


Despicable. Any authors of this work should be publicly shamed and punished. And don't get me started on what should befall the owners of capital that drove this.


A Norwegian article on the subject: http://www.dinside.no/okonomi/reklameskilt-ser-hvem-du-er/67...

Google translate is readable, if not super-mega-accurate: https://translate.google.com/translate?sl=auto&tl=en&js=y&pr...


I saw a pitch for this tech 5 years ago. I'm not sure of the company's name. The idea is they can measure engagement (how long you looked), approximate age, and sex.

Five years ago it didn't seem so sinister. A lot has happened since then, I guess.


I guess public perception of the technology is based on how "creepy" (read: accurate) it is.


So they show erratic data on screen, preferably flattering (e.g. younger, happier) or just obviously wrong, and record the real data on the back-end. Instant public acceptance, without even writing big numbers on a red bus.


It was sinister 20 years ago (probably even before that, but I'm not aware of it), and it was enough of a problem for the industry that they looked for ways to condition the population to accept the technology over a few generations.


I don't see why it is sinister today; so they show healthy food to women and meat to men. Seems like a win-win situation.

I think this is what they did, anyway.


Why would it be sinister to have an automated eye able to instantly sort humans into dubious categories?

Maybe in a different political context where one category is sent to death camps, another to forced labor camps, etc.


I think this becomes sinister when it moves to being free for the business, with the service provider recording and collecting the data to sell to third parties.


Funny that this has been posted by ambient/experimental music producer Lee Gamble https://soundcloud.com/leegamble


This has been happening for at least a decade. I had to do some updates on a system in 2008 that had this same functionality built in, and they were far from the first company to do it.


I don't see any evidence that this is a "facial recognition system."

It's likely hard to legislate against software that attempts to detect if there is a person, what their expression is, and guesses at their gender.

You could imagine that job being done by a person (just noting how many people stopped at the advertisement, and what their expression was). I don't think there's really a way to make that illegal.

I suppose I think it's something that people should be aware of, though.


I work in a big retail company here in Brazil.

If you enter most of our stores with a phone in your pocket, you're being tracked. They track where you went, in front of which shelves you stopped and for how long, whether you went to the cashiers or just left...

And if we track people here in the third world, you can be sure you are being tracked far more in first-world stores.
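
For context, the usual mechanism is that phones constantly broadcast WiFi probe requests, each carrying the device's MAC address. A minimal sketch of the listening side (assuming Python with scapy and a wireless card in monitor mode; the interface name is an assumption, and note that modern phones randomize MACs to blunt exactly this):

    from scapy.all import sniff, Dot11ProbeReq

    seen = set()

    def handle(pkt):
        # Every probe request leaks the sender's MAC address (addr2).
        if pkt.haslayer(Dot11ProbeReq) and pkt.addr2 not in seen:
            seen.add(pkt.addr2)
            print("device in range:", pkt.addr2)

    sniff(iface="wlan0mon", prn=handle, store=False)

Put a few of these around a store and you can estimate position and dwell time per device.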


Original Reddit post with background:

https://www.reddit.com/r/norge/comments/67jox4/denne_kr%C3%A...

"...peppes pizza in Oslo S."


There are many solutions that do this, both proprietary and open source. Accuracy is influenced by lots of factors: some to do with the setup (camera angle, lighting), some with the hardware (camera quality, computation speed), and some with the subjects (race, facial hair, glasses). We used this in research projects involving elderly mood assessment and television viewers' empathic responses. The package we used was marketed by Noldus ( http://noldus.com ) and developed by Vicarvision ( http://vicarvision.nl ), but most of these packages perform at about the same level.


This is nothing new; ad companies are actively marketing these features. See e.g. http://livedooh.com

Quote from their website: "Audience Measurement included

The information and statistics needed in order to realize audience targeting in DOOH is gathered through livedooh’s integrated anonymous video analysis, which collects information about gender, age and length of view. Audience metrics are used by the ad server’s decision engine to optimize advertisement delivery and increase performance."
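
Reading between the lines, a "decision engine" like the one quoted can be as little as a lookup keyed on the detected attributes. A toy sketch (the age buckets and creative filenames are invented for illustration):

    def pick_ad(gender, age, length_of_view):
        # length_of_view in seconds, as measured by the camera loop.
        if length_of_view < 1.0:
            return "attention_grabber.mp4"   # nobody has engaged yet
        if age < 30:
            return "student_deal.mp4"
        return "salad_promo.mp4" if gender == "female" else "meat_feast.mp4"

The measurement side feeds gender/age/length-of-view in, and the ad rotation adjusts on the next refresh.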


It just makes too much sense to show ads based on demographics. They now have robots in malls too. They are just recording everything, processing, logging, extracting, selling, and upselling. There is no privacy. The problem is that not only does it make economic sense just to have these robots; the added intelligence from the data mining makes it even more attractive.


This is illegal in both Sweden and Norway though.


The Island Airport (Billy Bishop) in Toronto is littered with adverts that are camera-connected. Other than maybe power management, looking at what is possible in OpenCV gives a good indication of what can be done, and probably is being done: from tracking where you look to matching faces...

The problem is that this is a facility owned by an agency of the government.


Imagine being able to automatically track customer loyalty and offer discounts. Here is an integration example using Arduino as well: https://www.hackster.io/dasdata/dascognitiveservices-c2d991


India is forcing this Orwellian nightmare upon all its citizens (under very shady circumstances).

http://www.rediff.com/news/column/the-aaadhar-effect-say-bye...


God the comments on that tweet are enough to make me lose all faith in humanity, honestly what is wrong with people.


I wonder what would happen if someone wore a shirt with this pattern while walking in front of it:

https://thenextweb.com/tech/2017/01/04/anti-facial-recogniti...


How does it detect which gender you identify as? Mind reading?


I wonder how accurate these measurements are in practice. They could just be placeholder implementations, right?


At Vitensenteret (science center) in Trondheim they have a web cam hooked up to a TV screen where they show a live view of some face scanner software (multiple faces simultaneously). It estimates your gender, age, and mood (happy, sad, angry, surprised).

Every time I've visited it's been quite accurate on me, my friends, and on the other visitors.

EDIT: here is a pic of what I'm describing: http://fredly.fhs.no/dancemixbloggen/wp-content/uploads/site...


20 years ago these systems were adequately accurate in university lab experiments, 10 years ago they were very accurate, and nowadays it is scary how accurate and fast they are.


They could, but chances are someone is trying, and may succeed soon, in actually doing this.


male young adult

I think they zeroed in on their demographic, good job!


Males are offered meat ads, while females are offered salad ads. Kinda sad in a way.


What kind of psychopath buys salad at a pizza joint?


Me? And I'm not even female...


A Caesar salad goes great with a slice of pizza.


Is this illegal? They have the right to process their own footage, I would think.


This has been a feature of these digital sign products for a few years. Generally they aren't interested in specific faces, just whether faces are seen looking at the sign and for how long. It's all just simple OpenCV stuff.
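
Indeed, a dwell-time counter is a few lines of OpenCV. A sketch, assuming the bundled frontal-face Haar cascade and a webcam at index 0 (no identity involved, just "a face was in view for N seconds"):

    import cv2, time

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)

    started = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) and started is None:
            started = time.time()                  # someone looked
        elif not len(faces) and started is not None:
            print("view lasted %.1fs" % (time.time() - started))
            started = None                         # they walked away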


Well, that's disturbing. I bet we'll see much more of this in the coming days.


For sure. And when you consider that research keeps improving on guessing someone's race, age, gender, and emotion from facial video/stills, as well as on identifying other quasi-unique features such as gestures, gait, and facial expressions, well... it seems like it'd be hard to evade advanced ad or government tracking.


It really annoys me that these are deployed in public spaces where you essentially can't opt-out or fight the tracking like you can online (ad blockers, script execution control etc).


Might be time to pull out the regulations (a.k.a protections). You could destroy the market for this sort of thing if executives went to jail or were fined out of business.


As much as I can't stand the guy, I think David Brin is right here. The government does this to you anyway; the solution might not be regulation (which they won't care about anyway - see the NSA) but reciprocation. Let's watch everyone (in public), and make that data accessible to everyone.


IIRC this was something Cory Doctorow addressed in a talk or book: right now we can do something about it in the digital online realm, but as soon as this hits public space we're fucked, and it had already started at the time.


can you imagine a world where automatic identification/classification, tracking, and targeting would decrease?

seems like you would either need a collapse of the economy to such a degree that cameras and computers aren't affordable...or some kind of extremely aggressive regulation.

i can't see how the latter would ever come about or be effective.


> i can't see how the latter would ever come about or be effective.

Try picturing a future where access to clean drinking water is a privilege that only some have, with no cheap energy available and an unstable climate. This is the future we built for ourselves, and admittedly the internet and computers are useless when you don't have electricity.


How is this disturbing? It's just like a public webcam that gets used to identify people. If you are in public you have no privacy. If you want privacy, go home; as soon as you are out of your house you lose your privacy. I am a big voter for privacy in your home and in private, but out in public you are not private anymore, and therefore privacy falls away. Privacy and private kinda go hand in hand. You can't have privacy in a public space; it is impossible.


Assuming there is no privacy in public space, please tell us all the name, age, address, SSNs, CC numbers, health records, gender, buying history, sexual orientation, facial features of everyone you walked by in public space over the course of one day last week.

Now that you have failed to do that, try doing it for one day 5 years ago, then a whole week, a whole month, every day since you were born.

A public surveillance apparatus such as the one featured here records everyone, every day, not forgetting a single thing or person.

Try thinking a bit and free yourself from the backfire effect before making claims that make you look bad.


Utter crap.

Just because I am not in my house does not mean I consent to being tracked everywhere I go. We make the rules in our society, and with enough political will we can restrict this stuff.


This is not used for tracking.


...Yet?


"If you are in public you have no privacy."

Norwegian law disagrees with you.


Is this unethical if everything is done locally and no data is stored or resold?


"Is this unethical if everything is done locally and no data is stored or resold?"

If it is hidden (no clear warning) and there's no way to opt-out, then yes, it is unethical imho.


Nothing unethical; just the fact that it is hidden may cause some outrage.


Then what would be the purpose of collecting that data in the first place?


Yes. It is unethical regardless of how it is done.


This is one case where a tinfoil hat might actually be effective.


Is this new? Things like Affectiva have been out for a long time.


I was at a retail technology conference around 1998 and I remember IBM presenting their work in the area; point-of-sale cameras were the topic.

Not as advanced as this, obviously, but industry has been thinking about ways to do automated visual analysis like this for a long time.


Can confirm, this was already a thing 20 years ago.


I think the worst thing is that the system assumed their gender.


From that thread I learned all you have to do to get away with evil is get the technologists to argue over "logs" vs "code".


LPT(1984): Keep a roll of small round dot stickers handy. Black dots are harder to notice on the lens.


Makes me wonder if this is something Snapchat does whenever you look at some of their featured stories.


Hrm... so as it turns out, burkas really are the new expression of freedom. Who would have guessed?


I think that now, my favorite subversive street action will be to put mirrors in front of ads.


Cognitive Services-enabled applications: why so complicated? It's like spotting a new hippo in the wild: https://azure.microsoft.com/en-us/services/cognitive-service...


Has no one watched "Person of Interest"!? Just watch the show intro, that's all you need to know.

Ubiquitous surveillance is only going to get more, uh, ubiquitous. It's the end of privacy but also the end of crime...


Why are people so surprised by this? Imagine you're a company building digital signage/advertising products. Wouldn't this be one of the first ideas that pops into your head? The technology is out there for free...


Just so I'm clear: this is someone in a pizza shop looking at a Windows kiosk with a camera?

It would be interesting to see the ad and how/if it changes based on who is watching.


Not many smiles :(


Why doesn't it surprise me?


Windows.. TeamViewer.. using the primary screen to display the ad.. and the camera is not even hidden..

Amateurs.

I wouldn't be alarmed by this; they probably don't even know the accuracy of the algorithm they are using or how to interpret the collected data correctly.


Who says the same people set it up and receive the data?


The subsequent Twitter thread featuring @justkelly_ok et al. is probably all the worst things about Twitter bundled up in one. It's a pure cringefest.


Agreed... I had a response written, realized life is too short, and just closed the window before posting.


It's absurd that people are outraged about something like this, which is relatively harmless, while at the same time using Facebook. The social network has your face, all your life, moods, expressions, interests, personal conversations, etc. Now THAT is worrying, not some pizza shop which gathers stats to learn what type of customer is their frequent visitor.


> It's absurd that people are outraged about something like this, which is relatively harmless, while at the same time using Facebook.

What is absurd is there is someone like you in every privacy-related thread claiming that everyone who is outraged is also a Facebook user, or somehow is fine with what Facebook does.


Though I disagree about this being harmless, I get your point about Facebook, which is way more intrusive and has its own controversial facial recognition running in the background on all pictures.

But here's the thing: you can choose not to use Facebook, or deploy some kind of mitigating strategy for online surveillance [1]. On the other hand, uBlock Origin is useless at protecting you from meatspace surveillance and tracking.

Understand that surveillance of the physical, public-space world is different from its online counterpart: it is way harder to detect and counter, and as such it ought to draw bigger outrage before it becomes generalized.

[1]: https://www.youtube.com/watch?v=b1Vc6oJ2qOM


You're in control with Facebook. It's you who gives them the data.

If this is true, it's all stuff that they're taking by stealth.


I'm pretty sure my 300 friends have given fb more data about me than I have. [ed: and that doesn't even include the tracking from "like" buttons etc.]


Sure, but you can choose not to be on Facebook.


Which is not enough to not be on Facebook, or even to prevent Facebook from profiling you.

I've chosen not to be on Facebook, and I've been shown an account in my name made by someone else, pictures where I'm tagged, public posts and comments mentioning me, and private messages mentioning me.

This is the tip of the iceberg, I have not been shown the facial features facebook has associated to me, the "social graph" they have linked to me and countless other internal facebook stuff the general public is supposed to not know about.


How does that help? FB will use facial recognition to track you in pictures, build up a friend graph, and, if they can, associate that profile with tracking data from "like" buttons etc.

You can choose not to use the Internet, or to only use hardened devices over Tor, but it's not exactly the same as "you can choose not to drink Coca-Cola" (incidentally, it might be difficult in places to completely avoid products by the Coca-Cola Company, as opposed to just Coke, the Coca-Cola soft drink).

Added to this, the network effect makes fb hard to avoid: people use fb/Messenger for a lot of communication, volunteer groups, political groups, education... You are free to argue that this is ill-advised (and I agree). But wishful thinking aside, "choosing not to use Facebook" may mean missing social, educational, and work opportunities.


You can't choose to not have a shadow profile tho...


I wonder where you get this idea, pretty much everything points to facebook being in control. Even when you have never registered with them you are profiled and tracked.



