How to hide from the AI surveillance state with a color printout (technologyreview.com)
257 points by hsnewman on April 25, 2019 | 134 comments



Figure 7 in the paper [0] shows something really important. The learned adversarial patch is not general at all: it has to be held in a specific place, at a specific orientation, to function. This means it won't generalize to actual surveillance, where you'll be viewed from many different angles, or to being printed on a shirt or jacket you could just wear.

[0] https://arxiv.org/pdf/1904.08653.pdf
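
For context on how the patch is produced: the paper optimizes the patch pixels directly, by gradient descent, to minimize YOLOv2's objectness score for the person it's pasted on. Below is a minimal PyTorch sketch of that loop; the fixed-position paste and the differentiable `model` (images in, per-box person scores out) are simplifying assumptions, since the authors' actual pipeline warps the patch onto each detected torso and adds printability constraints.

    import torch

    def paste_patch(imgs, patch, top=60, left=60):
        # Crude stand-in: paste at one fixed spot. The paper instead
        # warps the patch onto each detected person's torso.
        out = imgs.clone()
        _, ph, pw = patch.shape
        out[:, :, top:top + ph, left:left + pw] = patch
        return out

    def learn_patch(model, loader, steps=1000, size=300, lr=0.03):
        # `model` is assumed differentiable: batch of images in,
        # (batch, num_boxes) "person" objectness scores out.
        patch = torch.rand(3, size, size, requires_grad=True)
        opt = torch.optim.Adam([patch], lr=lr)
        for step, imgs in enumerate(loader):
            if step >= steps:
                break
            scores = model(paste_patch(imgs, patch.clamp(0, 1)))
            loss = scores.max(dim=1).values.mean()  # suppress best box
            opt.zero_grad(); loss.backward(); opt.step()
        return patch.detach().clamp(0, 1)

Since the loss only ever sees the patch at one position and scale, the brittleness in Figure 7 is exactly what you'd expect.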


From what I understand, all of these methods to fool image recognition systems have to be tailored to one very specific recognition system, which makes them completely useless for any kind of practical use.

To really avoid image recognition, you need to create patterns that disrupt the actual patterns that these systems look for. For facial recognition, make your eyes and jawline hard to find, like Dazzle makeup does. To defeat full-body recognition systems, wear clothes with heavy patterns that make it hard to distinguish where your arms, legs and head are.

Even then, it's unlikely to work from all angles and against all backgrounds. You'd need something that adapts to the background you're likely to be seen against.


I think it's worth noting that traditional camouflage aims to break up the transition between the object and the background, so that the object's outline cannot be properly seen and the object cannot be picked out from the background.

Dazzle works not to prevent detection of the shape/outline but to prevent any further information from being discerned (e.g. heading, precise shape and distance).

Depending on the kind of system you're trying to fool the former approach may be far easier.

I once saw a woman wearing black and white patterned leggings that perfectly replicated the intended effect of dazzle. With clothing that isn't skin-tight, and therefore doesn't perfectly match the shape of one's body, I think it would be even easier to pull off.


I'm reminded a bit of O'ahu tree snails: https://en.wikipedia.org/wiki/Oʻahu_tree_snail

They have brightly patterned shells, and each one is different. One hypothesis is that this deprives the birds that eat them of a consistent pattern to look for.


Disco suit completely covered in mirrors. I can dig it!

More seriously: I got the book How to Survive a Robot Uprising many years ago, and it has all sorts of tips like this one for defeating various parts of a robot's sensory and locomotion systems.



This sounds very similar to "traditional", i.e., military, camouflage. Does camo gear help with making it hard for AI to see you, like it does with humans?


Seems pretty reasonable.

Basic image recognition looks for patterns, so if you have colors that break up your lines you'd get the same effect. You'd probably want to go with greys and blues for city colors. Head-to-toe camouflage in the city puts you on the radar in other ways.

But the other trick about camouflage is to look like everyone else.

Hence why all soldiers dress the same out in the field, and rank insignia is displayed in very muted colors on uniforms. This has the effect of hiding the officers in with the herd. Just don't be the guy holding the radio :P


If you really wanted to go somewhere and be untrackable you could just wear religious clothing that covers your face and body. That way you blend in with the whole group without looking like a crazy person with facepaint.


> Does camo gear help with making it hard for AI to see you, like it does with humans?

Camo is all relative to the background. A ghillie suit tailored to the local vegetation will fool virtually every visible-spectrum attempt at detecting the wearer (you need to look for heat or scent instead). It won't work very well at the mall, though.


Military camouflage like MultiCam is optimized for the outdoors (forest, desert, etc.). It's unlikely to work well in an urban environment, where most facial recognition takes place.


And they will also use gait detection and cross reference with other data. It's a lost cause.


It's commonly claimed that putting a pebble in your shoe is a good way to disrupt gait detection. (Presumably you shouldn't always use the same pebble in the same shoe!)


Walk without rhythm, it won't attract the...


Frank Herbert was preparing us for the Butlerian Jihad


We are among you.


Put on a burqa and ride a hoverboard.


Yeah, it seems Islamic tradition solved identity recognition ages ago. It certainly makes business negotiations difficult; the burqa wearer has the advantage of emitting no body language. An advantage if they are the one being pitched to, that is.


To defeat gait detection, I suspect the only options are really long, wide skirts, or a pebble in your shoe.


or a wheelchair, but you're just trading identifiers.


Elevator shoes maybe?


JNCOs?


> To defeat full-body recognition systems, wear clothes with heavy patterns that make it hard to distinguish where your arms, legs and head are.

Isn't this basically how camo defeats the human recognition system in our wetware?


Would it not be easier at some point to “simply” don a disguise à la Mission: Impossible? The one issue is ensuring your emergence is not detected (i.e., that this disguise came out of this building/house).


I’ve heard putting a grape in your mouth/cheek (inside) will throw off facial key-point detectors.


What about a slower change, like gaining or losing weight; does that frustrate such systems?


A Scanner Darkly suit?


capes and cowls are a classic look


Just use burqa technology.


CV Dazzle [0] is/was a style of makeup that defeated facial recognition, and it seemed to work from multiple angles. Unfortunately it was released something like 5 years ago, so I'm certain it won't withstand today's state-of-the-art techniques. It looks like it was never really tested against deep learning-based methods.

That said, I think somebody could quite easily build an adversarial makeup camouflage recommender. Imagine an interactive visualization of a facial recognition system's regions of confidence, hooked up to your webcam: a visual overlay showing where to apply makeup to most damage the classifier's confidence in your identity.

[0] https://cvdazzle.com/
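
A crude version of that overlay is an occlusion sensitivity map: slide a grey square across the webcam frame and record how far the identity confidence drops at each position. A rough sketch, where `identity_score` is a stand-in for whatever face recognition model you'd want to probe (none is named here):

    import numpy as np

    def occlusion_map(img, identity_score, patch=32, stride=16):
        # identity_score(img) -> float is assumed: the model's
        # confidence that the image is you.
        h, w = img.shape[:2]
        base = identity_score(img)
        rows = (h - patch) // stride + 1
        cols = (w - patch) // stride + 1
        heat = np.zeros((rows, cols))
        for i in range(rows):
            for j in range(cols):
                occluded = img.copy()
                y, x = i * stride, j * stride
                occluded[y:y + patch, x:x + patch] = 127  # grey square
                heat[i, j] = base - identity_score(occluded)
        return heat  # high values = biggest confidence drop

Rendered over the live frame, the hottest regions (probably the eyes and nose bridge, going by the CV Dazzle tips) are where makeup would buy you the most.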


The thing is, when you walk around on the street looking like the people on their website, you don't actually need surveillance cameras, because every single person will remember seeing you.


It's meant to make you hard to recognise by machines. People are fine.

At the very least, this could justify weird fashions in cyberpunk and other dystopian settings.


Looking through the pictures I was thinking: there are parts of the world where people just dress that way.


I feel like if facial recognition systems are that good, then if I can recognize a face, so can they.


I don't think that's true. I'm no expert on facial recognition, but I believe those systems look at more specific details, like the corners of your eyes and mouth, and where they are relative to your jawline. They're less interested in things like "could this be part of a cheek? does that hair cover a head? does the picture as a whole make sense?". They're very precise (more than we are) at measuring the exact locations of specific features, but less good at checking if the picture as a whole could be hiding a person.

So they're better than us at some things, but can still fail dramatically at things that seem trivial to us.


It's certainly not a way to stay stealthy, yeah. And frankly it's probably worse at defeating face recognition than wearing a bulky hoody and staring at your shoes while you walk. I don't think this will be identity-concealing except maybe at some cybergoth raves...

It's an awfully cool project, though; both artistic and a transformation-resilient adversarial input.


The intersection of high fashion, cutting-edge technology, political commentary, and the eternal war for human rights is an incredible thing.

Does anyone know of any other such projects in the same vein?


As Randall Munroe illustrates here: https://xkcd.com/1105/

I guess the solution could eventually be digital paint which rapidly and randomly alters itself, something akin to the scramble suits from A Scanner Darkly. https://www.dailymotion.com/video/xqrvzb


That CV Dazzle project is fascinating. The link to the warship Dazzle camouflage is also hilarious in the sense that that is what people have to turn to in 2019.

It reminds me of the comic book The Private Eye by Brian K. Vaughan. In that story the cloud "bursts" and online privacy evaporates overnight, so everyone turns to intense camouflage to protect their identity. (https://en.wikipedia.org/wiki/The_Private_Eye)


Does face detection today use the same kind of trained neural nets that YOLO and others use? Watching this video [0] from their site, it seems to use a much more static algorithm that tries to detect particular features, which is what the CV Dazzle makeup aims to disrupt. As far as I understand, one of the benefits of the neural net models is that they're less reliant on predefined features than the older CV models.

[0] https://vimeo.com/12774628
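
For what it's worth, the static algorithm in that demo is the classic Viola-Jones / Haar-cascade detector that ships with OpenCV: it matches hand-designed light/dark contrast features (eye sockets darker than cheekbones, a bright nose bridge), which is exactly what the makeup disrupts. You can test a look against it in a few lines (the filename is just a placeholder):

    import cv2

    # Stock Haar cascade bundled with opencv-python (Viola-Jones, the
    # pre-deep-learning detector CV Dazzle was designed against).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("test_face.jpg")  # any photo, with or without makeup
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(len(faces), "face(s) detected")  # 0 means the look worked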


On the website it talks about OpenCV and how to fool it, as well as the general techniques used. I still think, from the tips shown, that it would be very effective today. For example, it talks about how the most distinctive features of a face are the nose and eyes, and that hiding or obscuring them works well.

Newer technology likely gets much better at spotting noses and eyes, but taking them entirely out of the picture probably helps. The asymmetry, I can imagine, also helps a lot.


This style retcons all those cyberpunk fashion accessories beautifully.


CV Dazzle is perfect for the cyberpunk future that will never happen.


Well, I guess that would be the next step.

Another problem is that the minute AI-proof clothing goes mass market is the minute it stops working (and for that reason, I suppose you wouldn't want to have a product image, either).


The minute clothing that is really effective against AI video surveillance becomes available, once such surveillance is widespread, is the minute it becomes a criminal offence to sell or wear that clothing...


Here's a better question:

Will the book, "How To Make AI-proof Clothing" stop working the minute it is published, or will it teach generalizable skills.

It would seem unfair to pit a static object (prefab clothes) against a dynamic opponent (person detection AI). Better to compare two intelligent opponents.


> AI-proof clothing goes mass market is the minute it stops working

What prevents GANs from learning a new patch for them?


That's the point.


I'm not 100% sure a static image on cloth will ever work well as that kind of general countermeasure. So far, all the adversarial stuff except the dazzle camouflage face paint requires a pretty precise position or view relative to the object being labeled.


It is already reasonably robust, though: https://www.youtube.com/watch?v=MIbFvK2S9g8 It's not magic, but I can imagine the method being extended to be more robust to rotation/shearing (by improving the metric being optimized for).
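
The usual way to get that robustness is "expectation over transformation": apply a random rotation, scale and brightness change to the patch at every optimization step, so only patterns that survive the whole distribution of views score well. A sketch of such an augmentation step (torchvision; a known extension of the idea, not something from this paper):

    import random
    import torchvision.transforms.functional as TF

    def random_view(patch):
        # Random rotation, scale and brightness each step, so the
        # gradient favors transform-robust patterns.
        patch = TF.rotate(patch, random.uniform(-20, 20))
        scale = random.uniform(0.8, 1.2)
        size = [int(s * scale) for s in patch.shape[-2:]]
        patch = TF.resize(patch, size, antialias=True)
        return (patch * random.uniform(0.9, 1.1)).clamp(0, 1)

Each training step would then paste random_view(patch) instead of the raw patch.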


I wouldn't call having to keep it mostly level and centered at hip height robust in the slightest. Check out around 0:35 in that video, where he starts moving it around: once he rotates it just a little out of orientation (facing directly towards the camera seems very important, and even a slight <20-degree rotation matters), the patch breaks and YOLO figures out it's a person. Or again at around 0:55, where it's moved slightly to the side and breaks almost immediately.


I can't believe he didn't move it higher or lower!!


He did once, but it broke the patch and YOLO correctly labelled his torso, so he put it back down. They say directly in the paper that it's quite sensitive to the position of the patch relative to the bounding box of the detected object.


More to the point, you won't have access to the actual surveillance network's AI, so you won't be able to train your patch adversarially in a reasonable amount of time.


Eh, some adversarial inputs are very robust to perturbations/orientation. See the famous turtle video.


That was pretty impressive, though it's still on a rigid surface, which gives you a fixed relationship between the shape and the pattern.


that's a good point about the rigid surface.


News: man dies after being run over by a Tesla in Autopilot mode. Was carrying a weird printout.

Tesla will not pay anything: “he was deliberately hiding from cameras”.


If you're paranoid enough to wear that kind of clothing, you'd probably be paranoid enough to assume most drivers are distracted morons as well - and act accordingly.

(I still found the comment amusing!)


I didn't find any paranoia in the parent/above comment. I think it is a very realistic possibility.

Many autopilot systems, certainly Tesla's, work by using cameras. And if you do fool the cameras, then the car might indeed kill you. Hell, they have a hard time keeping in the lane already - a couple of people have already died. Imagine what will happen if you fool one - oops.

And if it goes to court, whoever has the more expensive lawyer wins.

Also, just because someone's paranoid doesn't mean they're wrong.


My point was that if you're a certain level of paranoid then you might also be above average aware of your surroundings - so it wouldn't matter if a self driving system went awry, or someone was on their phone.


>If you're paranoid enough to wear that kind of clothing

The interesting thing, for me, is: what if the printout is not made with the specific purpose of avoiding recognition, but is just a weird, colorful, hippie cloth or garment that AI struggles with?


Well, yes, possibly.

However, it may turn out to be a canvas of "art", who knows?


That's my main worry with Tesla not using LIDAR as a backup. Sure, the image recognition will work in almost all cases, but occasionally (hopefully very rarely) there is going to be a pattern that it doesn't recognize (either deliberately crafted by an attacker, or occurring naturally).


hahaha :) you are funny


Interesting; however, this just reflects an overfitted model. Fundamentally, this photo is not that different from a fashion influencer's posts.

We're a fashion engine, and our system fully detected both of the people and all of their apparel: the person in question's shirt, his pants (it sees the "printout" as a low-confidence handbag), as well as his shoes.

Not to say it would be impossible to trick our system; however, this method would not be sufficient given a good object hierarchy. Our system would have to have a triple miss across two methods: it would need to miss his pants, his shirt, and his body with the localizer, as well as his pants and shirt with the segmenter. And if we were serious about detecting hiding people, you'd be surprised how gosh darn reliable the shoe detector is.

I don't see it being terribly feasible (and definitely not reliably so). Let's just say, it's not even close at all at this point. We miss zero of these things today.


Same here, full detection. Our customers are the theoretical targets of this method, and I can verify it is not effective.


Here is a good video showing the same two guys passing the card back and forth:

https://www.youtube.com/watch?v=MIbFvK2S9g8

If a security camera uses YOLO to identify people moving into the frame, then this important new development can help prevent security camera systems from making annoying sounds that would disturb the sleep of security guards.


Isn't it just ludicrous, using the term “AI” for a piece of technology that can't even see straight, let alone “think”?


Humans and some animals are also vulnerable to optical illusions. The unusual thing about this category of "AI optical illusion" is that it's nonlocal - having the patch in the frame defeats recognition of other objects.


Also, while humans experience many optical illusions, we are quite good at using temporal information and knowledge about how objects behave. We often flag our own observations as implausible and try to get a view from a different angle to defeat the illusion. Most AI can't do that, simply because it has no way to get a second perspective on the same situation.



I think the saying goes like this: if it works it's just machine learning, if it fails it's AI.


Machine learning is usually written in Python, AI is written in PowerPoint.


Then what gets written in C++?


Python was written in C. :)


AI is a much-abused term, but there are unresolved, essentially philosophical questions about the brain.

If there is no "special sauce" and our brains are just very large neural networks that have what we see as intelligence is an emergent property would we be truly "thinking" by those standards?

We also suffer from adversarial patterns - they are called camouflage and optical illusions. Different from the ones that affect machines of course.


Most of the things people call AI aren't actually AI. Buzzwords just help with funding, and at the same time the projects are undoubtedly useful, so everyone just goes with it. News sites get more clicks and ad revenue for writing articles on AI, researchers get funding, and the world gets better image classifiers.


Exactly. Just look at IBM Watson. That project started as some machine learning technology until the huge enterprise marketing arm got ahold of it, and then they began repositioning what are essentially shell scripts and regular expressions parsing data into "Watson XXX Vertical AI"


There's another way of looking at it. Given that we accept a spectrum of intelligence in biological forms -- i.e., a spider is not as intelligent as a sheep is not as intelligent as a crow is not as intelligent as a human -- why wouldn't we also have a spectrum for A.I.?


You're right; there should be.


I hate the abuse of the term AI too. It's not AI. AI doesn't exist yet. It's machine learning.


Impossible not to think of "A Scanner Darkly": https://www.youtube.com/watch?v=aqWBCsWRdw4


Came here to say this.


We've already lost this cat and mouse game, because there is no feedback loop.

Walk through a city full of AI-powered facial recognition cameras holding a sign like this. Even if it works super well, you'll never know if it really does, or when an upgrade breaks your sign.


What is faster to patch: a patch you buy from patchemeout.com to wear on your t-shirt, or a patchable person detection system updated with patches in realtime as adversarial patches are detected being worn by patched people?


Printing it on a color printer?

Real-time LCD display?


By no means do the authors suggest that this is an anti-surveillance system. Perhaps the methodology could be utilized, as has been shown in a variety of adversarial attacks on DNNs. But the GAN approach is entirely target DNN specific. Any minimal change to the DNN would require an entirely different adversarial solution. More practical approaches have existed for some time to thwart surveillance (https://hackaday.com/2008/08/27/testing-ir-camera-blocking/)


AI derived printed textiles with randomized color blocks. Use AI against AI. If your clothing looks like it's glitched out, is that the video or is it a new form of personal camouflage?


I want a vehicle wrap with license plate-looking characters.


My god, this little info blurb is getting reprinted and regurgitated everywhere. This article is the 14th one I've seen about this specific project.


You mean, people are using printouts of the article to hide from surveillance cameras? That would be most appropriate.


Where is the original article?



Of course the surveillance state is using YOLOv2 for all their tracking needs ...


You'd be surprised how many surveillance companies that offer object detection use it behind the scenes.


Coming up: AI technology that can identify people who try to hide from AI surveillance state with a color printout


Interesting. But since these adversarial images have to be tuned to the recognition engine in question, what keeps the authors of those engines from updating their engines, models, or algorithms to recognize and disregard such adversarial images?
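
Nothing, really; the cheap counter is adversarial training: paste known (or freshly regenerated) patches onto a fraction of the training images while keeping the original "person" labels, and retrain. A sketch of that augmentation (hypothetical, not any vendor's actual pipeline):

    import random

    def augment_with_patches(img, patches, prob=0.3):
        # img and patches are HWC uint8 NumPy arrays. Occasionally
        # paste a known adversarial patch at a random spot; the labels
        # stay unchanged, so the detector learns to keep reporting
        # "person" despite the patch.
        if random.random() > prob:
            return img
        patch = random.choice(patches)
        ph, pw = patch.shape[:2]
        h, w = img.shape[:2]
        y, x = random.randint(0, h - ph), random.randint(0, w - pw)
        out = img.copy()
        out[y:y + ph, x:x + pw] = patch
        return out

Which makes this a cat-and-mouse game that the side able to retrain fastest wins.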


Is it not a little surprising that the patch just has to be roughly where the person's belly is? You'd think the NN identifies people by finding a face and maybe limbs. Or a head-and-shoulders. But maybe it doesn't.

What would happen to a gorilla wearing the patch?


The neural network isn't looking for a head, limbs, etc. It's just looking for overall patterns.

The software used in the article is YOLO, and I work a lot with it; the results depend A LOT on the data you used for training. Some datasets will detect a gorilla as a human, some won't. Some will even detect two upside-down fingers as a human.

So if you want your NN not to detect gorillas as humans, you have to include pictures containing gorillas where they are not labelled as humans.
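
If you want to see that dataset dependence yourself, Darknet YOLO weights load straight into OpenCV's dnn module; run the same frame through weights trained on differently labelled data and you get different detections. A quick sketch (the cfg/weights/names filenames are whichever release you download):

    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")
    classes = open("coco.names").read().splitlines()

    img = cv2.imread("frame.jpg")
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    for det in net.forward():  # rows: [x, y, w, h, objectness, classes...]
        conf = det[4] * det[5:].max()
        if conf > 0.5:
            print(classes[int(np.argmax(det[5:]))], round(float(conf), 2))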


It makes a certain sort of sense: if a person's torso isn't connected to their legs, it's not very person-shaped, is it?

I'd agree it's surprising; it's just that, thinking about it, you can see where it goes wrong. I imagine combining this with some kind of makeup to change the contours of one's face would work wonders.


> if a person's torso isn't connected to their legs, it's not very person-shaped, is it?

I suppose it really depends on the training set. In some sense the set can be misleading. For instance, if it's full of pictures of people sitting at a table with their legs visible, the model might actually expect a gap between your torso and your legs.


That's true. In fact, in the video the man's lower legs are obscured by the chair, and this poses little problem for the system. However, I would say that if that were all that was required, they would have used a blank square.

Instead, they provide cues that mess with the system's sense of scale, by having almost like a window into another, smaller scene, with vague, human-like images that the network presumably recognises as faces.

A group of tiny faces on the top of a single set of legs very much does not look like a human.


Is anyone starting a clothing brand?


There was a trend in San Francisco (and probably elsewhere) about 10 years ago for clothing with really complicated, brightly colored, small repeating designs, almost like kindergarten childrenswear. You'd see sketchy-looking people in the Tenderloin and on Market Street wearing them. I asked a lawyer about it, and apparently it made identification in court cases harder, because you could argue over whether the colorful hoodie the accused was seen wearing at the crime scene had aircraft or birds on it, whether they were green or blue, etc.


That's why white T-shirts were popular among street dealers. If everyone wears the same common thing then it makes it harder to identify perps.



I refuse to identify the wearer as human as well. I'm not a robot; I just did a captcha to check.


I haven't found one yet, but I'm seriously considering doing this as a side project.


So this algorithm requires access to the deep learning model, such as YOLOv2, right? But for a real system, we usually don't have access to the model.


If only a few people are doing this, the system could be trained to detect something like an "unidentified dynamic object" and flag it for a human to look at, getting you even more surveillance than otherwise. Such techniques could be useful in coordination, like with Guy Fawkes masks, but solo use will likely make things worse, not better.


Seems all for naught, as (others have mentioned) these techniques likely won't work across the board.

I don't really know much about this stuff or the cameras that might be used, so forgive the noobness of this question, but I've been wondering: what about jamming? Thinking along the lines of super-bright IR (UV?) LEDs flashing in random sequences (e.g., around license plates).


If the goal is to not draw attention, attaching a bunch of flashing LEDs to yourself is not a good idea.


940nm IREDs are not visible to the human eye but are visible to a CMOS sensor, so that would be pretty easy to rig up without being too noticeable. The main issue is that you would need a lot of them because most security cameras are filtered for IR light and you would have to sufficiently overpower them unless it was night time.


This will get fixed quickly. Gimmick.


Would a hat like this serve a similar function? https://www.iheartraves.com/products/winter-lights-dad-hat


Print it on a shirt, and you have the thing William Gibson dreamed up in "Zero History":

"What’s that?" she asked.

"The ugliest T-shirt in the world," he said… "So ugly that digital cameras forget they’ve seen it."


Was thinking the exact same thing.

Though I got the impression that it was a deliberate backdoor to allow security services to operate incognito, rather than an AI-fooling hack. Maybe just because the book was written before modern neural nets became so widespread.

A friend used to design systems for a major CCTV company. I asked him to add a similar backdoor for me, but sadly he never did...


Brilliant plot device. And like much of Gibson's other plot devices, borne out in due time.

Remember, the penultimate device in the novel was an antihero’s cornering the global marketplace order flow.


Is carrying a printed card around going to be the tinfoil hat of the new millennium?


Maybe, but I would say it's a little less conspiratorial or crazy, since we know such systems are being deployed to track people.

Radio waves could never read your thoughts or do other crazy stuff, so it would be less loony to see someone with a printed card than wearing a tinfoil hat.


Could electromagnetic radiation ever disrupt one's thoughts or cause damage to brain tissue? Do you think there has been any research into altering brain function wirelessly? If such a technology could be implemented, do you think you would be made aware of its existence?


Just remember, then, that it's all up to you to avoid the autonomous vehicles.


oh no, how am I going to safely stroll across the highway like I've become so accustomed to doing in our AI golden age?


Well this is my daily dose of dystopia.


I'd actually like a daily dose of dystopia, anyone know of any reliable places to view it online?


prisonplanet maybe


Why is there no movie yet that explores this? It would make a good sci-fi thriller.


Have you ever seen Person of Interest? The first season is mostly detective stories, but starting with the second season it picks up and explores AI & surveillance more.


Yes! It was actually recommended to me here on HN. I did, however, not make it into the second season. All the episodes were pretty much the same, so it became boring for me. Maybe I'll give it another chance.


How a mouse can fool a cat. Once.


So basically, if you printed a strange design on one of those T-shirts that can be printed all over, you could fool the AI in most cases... the same way that "Dazzle" camouflage for ships works: https://en.wikipedia.org/wiki/Dazzle_camouflage


Unfortunately, no. As soon as the image is rotated slightly to one side, the adversarial printout stops working. You can see this in the original paper, in the final image.

Now, that doesn’t mean that a camouflage couldn’t be created that more generally avoids detection from these algorithms.


Thanks for the useful info! That’s what I was looking for



