Blur Tools for Signal (signal.org)
603 points by tosh on June 4, 2020 | 241 comments



Without solid proof that this is not possible to circumvent, this may be more dangerous than nothing.

Here’s an example of AI being able to identify a blurred face: https://twitter.com/ak92501/status/1267609424597835777

Identifying an individual is not just about a face, but about a number of factors that are much more complex and very hard to account for in a systematic way.

---

If Signal is really concerned about allowing individuals to control the information they leak, they need to prioritize releasing the feature that will allow users to use Signal without providing phone numbers; one of their staff recently stated publicly that this is finally likely to become a feature. Not to mention they should stop repeatedly asking users to provide their name, access to their contact lists, etc.


That's not removing blur, that's making a face (out of millions) that matches the same pixelization. There's no telling what the original face was, and it's disingenuous that they don't show you the original photo.


At the end of the video they posted[1], they show the original photos of the authors, the downscaled inputs, and the outputs.

[1] https://twitter.com/ak92501/status/1267609090689323008


And, IMHO they don't look like the same person any more, at all.


Ah, thanks, I missed that in the Tweet.


They have a sandbox where you can run the code yourself -- I don't think we're at the dystopian surveillance level just yet https://imgur.com/a/IfdLWau


You could, however, probably tile the downscale "rainbow table" in a way that would let you reconstruct some approximation of a novel original from a sufficient number of tile samples.

The thing about downscale blur is that it's nearest-neighbor-ish, so it can be attacked with divide and conquer, since the blur effects stay local. You'd end up with a fairly large combination of potential tiles. Some wouldn't be viable faces, but we have classifiers for that already.

Entire combination trees can be culled that way to make the problem radically smaller, as long as you know it's supposed to be a face, so I don't know how hard it would really be. It's possibly pretty easy to come up with the N possible original faces with enough certainty to then match with potential targets of interest and make N small enough to use.


Isn't that exactly what the paper is doing?


I only made it through the abstract, but it looks like they're matching the entire given LR image to a known entire HR image.

I’m saying with enough data you could potentially create a more predictive “magic sharpening” algorithm that didn’t strive to match a known original picture, but instead used that matching on divide & conquer subtiles of the original LR image against of a rainbow table of reduced HR tiles to predict a set of plausible HR images.

Basically, if you can figure out from whatever context you have that the 4x4 brown smudge is very likely a brown cat, you can replace it with a brown cat. And if you know that the orange/white/black smudge next to it is probably a calico, stitch that in.

Of course the source image would have to be bigger than this, so it couldn't be a CSI-style enhance from icon to landscape; it's really more like a very good AI upscaler. You'd need a strong way to identify plausible scenes too. We can generate novel faces now; I think this fuses the two concepts.
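
A rough, untested sketch of the tile table idea (grayscale, numpy only; all names here are illustrative, not from the paper):

    import numpy as np

    TILE_HR, SCALE = 16, 4      # 16x16 HR tiles, box-downscaled 4x to 4x4

    def downscale(tile):
        # Box-filter downscale: average each SCALE x SCALE block (the "blur").
        h, w = tile.shape
        return tile.reshape(h // SCALE, SCALE, w // SCALE, SCALE).mean(axis=(1, 3))

    def build_table(hr_face_crops):
        # The "rainbow table": every HR tile in a corpus, keyed by its downscale.
        table = []
        for img in hr_face_crops:
            for y in range(0, img.shape[0] - TILE_HR + 1, TILE_HR):
                for x in range(0, img.shape[1] - TILE_HR + 1, TILE_HR):
                    hr = img[y:y + TILE_HR, x:x + TILE_HR]
                    table.append((downscale(hr), hr))
        return table

    def candidates(lr_tile, table, k=5):
        # The k HR tiles whose downscales best match the observed LR tile; a
        # real system would then cull tile combinations with a face classifier.
        dists = [np.sum((lr - lr_tile) ** 2) for lr, _ in table]
        return [table[i][1] for i in np.argsort(dists)[:k]]

Stitching the candidate tiles into globally plausible faces is the hard part; that's where the classifier culling would have to do the heavy lifting.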


The resolutions on those are way higher than what Signal is doing. It's not surprising that a neural network can give a decent guess at what a face can look like. Faces don't have that much entropy. But you can blur them out if you get it down to like 4x4 pixels.

Anyway, if you want scarier panopticon stuff, you should look into gait recognition, which is way harder to censor.


Besides the fact that Signal's "blur" doesn't even look remotely close to your example, they're working on the phone number issue:

> PINs will also help facilitate new features like addressing that isn’t based exclusively on phone numbers, since the system address book will no longer be a viable way to maintain your network of contacts.

https://signal.org/blog/signal-pins/


I think this feature may not do as good a job of blurring faces as people expect in some cases, especially if the face is large in the frame. I tested it and got results that were noticeably less obfuscated than the image advertised on Signal's announcement page:

http://lelandbatey.com/projects/signal_blur_comparison/


I too tested this out immediately and noticed the surprisingly minimal blur. However, I've gotten another update since then and it appears to be extremely blurry now. Actually, when I first tried it after the update, I thought it was just a solid color. After blurring the majority of the photo I could tell it wasn't a solid color, just extremely blurry.


What is left to prove about (Gaussian) blurring?


The intent of the blur is to hide the identity of the individual face that has been blurred. The average human sees a blurry face and assumes the person's identity is safe. Research has repeatedly shown this is false, especially when combined with other data.

Here’s another example of such research:

https://www.wired.co.uk/article/facial-recognition-systems-c...

>> “researchers said only 10 fully-visible examples of a person's face were needed to identify a blurred image with 91.5 per cent accuracy.“


A Gaussian blur is not reversible; information is lost. No research shows otherwise, because it's a mathematical property of the Gaussian transform.

Some methods can be used to find one of many solutions to the blur, where certain high-frequency information is preferred over others because we know the end result looks like a human face, and not just any solution. But that only means you can get out many possible faces; if your reconstruction tool only gives you one, it was simply over-trained.

[edit] You just updated your post. If you have tagged, unblurred photos of the face in your blurred photo, you can (as expected) constrain the end solutions further. What's not clear to me from the paper is whether or not the blurred face was tagged as well. Scenario S3 seems like the most likely type of scenario encountered in surveillance programs, and there the results are nowhere near 91% accurate.


> A Gaussian blur is not reversible

This might be narrowly true (it’s hard to recover precisely the original image), but is not really an accurate summary in this context, if the only goal of reversal here is to recognize the face. Deconvolution will quite effectively undo gaussian blur. https://en.wikipedia.org/wiki/Deconvolution https://en.wikipedia.org/wiki/Richardson–Lucy_deconvolution https://en.wikipedia.org/wiki/Blind_deconvolution

In Photoshop, the deconvolution tool is called “Smart Sharpen”, and has a preset for a gaussian PSF.


Ironically I think a Gaussian blur is one of the few transforms that should be totally reversible. Since the Fourier transform of a Gaussian kernel is also Gaussian, it is nonzero everywhere, meaning you can in theory just divide the Fourier transform of the image by the Fourier transform of the kernel to get the original back :)
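
A quick numpy sketch of that argument, assuming you know the exact kernel and stay in floating point (circular convolution for simplicity):

    import numpy as np

    def gaussian_kernel(shape, sigma):
        # Centered 2D Gaussian the same size as the image, normalized to sum 1.
        y = np.arange(shape[0]) - shape[0] // 2
        x = np.arange(shape[1]) - shape[1] // 2
        g = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
        return g / g.sum()

    def blur(img, sigma):
        K = np.fft.fft2(np.fft.ifftshift(gaussian_kernel(img.shape, sigma)))
        return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

    def deblur(img, sigma):
        # A Gaussian's spectrum is nonzero everywhere, so we can just divide.
        K = np.fft.fft2(np.fft.ifftshift(gaussian_kernel(img.shape, sigma)))
        return np.real(np.fft.ifft2(np.fft.fft2(img) / K))

    img = np.random.rand(64, 64)
    print(np.abs(deblur(blur(img, 1.0), 1.0) - img).max())  # tiny, pre-quantization

With a large sigma the spectrum's tail underflows float precision and the division blows up, and rounding the blurred image to 8 bits destroys exactly the high frequencies the division needs, which is why this fails on real saved images.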


The quantization to the image colorspace and depth is probably the limiting factor, more so if dithering is used.


As the resolution increases, the ability to reconstruct a lower-resolution version of the image goes up as well, which will be more than enough for most identification purposes.

Security as an accidental quality of a system is not security.


Fair point. Elsewhere in thread it looks like they are using a fixed resolution. I guess at that point it comes down to whether they've left enough mid-frequency content for an algorithm to identify the slightly darker/lighter patches of facial features, and whether that spacing can actually identify a person.


Wait: information is lost if the blur is truly a Gaussian process. The simulation of blur by means of a convolution can perfectly well be reversible.

Image blur is not a Gaussian process.


Are you saying that convolution with a gaussian kernel is not real gaussian blur?

I'm legitimately asking. I'm really ignorant about this subject.


In theory it is; in reality there is discretization of the signal, plus noise.


No: it is a simulation, because it is a discretization and the map can be injective (or “almost so”).


Gaussian blurring does not seem to lose enough information.

A hard 3×3 pixelization would be much more reliable, if less æsthetically pleasing.


You could make it aesthetically pleasing. Just make sure the data is boiled down to just a handful of bytes first and there won't be any way to reverse it.


Just blur the pixelization and it will look similar to the human eye, but with guaranteed information loss. Note that you should also discretize the colors of the pixels, which otherwise have 24 bits of information each; and maybe 2x2 pixelization might be a better idea.
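
Something like this, roughly (PIL, untested): whatever the cosmetic blur does afterwards, only the pixelated, quantized stage can leak.

    from PIL import Image, ImageFilter

    def safe_blur(face, cells=2, levels=4):
        # face: cropped PIL image of the face region.
        tiny = face.resize((cells, cells), Image.BOX)        # hard pixelation
        tiny = tiny.point(lambda v: v // (256 // levels) * (256 // levels))
        out = tiny.resize(face.size, Image.BILINEAR)         # smooth it back up
        return out.filter(ImageFilter.GaussianBlur(min(face.size) // 4))
        # At most cells^2 * 3 * log2(levels) bits survive: 24 bits here.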


Inverse problems are ill-posed; you cannot just invert the kernel, because information is lost.


I don't doubt that Signal has put a significant amount of effort into making sure its blurs can't be reversed. But when I look at the results in the photo, I don't understand why they would put in that effort. Why derive a blur from the base photo at all?

Things that seem way easier to me:

A) blacking out the face entirely with a solid color,

B) if that looks ugly, replacing it with some kind of clip-art,

C) if that still looks ugly, replacing it with a gradient

D) if that still looks ugly, replacing it with a pre-blurred face from a generic set of buckets.

I sort of get the aesthetic argument, but I also really don't, because the way Signal is blurring faces is ugly, at least in the photo they show. It's not a seamless thing that blends into the background and looks way better than a solid color. It's giant squares, and the amount of blurring means that the contents are basically indistinguishable from a radial gradient to my eyes anyway. Am I missing something? Would a gradient really look any worse than this?

Is there some kind of use-case where blurs give aesthetically much better results than what we're seeing in the photo? Are the concerns I'm seeing below about de-masking just fear-mongering? Are blurs in general just pretty safe, fast, and easy to do? Moxie isn't stupid, I assume in situations like this he knows what he's doing.


In https://arxiv.org/abs/1905.05243, we presented a measure of the effectiveness of eight face obscuration techniques. We did so by attacking the redacted faces in three scenarios: obscured face identification, verification, and reconstruction. Based on our evaluation, we show that the k-same based methods are the most effective.


Also consider the social effect. To the ordinary eye, it's not obvious that this blur is resistant to reversal, unlike standard blur effects. However, someone seeing the use of this blur effect, without understanding what it is, may come to the false conclusion that a non-Signal blur is safe too.


That's a good point. I've already seen at least one other person on this post link to a GitHub repo that blurs faces. It might be secure, it might not be. I wouldn't trust it by default.

With a static overlay, the method is simple enough that I can evaluate the security. With a blur, I don't know the difference between a good one and a bad one, so I can only trust the reputation of the author.

I like to be able to look at the output of an anonymizer and to be able to tell myself at a glance whether it worked.


I'm not even sure the Signal blur is safe, look at the comparison in this comment: https://news.ycombinator.com/item?id=23422993


I was just commenting the same thought; I like your ideas more too. I guess for most people it would look more aesthetically pleasing, as you point out.


[flagged]


Presumably because that kind of camouflage confuses face-tracking technology the worst.


Nonsense. Source?

A well-defined 2D pattern stretched across a 3D surface would be easier to circumvent than a generic non-stretching mask.

Beyond that, thermal and infrared imaging have been able to easily identify individuals whose faces were covered.

Lastly, as I mentioned in my other comment, identifying a person is easily done via a number of methods.

Please refrain from “presumably” comments, they help no one.


In case anyone feels like playing around with it, a friend and I made a project to do auto-blurring of faces with OpenCV a few years ago, with both iOS and node frontends ..

iOS module:

https://gitlab.com/seclorum/groupie/-/tree/master/ios/groupi...

Main node.js app:

https://gitlab.com/seclorum/groupie/


The first URL doesn't seem to work, and the second URL leads to an "empty" project. Just to make sure -- maybe it's just me?


Hmm, I guess I got the URLs wrong, and can't edit now:

https://gitlab.com/seclorum/groupie/

Works on Linux and Darwin, just type 'make'. ;)


The project is public, but the repository is probably private. We can't see any of the code.


Hmm, dunno how that happened .. maybe it's better now?


Yes, fixed!


no. i don't know gitlab well enough but maybe the project is not really set up as public.


What kind of blur is used? Blurs are annoyingly bad at obscuring things like faces. They may be good at making faces unrecognisable to people, but they’re not nearly as good at making faces unrecognisable to machines.


I've seen this sentiment mentioned quite a bit, but is it still true with the level of blur being shown in their example images? The blur level is extremely high, to the point that it has essentially left behind a smooth gradient. Even with the algorithm known, is there enough recoverable information left?


Well, if you want to leave behind a smooth gradient, then leave behind a smooth gradient.

    Suggestion for an algorithm:
    * start with the blur
    * sample the four colors at the four corners of the blurred region
    * quantize them
    * fill in the region with bilinear interpolation.
Then your whole region can only reveal those four quantized color values. If you only blur, you will have a harder time bounding the leaked information content.
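
In numpy, roughly (untested; averaging a small neighborhood stands in for "sampling the blurred corner"):

    import numpy as np

    def gradient_redact(img, y0, y1, x0, x1, levels=8):
        # img: H x W x 3 uint8 array. Fill [y0:y1, x0:x1] using only four
        # quantized corner colors, bilinearly interpolated.
        def corner(y, x):
            c = img[max(y - 4, 0):y + 4, max(x - 4, 0):x + 4].mean(axis=(0, 1))
            return c // (256 // levels) * (256 // levels)    # quantize
        c00, c01 = corner(y0, x0), corner(y0, x1)
        c10, c11 = corner(y1, x0), corner(y1, x1)
        v = np.linspace(0, 1, y1 - y0)[:, None, None]
        u = np.linspace(0, 1, x1 - x0)[None, :, None]
        top = c00 * (1 - u) + c01 * u
        bottom = c10 * (1 - u) + c11 * u
        img[y0:y1, x0:x1] = (top * (1 - v) + bottom * v).astype(np.uint8)
        return img

By construction the region is a function of four quantized colors and nothing else, so the leak is bounded no matter how clever the attacker is.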


Or perhaps

  * detect (or get the user to select) faces
  * replace the pixels in the face bounding box with a generic face
  * blur the bounding box edges
  * do whatever blur you think ends up "looking nice"


It's missing the step of detecting the skin tone and applying it to the generic face. Right now I can't imagine people being too happy about a "generic white face blur", implying the generic face would be white.


I reckon all faces should de-blur into George Floyd's face...

(From the examples in the Signal blog post, I don't think that grey gradient box is gonna be able to specifically imply white or POC faces...)


I suspect there's not much information in the individual blurred face, but I wonder if given enough examples you'd be able to determine if an unblurred face is the one in a sample of images with any level of confidence? You can do that with text (http://dheera.net/projects/blur).


The face and its surroundings are blurred almost to a single colour. The average RGB value of my face might be unique-ish, but if you mix in some variable background, photographed on a camera whose lens has been smeared against the pocket of someone's jeans, the result should be human, not individual.

A side comment: AFAICT what the Signal developers have done is take code that was developed so that the phone camera could autofocus on faces, and used that code to defocus faces. What a sweet hack.


A very sweet hack, but I think the concern was based on the example image provided in the link posted. While the face is blurred, there's still a lot of information you can glean about the person: their haircut, their neck, the clothes worn etc. -- so I'm guessing the threat vector here is that if you also have a general set of pictures from the same demo, you may be able to automatically identify who the blurred person is.

Blurring is better than nothing, but the best picture when it comes to avoiding being traced is the picture that was never taken.


Let's be real for 2 seconds here, this is pure nonsense. No court of law would do anything about "hey, we arrested that guy because he has 2 eyes, a mouth, and the same t-shirt as that other guy who was protesting yesterday." If it comes to this, you wouldn't even need a picture of blurred faces; just arrest whoever you want and provide forged evidence (or none), because that's exactly the same thing.

And even then law enforcement are already filming them (cctv + from the air) and tracking their phones, the last thing you have to worry about is a 100% blurred face that no amount of technical power would be able to process or match back to you.


picture A of an individual, unblurred, protesting peacefully.

picture B of a blurred individual from later on in the same protest, wearing the exact same clothes, committing questionable acts, is circumstantially incriminating.


bikeshedding? on hacker news? no way


You're overthinking it. Police already have their own camera people doing video surveillance in addition to CCTV and other surveillance tools. The sort of forensic analysis you mention is of course possible and is sometimes engaged in, but obscuring all such information would defeat the purpose of photojournalism altogether.


Yes. Someone who has access to many photos of the same set of people might well be able to identify people in one photo, even though their faces are blurred on that particular one.

I'm not sure whether the large number of photos nowadays is a net negative, though. That's also what finally stopped Derek Chauvin.


It shouldn't blur, it should be a black box.

You could definitely take Signal's code, run it over a set of test images, and find which output matches the target image most closely.


What set of test images?

https://www.androidpolice.com/wp-content/uploads/2020/06/04/... is blurred by Signal. Suppose that you have all the photos that have been posted to Facebook, and that both of those women are on Facebook, and lastly that you have resources enough to run all of those through the Signal code. How would you match those other photos to the blurred part of this one?


Paging clearview.ai's enterprise sales department. Call for you on line 7...


Not just any black spot either. A black spot of random size larger than what you want to redact. That way you avoid leaking the size of what's being redacted. The size of what's being redacted can sometimes provide enough information to determine plausible contents: http://blog.nuclearsecrecy.com/2014/07/11/smeared-richard-fe...


Ok, but just to be clear, we're redacting faces here. There isn't much meaningful here other than an exceptionally rough indication of age/development.


The examples on the Signal website give you hair color, hair style, likely race, and the shape of the top of the protesters' ears. While it's not definitive, given that a fuller redaction is easy and has no disadvantages, I don't see why someone shouldn't try.


> "The face and its surroundings are blurred almost to a single colour."

To your eyes, maybe. To a machine, you have an array of pixels, each with different values which, using an algorithm, could be adjusted into something your eyes can resolve into a unique face.


Seriously? Look at https://www.androidpolice.com/wp-content/uploads/2020/06/04/... — do you really think there's enough information in those two rectangles to reconstruct the faces even approximately?


Hard to tell, I'm not a computer; but it does look better than most. To be fair (to me), I was basing my critique on the picture in TFA, which seems to have far more detail in it.

That said, the whole point of my post was that humans are really bad at judging this. Many blur algorithms can be reversed because they just modify the color values of the pixels in a reversible way. You can't always tell by looking at a picture what data is still there, in much the same way you can't see the stars in an ISO 200 picture of the night sky. It's not until you open it in GIMP and crank the exposure up to max that you see just how much data is there that your eyes couldn't perceive.


I kinda hope there's not only enough information there to algorithmically reverse, but that when you do that - all the de-blurred faces end up being George Floyd.


The number of different characters is quite limited. Does this work for Chinese, or only for Latin-script languages?


Probably not. You can remove Gaussian blur by performing the inverse convolution (it's tricky because you need to find the actual parameters that caused the initial blur), you can remove motion blur the same way, etc. This looks like there isn't nearly enough information there to do any of this, though.


No chance. That level of blur is essentially impossible to reverse. I think lots of people here are a bit confused because they know that all gaussian blurs are theoretically reversible. But they aren't thinking about how ill-conditioned the inverse gets as the blur gets larger and larger.


There's another concern—even if I can't usably invert the convolution, if I have photos of a thousand people's faces and one of them is that blurred face, can I figure out which one with high confidence?


No, not with this level of blur.


Why don't they just cut out/replace whatever would be blurred with black pixels? Why blur anything anyway?


Aesthetics. A blurred face looks better in a picture than a fat black box.

But as others have pointed out, you can achieve (almost) the same effect if you remove enough information before blurring, or just draw a smooth gradient, though this alone is harder to make look as nice as blurring the actual image.


They should simply explain what convolution they are using, and it would be easy to know.


I've been digging through the latest related commit in the repo: https://github.com/signalapp/Signal-Android/commits/master

They appear to use "com.google.firebase:firebase-ml-vision-face-model:20.0.1" to detect the faces.

The actual blur appears to be done here: https://github.com/signalapp/Signal-Android/blob/514048171bf...

Not sure what "ScriptIntrinsicBlur" does exactly; it appears to come from the Android SDK itself: import android.renderscript.RenderScript;

EDIT: https://developer.android.com/reference/kotlin/android/rende...

It's a gaussian blur filter with a radius of 25px if I understand the code correctly.


FWIW this was updated before release to also scale down the image before blurring it. We cut the size in half, or cap it to 300x300, whichever is smaller. This was to ensure that the effectiveness of the blur isn't reduced on higher-resolution images. https://github.com/signalapp/Signal-Android/blob/master/app/...
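
For anyone who wants to experiment, a rough Python approximation of that pipeline (the real logic is the linked Android code; the numbers below just mirror the description above):

    from PIL import Image, ImageFilter

    def signal_like_blur(img, face_box):
        # img: PIL RGB image; face_box: (x0, y0, x1, y1) in original pixels.
        w, h = img.size
        scale = min(0.5, 300 / w, 300 / h)   # half size, capped at 300px
        small = img.resize((max(1, int(w * scale)), max(1, int(h * scale))))
        blurred = small.filter(ImageFilter.GaussianBlur(radius=25)).resize(img.size)
        img.paste(blurred.crop(face_box), face_box[:2])
        return img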


You should perform a non-invertible blur though... Or even easier, use the same noise image for all faces.

EDIT: come to think of it, you can generate random noise using a palette from the color in the blur area (say, take four or five colors and mix them).

Applying convolutional blur for anonymizing is very, very risky, because you might end up with something either invertible or nearly so.


Ouch: gaussian blur might be invertible if you are not careful. That is why you need the explicit parameters of the convolution.

Thanks for digging.


I thought security through obscurity didn't work ^^. /s


Doesn't seem to be that good:

https://news.ycombinator.com/item?id=23422993

In fact it almost seems like the actual blur used in the app is different from what they show in the article.


I tend to pixelate regions I want to make unrecognizable in photos instead of using a Gaussian blur for this reason. Pixelation should be safe as long as the pixels are large enough, right?

I wonder why Signal didn't do something like that...


I think there's a way to do super-resolution on pixellated video.

It's okay for still images, but videos have a lot of information to leak. Just black everything out.


You should also discretize the pixel colors.

Note that if you blur the pixelated region, it'll be just as aesthetically pleasing as the reversible Gaussian blur.


A good blur sheds far too much information to be meaningfully reversible.


But a bad blur doesn't. That's why your parent comment asks what kind of blur they use.

Edit: Turns out it's a 25px Gaussian blur. There's some downsampling beforehand, but not much, and no color discretization. In other words, they use a bad blur, but compensate with a large security margin. I wouldn't be surprised if this was vulnerable to "if I have a thousand photos of faces and I think one of them matches the blurred face, I can figure out which one with high confidence", and they can get basically the same aesthetic effect if they heavily pixelate and discretize colors before blurring.

https://news.ycombinator.com/item?id=23415600


> "if I have a thousand photos of faces and I think one of them matches the blurred face, I can figure out which one with high confidence",

Is that actually a practical attack here? I can see it working if you have 1000 passport photos or mugshots, and a blurred photo with the same level, front-on framing. But is there a practical attack for non-direct-facing-camera blurred pictures? (Assuming the scale of "locals at a local protest" instead of "Find Edward Snowden's blurred face from any BLM protest, no matter what the cost!!!")


Good point, I'd be surprised if lighting and orientation don't overwhelm all other information at this level of blur.

But I've been surprised by impressive digital forensics before—what if you can determine lighting and orientation from the rest of the photo, and then simulate them on each passport photo/mugshot? I'd still feel much more comfortable if they pixelized and color-discretized before blurring, and I still think the aesthetic effect would be much the same.


Citation needed.


2012: https://www.instantfundas.com/2012/10/how-to-unblur-out-of-f...

2017: https://arxiv.org/pdf/1702.00783.pdf (Pixel Recursive Super Resolution)

2020: https://venturebeat.com/2020/01/22/researchers-use-ai-to-deb...

Edit: Most face recognition software works by down-sizing and blurring an image to detect face features faster. So in theory it is very easy to detect face features from a blurred image. A deblur tool can then use this information to better deblur a face.


But is deblurring from camera shake, or an out-of-focus lens, or even a Gaussian blur the same as the random gradient blur they seem to be using?

Edit: The images in the Signal article don't look like images of blurred faces. They look like blurry images overlaid onto faces. If you don't blur the face, how can it be unblurred?


Yes, they are using a 25px Gaussian blur, they are not overlaying a different image on top: https://news.ycombinator.com/item?id=23415600


Impressive examples. Thanks for posting them!

So the ridiculous "Enhance!" one sees in TV crime dramas could one day actually become true.


You can't recreate data that isn't there. It's fundamentally going to be a guess. You can enhance your way to a face or a license plate, but there is zero guarantee it will be the face or the license plate that the low-quality image/video is of. This is why solid blocks of color or emojis are so effective at censoring images: they take the data and replace it with pure junk.


If you know it's a gaussian blur with a known radius, you can uniquely reverse it.


You do, however, lose colour depth information (e.g. deep color to true colour, true colour to high colour / 256 colours). Still enough to detect a face.


Yes, that works if the face itself is blurred, not if random noise is used in place of the face.


They're not using random noise in place of the face, they're using a 25px Gaussian blur: https://news.ycombinator.com/item?id=23415600


Nice find. That is unfortunate then. I thought they'd make more effort.


Trivial to test by deblurring and sharpening a blurred pic and passing it to, say, opencv


Would it be practical to take a facial recognition algorithm and use it to warp the identifying characteristics of faces in a scene such that the faces lose enough uniqueness to make facial recognition ineffective?

My understanding of facial recognition is that it operates on relative positions of facial elements. If you can "delete" this uniqueness from the source material by warping faces towards a limited handful of generic shapes, you make the video less useful to Government intelligence.

You could still blur the result, but you might be able to get away with less blur. Remember that it's important to see that people have faces, otherwise they can be more easily dehumanised.


Ideally you'd run something like thispersondoesnotexist to generate random faces to paste on top of people before blurring. That way, if you somehow manage to revert the blur, there's still no chance of revealing the original person.

Of course humans are pretty good at filling in detail, so with a sufficient blur you can get away with surprisingly poor approximations of a human face.


I’m thinking more about targeted distortions to maximally thwart fingerprinting while minimally dehumanising.


Just DeepFake Nicolas Cage onto everyone.


Excellent idea, but I think Snowden would be more appropriate.


Malkovich Malkovich, Malkovich?


Snowden is actually a character that Nicolas Cage is working on right now. Cage has such a dedication to his craft.


Yeah, or maybe someone could implement a feature to somehow distort, or "blur" the faces if you will.


There are face blender type algorithms that merge X number of images of faces. Could use something like that. Grab 10,000 facial images off the net, merge them, then use that image in every shot, for every face, so everyone looks the same.


I've seen it said that blurs can relatively easily be reversed. I wouldn't expect that to be unknown to the Signal team, so I wonder if anyone knows how they dealt with that. A different blur method that is not reversible?


You are correct that a standard Gaussian blur can be reversed, except along the edges, where data is effectively lost outside the blurred rectangle. In this case the radius of the blur is large enough that a lot of data will be lost. Combined with JPEG compression removing a lot of information too, reversing this blur should be impossible.

A better blur algorithm (in that it can easily be proven not to be reversible, and is faster to process) is to divide the area to be blurred into a small number of cells (9, 16, or 25), get the averaged colour in each cell, and then apply an interpolation between those colours as your output. This algorithm is essentially O(n), where n is the number of pixels to be blurred. You can easily prove that the information in the image is at most 3 bytes (each colour) * 25 (number of cells) = 75 bytes, which is not enough to encode a face. However, it may be enough to encode some limited details (such as skin colour, distinctive clothing, etc.), so it is always better to use a black box.
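
In PIL the whole scheme is two resizes, roughly:

    from PIL import Image

    def cell_blur(face, n=5):
        # Average into an n x n grid, then interpolate back up. The output is
        # a function of at most 3 * n * n bytes (75 bytes for n=5), whatever
        # the original resolution was.
        cells = face.resize((n, n), Image.BOX)           # per-cell average colour
        return cells.resize(face.size, Image.BILINEAR)   # smooth interpolation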


Provided you know the exact method you can in theory recover even the edges. Although this is very numerically unstable, to the extent that just double precision might not be quite enough. That said, that's just the theoretical exact inverse. With proper regularization you might be able to recover far more (although with a complex prior like a neural network it becomes debatable what information you are recovering and what information you are putting in yourself).

Side note: even with a mere 25px image (effectively) of someone's face, I'm not sure it leaks as little information as you think it does. Just 33 bits would be enough to uniquely identify someone, let alone 75 bytes. Practically you wouldn't be able to recover more than some basic estimates of skin colour and distance between the eyes, etc., but in extreme cases that might still be too much.


You're right about information being lost at the edges, but I do wonder if that leaves a region in the center of the image that's got enough information to be recognisable. There's one way to check, I guess...

Also I can't help but wonder, in a case like this where you've got the rest of the image, whether the pixels around the border of the blurred region are useful. There's going to be a probability that they're a similar colour to the outer ring of pixels that got blurred, and that might give you enough to start working inwards.


For the case of text, blur can be brute forced.

If you redact, say, a credit card number with a blur, and I know what typeface the number would have been written in, and have a reasonable guess as to your blur radius, it might not be infeasible to compare the blurred version of every possible credit card number.

If you redact an email address with a blur, brute-forcing every possible email address will be harder. But if someone (say) leaks information to you, and you merely blur out their address, it's not infeasible that someone else could apply the same blur to a known suspect's email to verify whether it was them or not.
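
Sketching the attack (PIL, untested; the font, size, and alignment are guesses you'd have to recover from the document):

    from PIL import Image, ImageDraw, ImageFilter, ImageFont
    import numpy as np

    FONT = ImageFont.truetype("suspected-font.ttf", 24)   # hypothetical path/size

    def render_blurred(text, size, radius):
        img = Image.new("L", size, 255)
        ImageDraw.Draw(img).text((0, 0), text, font=FONT, fill=0)
        return np.asarray(img.filter(ImageFilter.GaussianBlur(radius)), float)

    def best_match(redacted_crop, candidates, radius):
        # Score every candidate string by MSE against the redacted crop.
        target = np.asarray(redacted_crop.convert("L"), float)
        return min(candidates, key=lambda c: np.mean(
            (render_blurred(c, redacted_crop.size, radius) - target) ** 2))

For a full credit card keyspace you'd want to be smarter (known BIN prefixes, the Luhn check digit), but for "is this the suspect's email address?" it's a handful of renders.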

Of course, with a large enough blur radius it's not an issue. Still, a non-zero number of times, it's been done badly enough that I've been able to mostly "reverse" a blur by just squinting and sitting back a few feet.

Always redact text with solid blocks.

I don't know how feasible this approach would be to human faces. I think Signal has blurred it such to make such an attack infeasible.

I also don't think it's sufficient; if you don't want someone to be identified, don't take photos of them, full stop and/or period. Take the photo at the top of the blog post. Who on that day had a backpack with that type of strap, a blue mask in exactly that shade of blue, that haircut, and that exact BLM t-shirt, in that place at that time of day? That could be sufficient information for a "fingerprint", though maybe not deanonymisation.


The blurs shown on the page can most certainly not be reversed because the information has been lost.

Things like swirls can, though: https://thelede.blogs.nytimes.com/2007/10/08/interpol-untwir...


That's like saying that JPEGs can never be displayed because information has been lost.

The belief that you cannot identify someone from a blurred face is an extremely strong assumption that is just begging to be demolished using some sufficiently advanced technology.

In particular, if you only need to go from a list of 10,000 candidate persons (thanks, cellphone mass surveillance) to three or four candidate persons (shoot them all and let god sort it out), then I think it is fairly likely that you could do so with more or less existent technology. (Essentially, use machine learning to transplant faces from DMV photos into the scene, redo the blur, and select the most likely matches.)

Think of it this way: if you want to winnow 10k candidates down to four people you need to extract less than 12 bits of entropy. It's not trivial because the scene, pose, lighting, etc. make all your measurements noisy and non-independent.
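
(The arithmetic: log2(10,000 / 4) = log2(2,500) ≈ 11.3 bits.)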


Certain blurs can indeed be undone; [0] is just one link, search for something like "undo gaussian blur point spread function". There are limits though.

[0] https://en.wikipedia.org/wiki/Deblurring


They use a very large blur radius. At this radius, rounding to 8-bit plus lossy image compression should be destructive enough.

It's impossible to tell from the screenshot, but if they're smart, they should have an explicit degradation step before blurring (e.g. pixelate/lower resolution first).


Context, i.e. sample link about reversing blur (not just swirl): https://news.ycombinator.com/item?id=4679801


An actual blur that directly modifies multiple pixel values cannot be reversed. Things like swirls and motion "blurs" potentially can be, but I wouldn't even call those blurs, as they are more like directional transformations.


Hmm, given we know it's a face, and we know their skin tone from the rest of the photo, I wonder what a computer would be able to reconstruct... Any papers about this?


"For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw" etc, says Wikipedia.

You can reconstruct a plausible face by deblurring, i.e. one that looks sharp and human. But if you want to identify someone, having a plausible picture with a pair of eyes in a plausible position doesn't help; you need a fairly accurate assessment of the distance between the correct eyes, and that's susceptible to loss of information during blurring.


I don’t think so based on what we are seeing in the link. It isn’t really a blur at that point.


That's false. There's an entire field dedicated to reversing blur. Even Photoshop uses techniques like this.

https://en.wikipedia.org/wiki/Deconvolution

https://en.wikipedia.org/wiki/Deblurring


Honestly, I can't keep up with the acquisitions and full e2e encryption claims; then those claims get debunked, and you can't find out what the truth is.

Based on all the information out there, in the year 2020, what is the most secure IM app?

What do you recommend to your friends if they care about privacy?


If you dig deeper, you can easily find that the consensus among IT security experts is Signal for privacy / security.

Matrix is interesting and I hope it will catch up eventually, but currently it is not E2EE by default and it leaks way more metadata than Signal. These points make it strictly worse than Signal for 1:1 IM.

The advantage of Matrix is in federation, but regarding privacy / security, it is still behind (much to my regret).

Other apps that could provide similar guarantees in theory are less used and have received less scrutiny, so more not-yet-exposed bugs and design flaws should be expected. Other apps have been relatively well studied, but have well-known design flaws that also make them worse than Signal (WhatsApp and Wire leak way more metadata).


Matrix is E2EE by default now and has been for the last month: https://matrix.org/blog/2020/05/06/cross-signing-and-end-to-...


Matrix is E2EE for some clients and implementations. You have to research what you use carefully, even in the HN discussion about that post people were recommending clients that didn't support encryption.


I recommend Signal. Sure, something self-hosted would be nicer (provided I can be trusted to get encryption correctly implemented and my server updated, etc.), but Signal hits the best balance for me between trust, hassle, and features.


You can self-host Signal if you want. It's not easy, or fun, and you'll need to replace the dependencies on cloud tools if you want to host it on bare metal, but it can be done (I have done it).

Bear in mind, the server is open source in name only; the state of documentation and configurability is extremely hostile towards running it yourself, to the point that the only way to configure it to run correctly is to read the code to find the type, size, syntax, and everything else about every piece of configuration, because none of it is documented or clear.


Can you document it?


Sadly the answer is not the one I'd like to give. Were it in my hands, I'd have documented it publicly already, as an advocate of open source and sharing.

I did this as part of my day job, which included, at the time, documenting it. It's impossible for me to share that documentation I did on company time. As for doing it again, I'd have to check my contract and/or discuss it with said employer.


Matrix is self-hosted and has e2e encryption support.


I’ve pulled down both the backend for Matrix and Signal and find that the LOC is a lot simpler with Signal. Plus with a bit of work, selfhosting Signal would require the mobile apps to be configurable (or fixedly reconfigured) toward your own backend server.


There's pretty wide consensus that Signal is the most secure for now (minus the phone-number requirement, but whatever, not going there). Signal will likely continue to be the most secure platform for the short-to-medium future, and potentially longer. I disagree with Moxie on many things, but he really understands how to write secure code, and I trust him on security in a way that I don't trust many other people.

Matrix is significantly less proven, leaks a bit more metadata (at the moment) and has had a few incidents that make people cautious about trusting it for real activism -- but it's a more future-proof investment if you're not currently an activist, and some of the stuff they're working on (most recently around P2P and mesh networks) may be really valuable in the future.

Matrix is taking an explicit stance that concepts like federation and custom clients are not antithetical to privacy. It's yet to be seen whether they're right about that, but a lot of us want them to be right. We'd prefer to live in the world that they describe.

Apps like WhatsApp and Telegram also exist, and I guess some people like them, but I don't see any reason to bring them to the table since Signal already exists and is already the gold standard for privacy. The only reason Matrix is on the table is because Matrix is fundamentally different from Signal in ways that are worth caring about.

So in short:

- If you really need to make sure nobody reads your messages, use Signal. Hands down, not even a contest.

- If you're invested in the future of apps like this, and you have auxiliary concerns around federation, openness, and bridges that might outweigh your worries about potential vulnerabilities, then consider using Matrix.

Nearly all of these apps are better than doing something like encrypting an email. Email encryption is a minefield of insecure clients and foot-guns.


Generally, Signal is a solid option.

In addition, I have a private Mattermost server, heavily restricted in terms of firewall and users, but this is reserved for a very small, select group of people that I trust and am 1000% sure know what they are doing.


Matrix: https://matrix.org/

Unlike Signal, it does not rely on a single server.




Other than Signal, I also recommend Threema. It doesn't rely on mobile numbers, it's possible to configure it to run on your private server, etc. It's just not free (as in beer). Also, it's from Switzerland, a country that respects your privacy more than the USA [0].

[0]: https://www.reddit.com/r/privacy/comments/gukg5z/threema_win...


> Also, it's from Switzerland, a country respects your privacy more than the USA

Well gouv.ch might, but Crypto AG was an NSA front for decades so I wouldn't be so certain about the companies.

If I wanted to lure people in on the pretence of security and privacy, being Swiss would be good bait.


CIA front, I believe.


It doesn't look like they are open source, does it? https://threema.ch/en/faq/source_code


Wire should also be e2ee, but I'm not sure if you can self-host the server (they seem to have started open-sourcing it years ago, but I'm not sure if that is ready).


Beyond this, is there a somewhat complete, recent guide for low-medium technically literate people to secure themselves, in terms of both privacy and security? I'm going through the easy steps now, like deleting Facebook, using 1Password, Firefox, ProtonMail, FileVault. Tor is too complicated for me to figure out though. Is anyone aware of other "good enough" practices?


Try session - https://getsession.org/

Can someone explain the downvote? I am not complaining but are there security problems with it? Could you explain or highlight them?


Don't ask why you were downvoted. It's right there in the Hacker News Guidelines. [1]

Secondly, you were probably downvoted because you didn't add any content to the discussion other than a link.

Session goes a long way to fixing Signal's problems like its reliance on a centralized server and phone numbers but it's still very early days with an unproven product. Messages still get lost all the time and if you thought it was hard to find your friends on Signal, it's the Sahara Desert on Session. You'd be putting in months and months of fervent pontification to friends and family you've probably just managed to migrate to your other privacy chat platform of choice.

[1] https://news.ycombinator.com/newsguidelines.html


If you want to be completely sure, you can't beat boring old PGP. It runs on top of XMPP and is too simple to hide anything in.

The new XMPP hotness is OMEMO. Conversations is a good mobile client that supports both PGP and OMEMO.


You probably meant to say OTR, not PGP?

OMEMO is five years old, and supported by all major clients, so it's not very "hot" anymore.

OTRv4 is somewhat hot and new. It's not in wide use (yet) and it's unclear if it is enough of an improvement to take over.


>You probably meant to say OTR, not PGP?

No. OTR depends entirely on fingerprints for identity. The poster was referring to the difficulty of knowing for sure that you are really end to end. PGP has the advantage here in that you can be completely sure because you can exchange the keys yourself.


Can someone school me as to why we'd use blur when you can just put a solid block of pixels of the same color over the face? The hard part is face detection, right?


Blur looks better.


I wonder how hard it would be to replace the face with something akin to Photoshop's smart fill, then blur the box into obscurity?

Keeps the aesthetics of the image but also removes the face entirely.


I've tried this feature out and found that it doesn't do as good a job of blurring faces as I'd like, especially when those faces take up more of the frame. I posted some pictures here:

http://lelandbatey.com/projects/signal_blur_comparison/

Basically, I think they're using a constant blur size, which fails to adequately obscure faces that take up a lot of the image: when a face takes up a lot of the image, the features of that face become large, which would require even MORE blurring to obscure. And they're not doing "more blurring" when the area that needs blurring grows, or at least they aren't doing enough additional blurring.


They are also distributing physical masks? It's not even a filtering-type mask, is it? How odd.

Is the blurring some type of encryption that the user can unblur, or is this a one-way road? I am just thinking of some odd circumstance where, say, they realize they had a picture of a vandal somewhere. But I guess you could then be forced to unblur everything by law enforcement, which might be undesirable in some cases.

Slightly off topic from the article: I was reading the stingray discussion here on HN yesterday. Signal supports some sort of mesh network communication, right? Is that a workaround for stingrays? Thanks.


I don't think Signal has any mesh networking; there are other apps like Firechat and Bridgefy (I haven't used either of them, just googling).

As for the mask, it'll do a little against CS gas and mace, probably, with eye protection, but the goal is mostly protecting protesters by keeping them from being identified and retaliated against later. It's also way easier to make a buff-style covering, and it can be worn over many types of filtering masks.


If you're looking for mesh network encrypted chat, check out Briar.

https://briarproject.org/

https://www.youtube.com/watch?v=iRJ8vIh3dVU


>> “ Slight off topic from the article, I was reading about the sting ray discussion here on HN yesterday. Signal supports some sort of mesh network communication right? Is that a work around for sting rays?”

Believe you’re talking about Signal using “domain fronting” - which is unrelated to stingrays; more information is here: https://signal.org/blog/doodles-stickers-censorship/

As for stingrays, here’s recent article on countermeasures: https://puri.sm/posts/taking-the-sting-out-of-stingray/


I was also surprised by the physical masks. It seems they are intended to 'encrypt your face' which gives me the impression it should make you unidentifiable.

When peacefully protesting, I can't imagine why you would need to hide your face.

If rioting and/or looting rather than peacefully protesting, such a mask has obvious use for criminals, but I can't imagine that's the intention of Signal.

I think in free, democratic countries, you shouldn't be allowed to hide your face, so you can be held accountable for your deeds.

In non-free countries I can imagine you would need to hide your identity, but would Signal be able to distribute them there?

Questions, questions ;)


Saying only criminals would want to cover their face is the equivalent of saying only criminals worry about privacy. The old "if you aren't doing anything wrong then you have nothing to worry about" argument. I've never looted a store in my life and I don't ever plan to but I still don't want images of my face stored in a police database or used in facial recognition software. Wanting to protect my right to privacy is not and cannot become a presumption of criminal intent.


By the looks of it, the US is pretty non-free when it comes to peaceful protest. So I guess this feature is very timely and directed towards users there ;)


It's most certainly not just the US. In the (western European) country where I live, for instance, even a static protest or demonstration with no chanting or marching and only a few participants requires non-trivial and somewhat expensive police approval ahead of time. Most larger spontaneous events seem to just ignore this and the police haven't generally responded violently, to their credit.


> By the looks of it, the US is pretty non-free when it comes to peaceful protest

What is your definition of peaceful protest? What we see in the US now is definitely not within mine.

Thrashing stores, looting, torching vehicles.


What makes you think that?

When I think non-free, I think of the CCP prohibiting peaceful remembrance of Tiananmen Square.


Is a police officer in a free, democratic country allowed to hide their badge number?


Depends what you mean by "allowed". If the rules say "you definitely can't do this", but there is no penalty for going ahead and doing it anyway, is it allowed?


instead of blurring faces we should be replacing them with computer-generated faces, doubling up on fuzziness and destroying the possibility of easily detecting "it's been blurred, I must take out my best guessing tools"


Why not just block it out instead of blurring? Like an all-white or all-black block, or any color, or a 'redacted' button?


Impressive how quickly they've reacted.


They probably had worked on this feature for some time and are using the current times as an opportunity to introduce it. It's hard to believe they had the capacity to react to the traffic increase and develop a sharp new feature in less than a week.


If you look at the code it's not that far-fetched. The facial recognition uses "off the shelf" third-party libraries, and a Gaussian blur isn't exactly rocket science.

I don't know how much work goes into making a new Signal release, but in terms of raw coding it's like two days of work.


I doubt blurring faces on photos you take helps much in hiding your identity from the government.

In events like these, they likely have access to quite a few image sources that do not blur faces.

So, given an image with blurred-out faces, they can look in those sources for images showing persons with similar skin color, hair, height, and clothing to the person(s) they're interested in, and from there find your face.

If they are willing to make an effort, even individuals may be able to do that, using photos that people who don’t blur faces upload to the internet.


So an organization that gets millions of dollars from the US government is supporting people that wish to overthrow the government, many of whom are outright anarchists. Seems like a poor investment. Unless....


How is Signal getting money from the US government? I don't know much about them, but that seems rather surprising.



This is rather silly; you could always draw solid colors over someone's face, and it works better than blurring. A rather frivolous update, from a software standpoint. The sentiment is nice.


Automation is a big part though; having to do it manually on 5 faces is tedious, pressing a button is not.


I see the point in this, but if you're going to automate something it should be automated right! In most cases, getting specific people's faces in shot isn't a good idea in general. If you're getting five people's faces in center frame for a photo just to blur their faces out, then it's probably fair to ask why you'd even take a photo at all.


Recording and sharing are different things. If I take a picture of a cop pulling masks off of protestors, I sure as hell want to record the incident, but not necessarily share images of the victims.


>then it's probably fair to ask why you'd even take a photo at all.

There are many cases (such as the recent protests) where you might want to document something and it is infeasible to ask everyone else in the area to leave first.


Is the pattern on these masks meant to confuse facial recognition algorithms or is it just for looks?


Probably the latter. The pattern doesn't really matter when 85% of the facial features are covered by cloth.


I don't know much about image processing, but can't blur from some area in an image be "removed" so as to recover the original image underneath or am I just totally mistaken about how images/pixels work?


Think of it this way: there is less information in a blurred image (less colour, fewer lines, fewer areas). You cannot* conjure information out of thin air to recreate the unblurred image.

* Recent advances in AI actually make this possible to an extent. The AI delves into its massive memory and extrapolates a likely image/face.


That entirely depends on how the image is blurred. The default gaussian blur in an image editing tool can be reversed without leaning on magic AI to do it.


Not completely mistaken, I think you're confusing blurring with "blending", where pixels are displaced in tight irregular spirals. These images have been successfully unscrambled as part of criminal investigations into child exploitation cases.


As a software engineer: screw software-based solutions. Too hard to communicate to people, too easily compromised without notice, just blegh for things like this.

I remember the Mueller report being printed out, inked over, and then scanned before exported as PDF just to make sure there's no software shenanigans. I really like this idea.

If you wanted to implement that in the field, you could purchase a Polaroid camera, ink over faces manually, and then use your iPhone and take a picture of that picture and destroy the film afterwards.


This strikes me as ridiculously paranoid. Are you worried that a JPG/PNG contains the original non-blurred picture or something?

Nevermind the fact that in your examples, the physical originals can be stolen before you have a chance to redact/blur them, or your blurring done by hand isn't good enough and you can get the original by increasing contrast or whatever.


...isn't this whole discussion revolving around events where "ridiculous paranoia" was realized? No, I don't think I'm being too paranoid. Even if Signal is open source, it means smack if you don't know what's actually running on the servers, or what's in each AppImage and running on your phone.

If you have your servers and employees where the government can reach you, you can be compromised, because ethics and morality go out the window when it's about your safety and that of those you love.

Analog is always safest, because it's what the world is grounded in. If you don't like inking over an image, then burn the faces of it using a blowtorch, or if you're worried the ink is still there, you can stamp out the faces using a hole punch.


You can be identified by gait alone.


Unreliably, and only if they have a clear view of your whole body, which isn't likely in crowds.


I disagree... this is an area I research, working on a surveillance system. But even so, if gait alone were not enough, modern recognition software can easily single out a subject in a crowd with just seconds of footage, and track said subject through thousands of cameras throughout the city. Footage will be plenty. At some point during tracking the subject is likely to reveal his face, too, or other critical information. If your voice is picked up, it too will be used for positive identification. When you add in the people in close proximity to the subject, things get even easier: recognize one of the collaborators the target subject affiliates with, and identification is often a simple narrowing scan of amassed OSINT away, done in real time, of course. Or simply track the subject to an address, maybe even his home, and swoop in.

I also want to clarify what gait recognition is, since a common misconception among those unfamiliar with it is that it is limited to analyzing how you walk. It is not. Gait recognition factors include: height, weight, build and proportions, sex, age, clothes (their type, shape, and colors: dress, shirt, etc.), emotions displayed, facial tics, and unique mannerisms. The analysis of your actual walk is incredibly deep and consists of hundreds of variables, too many to list here (I might blog about it if there is interest), but a few examples: cadence, the angles of just about anything you can imagine measuring, the spacing between feet and knees, arm swing distance, etc.

For anyone familiar with Haar-like features, it should be easy enough to understand that with enough features within threshold you can ID just about anything.
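For a concrete taste of how accessible this tooling is, here is face detection (detection, not identification) with OpenCV's stock Haar cascade; the filenames are made up:

    import cv2

    # OpenCV ships a pre-trained frontal-face Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("crowd.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Slide Haar-like feature windows over the image at multiple
    # scales; minNeighbors trades false positives against misses.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=5)

    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imwrite("crowd_detected.jpg", img)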

This is all yesterday’s tech, by the way.

My point, be very cautious of attending anything that might destroy your future. Do not think a mask or blurring protects your identity, that is extremely naive.


> My point, be very cautious of attending anything that might destroy your future. Do not think a mask or blurring protects your identity, that is extremely naive.

If you're afraid to be seen in public, you have no future to destroy. It has always been possible to identify people, given sufficient diligence and patience. Given that you're "working on a surveillance system", your post reads as little more than an attempt to scare people away from participating in political activism.


I am talking about participating in unlawful activities. I do not think most people need to be afraid of being seen in public.

The system I am working on is not for law enforcement.

I respect people standing up for what they believe in. I do not respect people destroying the property of others.

I see no harm in «educating» people. Even on HN, I think there are few who fully understand modern surveillance capabilities. Knowledge is power; I believe information wants to be free.


No, you talked about attending an event - I quoted you to make the context clear. It is simply untrue to say you were talking about participating in unlawful activities.


«anything that might destroy your future»

Did I mention a specific event? No. Stop bickering, it is childish.


Thanks for the info, but why're you working on a surveillance system?


from a still photo?


wouldn't it be easier and more secure to put a noise-filled rectangle over the faces?


You can, but it looks bad and is distracting; a good blur doesn't distract from the rest of the photo, and if it looks good enough, people are more likely to actually use it, which also matters. You can also build a blur that discards enough information that it's not reversible, and it /looks/ like they did that. Signal has been pretty thoughtful about security so far, so I doubt they missed the research on simple blurs being insufficient to defeat facial recognition.
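For what it's worth, an irreversible blur is easy to construct: downscale the region until almost nothing survives, then scale it back up. A sketch (OpenCV; the filename and face coordinates are made up):

    import cv2

    img = cv2.imread("photo.jpg")
    x, y, w, h = 100, 50, 80, 80          # assumed face bounding box

    face = img[y:y + h, x:x + w]
    # Everything the final image can retain about the face is in this
    # 4x4 intermediate; the upscale adds no information back.
    tiny = cv2.resize(face, (4, 4), interpolation=cv2.INTER_AREA)
    img[y:y + h, x:x + w] = cv2.resize(
        tiny, (w, h), interpolation=cv2.INTER_NEAREST)
    cv2.imwrite("photo_blurred.jpg", img)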


You can always apply a blur over the noise afterwards.

It took me less than a minute to do this with Gimp (and I'm very bad at Gimp).

It's a simple median blur over random noise.

https://i.imgur.com/f81JIRP.png
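The same recipe in code, for anyone who would rather script it than use Gimp (a sketch; the filename and coordinates are made up):

    import numpy as np
    import cv2

    img = cv2.imread("photo.jpg")
    x, y, w, h = 100, 50, 80, 80          # assumed region to redact

    # Overwrite the region with random noise: the original pixels are
    # gone entirely, so there is nothing left to "deblur".
    img[y:y + h, x:x + w] = np.random.randint(
        0, 256, size=(h, w, 3), dtype=np.uint8)

    # The median blur just softens the noise so the patch is less
    # jarring; it adds nothing recoverable.
    patch = np.ascontiguousarray(img[y:y + h, x:x + w])
    img[y:y + h, x:x + w] = cv2.medianBlur(patch, 9)
    cv2.imwrite("photo_redacted.jpg", img)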


Yes.


I don't understand the need for this. There is nothing criminal or embarrassing about being in public or participating in a peaceful protest. Why is this feature needed?


Because people fear retaliation from both the cops and their fan club. We’re talking about the police that (in my city) flipped, ransacked, and destroyed tables set out by volunteers to give protesters food, water, first-aid, and sunscreen.

The last thing you want is to find photos or videos of yourself on a right-wing YT channel, because you will get doxxed, harassed, and threatened.


What kinds of people get doxxed? Your average protester in a march, or the independents who go off script and attack bystanders and observers?

In the cases I recall, like the bike lock incident in Berkeley, it was the extremely violent who got doxxed on the chans, not your average protester who isn't smashing things.


Ahh yes, only the people that "deserve it" are subject to extrajudicial threats of violence by the police and lunatics on the internet with lots of guns and too much free time.

Don't play the game of trying to shift the focus onto what the victim did to "deserve" fearing for their life. If someone, anyone at all, is threatened or harassed, they are a victim. They can also be a shitty person, but the two don't cancel each other out.

The truth is that who gets harassed and doxxed is fairly arbitrary and has more to do with whatever unlucky soul the host decides to pick on that day than with any kind of rational process. Trying to figure out how internet bullies choose their targets won't get you a satisfying answer beyond "people who look like an easy target to make fun of."


Not the US, but I know someone whose name ended up on a list being spread around among right-wing groups as a "left activist" because he spent a lot of time on FB replying to anti-refugee/anti-Muslim comments, trying to educate the posters.

Imagine having your face online, plus the resources of the police...


What country did that happen in? Germany? Did anything happen to that person, besides being on that list?


"nothing" happened, if you consider worrying that some nutjob Nazis might show up at your door or jump you when you walk to the shops, so that you have to look over your shoulders and be paranoid, as "nothing"..


At the risk of veering slightly off-topic, I really dislike the modern internet culture that deems it perfectly acceptable to post people's faces on public websites for all to see without their explicit consent. I'd hate to find myself at the top of the Reddit frontpage or in the latest viral video, even if I hadn't done anything particularly embarrassing.

Although I suppose that if you're participating in a protest it's not really the same thing; the whole point is to be seen, after all. And Signal is generally used for private messaging, so it's less of an issue. So overall I guess I agree with you; I imagine the Signal devs feel strongly about current events and wanted to do something to help.


> There is nothing criminal or embarrassing about being in public or participating in a peaceful protest.

Then why are they beaten up, gassed, and arrested?


how else am i going to get to my local church for a bible photo op?


Why is encryption needed? Or privacy in general? If you have nothing to hide, you have nothing to fear.

/s


There is no purpose for this. I was in the protests and amid the looting in LA. You're being photographed from dozens of different angles.

I love Signal, but it's so clunky and broken. Telegram is much more fluid, despite being less secure.

And the Mayors in LA let the looting happen purposely.


I wouldn’t be overly surprised if protests on the right (return-to-work, for example) that resulted in violence ended up prompting calls to pull this feature. Maybe I’m too cynical, given the politics of speech over the last few years.


Retaliation, as happened to HK protesters.


what about videos?


why not use a black square?


Cool! Now stop with the forced contact discovery.


May I ask you to elaborate? AFAIK the only thing they are leaking about you is "is this phone number using Signal?". A single bit of information.


Not the OP, but from my perspective, encryption is helpful, but a good portion of security is anonymity, and Signal requires that you use and leak personally identifiable information to even start using it.

It also informs you when people in your contact list are using Signal. It's probably not scanning through all of the phone numbers in Signal's database locally, so it is exfiltrating your contact list as well, exposing your network.

Personally, I'd prefer a model where I am not required to place even that much trust in the messaging provider.


> so it is exfiltrating your contact list as well, exposing your network.

They claim they do that in a privacy-preserving way with crypto magic, but it's the inform-people-when-you-start-using-Signal part that is the problem.
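The "crypto magic" caveat is warranted: plain hashing of phone numbers would not be privacy-preserving, because the number space is small enough to brute-force, which is why Signal moved contact discovery into SGX enclaves. An illustrative sketch, not Signal's actual protocol:

    import hashlib

    # A naively "anonymized" upload: the SHA-256 of a contact's number.
    target = hashlib.sha256(b"+15551234567").hexdigest()

    # It falls to simple enumeration; a regional block of numbers is
    # only millions of candidates.
    for n in range(5551230000, 5551240000):
        candidate = f"+1{n}".encode()
        if hashlib.sha256(candidate).hexdigest() == target:
            print("recovered:", candidate.decode())
            break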


The fact that I've started using Signal might not be information I wish to share with other people who also use Signal and have my contact details.


One of the best features of Signal, and one that massively helps adoption among the less tech-savvy crowd, is that you can set it as the default SMS app on Android; it then uses Signal messaging for contacts who have Signal.

If you can't tell if a contact has Signal, it would have to default to SMS - and when sending a Signal message (to either a phone number, or in the future, a non-phone identifier), there'd be no way to tell if you're sending it to someone with Signal, or sending it into the void.

Maybe that's a trade-off you'd be willing to make, I don't think it's cut-and-dry though.


Not saying it is cut-and-dry, but at the moment users don't get to make that tradeoff for themselves - Signal made it on their behalf, when it could have let users choose on activation which of their contacts they wish to be discoverable to.


At some point we have to assume this is about defending people who are committing crimes. Nice to see that the radicalisation caused by left-wing social media is finally reaching its conclusion.


Why would peaceful demonstrators need to hide their identity?

I have been to numerous peaceful protests in the US, even been attacked by observers, and have never had to hide my identity.

Additionally, in a large crowd where most will not hide their identities, this app is useless.

The only use case I can imagine is one-to-many communication likely to be frowned on by authorities, which sounds like the coordination of illegal activity, such as violence and looting.

I wonder if any website where such techniques are popularized would consequently be considered an accessory to whatever illegal activity is being coordinated?

And even if not, as owner of such a platform, it would not rest easy on my conscience to know my site is being used to help coordinate activity that will hurt and harm a great many innocent people.


Even peaceful protesters get attacked at times. Just last weekend a car drove into peaceful protesters twice in Portland, OR. A few months ago, someone who regularly organized counter-protests against heavily armed white supremacist marches was run over by a car and died as he left a pub known to be frequented by leftists.

Look at all the cases of unprovoked, retaliatory police violence over the last week.

I understand why people are scared and want to stay anonymous. The US might not be run by the Nazi party or the CCP yet, but do I want to bet my safety that it won't be within the next ten years or so? Especially given the trend of the last few years.


Devil's advocate here but if I were a Nazi and wanted to peacefully protest I'd hide my face. If I were protesting for any socially unacceptable fringe group I'd rather hide my face.


that is precisely the sort of group i was protesting with, hence why i was attacked, and i had no need to hide my face because we were not doing anything illegal

and the US is not nazi germany or the ccp. if it were, face blur filters would be the least of your concerns. this only makes sense in the context of conducting illegal activity in a lawful democracy


Only if you think the lawful democracy is perfectly implemented, and we know that's not true.


no place is perfect, but if we compare to, say, the ccp, the us is still orders of magnitude better


"Over there is worse" is not equivalent to "over here is safe."


"over here is not perfect" is not equivalent to "over here is not safe" :)

i just think we need to look at what we got compared to most places and times, and not be too quick to throw out the baby with the bathwater


> "over here is not perfect" is not equivalent to "over here is not safe"

Yes, it is.


I doubt it. You seriously think police will go out of their way to look through these photos and arrest peaceful protesters?

On the other hand, in China, merely having Signal or the like on your phone is enough to earn a stay in a concentration camp and some involuntary organ donation before being disappeared for good.

I would say there is at least a slight (very slight, mind you ;) difference between the two situations.


> You seriously think police will go out of their way to look through these photos and arrest peaceful protesters?

Given everything else they seem to be getting up to, why take the risk? Especially when Facebook will do the hard job of tagging folks for them. I certainly know of police keeping their own photographic records of peaceful protestors, so why contribute to the problem?

Also, why assume it's only the police a protestor might be worried about?

> On the other hand, in China

Don't care. Totally irrelevant. This is not a comparative exercise, and you can stop using it as a cheap deflection now.


if you have clear proof the police do this, you have a lucrative lawsuit on your hands :)


Here's a US law enforcement trainer, writing about what law enforcement agencies need to do around peaceful protests:

> Law enforcement officers must monitor peaceful protests to identify individuals who might do harm and incite violence. Such individuals should be detained, isolated or interviewed to determine if they are a threat to the peaceful assembly.

From https://inpublicsafety.com/2016/07/preparing-for-protests-ci....

Here's one UK police branch that does it in the open: https://en.wikipedia.org/wiki/Forward_intelligence_team. It would be beyond naive to assume that police in the US don't do the same.

All of this is a side-show, though. If I take a photo at a protest, how do I know what harm would come from publishing any faces that I happen to capture? You seem to be making the argument that I, as a private citizen, shouldn't have a tool available to ensure that I'm not doing harm. Who does that serve?

The fundamental core of your argument seems to be that if people have nothing to hide, they have nothing to fear. That is, and always was, bullshit.


What's wrong with the argument? At any rate, it is all beside the point. The only feasible use case for this Signal face blurring is coordinating illegal activity. Any other use doesn't make sense, for the reasons I've given a couple of times.



