Pre-crime facial recognition surveillance is authoritarianism. Post-crime facial recognition surveillance is investigative process. The cat is out of the bag; the only thing that prevents its inevitable abuse is a strict legal framework for its implementation.
There's really no chance of that happening in the USA within a generation. The US federal government has decided that it alone is trustworthy, and that everyone and everything is a potential threat, and that it is entitled to unlimited and unrestricted surveillance of the entire country (and several others), regardless of what is or is not written into law.
Snowden even blew the lid off of it, and nothing changed. That's how you know it's permanent.
When did we decide that this should have a low barrier to entry? Is there any reason to think increased competition between surveillance providers will lead to more ethical surveillance?
This is not a market problem at all, I don't see why you'd use market-brain constructs like monopoly to engage with it.
One provider or thousands, the problem is the social dynamics and the power structures, not a product-consumer relationship.
"When did we decide that this should have a low barrier to entry..."
That's not the argument I took away from that. The poster includes a summation, "it's complex", and is noting that simply adding regulation that only increases the cost of compliance may just enable the formation of a rent-seeking monopoly, and does not necessarily result in a better outcome for citizens. The poster could advocate for a ban (or not advocate for anything) rather than the deregulation implied.
The problem with a strict legal framework for implementation is that it takes one person just a few days to get a face recognition system working using generic deep learning frameworks.
The laws can be as strict as you like, but it's like introducing a law saying you can't watch nude cartoons at home. Even if you somehow eliminated all the existing ones, it's just a pen and paper away.
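For a sense of how little machinery is involved: once a pretrained network turns a face image into a fixed-length embedding, the matching step is essentially a one-line distance check. A toy sketch with NumPy, using seeded random vectors in place of real CNN embeddings (a real system would get these from something like a FaceNet-style model):

```python
import numpy as np

# Stand-in for a pretrained embedding network: real systems map a face
# image to a ~128-d vector such that the same person lands nearby.
rng = np.random.default_rng(0)
alice_enroll = rng.normal(size=128)
alice_probe = alice_enroll + rng.normal(scale=0.05, size=128)  # same face, new photo
bob_probe = rng.normal(size=128)                               # a different face

def same_person(a, b, threshold=0.6):
    """1:1 verification: accept if the normalized embeddings are close."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return np.linalg.norm(a - b) < threshold

print(same_person(alice_enroll, alice_probe))  # True
print(same_person(alice_enroll, bob_probe))    # False
```

Everything hard lives inside the pretrained embedding model, and those are freely downloadable, which is the point: the matching logic itself is trivial to reproduce.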
Is that really a problem from a legal perspective?
There are plenty of things that are easy to do but have strict legal frameworks around them. Quick examples include copyright laws, use of force/violence and driving rules.
And how well are those laws preventing these actions? Long term, there are few things more damaging to a society's justice system than widespread breaking of laws. Let it go on long enough, and people stop taking every law seriously, while at the same time political reasons for prosecution start dominating.
Not really. I don't mind there being a near-monopoly on facial recognition software if it takes a large company to handle the needed guidelines and potential liabilities.
You have presupposed that said monopoly will not have already bought the necessary legislation required to maintain its position through regulatory capture. The current system positively rewards monopolies in most sectors.
If the bureaucratic obstacles for using a technology are so high that only very well funded companies can overcome them, then those companies effectively form a cartel with the state in the use and abuse of those technologies.
If citizens had stronger protections against the application of the third-party doctrine in the US, we might have less to fear. Currently though, the greatest oppressor of personal freedoms through technology is the United States Government. They can and will use "guidelines and potential liabilities" to weaponize new technology while making it next to impossible to counter the threat.
It's also trivially easy to regulate. Mandate that all cameras recording public spaces record to encrypted storage. The key is in possession of the judicial system. It can only be decrypted and examined with a warrant, with probable cause, as you would need for wiretapping or raiding a home or searching someone's computer.
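The escrow scheme described here can be sketched with envelope encryption: the camera holds only the court's public key, and footage can only be read back by whoever holds the private key. A minimal illustration, assuming the third-party `cryptography` package (all names here are invented for the example):

```python
# Illustrative court-escrowed camera storage. The camera is provisioned
# with the court's public key only; decryption requires the private key,
# i.e. the step that would be gated behind a warrant.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Key pair generated and held by the judicial system.
court_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
court_public = court_private.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def camera_encrypt(segment: bytes) -> tuple[bytes, bytes]:
    """Envelope encryption: a fresh symmetric key per segment, wrapped
    with the court's public key. The camera never stores plaintext."""
    data_key = Fernet.generate_key()
    return Fernet(data_key).encrypt(segment), court_public.encrypt(data_key, OAEP)

def court_decrypt(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    """Possible only for the private-key holder."""
    data_key = court_private.decrypt(wrapped_key, OAEP)
    return Fernet(data_key).decrypt(ciphertext)

ct, wrapped = camera_encrypt(b"frame 0001")
print(court_decrypt(ct, wrapped))  # b'frame 0001'
```

The per-segment key means a warrant can unlock a specific time window rather than the camera's entire history.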
> The key is in possession of the judicial system. It can only be decrypted and examined with a warrant, with probable cause, as you would need for wiretapping
That's how you end up with the NSA and CIA being inside every single camera in the country :)
For practical purposes, yes. The leaks showed how NSA has built a multi-layered data harvesting-archiving-searching machine that works not only through compromised hardware (including cameras), but also big company infrastructures, phone/email/SMS/internet browsing records and content, fiber optic cable tapping, hacking, installing bugs, spying and more.
If the camera itself is not bugged, it likely is harvested at another step at some point and nothing's more clear that if NSA wants to see what a camera sees, they are able to tap it if needed. Sure, untapped cameras exist, but it doesn't really make a practical difference. The NSA will still have your information if it wants, and likely already has most of it.
I don't know how long the records persist, but presumably for things like video surveillance footage the decay period would be quite fast. For full-text contents, less speedy but still fast. I can only truly envision long-term collection and storage of metadata - and even then, it's a big question how much is feasible and reasonable to store indefinitely.
It's likely that for your security camera footage to be accessed you would have to be targeted. It's likely that such targeting would only affect footage in the future or very recent past. But I'd grant that pervasive attackers can probably capture exponentially-decaying single video frames from millions of cameras if they were appropriately motivated, and those single frames could go back quite far.
The real problem here is the existence of such a system creates a kind of panopticon [0], with chilling effects not only on activity and discussion contrarian to the current administration, but also any future administration that may have access to electronic surveillance records. Without knowing how long records are kept, it is quite plausible that a future authoritarian state will misuse past records to target civilians.
The people don't care that they're being watched. That's the main lesson I took from the Snowden leaks. I think most people might even like it, because they think it means there will be less crime.
Hardly sounds trivial, and how would you ever enforce or manage that?
Also, that is totally counter to the current (US) laws that basically state you can record video (not audio) of any publicly viewable areas freely.
What you propose would require massive changes to essentially any camera that has any part of its view covering an outdoor area, and probably many indoor areas (malls, etc.).
Also, face recognition typically runs on live streams to build indexes, so encrypting storage would not do much. You'd need to implement something like HDCP-style link encryption (what HDMI uses) to control what devices/processors/whatever can connect to a live video stream from a camera, to try and control exactly how the streams are processed.
> Hardly sounds trivial, and how would you ever enforce or manage that?
Spitballing. Enforcement: make distribution of CCTV video that has not been disclosed via court order a criminal offense.
> Also, that is totally counter to the current (US) laws that basically state you can record video (not audio) of any publicly viewable areas freely.
This is irrelevant, because scale changes the nature of things. A single photo of your person is just that. Twenty-four photos per second for every second of your life is total surveillance. The law was clearly about recording usage that does not amount to surveillance.
> Also, face recognition typically runs on live streams to build indexes, so encrypting storage would not do much.
Encrypt the stream on the camera itself. Storage is cheap.
And how does one view the feed from the camera if it's encrypted? That's the whole point of the camera itself.
Anywho, sounds like we're jumping through hoops without really understanding the requirements. Like, what is really bad about recording people in public? What is really bad about performing facial recognition?
But I'd go a step further, what is it that we're trying to prevent from happening by making facial recognition illegal? This is the juicy part and the one where the "problem" becomes wishy-washy. We get reasons like: "Prevent stalking by government officials", or "stop widespread surveillance from...[something]". All present their own challenges and implied slippery-slopes, but all have different ways of being solved without necessarily making public recording and facial recognition blanketly illegal.
>Spitballing. Enforcement: make distribution of CCTV video undisclosed via court order criminal offense.
The same way gun ownership laws have curbed the illegal gun ownership/use problem? Or, do we just keep stacking laws endlessly hoping one of them actually works?
>24 photos per second for every second of your life is a total surveillance.
Sure, but we're nowhere near that point, nor likely to get there. Also, almost nothing records at 24fps; 15fps is more common.
>Encrypt the stream on the camera itself. Storage is cheap.
In most cases the camera and the recording/viewing software come from different companies, so you'd still need some kind of key management system.
> The same way gun ownership laws have curbed the illegal gun ownership/use problem? Or, do we just keep stacking laws endlessly hoping one of them actually works?
There are legit arguments to be had about personal freedom here, but it's plainly untrue that regulation intrinsically can't work. It works for many, many things, and works for gun ownership in basically every other developed nation in the world.
Where I am the law is very strict although somewhat arbitrary. I can shoot a video of a public place. I can publish the material (although commercial use requires sign off naturally).
But here is the interesting bit: I can’t set up a fixed camera to record a public space without permission. And I won’t get that permission. Meaning basically that recording in public spaces is only done by humans, limiting the scope of how widespread it can be. I like this.
It also means, that it’s not allowed to put up a camera on my porch that covers the street in front of my house. I suspect a lot of ring/nest users are in violation.
Sounds like Japan. But in most countries that is not the case. People want to protect their property. Also, with hidden cameras it would be trivial to record and not be detected, so in your country those who want to record are already doing it, just quietly. So again, it is a law that only affects non-criminals.
Which part of the law? That I can’t set up a camera on my house to film a street crossing?
That’s almost impossible to get permission for. A private citizen can not get permission to film a Street corner. The difficulty in getting permission obviously isn’t written in the legal text, but individuals don’t get permission and businesses only very rarely do.
Of course it is. You can't put up a camera on your porch and shoot your driveway and part of the street. To follow the law you have to point it so it's only covering your driveway or lawn and no public space. It's pretty simple: restrict the field of view with a screen in front, or a bit of tape over the camera if needed.
Does everyone do that? Probably not. But that doesn’t change how the law is written.
If someone throws a rock through a window, why shouldn't it be legal to run the surveillance through facial recognition? A crime was committed, we have a pic of the perpetrator.
Facial recognition should never be considered evidence. It is a clue: a way to sift through tens of thousands of possible entries and narrow them down to 5 for a human to review. We need the legal framework to ensure it's not used as evidence by itself, but as an investigatory tool it is very useful.
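The "narrow tens of thousands down to five" step is essentially a k-nearest-neighbour lookup over face embeddings. A toy sketch in NumPy, with seeded random vectors standing in for real embeddings from a trained model:

```python
import numpy as np

rng = np.random.default_rng(42)
N, D = 50_000, 128
gallery = rng.normal(size=(N, D))                    # enrolled face embeddings
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# Probe: a noisy re-observation of entry 12345 (think: a CCTV still).
probe = gallery[12345] + rng.normal(scale=0.05, size=D)
probe /= np.linalg.norm(probe)

def top_candidates(probe, gallery, k=5):
    """Return the k gallery indices closest to the probe, best first.
    These are leads for a human to review, not a verdict."""
    dists = np.linalg.norm(gallery - probe, axis=1)
    return np.argsort(dists)[:k]

leads = top_candidates(probe, gallery)
print(leads[0])  # 12345: the true match surfaces as the top lead
```

The output is a shortlist, which is exactly the "clue, not evidence" role: the human review step is where the decision actually happens.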
So is eyewitness testimony. That's why we have trials. The problem in a case like that is not that the technology is used; it's when people assume the technology is infallible, when there's plenty of evidence to the contrary (just like there's plenty of evidence that eyewitness testimony is not infallible and is often subject to a bunch of problems).
If facial recognition is perceived as low accuracy, but can yield some leads for investigators that can be independently corroborated, that seems like a fine use of the technology. If we're worried about the public assuming it's more accurate than it is when used as evidence in trials, we can either pass some laws about its use as trial evidence (which is not the same as using it as a lead), or train defense attorneys and the public (often done through TV...) that its use in the role of proving guilt is extremely limited because of its false positive rate.
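To see why the false positive rate dominates at population scale, a quick base-rate calculation (all numbers invented for illustration):

```python
# Back-of-the-envelope base-rate check: even a good matcher yields
# mostly false leads when the target group is a tiny fraction of scans.
population = 1_000_000        # faces scanned
true_targets = 10             # actual suspects in that population
false_positive_rate = 0.001   # 0.1% of innocents incorrectly flagged
true_positive_rate = 0.99     # 99% of suspects correctly flagged

false_hits = (population - true_targets) * false_positive_rate
true_hits = true_targets * true_positive_rate
precision = true_hits / (true_hits + false_hits)
print(round(precision, 3))  # 0.01: ~99% of flagged people are innocent
```

This is why the "lead, not proof" framing matters: with these numbers a flag on its own is almost never the suspect, even though the system catches nearly every real one.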
Everything is flawed and leads to false accusations, even something ridiculously black and white like electronic bank records. Witnesses, intoxication, bias, outright lying, guilt, etc. All evidence has flaws and potential bias. One need only look at all the false imprisonments that have happened over the years due to various bits of "evidence" to see that.
Instead, we should take the opposite approach: invest heavily in this tech, and lightly regulate the glaringly bad aspects of it. E.g. for facial recognition, we can put down laws that penalize unfair punishment of suspects. Or if we find employers misappropriating facial recognition developed to record hours worked on the factory floor, using it to punish workers for chatting or going to the bathroom too many times, and we don't like that, then we regulate that.
We really went down the wrong path here somewhere, applying a black and white approach to things instead of just riding the in-between strategically and fairly. That's how we move forward as a society instead of legislating ourselves into irrelevance.
The network effects of large databases lead to an imbalance of power between surveillance and sousveillance: a personal face recognition DB will never match a government one.
Obviously, a government has more power and more means than an individual.
But when it comes to databases, individuals have access to ridiculously large databases too. With Facebook and Google you can peek into the private life of most people; even the police do it, because there is more data there than in their own files. And I am just talking about ordinary access, not what Facebook and Google can do as a company.
Add a bit of social engineering and crowdsourcing and no one is safe if enough people want to find them. There have been some pretty impressive "doxing" in the past.
And staying off social media is pretty hard if you want to live a normal life. You may not have a Facebook account but your friends do, you may find your picture on your company website, maybe even in a local newspaper. And officials are no different, if they want to live normally, they are going to be exposed.
> CBP says it has processed more than 100 million travelers using face recognition and prevented more than 1,000 “imposters” from entering the US at air and land borders.
I wonder how many of those 1000 were incorrectly bounced back
Your comment seems unrelated to the comment you're responding to. That comment is about automatic rejection, not selection.
But speaking of selection, there is no perfect process in any system involving humans, so whether it's a flawed facial recognition algorithm, or a biased, tired, overworked human doing it, having a manual review step and post-hoc analysis is pretty useful.
My point is that by the time a biased selection happens, much of the damage to the overall process is already irreversible. If the algorithm is biased (and I'm not saying it is, just that it's valid to ask the question), making the final decision manual is more of a fig leaf than an actual fix.
At least at airports, CBP never bounces anyone back at the first instance. At the immigration counters, the dude either lets you in or sends you to secondary.
Note the immature (on purpose?) nature of some of these bans: some ban the local government only, some ban only one city of a multi-city district, and nearly all of them leave the option of hiring a private agency (with zero public oversight, because they are private) to do the FR for them. And that private agency happens to be owned by the step-son of one of the lawmakers. This type of legal nonsense is everywhere they are trying to regulate facial recognition.
The annoying part is that there's no opt out. And when there is, it's obscure (you can express that to the staff), and makes you look hella suspicious.
We banned mines, too, and it's been fairly successful. Not 100%, sure, but just imagine a world where mines weren't banned, and what Europe and the US would look like, and you know the ban was very effective.
That's a fact the US should be deeply ashamed of. I had no idea how bad the land mine and unexploded bomb situation is until I visited Cambodia and Laos where even 50 years after the war people are still getting their limbs blown off.
Read up on the difference between persistent and non-persistent land mines [1]. The US only uses non-persistent mines, which usually last 24-72 hours. The "50 years later" issue doesn't apply, since persistent mines haven't been commonplace for decades and were officially outlawed in 2004 [2]. The only place the US allows persistent mines is at the DMZ. There's also a distinction between anti-personnel mines and anti-vehicle mines, which the US draws more clearly than others. The Ottawa Treaty only bans anti-personnel mines [3], so the US is one of few countries that has a policy banning the use of persistent anti-vehicle mines.
The Ottawa Treaty only bans anti-personnel mines, not anti-vehicle or command-activated ones. The US bans persistent anti-personnel mines, but not mines that only last a day or so. This argument gets more convoluted once you read up on it. As I mentioned in spaetzleleeser's comment below, the US is one of a few countries that has a policy banning persistent anti-vehicle mines (they ban all persistent mines in general), so in some ways it is more strict than others.
Any truck or bus exerts higher ground pressure than a tank, because a wheel's footprint is much smaller than that of tank tracks.
A very quick search also shows that deaths from anti-vehicle mines number in the thousands each year. Even on the UN's own website [1] we read:
> The Secretary-General calls on all countries to also regulate the use of anti-vehicle landmines. Such weapons continue to cause many casualties, often civilian. They restrict the movement of people and humanitarian aid, make land unsuitable for cultivation, and deny citizens access to water, food, care and trade.
I believe we can safely assume that anti-vehicle landmines DO pose a deadly threat to civilians.
London is often called the CCTV capital of the world, and for good reason. The city is home to hundreds of thousands of CCTV cameras, and the average Londoner is caught on CCTV 300 times a day.
Facial recognition software is being integrated now into this network.
No one seems to mind at all. It seems that the same is in most of the rest of the world.
Not everywhere though, in Spain it's illegal to film the street. "In Spain, the law protects the rights of citizens to use public spaces, which is free from interference (Clavell et al., 2012)"
If you want to live in a functioning, non-Orwellian society, move here.
The other easier option (although doesn't solve the sunshine problem..) is moving out of cities into rural areas. Very few cameras where we live in Cornwall.
If you do video calls through Teams or similar when working remotely, internet speed would be a big issue, coupled with the abandonment of a lot of rural areas in Spain.
They have a bad economy and were living under a dictatorship in living memory. Those countries can be at their best for a while once they free themselves from dictatorship.
A bad economy is very debatable. It's still a top-15 country by GDP; it virtually matches the economic output of Australia. Pensions are 20 times higher (!) than in the country I'm originally from, which is still in the European Union. So yeah, if you are from the US it might look 'bad', but for about 6 billion people it's a dream place.
But also, they certainly don't work themselves to death. Live to live, not work.
*London is the surveillance capital of the world*
That's only because China does not release much information publicly about their surveillance systems, but many parts of China are most likely far more surveilled than London.
The apartment building I lived in from 2009-2010 had facial recognition.
When you walked through the main lobby, your face appeared on a screen for the inside doorman so he could grant you access to the elevator lobby. It also showed your name, apartment number, and lease end date.
That was over a decade ago. I can't imagine what new tech must look like.
It was suggested in the early 2000s that the security services could track a face moving across London. What do we think, was it possible then? I have to assume that by now they have intent monitoring everywhere and can probably tell when someone is doing things like planning a bombing, etc.
Well, the Independent claimed in 2004 that the average Briton is caught on camera 300 times a day. [0] Since London probably has more than the average number of cameras per area, even if we wind the clock back 3-4 years, I suspect the answer is yes.
The vast majority of cameras in London are private installations, rather than something hooked up to some central system.
There are the ring of steel cameras, but that area hasn't been widened recently. TFL cameras are possibly linked into some security service, and there's a fair few of them, but they're mainly pointed at roads.
Solving real, victim-reported crimes requires effort, rarely scales and pretty much never brings any political clout.
Solving bullshit manufactured crimes such as drug-related ones scales better, doesn't even require a victim to complain and provides seemingly-decent political clout and a veneer of "we're doing something, see?".
When's the last time you heard about stolen bikes or phones being recovered on the BBC? Personally, never. But drug-related stuff is common.
Probably because most of the cameras aren't hooked up to anything centralised, or necessarily even known about. Which necessitates an enormous data-gathering and combing operation to find anything useful, of which there might not actually be anything. Cameras aren't magic.
Because it's apparently taboo and dystopian to perform facial recognition and surveillance to help victims. Most of those cameras and their recordings go into the void instead of some sort of central database that can be used for tracking criminals.
"Intent monitoring" is fictional and does not exist beyond scifi stories and the journalist written articles selling fear about facial recognition. I work in the industry, there is no such thing as "intent monitoring".
If they ban facial recognition, then companies will start using other tech like gait recognition. So do we want to ban facial recognition, or is it something more like the right to not be tracked (if that even is a right)? I think we need to establish what our end goal is, because you know clever companies will find a way around such restrictions and come up with other creative methods. They will use shoe detection instead, or something maybe not as accurate, but enough to get them around the law while still detecting a percentage of criminals. Or jewelry detection, or tattoos, or whatever isn't banned by law.
This is the same issue I have with 'right to repair' laws.
When you create a tangle of specific laws banning or regulating specific things all that you really have done is created a world where he who has the most lawyers wins.
What is the end goal with banning face recognition? I think this is all about asymmetry of information.
Seems rather obvious to me that will remain the case.
Both encryption and face recognition software rely on math and programming simple enough that a single person can make a functional implementation from scratch in less than a year. Using existing open source frameworks, anyone can put one together in a few days, though collecting a face database takes slightly longer; in most cases anyone with an existing app of some kind can collect the data largely automatically. Arguably I would not trust an encryption framework put together like that, but for face recognition I would.
So the question of whether governments will be able to keep non-sanctioned face recognition systems out of the public sphere is rather similar to the question of whether they are able to prevent the use of non-sanctioned encryption.
TIL, and this sounds actually rather useful.
Do you have information on which ones would support that? Especially in the range a curious hacker (who else is HN for?) would be interested in.
It's especially useful in crowd gatherings, group photos and such. I think it's rather designed for concerts, weddings, etc.
My A7-III has it. Since it uses Eye-AF and Face-AF pipelines (it generates a face model from the photo you take for that feature only), it needs AI autofocus stack inside the camera. A7-C should have it, maybe latest APS-C lines (6600 for example) have it.
Sony's AF tech is insane. Point to a person with sunglasses, and it marks the eye instantaneously. What the actual sorcery?
Well, the goal is to recognize a face, not to match a specific face to a specific person. So you can train an AF model to center focus on the center of the sunglasses. Even with the 'track these specific faces' features, since you're still not trying to uniquely identify someone, the model can latch on to generic features like "red sunglasses, brown skin, surgical mask & this general shape".
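As a toy illustration of latching onto generic appearance rather than identity, here is a crude color-histogram matcher in NumPy (nothing to do with Sony's actual pipeline; all data here is synthetic):

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Crude appearance descriptor: per-channel histogram, normalized."""
    hist = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
# Toy "subject": a reddish 16x16 RGB patch; the distractor is bluish.
subject = np.stack([rng.integers(180, 255, (16, 16)),   # R high
                    rng.integers(0, 60, (16, 16)),      # G low
                    rng.integers(0, 60, (16, 16))], axis=-1)
distractor = subject[..., ::-1]                          # channels swapped

# Next frame: the subject reappears with a little sensor noise.
reappeared = np.clip(subject + rng.integers(-10, 10, subject.shape), 0, 255)

ref = color_histogram(subject)
d_same = np.abs(ref - color_histogram(reappeared)).sum()
d_other = np.abs(ref - color_histogram(distractor)).sum()
assert d_same < d_other  # the generic descriptor re-finds the subject
```

No notion of identity anywhere; the tracker just re-finds "the reddish thing", which is the point of the comment above.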
That depends what the law says. Some of the pressure here is to stop the police themselves from using the tech, I wouldn’t know if any given law prevents personal use.
A government ban means banned for use by government officials/entities. So it doesn't really apply to the general public or corporations (unless the corporation specifically has a contract with the same requirements to do work on behalf of the government).