Getting an ID card checked by security at the door of a secure establishment allows the people inside that building to know that the holder truly is who they say they are. Inside that space the person has access to confidential information and they do not need security to constantly verify their credentials.
...and yet ID cards can be copied and faked, so why do we do this?
This model is how a fingerprint can be used as a shortcut to deliver certain privileges. The user must first pass security by entering their password, and numerous safety triggers later require that password again. That means once a person has been validated, a stand-in can be acceptable rather than fully re-evaluating them each and every time.
Back to fingerprints: copying a fingerprint has numerous barriers that these exploits frequently ignore. First it needs to be the correct finger, then it must be clear and complete enough to copy, and finally it must be used at a time when the device will accept it. While such barriers may be insufficient for a secure environment, this approach still provides more security than, for example, a person repeatedly entering a PIN into their phone throughout the day - something that is both easily observed and remembered (and worse still if it's a gesture passcode).
To relegate fingerprints as only this or that throws the baby out with the bathwater - appropriate rules and context can make it a useful security improvement over the status quo. That doesn't mean it's perfect or that it has to be.
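A minimal sketch of the kind of "password first, fingerprint as a stand-in, with triggers that demand the password again" policy described above. The trigger names and thresholds are invented for illustration and are not any vendor's actual rules.

```python
import time

# Illustrative fallback policy: a fingerprint is accepted only as a stand-in
# for a password entered recently enough; several triggers force a full
# password entry again. Thresholds below are made up for the sketch.
MAX_FAILED_SCANS = 5
PASSWORD_VALID_SECONDS = 48 * 3600  # require the password again after 48 hours

class UnlockPolicy:
    def __init__(self):
        self.last_password_time = None
        self.failed_scans = 0
        self.rebooted = True

    def password_entered(self):
        self.last_password_time = time.time()
        self.failed_scans = 0
        self.rebooted = False

    def fingerprint_failed(self):
        self.failed_scans += 1

    def fingerprint_allowed(self):
        """Return True if a fingerprint alone may unlock the device."""
        if self.rebooted or self.last_password_time is None:
            return False                        # full credentials after boot
        if self.failed_scans >= MAX_FAILED_SCANS:
            return False                        # too many bad scans
        age = time.time() - self.last_password_time
        return age < PASSWORD_VALID_SECONDS     # the stand-in expires eventually
```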
In the virtual threat model, difficulty needs to be insane, since any of 7 billion people can launch automated attacks on my server.
In the physical threat model, difficulty can be moderate, since the only people who can attack are ones physically here. My front door has a pickable lock, and my windows are breakable. My key threat is my crazy stalker ex.
Fingerprints are usually in the latter category, and provide pretty good security.
I've always thought that, but unfortunately the world went in another direction. People always said "something you have and something you know", but now for most cases it's just "something you have - your body". Obviously, if remote mind readers are ever invented, the "something you know" part will also become obsolete, but for now we should stick to it.
I've always disliked this breakdown. My body is something I have -- it's just potentially (not always practically -- see the article) more difficult to clone or otherwise use without my consent than a key fob or something.
Edit: To be clear, I don't think this is an argument for biometrics, but rather an argument against them. They can't complement something I have in a two factor scheme, because my biometrics are something I have.
But it's the parts that are easily forgeable (fingerprints, retinas, etc) that are being relied upon. By "forgeable" I mean "things that someone else can also have by creating copies."
I don't think we have yet good metrics on how to detect specific individuals using a full-body scan. Not to mention the invasiveness of creating your personal initial dataset. Most folks won't stand for it. So right back to parts that are forgeable...
Fingerprints are not usernames. I wish that idea would die but people just love putting things in existing categories so much they keep thinking "fingerprints aren't the same as passwords... so they must be the same as usernames!".
>Fingerprints are usernames, not passwords. Here is an excellent (and timeless) post on this fact
No, that is a complete shitpost that isn't even self-coherent. It literally whines about needing something that can be "independently chosen, changed, and rotated" - which describes usernames, so by that very post's own criterion biometrics can't possibly be usernames! Why is this dumb meme so persistent? Fingerprints are one of many biometrics. They aren't usernames, which aren't an authentication factor at all. They aren't passwords. They aren't tokens. They are their own thing, with their own pluses and minuses as part of a comprehensive response to a given threat scenario. That's it. Trying to shoehorn them into something else is like trying to shoehorn everything into a car analogy.
All security exists solely in the context of an equation of threat scenario (the word "threat" doesn't even appear in that post), defender vs. attacker resources, and the value of what is being defended. Real security must work for actual humans too. For example, rotating passwords every day/week/month is "secure", except that it's also a huge PITA or even outright impossible for many humans, and it defends against what should be a non-existent threat scenario anyway. So the obvious and inevitable result is that everyone starts to use crappy passwords, writes them all down on sticky notes and in text files everywhere, or both. That is not the fault of the users, it's the fault of a shitty system.
Another word that doesn't appear in that post? "Camera". Biometrics is an enormously rich potential field; fingerprints are about the lowest-hanging fruit and in no way represent everything, particularly as we use more and more wearables (there are bits of entropy to be found in your body's cardiac cycle, for example). But even for fingerprints, which really requires fewer attacker resources: reproducing a fingerprint, or having AI go through every networked look-down camera for the obvious pattern of a human pulling out a slab of screen, entering a PIN or passcode into it, and recording that? Are people expected to never unlock a device anywhere but a physically secure area? Because, see above, that is not realistic for real humans and thus a worthless security response.
As is usually the case, the best answer is hybrid, with multiple levels of factor usage to try to combine the strengths of each. And indeed that is the way things are going.
Edit to add: And if I sound irritated about this I am. This is the same kind of user hostile shallow anti-security thinking that brought us things like "security" questions, password rotation policies, lengthy and baroque "must contain 2 caps 1 number 3 special characters but not those special characters and cannot START with a number" password policies, etc. All of which add aggravation and failure points to no good end. Bad security practices affect our entire industry to the detriment of us all, but "bad security" isn't just a technical thing it's a human UX thing.
Bizarrely, my organization limits passwords to a length of 12 characters or shorter. I agree with you, I don't want a password the size of a paragraph, but c'mon... 12 characters?
I think you misread me, or I didn't communicate clearly. By "lengthy" I was referring to the policy, not password length. Indeed, max password length itself is another common bit of foolishness; for sanity reasons it arguably shouldn't be infinite, but ~150 characters should be fine so that people who want a long diceware passphrase can have one. To the extent passwords are used at all, they should be used exclusively as input to a KDF or adaptive hash anyway, so storage-side it should all be normalized regardless of input length.
Ah, gotcha, sorry. "Lengthy (password policies)", not "(lengthy password) policies". I wouldn't call the policies themselves particularly lengthy, though we do have multiple systems with different policies for which we're supposed to use the same password, so there's that -- it's possible to set a password in one place that can't be set in the other. (Would something bad happen if they weren't in sync? I can't see how, other than it wouldn't be clear half the time which password to use.)
Sorry for not being clearer. Really though, the only "password policy" should be "no password reuse/dictionary" (check it against haveibeenpwned.com or the like, there is a nice API), and some minimum decent length. Preferably with a decent user friendly generator option for default suggestions too, and password manager friendly. It's probably not the weakest link at that point. "Multiple systems with different policies for which we're supposed to use the same password" seems like it should just be SSO?
But I recognize that in reality, when using archaic systems at businesses with no budget, sometimes hacks are just the best that can be done, and that's how it is. I mean, obviously best of all is no shared password at all: use a proper key via hardware token instead, with the password/PIN or (gasp :)) biometrics purely something the user uses to activate the token. Unfortunately it'll probably be a while until we get there. But the general use of baroque password policies, particularly when interfacing with the general public, is still an anti-feature for security, one that has finally started to fade away.
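A minimal sketch of the "passwords only as KDF input" point from a couple of comments up, using Python's standard-library scrypt: the stored verifier is a fixed 32 bytes whether the input is 8 characters or a 150-character diceware passphrase, so a server-side maximum length buys nothing. The cost parameters below are illustrative, not a tuning recommendation.

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    """Derive a fixed-size verifier from a password of any length via scrypt."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

# A long diceware-style passphrase still stores as 32 bytes.
passphrase = "correct horse battery staple " * 5
salt, stored = hash_password(passphrase)
print(len(stored), verify_password(passphrase, salt, stored))
```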
Secrecy is only an approximation of difficulty. Given the difficulty, I would estimate it as a two character password. It should be fine for people who have nothing to hide.
> Given the difficulty, I would estimate it as a two character password.
Sorry, but that is _way_ off.
I can run through 2 character passwords by hand in a few hours at most, likely faster. (Assuming a qwerty keyboard, 62 alphanumeric, plus roughly 33 other characters makes for 9025 possible passwords.)
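Spelling out that arithmetic (95 printable ASCII characters is an assumption, and the guess rates are illustrative):

```python
# Search space for a 2-character password drawn from ~95 printable ASCII characters.
alphabet = 95
space = alphabet ** 2                        # 9025 candidates
for guesses_per_second in (0.5, 10, 1_000):  # by hand, scripted with lockouts, unthrottled
    worst_case_hours = space / guesses_per_second / 3600
    print(f"{guesses_per_second:>7} guesses/s -> {worst_case_hours:.2f} h worst case")
```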
To reproduce a fingerprint requires access, money, time, and expertise. It's not _hard_ but it is not trivial either. You need access to a good fingerprint. You need the money to buy the supplies (a laser printer, some acetate, and some wood glue). You need time to capture the fingerprint, refine it in the photo editor of your choice, and then actually turn it into something that scans. And you need to know that this is all actually doable. And then that all assumes it actually works; I can assure you this is not a 100% success rate.
Put another way, if you told me you _personally_ had a two character password on a specific account, I could likely log into it _today_. Conversely, if you told me it also required a fingerprint to log into, I'd be out of luck. I'd have to learn who you are, where you lived, and then concoct a way to capture a clean print.
As others have pointed out, biometrics != password. It's an apples to oranges comparison.
>It should be fine for people who have nothing to hide.
If I'm a company, would I want my employees to give up proprietary data they hold just because they personally "have nothing to hide?" Anyone who thinks that's acceptable is someone who isn't worthy of trust.
Think this is still overestimating the threat. It's kinda like saying you can hack someone's password by watching video of them typing. True, but also non-trivial.
If you're already being personally targeted by an organization professional enough to follow you around, take a photo of your fingerprint on something you touched, then painstakingly reproduce said fingerprint through highly technical means and then gain physical access to your personal device that uses a fingerprint reader to use said fingerprint, you should be aware of your position and have multi-factor authentication set up for everything anyway.
For your average everyday person fingerprint security is fine. The thief who snatches your phone when you step away from your table in the mall food court isn't going to be able to crack it via this method.
>If you're already being personally targeted by an organization professional enough to follow you around, take a photo of your fingerprint on something you touched, then painstakingly reproduce said fingerprint through highly technical means and then gain physical access to your personal device that uses a fingerprint reader to use said fingerprint, you should be aware of your position and have multi-factor authentication set up for everything anyway.
But the whole point is that it's easier than you describe, because people accidentally take photos that include their own fingerprints, and the technical means to reproduce fingerprints are not all that highly technical.
You'd be surprised how low the bar is for "highly technical" among the general populace, particularly those inclined to steal used technology from targets of opportunity. How many people do you know who even own a laser printer, a subscription to photoshop and the skill to use it, and know what an "acetate sheet" is? The vast majority of people, if they have a printer at all, use cheap inkjets, they don't know how to use photoshop and don't care to learn, and they've never heard of acetate sheets.
Could an enterprising criminal master this technique? Sure, but I'm not convinced it's reliable or lucrative enough to make the time/risk investment worth it for someone with that skill-set.
Yep, physical proximity is a huge barrier to any attack, and requiring persistent physical access even more so. If you have a plug in USB keyboard, this sort of quick attack through MitM passthrough is even easier.
However, having some experience with biometric sensors, the False Accept/Reject ratio, both for matching the fingerprint and for detecting "liveness/spoofing", is a BIG DEAL. Matching many prints, or matching against many people, is also MUCH HARDER (combinatorially). At high SNR (more expensive, higher resolution, larger sensor, higher power, longer latency) these problems can be largely mitigated, yielding accurate recognition and very-difficult-to-spoof systems. Those aren't the ones people attack for online fame.
However, when display-integrated, ultra-thin, low-cost, very convenient matching is required... it trades off False Accept/Reject ratios and makes the system significantly (orders of magnitude) less accurate. Unfortunately, it appears the old MacBook Touch Bar-integrated sensor sacrificed significantly in this area.
Time of Flight 3D sensors make spoofing Face ID with easily carried biometrics significantly more challenging (they tend to be head sized).
Agree with your overall post entirely, the thing about physical attacks is they don't scale well. If you're subject to an actual individual threat, it's a whole different and enormously scarier/more challenging threat scenario.
>Think this is still overestimating the threat. It's kinda like saying you can hack someone's password by watching video of them typing. True, but also non-trivial.
Isn't that genuinely getting pretty trivial in public though? And in turn I think that is a real argument for biometrics too. The amount of over-the-shoulder camera surveillance in business and urban areas is pretty scary at this point, as are the concealability and cheapness of even very tiny spy cams. There have been plenty of scandals around it even in things like AirBNBs or hotels, historically from the context of sex, but not a stretch to imagine that passwords could be a much bigger and more lucrative target. And ML/AI is getting ever more sophisticated, and humans entering PINs/passwords is pretty repetitive behavior with a high degree of uniformity in how it's done, at least the device-unlock level. Seems very amenable to highly reliable automated analysis, to the extent I'd be genuinely surprised if that's not secretly deployed already in surveillance states.
I don't enter PINs/passwords in public anymore if I can possibly help it. It just seems scalable in a way that physical attacks aren't.
I'm no guru but the only widespread use of spy cameras for password theft I've heard of is false covers on ATMs to catch PIN codes, which makes sense as an ATM eliminates a lot of variables. (angle camera needs to be at, number/size/position of keys on keyboard, required resolution/range, increased number of victims, etc).
In the wild I imagine it's one of those things that's simple in concept but difficult in execution due to all the edge cases. Even if you get 80% of a password, if you're not at the perfect angle to catch those last few key-strokes you haven't accomplished much.
There is a reasonable chance that the print is available on the very device it is needed to unlock (the phone screen, for example), perhaps enough for a thief who snatches your phone to have a decent chance.
Assuming they have a laser printer, photoshop skills/subscription and know about acetate sheets. I'm willing to bet most phone snatchers don't even have one of those ingredients.
Most phone snatchers aren't after the information on the phone. The point is not that this would not be a targeted attack (it would be), but that it might not require a particular combination of circumstances or set-up to follow one around and snap just-the-right-angle photo.
The huge advantage of biometrics (fingerprints, FaceID, etc.) is the ease with which a user can unlock their phone. A passcode may be better than a fingerprint, but a fingerprint+longer passcode is better than a shorter passcode (or no passcode at all).
Having a 12 character alphanumeric passphrase you enter each time you want to unlock is not something most users want to do.
Only about 49 per cent of the users were setting a passcode, which meant that the remaining 51 per cent were not benefiting from the data protection mechanism. When Apple dug in to understand the reason, the findings revealed that users unlock their devices a lot - on an average about 80 times a day. And about half of its users simply didn't want the inconvenience of having to enter their passcode into their device, at times. At that time, in 2012-2013, the default passcode length for iPhone was four digits, which happens to be six today.
Apple realised that it needed to come up with a mechanism that's fast and secure, and doesn't involve typing in the passcode. That's when Apple introduced Touch ID, which was easy, fast and secure. The way that biometric authentication worked on Apple platforms was that the user must set a passcode to be able to use the biometrics. And just as Apple thought, there was a much higher adoption of biometric-based TouchID. Apple says over 92 per cent chose to use Touch ID and had therefore set the passcode, which in turn meant users were able to use Apple's data protection encryption system.
At least on iPhones though they have a way to activate a mode that prevents the use of TouchID and FaceID. If I press the power button on my phone 5 times in a row that turns that off.
Yes I still run the risk of my device being unlocked against my will if I'm caught by surprise. But I'm able to disable this functionality in places where I think the risk of that may be higher, e.g. while traveling.
I'll still take the trade off of longer password (not just a few numbers) on my phone while using a biometric test for normal access.
Of course not everyone may have the same threats to consider and others may make different choices. Doesn't make either of our choices wrong.
On modern FaceID phones you need to hold the power and volume down keys to bring up the Reset/Power Off screen and then cancel. Just clicking multiple times will bring up Wallet, Siri, or do nothing.
> Just clicking multiple times will bring up Wallet, Siri, or do nothing.
On my iPhone 13, just now, I rapidly clicked the power button 5x.
The phone immediately made a loud sound and put up a screen that said "Emergency SOS". There was an option to "cancel" it, but I assume that the phone would have contacted 911 in short order unless I quickly cancelled.
So the correct description is probably "it depends".
Just hit the power button 5 times on my iPhone 13 Pro, and it locked down FaceID as I'd expect (while bringing up the Reset/Power Off screen). You've described an alternative method, not the only.
Which OS are you updated to (no please don't post it)? not 15.1.x? Have you disabled wallet, Siri, and SOS?
It doesn't work on any of the 5x 12s and 13s (pro and not) I just tried. It did work on an 11, which was not updated to 15.
You also risk the accidental activation of an SOS call.
I think you need to turn on the power button clicking 5 times feature by turning on the "Call with Side Button" option in Settings under Emergency SOS. I'd also suggest turning off Auto Call if you want to use it like this.
Yes this is kinda buried and not clear at all that this function also disables FaceID but it does.
A TPM can harden a pin such that it is stronger than a biometric. As a simple example, suppose the TPM has an exponential back off algorithm such that you need to wait 1s after one wrong guess, 2s after two wrong guesses, 4s after 3, etc. No one is getting anywhere close to 64 guesses in that case much less the hundreds you might need for a 4 digit pin (assuming it isn't someone's birthday).
However, you are correct that a fingerprint is faster than even a 4 digit pin. And even a TPM does not solve the problem of pin/password reuse and being easy to guess, it just makes it harder by giving you much fewer guesses.
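Purely as arithmetic (not any particular TPM's actual policy): with a 1 s, 2 s, 4 s, ... back-off, the cumulative wait after k wrong guesses is 2^k - 1 seconds, which is why nobody gets near 64 guesses.

```python
# Cumulative wait after k wrong guesses with a 1s, 2s, 4s, ... back-off:
# sum_{i=0}^{k-1} 2**i = 2**k - 1 seconds.
for wrong_guesses in (10, 20, 30, 64):
    total = 2 ** wrong_guesses - 1  # seconds
    print(f"{wrong_guesses:>2} wrong guesses -> {total:,} s (~{total / 31_557_600:.1f} years)")
```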
The biggest problem imho is that we only have two states on our phones - locked and unlocked.
Ideally, I should be able to unlock the phone and take photos using just my fingerprint. In my case I would also like to be able to call, message, play games and similar. But to access the 2fa app, cryptoasset app or similar, I must further authenticate in a way that I only reveal parts of my secret ("Enter 3rd, 8th and 11th character of your password:"). The assumption here is that I will mostly authenticate in a private setting, but sometimes I might not have that luxury.
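A minimal sketch of the "reveal only part of the secret" challenge described above. The positions, the example secret, and the plaintext storage are all illustrative; storing the full password recoverably so that arbitrary character positions can be checked is the real weakness of this scheme.

```python
import secrets

def make_challenge(password_length, k=3):
    """Pick k distinct 1-based character positions to ask for."""
    return sorted(secrets.SystemRandom().sample(range(1, password_length + 1), k))

def check_response(password, positions, answer):
    expected = "".join(password[p - 1] for p in positions)
    return secrets.compare_digest(expected, answer)

secret = "hunter2hunter2"                 # illustrative secret, not a recommendation
positions = make_challenge(len(secret))
print(f"Enter characters {positions} of your password")
# e.g. for positions [3, 8, 11] the expected answer is secret[2] + secret[7] + secret[10]
```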
On Android (don't know about iOS) you can take photos without even unlocking - double press on the power button opens the camera. You can't access anything else (including existing photos in the camera roll).
It works the same way on iPhones. The lock screen includes a camera button. When tapped, the phone enters a camera-only mode in which only photos taken during that session are accessible.
Why would your banks account access not require additional credentials beyond what gives access to your photos? I have an iPhone. TouchID/FaceID can give access to the photos, but to get to my bank account, I use a separate login.
You can already configure apps you are allowed to use on iPhone & Android without unlocking the device. And individual apps are anyways free to implement their own security mechanisms.
Biometrics are great for authentication but terrible for authorization. Anything sensitive should require both. There's nothing wrong with a fingerprint and a password or a fingerprint and an RFID card as an authorization/authentication pair; you just have to keep these things in mind.
I've fallen to the laziness of using fingerprints on my devices as well, but they still require a password to decrypt the contents of the storage device on boot. For many, if not most, threat models, this is perfectly fine.
I lock my phone to prevent people from messing with my contacts and scrolling through my messages. It's an inconvenience to bypass that requires preparation. A motivated attacker would just as easily spy over my shoulder if I were to use a password, either on my phone or on my laptop.
I look at these mechanisms like the lock on a teenager's bedroom door. Those things aren't impenetrable, and anyone with a little lockpicking experience or access to some automated tools can open them in a minute. Unlike the locks on our front doors, built to keep out intruders who don't want to risk physical damage to our windows, they're a message: please don't violate my privacy. Violating that privacy is made moderately difficult by the mechanism itself, but it's hardly impossible.
Unless you carry a password-protected authentication and key management token with you at all times, you're at risk of having your system broken into. Most of us don't need to worry about those kinds of things.
"Authentication is the act of proving an assertion, such as the identity of a computer system user. In contrast with identification, the act of indicating a person or thing's identity, authentication is the process of verifying that identity." (https://en.wikipedia.org/wiki/Authentication)
So it's not useful for authentication but could be used for identification.
The broader argument here is less about fingerprints, and more about using anything immutable as authentication. You cannot change your fingerprints. You cannot change your social security number (at least not easily). These should therefore, NEVER be a primary method to authorize access to anything. Once stolen, the proverbial horse is out of the barn.
I dunno, I have psoriasis on my hands bad enough that sometimes I don't, properly speaking, have skin on some fingertips, so my experiences aren't normal.
I recall hitting someone's demo of the "first PAM-integrated fingerprint ID system" in '98 and crashing their machine repeatedly with my thumb. It couldn't even scan me.
Biometrics have both a high False Acceptance Rate - they will accept invalid input - and a high False Rejection Rate - they will deny valid input. Scanners can be tuned one way or the other, trading FAR against FRR, but either way they are kind of unreliable.
This is why multi-factor authentication is a thing. Generally, pick two: something you have, something you know, or something you are.
If the scanner doesn't like your fingerprint this morning, just use your proximity badge instead, and if someone takes a photo of your fingerprint, it's still useless unless they also know your PIN.
The issue is that a lot of our hardware, particularly phones and laptops, is single-factor authentication. And on top of that, this hardware knows the login to a bunch of other very sensitive material, like your bank accounts.
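As a rough illustration of the FAR/FRR trade-off mentioned a couple of comments up: with a score threshold, raising it rejects more impostors but also more genuine users. The score distributions below are synthetic, purely to show the shape of the trade-off.

```python
import random

random.seed(0)
# Synthetic match scores: genuine attempts score high on average, impostor attempts low.
genuine  = [random.gauss(0.75, 0.10) for _ in range(10_000)]
impostor = [random.gauss(0.40, 0.10) for _ in range(10_000)]

for threshold in (0.45, 0.55, 0.65):
    far = sum(s >= threshold for s in impostor) / len(impostor)  # accepted impostors
    frr = sum(s <  threshold for s in genuine)  / len(genuine)   # rejected genuine users
    print(f"threshold {threshold:.2f}: FAR {far:.1%}  FRR {frr:.1%}")
```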
There's of course nothing wrong with pointing out already known security flaws, but it's good practice to mention when this is a well known thing and reference prior work - which the post by kraken does not do.
One thing that's never been explained to me is how large the space is. Does everyone have one big swirl on their thumb that goes clockwise or counter-clockwise? Could you have two swirls? What is the space of potential fingerprints?
Fingerprint scanners compile a small set of identifying features (typically where ridges end or split). They don't characterize the entire fingerprint. The higher quality the scanning system, the more identifying features they use -- so the size of the search space is both smaller than most people think, and varies depending on the quality of the system.
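A back-of-envelope sketch of how that search space scales with template quality. Every number here is an illustrative assumption (grid size, orientation bins, number of matched minutiae), not a measured value, and the figures are upper bounds because real matchers tolerate shifts and partial matches.

```python
from math import comb, log2

# Illustrative assumptions (not measured values): minutiae are quantized onto a
# 16x16 position grid with 8 orientations, and the matcher keys on k of them.
states = 16 * 16 * 8
for k in (8, 12, 20):
    bits = log2(comb(states, k))  # unordered set of k distinct quantized minutiae
    print(f"{k:>2} matched minutiae -> at most ~{bits:.0f} bits "
          f"(effective entropy is lower: matchers tolerate shifts and partial matches)")
```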
Yes. But at a certain point one has to consider how much security is "enough." Someone could break into my house, even when locked, by kicking in the door or breaking a window, but I don't necessarily need to turn it into Fort Knox in response. If you are a high-value target, it is worth thinking about this, but for the average person, I think it might be a reasonable trade-off.
In biometrics this is called a Presentation Attack (PA); here the fake fingerprint is the analog of presenting a photograph, video, or 3D-printed mask to a face recognition system. This is usually mitigated by the use of Presentation Attack Detection (PAD) systems, either hardware, software, or hybrid. In this particular case it can easily be mitigated by hardware that measures the amount of water in the biometric sample - for instance a capacitive sensor, transparent conductive electrodes, or maybe even better an optical sensor that is sensitive to reflectivity differences at SWIR wavelengths (1000 and 1200 nm would be great here). A short Google Scholar search will indeed reveal that this is a very active area of research, and probably tens of papers from our group, which is a leader in this.
For devices like phones and laptops, this sounds too complicated. Why not instead just use passwords, patterns, etc? I doubt anyone who's genuinely sensitive about their device being secure uses biometrics to unlock it anyway, so this seems to be just a convenience feature for the casual user with minimal security concerns. As such, making it more complicated doesn't seem worth it.
And if we're talking about authenticating people in truly secure environments, my gut tells me that adding a couple more factors to even a simple fingerprint reader ought to be more secure and robust than making a super-complicated fingerprint reader and leaving it as the only factor.
The capacitive one, yes, probably; the other two I doubt it. Sure, you can always use a conductive coating as well as a material that mimics the optical properties of skin. The question is not IF a system will be spoofed, the question is WHEN.
Biometrics are not secrets (it must be assumed that attackers always possess all biometric data), but they can nevertheless be a good form of authentication when combined with situational awareness. If you try to use one of these hot glued fingerprints in front of a security guard, it isn't going to go well for you.
At the moment, humans are still necessary for situational awareness, but probably machines can get there pretty soon. A phone, for example, that monitors its surroundings continuously and has enough intelligence to reliably distinguish normal access by its owner from duress or the presentation of fake biometrics seems like it's within reach of current technology (though it doesn't actually exist).
You think the typical security guard would notice a print if it was glued on the finger? At workplaces I've worked at in the past they weren't watching the flow of traffic through the gate all that carefully...
I think this depends on how well they do liveness tests: it's expensive to have guards checking everyone's hands (but certainly not prohibitively so if you have that level of threat) but it'd be a lot cheaper if your sensors are fairly good at raising an alarm to attract scrutiny.
If we had asked people thirty years ago whether a single company - not a police department or other government agency - could, with consent, collect the most human fingerprints in history, would they have been likely to point out various obstacles and/or doubt it was even possible? Further, would they ever have agreed that these prints could be collected not for employee access to company resources but for access to people's own personal effects? (The company retains remote access to the devices storing those personal effects.)
Yes, if you're interested in this kind of stuff you basically have to work for the military because they're the only ones with the funding and motivation for this kind of stuff.
No, the fingerprints are not hacked. The MacBook Pro scanner is.
Fingerprints and biometrics in general are not a secret. Consider your fingerprint like your face. Anyone can reproduce your face, there are cameras everywhere, and it is probably already easy to find on the internet. "Hacking" your face by taking a picture is the most boring "hack" ever.
Now, if I print your face on a piece of paper, wear it as a mask and try to say to a security guard that I am you, normally, he won't let me in. If he does, the problem is not that I managed to make a paper mask with a picture of you, this will always be possible, the problem is that your guard is stupid and you need a better one.
And if your fingerprint scanner can be fooled by a dab of glue and a laser printer, you probably need a better scanner, something that Apple should be able to do. Smartphone manufacturers like Apple are usually good at bringing fancy tech to the masses, and they could work on defeating these old attacks.
Eh... fingerprints are quite a bit simpler than faces. They're just patterns. I don't know how you could detect a fake fingerprint. You'd need something that could tell there wasn't real skin on the device. I would say warmth but obviously he has the fake skin over his actual thumb so it's probably still warm.
Yes, I really meant fake fingers, not just fake fingerprints. And there is plenty of research on that subject.
Possible ways of detecting a fake fingerprint (beside warmth):
- Blood (we could use one of these cheap SpO2 sensors)
- Capacitance
- Perspiration and related skin resistance
- Microscopic skin details
And the usual machine learning solution of feeding thousands of real and fake fingerprints to a neural network and letting it decide.
As with all living things, fingers are far from simple; there are plenty of details beyond the obvious pattern. It is a bit like a banknote: you can photocopy a banknote and it is very easy to identify which banknote you copied, but it is very hard to pass it off as a real one to someone who knows where to look.
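A toy version of the machine-learning approach mentioned above, assuming numpy and scikit-learn are available: train a classifier on feature vectors from real and fake presentations. The three "features" here are synthetic stand-ins for measurements like moisture, capacitance, and micro-texture statistics; a real PAD system would use far richer inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Synthetic 3-feature vectors standing in for e.g. moisture, capacitance, micro-texture.
real = rng.normal(loc=[0.6, 0.7, 0.5], scale=0.1, size=(n, 3))
fake = rng.normal(loc=[0.3, 0.4, 0.6], scale=0.1, size=(n, 3))
X = np.vstack([real, fake])
y = np.array([1] * n + [0] * n)  # 1 = live finger, 0 = presentation attack

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"toy PAD accuracy: {clf.score(X_test, y_test):.1%}")
```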
The problem with any lock is that, fundamentally, it is made to be opened when certain conditions are met. And that's putting aside any sort of brute force approach.
Good security design is as much about asking, from first principles, "what conditions need to be met to open this?" as about considering how it might be attacked.
For example, the condition to be met for a pad lock to open is not "when the proper key is inserted" or "the key pins are raised to the appropriate level". It's something more basic-- like "when the locking bar no longer blocks the shackle from rising."
From that perspective, attacking the key hole and pins is only one of multiple vectors.
Thumbprint locks on a laptop won't usefully respond to a bolt cutter; a padlock was just a convenient example. Even then, bolt cutters don't work on the strongest of them: angle grinders are a popular choice in that case, but are a lot more obvious. You can't hide a small angle grinder up your coat sleeve and quickly snap the hasp on a bike lock.
In any case, the bolt cutters approach is also why I stipulated "putting aside brute force". Because in the context of computer security as with this article, something a bit more subtle seems to be effective more often, especially against higher end security.
By "laser printer" do they mean regular office printer or laser engraver? It's a bit hard to believe that the super thin layer of black paint produces an imprint thats significant enough for this to work.
As someone who doesn’t specialize in security, one claim that has stood out to me for not using fingerprints is that you can't run bcrypt (or some other salting algorithm) on fingerprints [1].
I don’t see any discussion of that here thus far. Is that still the case? I feel like I would have heard about developments in this area if something had changed. But perhaps I've always misunderstood the criticism?
Fingerprints are stored as data, and data is hashable, so as someone who doesn't know the ins and outs of fingerprint readers, that claim sounds ludicrous to me. I also don't see why it would need to be hashed, however.
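For what it's worth, the underlying snag is easy to demonstrate: two readings of the same finger are never byte-identical, and a cryptographic hash turns even a tiny difference into a completely unrelated digest, so a stored bcrypt/SHA value can't be compared against a fresh scan the way a password hash can. (Fuzzy extractors and secure sketches are the constructions that try to bridge this, but they are not ordinary salted hashes.)

```python
import hashlib

# Two "scans" of the same finger: identical except for one noisy byte.
scan_a = bytes([10, 200, 35, 90, 120, 7, 64, 33])
scan_b = bytes([10, 200, 35, 91, 120, 7, 64, 33])  # one value off by one

print(hashlib.sha256(scan_a).hexdigest())
print(hashlib.sha256(scan_b).hexdigest())
# The digests share nothing useful, so a stored hash can't be matched against
# a fresh, slightly different reading the way a password hash can.
```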
We can use fingerprints for additional security, but it has to be done carefully. If fingerprints or any other biometric data are hacked, users cannot change them, so the data has to be contained. The best way I know is to use a mobile device with a fingerprint or biometric sensor whose readings are saved only on that device. Then use the biometric to let the user approve the authentication request, and use device-specific info as the password replacement.
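A stdlib-only sketch of the pattern described above: the biometric reading never leaves the device and merely gates the use of a device-held secret, and it is that secret (not the fingerprint) that answers the server's challenge. The class names and the HMAC challenge-response are illustrative; production systems use hardware-backed asymmetric keys along the lines of FIDO2/WebAuthn.

```python
import hashlib, hmac, secrets

class Device:
    def __init__(self):
        self._device_secret = secrets.token_bytes(32)  # never leaves the device

    def _biometric_matches(self):
        # Stand-in for the local fingerprint/face match inside secure hardware.
        return True

    def enrollment_secret(self):
        return self._device_secret  # shared with the server once, at enrollment

    def respond(self, challenge):
        if not self._biometric_matches():
            raise PermissionError("biometric check failed")
        return hmac.new(self._device_secret, challenge, hashlib.sha256).digest()

# Server side: verify the response against the enrolled secret, not the biometric.
device = Device()
server_copy = device.enrollment_secret()
challenge = secrets.token_bytes(16)
expected = hmac.new(server_copy, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(expected, device.respond(challenge)))
```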
Also remember, in the USA, the police can legally force your finger onto a reader to defeat the lock, without violating your 5th Amendment right against self-incrimination.
Remember (on an iPhone) you can squeeze the power button and one of the volume keys for a few seconds. This disables biometric authentication until a passcode is entered.
This can protect you against this "attack vector".
It can protect you, but as I experienced, if you have your phone in your pocket and a government agent puts a loaded gun to your head and tells you not to move then you can't access this feature...
A fingerprint is a user ID that is usually treated like a password, which is the main problem here.
They should add at least a 2nd layer (that doesn’t reduce the convenience too much).
For example, people could probably remember a simple Morse-code-like sequence of finger presses, e.g. your extra token is that you set it up to use “tap, tap, longpress, longpress, tap”.
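A minimal sketch of checking such a tap/long-press sequence as an extra local factor. The dot/dash encoding is invented for illustration; a real implementation would also compare against a stored hash rather than a plaintext sequence.

```python
import hmac

# Encode the enrolled rhythm as a string of taps (.) and long presses (-),
# e.g. "tap, tap, longpress, longpress, tap" -> "..--."
enrolled = "..--."

def check_sequence(entered):
    # Constant-time comparison; in practice, hash the enrolled sequence first.
    return hmac.compare_digest(enrolled, entered)

print(check_sequence("..--."))  # True
print(check_sequence(".-.-."))  # False
```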
I know that's something people say, but that doesn't actually give me the information I need to be informed about how secure or insecure it is and how hard it is to bypass.
I find this whole in-screen fingerprint reader trend pretty funny. Isn't glass excellent for capturing fingerprints?
I suspect biometrics like fingerprints may play a role in the future, but the role they would play is more about convenience in cases where the device knows it's in a trusted environment. (That is, there will be more attention on devices tracking whether they've been separated from their owner, or whether their owner is not behaving like their owner, and if so, requiring an additional challenge.)
Either that or we'll all be carrying keys. There are some cool wearables I've seen out there that I think talk NFC.
Anyone else see this technique a few times in heist movies? I always knew it could be done, but having a blog post detailing how to do it is pretty cool.
It seems you are stating that biometrics should not be used to restrict account access to specific individuals ("John can only access john.harrey and finance.12").
Not biometrics, but passive logons. Retina scanners can detect this, I believe. Fingerprints are used by criminals to get access to services: girl follows guy home, drugs him, and then a team uses his fingerprints. They may even take pictures to keep the victim from reporting it to the police. It happened here in my city, in western Europe, so it can happen pretty much everywhere.
FDM printers probably don't have a high enough resolution, but I wonder if new high-resolution resin printers like the Phrozen Sonic 8K Mini ($600) do.
FaceID is more complicated: it uses an infrared camera and projects an array of dots on your face so the problem wouldn't just be generating a realistic video of a face but more along the lines of constructing a mask which would have similar 3D structure including how it reflects infrared light.
My favorite fingerprint-from-a-photograph story is when the Chaos Computer Club reproduced the German Foreign Minister's fingerprint from a photo. So much for military-grade security.
… is that they're treated as passwords instead of usernames. The three problems you list all have the biometric=password assumption in them.
See also the American SSN: it's treated like a (secret) token, so when it leaks it can be used to access sensitive information. Using it as "just" a username would probably reduce a lot of problems as well.
True, but if I also needed a secret, I don't see the benefit aside from not needing to remember a username. Some might like that for convenience, but I would always prefer memory in that case.
I think the current crusade against passwords is primarily motivated by different providers to advertise their ID schemes. Even needing a cert for something like Github is too much for me. I have no high profile repos and it might be reasonable in those cases, but I hope MS doesn't repeat the mistakes they made with their API access. The logistics of authentication is far too complex.
Aside from that, I have seen people handle their keys in ways that make you wish they would just use a password, and cert logistics isn't trivial at all. No, you should not copy your key to our corporate file server... This is just the nerd way of gluing your password under your keyboard.
There's not quite an "algorithm"; SSNs are so short (just a 9-digit number, so at most 1 billion unique SSNs) that they have a very simple procedure for assigning them. The Social Security Administration explains it here: https://www.ssa.gov/history/ssn/geocard.html
- The first set of three digits is called the Area Number
- The second set of two digits is called the Group Number
- The final set of four digits is the Serial Number
Certain geographic areas get certain Area Numbers, then Group Numbers are assigned consecutively, then Serial Numbers are assigned consecutively. This entire system of consecutive assignment makes it trivial to guess pretty well, or even exactly, what someone's SSN is.
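For illustration, the historical structure described above is trivial to split apart (format only; the Area Number geography tables live on the SSA page linked above, and post-2011 SSNs are randomized):

```python
def split_ssn(ssn):
    """Split a 9-digit SSN string into its historical Area/Group/Serial parts."""
    digits = ssn.replace("-", "")
    assert len(digits) == 9 and digits.isdigit()
    return {"area": digits[:3], "group": digits[3:5], "serial": digits[5:]}

print(split_ssn("078-05-1120"))  # the famously over-used Woolworth sample-card number
```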
This is a good change, but since it's not retroactive anyone born before that date (which is 100% of adults and probably roughly 50% of minors, who are likely not good targets for identity theft) are still at risk.
And it's still a problem since everyone uses it as a secret, not a username. So if it leaks, you are still at risk. This is why the number really should have been public from the start, so companies would never have gotten the idea to use it as a password.
From a security professional perspective, this is at least somewhat of an improvement, even if the entire thing feels like it's held together with a wish and a prayer. I would really like if there were a means to just institute an entirely new system. Essentially having one's entire life ruined, on the chance a bad actor can guess a four digit number is...not great.
From a genealogist's perspective, though, this is horrible news. Being able to track people down based on rough geographic assumptions can help narrow things down if someone is "lucky" enough to have a common name in a specific region. Of course, this change to the SSN isn't nearly as disastrous as the death of paper - especially newspapers - but I really do not envy anyone who is going to try to do historical family research in two to three hundred years. It makes me cringe just to think about how much valuable information, how many life-changing moments, are going to be lost to encryption, bit rot, and the constantly changing standards of software and hardware.
Doesn't guessing someone's SSN require that you know the numbers and assignment dates of some other people who obtained theirs in the same location and around the same time? If someone tells you their name and that they got their SSN in a particular place, and doesn't reveal any other information, you wouldn't be able to guess it, would you?
My problem with this reasoning is that it leads people to think that biometrics therefore shouldn't be used.
Can biometrics be spoofed? Absolutely. Is it likely to happen to the average person? Not at all. For a typical everyday user, a fingerprint or face scan is probably more secure than the common alternatives of "sticky note" passwords, easily guessed PINs, or no authentication at all.
Biometrics are a compromise between security and convenience. Before iPhones got Touch ID, it was not uncommon for people to just not put a lock on their phone out of convenience. Now it is impossible to find an iPhone out in the wild that is not fully encrypted. The average level of security on consumer devices that hold sensitive information has increased dramatically thanks to biometrics.
Reveal unnecessary information to whom? Both Windows Hello and iOS TouchID/FaceID never allow biometric data to leave the device. In the case of iOS, that data never even leaves the secure enclave.
In the case of Apple’s TouchID, the fingerprint is less a password and more of a session extender. You need to login with ID and password to establish a session. Then the fingerprint gives you access to that session. Once the session ends, you need to reestablish your credentials.
This obviously not as secure as a system when you must use your credentials frequently to maintain access, but it seems entirely appropriate for the level of security needed by most individuals on their phones. Especially as the alternative is often a super simple password or even no password at all. TouchID makes a moderate level of security palatable enough for people to actually use.
So far I haven't encountered a device where a fingerprint was used to unlock disk encryption, so your objection is actually implied, I think. Especially with the increasing uptime of Apple's newer hardware, a running session is what you have more often than not these days.
No matter how you twist it, biometrics are fundamentally flawed and not even Apple can magically fix that. At least one component of access needs to be a secret, which cannot be extracted without cooperation, or "cooperation".
A PIN/password can also be hacked, and there is no need for a fancy 3D printer.
Someone can use their smartphone to film another person as they type it in - no need to print a fake fingerprint. They can steal the phone/laptop as soon as they are done filming.
This is the case that fingerprint sensors prevent.
Pointing out problems is useless - as people don't have alternative that would be "all-mighty secure without flaws".
It should be defense in depth - and that is already there, for example in banking apps: you need a fingerprint to unlock the phone, and the banking app requires its own specific PIN. Getting those two things makes it much harder for the bad guys to do something like a money transfer. Yeah, they might get your photos and other stuff - but there are probably secure-store apps that would encrypt your photos if you have ones you really want to protect.
For a fingerprint it is "using several close-range photos in order to capture every angle" - to get a PIN, I need one angle of video, probably not even close range, and even a weird angle works if I have to sneak up on someone on the metro or in a coffee shop.
I don't understand this one - when are people taking a selfie at the same time they're typing in their pin? I have an android phone, and I don't even unlock to take pictures.
I just don't follow the timeline and geometry. Seems theoretical only maybe.
It's badly worded in the article, but I think it means person A is entering his PIN. Persons B, C, D, etc. take selfies in the vicinity of person A. By comparing multiple selfies from slightly different times, you can determine person A's PIN.
I assume the intent is to capture it while surveilling someone while they enter their PIN, not necessarily from images harvested from social media or anything. Since many PIN entries show at most the most recently entered number, you'd need multiple images to capture the PIN. But if you can capture it from reflections in their eyes, then you can surveil them from a greater distance and more stealthily if you've got a good camera.
I'm sure the military has better than average tech when it comes to security, but I wonder if they're agile enough to embrace the rapid technological change that is necessary to stay on the bleeding edge. These days when I hear military + security in the same sentence I think of aging warships running windows 2000, using oddball niche technology supplied by equally oddball government contractors/vendors.
Isn't Windows 2000 kind of a very respected operating system doing a lot of things right?
You assume 'old' strictly implies outdated, or bad, which isn't true. E.g. good passwords are still undefeated. And security protocol redundancies surely can make intrusion impractical, even if individual components fail.
I assume military hardware and software is made meticulously, with everything double-checked, on literally battle-tested chips and gear. I mean, I've really had no contact with anything military ever, so that's a guess based on aircraft and space development, pictures of überfunctional UIs, and the ridiculous finances of the US military.
I'm not speaking from personal experience either, sorry if I seemed to imply that. I just meant it as a general comment with regard to anything the military does. There is so much red tape surrounding procurement and implementation that I wonder if they are agile enough to be on the cutting edge.
I don't mean to diss W2k in general. It's an OS that is well understood by now - weaknesses, mitigations, etc. Slow-moving entities like the government accrue so much cruft that it becomes exceedingly difficult to move to newer (and possibly better) platforms to take advantage of newer security tech.
If I understand correctly, the military doesn't really want to be cutting edge, as things go boom with yesterday's tech, without the trade-offs of untested, novel "innovations". I think there isn't much advantage in new developments, apart from convenience.
And for W2k, I wasn't merely suggesting it's well tested, but also that it's a different, better thing than, say, Windows XP. At least, I got the impression operating-systems folks reference it for a "many good ideas" kind of thing.
Sorry, I don't have any expertise in any of this and talk mostly out of my ass.
State driver license in USA is a honey pot of thumb/finger scans. Anyone on HN think the NSA doesn't have access? NSA info sharing with trusted foreign countries makes a reliable distributed backup for use by foreign spooks.
If your argument is that the NSA doesn't have your fingerprint because only the Global Entry Program has your fingerprint, I find that highly suspect. Of all the databases to be shared with the CIA and the NSA, Global Entry seems entirely reasonable that they be given access. Unlike state's driver license database where it's objectionable that the NSA be allowed to access it, Global Entry has to do with people coming in and out of the country and so seems entirely reasonable the NSA would have access, never mind the fine print no one reads when signing up for the program. I wouldn't be surprised if any of the three programs (Global Entry, TSA Pre, Clear) have it in their fine print that the CIA is legally given access to that database.
My response did not intend to address the NSA, it was intended to address the "state driver license in USA is a honey pot" since in my experience states do not collect fingerprints for driver licenses. Based on some cursory research there are only a small handful of states that require fingerprints, and fingerprints are not required for implementation of Real ID.
I've only had a state-issued driver's license in CA and TX, and both require thumb prints. But I'm sure if you were truly interested, you could search the web for that information fairly quickly - probably faster than it took to post the question to HN.
My intent wasn't to find out which states require prints, it was to drive conversation in order to refute the claim that state driver licenses are honeypots for fingerprints.
Fingerprints are not required as a part of Real ID implementation. Real ID seems like it would be the main driver for feature parity between licenses of different states. If fingerprints aren't required by Real ID, then it seems like it would be incorrect to assume that all states require fingerprints - and thus also incorrect to assume that driver licenses in the USA are used as honeypots for fingerprints.
Perhaps landemva should have specified which states are using driver licenses as honeypots for collecting fingerprints?
A few years ago I had top tier frequent flier status, and the airline kept offering to pay the Global Entry fee for me. Sit for a lame interview and provide a bunch of info to power-starved snooping Karens? No thanks.
More and more states require fingerprints for driver licenses because of the RealID program. Eventually (soonish) you won't be able to use your driver license to fly without it being RealID compliant.
One state I lived in gave me the option of not having a RealID-compliant license if I wanted to. Another didn't, so fingerprints were compulsory.
In my state prospective teachers are fingerprinted and their prints are run through the FBI database when applying for the certification. My prints are in that database.
I really like how Demolition Man imagined a future that used biometrics for secure access (in that case a retina scan) - but also showed how easy it was to bypass when Simon (Wesley Snipes) simply takes the warden's eye to escape his prison.
I don't think this was intentional, but they managed to demonstrate (or at least foreshadow) the incompetent police force of the future this way.
This meme really really has to die. It's so annoying that it's spread so far. Biometric security (i.e something you are) does not need to be secret nor revoked. That's the entire point. It's a piece of information that even when it's known by everyone still can't be reproduced.
The strength of a security system based on biometrics is exactly how well that system can detect that it's reading from an living breathing human.
- Perfect: A human guard manually taking a fingerprint reading. Can't be beat because the guard can obviously see that it's not really your hand.
- Shit: A camera that compares pictures.
The entire industry is about making an autonomous system that gets as close as possible to perfect. It's fine to say that you don't think it's good enough right now but "oh no I lifted a fingerprint from a photo" isn't some security breach.
But similar to hash collisions, a total break (arbitrary hash values can be output) isn't required for it to be a problem. Where fingerprint scanners aren't magic (especially given the sloppiness of input data), that they're defeatable in corner cases should be enough to be worrisome.
> Biometric security (i.e something you are) does not need to be secret nor revoked. That's the entire point. It's a piece of information that even when it's known by everyone still can't be reproduced.
If that's the point, the effort is doomed. All biometrics will be able to be reproduced sooner or later. There's no way around that.
So, like all other identifiers, revocation is an important trait. Even if successful reproduction is difficult and rare, it would be utterly devastating to those affected unless there's a way to revoke.
> Perfect: A human guard manually taking a fingerprint reading. Can't be beat because the guard can obviously see that it's not really your hand.
Not at all perfect. Can that human guard really see if you're wearing a fake fingerprint? I doubt it, unless he's closely examining everyone's fingerprints first. And even then...
>Not at all perfect. Can that human guard really see if you're wearing a fake fingerprint? I doubt it, unless he's closely examining everyone's fingerprints first. And even then...
The procedure at the USCIS to get my green card was remarkably thorough. The guard manually and visually checked each of my fingertips carefully to ensure I had no fake print overlayed on top of my real print and I had to keep my hands within a small area with a camera on it for the entire process or they would restart everything.
> If that's the point, the effort is doomed. All biometrics will be able to be reproduced sooner or later. There's no way around that.
"All encryption will eventually be broken, therefore what's the point" is a pretty bad security posture. But no, it won't all be reproducible. Even if you can fake every other metric (good luck with eyes), a fresh blood sample taken by a guard with hypothetical futuristic instant DNA sequencing will never be broken. If your threat model is someone cloning you, then you have bigger problems - and they still can't clone your fingerprints!
You’ve got revocation completely ass-backwards. If someone successfully tricks a biometric system you don’t need to revoke someone’s fingerprint, you revoke the reader! That’s the thing that actually provides all the security.
The point of the guard is that a human has absolutely no trouble determining whether they're taking a reading of a real hand, scanning a real eyeball, or taking a real blood sample. Maybe in Mission Impossible movies, but you're really, really overstating the resources required to make a hand convincing to someone specifically looking for fakes. Yes, social engineering is a problem, which is why an autonomous system with the detection quality of a human would be nigh unbeatable.
Alright, but then passwords are on the table again in my opinion. There is no need for a system to identify me in the vast majority of cases that would not be sufficiently proved by my secret knowledge, which is harder to replicate than any biometric marker.
My thought is that biometrics should be the root of identity, not the endpoint. You shouldn't need to scan your retina, fingerprint, or face at every point you want to verify your identity. Instead you use other things like public key cryptography to verify your identity remotely, id cards (perhaps with strong cryptography) for in-person interactions, etc.
Lost/stolen cryptographic keys or ID cards could be revoked and would require a trip to your a certified biometric verification facility where a thorough in-person inspection would confirm that your fingerprints are real, you aren't using a fake eye, etc. Then you'd be issued new keys/cards at that location. Loss of ID is inconvenient, but not catastrophic. Leaking your biometrics is irrelevant.
Is it an infallible system? Certainly not, but it should be able to uniquely identify someone and not allow faked biometrics.
It isn't that relevant whether it's infallible if you don't need it in the first place. What would the purpose be? Checks by authority? Not too convincing, to be honest.
>- Perfect: A human guard manually taking a fingerprint reading. Can't be beat because the guard can obviously see that it's not really your hand.
"Perfect" is too strong a statement. This is only true if the guard very carefully checks every fingertip to ensure nothing is glued over your normal fingertips, and even then it's possible to distract the guard or rush them with a socially-engineered premise. Or just bribe or blackmail them.
> Perfect: A human guard manually taking a fingerprint reading. Can't be beat because the guard can obviously see that it's not really your hand.
Well, the argument some people are making is that this might be no better than a human checking your ID. Yes, there the guard can verify that there is some real human there, but both the ID and the fingerprint could be faked (e.g. a fake fingertip mold which matches the victim's "known" fingerprint).
We're talking about a guard who physically takes your hand, inspects it, and puts your finger in ink, and then compares that to the prints they have on file. This is exactly the protocol that's used by the police and military when taking prints.
So wear fingerless gloves and social engineer a little bit (it's cold, it's winter, I have bad circulation, etc). If you think having a human guard makes a system infallible, I have some bad news for you.
Oh lord, this has drifted firmly off the point. An alert, motivated human looking for fakes can identify them with nigh perfect accuracy. This means it should be possible to build an autonomous system that can do the same, which is the goal of biometric auth systems. There is nothing that fundamentally breaks biometric auth until you can burn fingerprints onto someone, or replace eyes, or gene-therapy in new DNA, or whatever. And even then that's pretty damn strong.
Exactly, everyone keeps going on about magic social engineering attacks without providing details.
Anyone who has had their fingerprints taken by the FBI knows that there is a solid procedure that will detect fakes. The idea is to replicate this near perfection, not bolt on some revocation system for fingerprints (ouch!)
The title almost makes it sound like they have a meaningful fingerprint ready to open her iPhone... Was that the case? Or do they have a somewhat accurate partial fingerprint? I failed to find a recording of the presentation.
AFAIK iOS actually uses the pattern of veins below the fingertip rather than an image of the fingerprint itself. So I can't imagine this would be enough to unlock an iPhone.
You're sort of not wrong. Touch ID uses a capacitive sensor rather than the visual/camera sensors that have become more common in other devices. In theory that means you're measuring the electrical behavior of the outer layers of skin, and Apple claims it goes as far as measuring the subdermis. (This is also why Touch ID scanners don't work on wet fingers: the behavior is thrown off.)
However, they are showing their attack working on a MacBook Pro with Touch ID, which uses this sort of reader. So it's easier to fake in practice than it is in theory. Whatever material you use to present the lifted print would have to mimic the capacitive behavior of a finger, and this looks like it busts Apple's claim that it can read the lower layers (or it tells us their default sensitivity is set too low, for convenience).
I would argue that the devices you carry with you are exactly the ones you shouldn’t use biometrics for.
Law enforcement can force you to use biometrics to unlock a phone. They have used dead bodies to unlock phones.[0]
What they can’t do is make you remember a code/password which you have “forgotten.”
The vast majority of people will never encounter a circumstance where that will be an issue. To withhold an (optional) feature from the masses based on the hypothetical actions of an agency that would abuse your fingerprints but stop short of torture doesn't really make sense.
1. With police authoritarianism at the level it is, I doubt it's correct to say "the vast majority" won't encounter similar tactics in the near future, or at least there is enough evidence to make that a serious debate.
2. Nobody claimed they want to withhold the feature "from the masses"... so this is a strawman.
3. "hypothetical actions of an agency"... I think it's pretty clear that these types of methods are not hypothetical, and are being used already
4. "will stop short of torture"... I also think it's clear that many LEO's, especially the closer to federal ones, have been found to torture already.
I agree with gp, biometrics on phones are a bad idea all around, for a lot more reasons than have been said. I don't know why you are protesting this idea as you do.
HN, and the tech bubble at large, is all about "edge-cases." Too many Debbie Downers getting off on playing "what if" scenarios, while ignoring reality.
> Unfortunately for the FBI, Artan’s lifeless fingerprint didn’t unlock the device (an iPhone 5 model, though Moledor couldn’t recall which. Touch ID was introduced in the iPhone 5S). In the hours between his death and the attempt to unlock, when the feds had to go through legal processes regarding access to the smartphone, the iPhone had gone to sleep and when reopened required a passcode, Moledor said.
Except the FBI probably won't make that mistake again. They'll wake up a judge and expedite the process, citing this exact case as to why they need the subpoena granted. From there it's not that hard to make a jig that constantly performs some sort of action so the phone never goes to sleep.
Maybe at some point in the future, but we definitely aren't at the stage of being able to parse out a specific password from an FMRI reading right now.
For some, being placed in the MRI would be torture. Hope you don't have a metal plate in your head or elsewhere in your body. Would torturers be so concerned with this, or is that just part of the threat?
TLA person: Give us the code or we put you in the MRI machine!!
What I was thinking was using FMRI to find out if they actually do remember the password (FMRI lie detection really only works with yes/no questions, AFAIK). If they don't know, then torture is a waste of time. If they do know, then you know torture may be fruitful.
FMRI uses indicators like pulse, heart rate, etc. to make a rough estimate of truthfulness. Torture can make these indicators useless.
Torture is a very flawed method of extracting information. You can't be sure that the victim isn't telling lies or admitting to crimes just to make the torture stop.
In the US at least, FMRI should fall under the Fifth Amendment, right? Otherwise the Fifth Amendment would be useless. A right to remain silent wouldn't exist if you can't silence your brain. If one day there are Stargate replicators that can reach into your mind, would that be legal?
There's a good reason why criminals don't carry lockpicks around: they're regulated. In much of the world, mere possession of them outside your residence is a criminal offence, and even in places where you can carry them legally they not only show prior intent, their use in criminal activities carries a charge just like breaking and entering. I'd also argue that being stuck picking a stubborn lock for 2-3 minutes arouses significantly more suspicion than the literal seconds it takes to break a window, but that's neither here nor there.
On the rationality of having locks when criminals can very easily break a window, the old saying that locks keep honest people out rings true. Locks do serve a purpose even if they do very little to slow criminals down. To bring the analogy full circle, fingerprint readers always seemed like windows to me in how easy they are to bypass; luckily they're more of a luxury than a necessity. :-)
Just a remark: the person whose fingerprints were reconstructed, Ursula von der Leyen, was Germany's defense minister at the time of the stunt and is now the president of the European Commission (~= head of the EU government).
It says several images were used. You can't generate a correct fingerprint from that blurry fingerprint picture. The data has to exist in order to reproduce it.
In 2014. "A speaker at the yearly conference of the Chaos Computer Club has shown how fingerprints can be faked using only a few photographs. To demonstrate, he copied the thumbprint of the German defense minister" Ursula von der Leyen
I’m waiting on a court case with a fingerprint as key evidence for conviction, in which the defendant brings this up. Might not pass reasonable doubt muster, but what if somebody sold fingerprint forgery kits online that made it push-button simple? Just supply an image or two, run it through some ML to reconstruct the print, laser etch a latex glove or similar…
I wonder if you could use CRISPR or “lab-grown meat” techniques to do the same with DNA evidence…might be something that would get you a contract with the CIA/NSA.
Why in the world would you need CRISPR or lab grown meat? Just sequence the DNA and send it off to a DNA assembly service. The price is a couple hundred bucks a pop. You don't have to replicate the entire DNA, just the segments used for forensic PCR.
(On a side note, the state of biotechnology and life science knowledge on HN is utterly deplorable, repeating buzz words does not reality make.)
And what is involved in the DNA sequencing? And the DNA assembly service will probably keep a record of the operation itself (it is not a common service).
Interesting (I hadn't thought of that side), but it's still revealing. When there is reason to suspect that a case was manipulated, it may suggest that that level of capability is involved. Or that somebody "rubbed a drinking glass" at the scene...
Most of the evidence that shows up at a court case is forgeable. Simply showing that a particular piece of evidence could be forged in no way proves that it is forged. You would need some sort of argument to prove your contention.
Does a legal system let some guilty people go free to avoid incarcerating the innocent, or does it incarcerate the innocent to avoid letting some guilty people go free?
My opinion is to lean towards letting the guilty go to avoid incarcerating the innocent, but other people in other places can lean the other direction.
Sadly, "forensic science" is often not science at all. Much of it is barely an improvement on the techniques from the Victorian era. Altogether too much of it is an expert saying "these two samples look like a match" without a quantifiable metric.
DNA evidence has made enormous leaps in the right direction, but even that requires a good chain of custody, good lab practices, and honest actors throughout the process.
It would probably cause a huge media storm. Then the politicians would "fix it" by replacing it with face recognition... (suddenly Face/Off is no longer science fiction)
A media storm is not the death of shady businesses, as long as they can weather it. The shady facial-recognition company that scraped social media for images got some bad press, and then just stayed calm and carried on.
For all of those outraged by the media storm, it is free advertising to those actually interested in the service. All of the pearl-clutchers feigning shock and outrage over the shady service mean nothing to the company providing it, as they were never going to be its customers in the first place.
The CIA's former Chief of Disguise says they very much exist and are used, with some limitations [1]. Her comment on the 3d printer making the mask: "What if I said we had it".
Of course that's not really surprising when you look at the kind of Halloween masks you can get if you are willing to pay [2]. I imagine if you could special order them to perfectly fit your head they would be very convincing to the casual observer and to software.
From the linked article, it sounds like that already exists in some form.
> Using several close-range photos in order to capture every angle, Krissler used a commercially available software called VeriFinger to create an image of the minister's fingerprint.
Considering the extremely dubious evidence that makes its way into courts, such as bite mark analysis, I doubt you'd get that much traction arguing about these scenarios with fingerprints.
It's already trivial - $500 consumer grade resin printers have sufficient resolution, and creating the model from photographs is super easy.
Same for facial recognition - you can do Mission Impossible style masks, and the most significant investment is in time spent learning makeup and wig work.
Biometrics are not secure, just like a vast majority of locks. All it takes is tools, knowledge, and motive to bypass them.
DNA evidence is overrated. People leave their DNA everywhere, so it's not that hard to get some and then plant it somewhere else.
The tests also have varying accuracy rates, but people misunderstand what it means. If the test is 99.99% accurate, that doesn't mean that there is a 99.99% chance that the defendant is the perpetrator. It means that in a region with ten million people, you've whittled your suspect list down to a thousand people. If you pick one of them at random there is only a tenth of a percent chance it was them.
This is especially problematic when dealing with "DNA databases", because with a large database you have a high probability of finding a false positive match while the true perpetrator might not even be in the database.
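To put rough numbers on that (a back-of-the-envelope sketch; the 99.99% figure and the ten-million population are just the illustrative values from above, not real forensic error rates):

    # Illustrative base-rate arithmetic with the figures above.
    population = 10_000_000
    false_match_rate = 1 - 0.9999                # "99.99% accurate" -> 0.01% false matches

    expected_false_matches = population * false_match_rate           # about 1,000 innocent matches
    p_random_match_is_perpetrator = 1 / (expected_false_matches + 1) # one true match among them

    print(round(expected_false_matches))             # 1000
    print(f"{p_random_match_is_perpetrator:.2%}")    # ~0.10%, not 99.99%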
The tests generally choose markers that tend to differ between different people rather than the ones that tend to differ between people and animals.
But even then, people use percentages as if everyone's DNA was independent. Which it isn't. Blood relatives have similar DNA.
The one thing DNA is really good for is excluding people. If you have the rapist's DNA and you accurately test it against the suspect's DNA and it doesn't match, it's not them.
Sure. No one has ever faked a fingerprint to access their partner's phone or used a printout to trick facial recognition to read their latest emails.
These days even little kids fake their parents' fingerprints to buy microtransactions.
damn i need more sockpuppet accounts so i can list all my snarky comments:
- no shit, use public keys
- your 2FA can also be hacked
- your company forcing 2FA is insufferable like all modern web
- your KYC is literally pointless since i already gave those same ID photos to 100 different companies, few to none of which are competent enough to keep them secret
EDIT: huh, this is actually a good article. but it's still ironic since it's coming from a company that follows all the standard snake oil
https://blog.dustinkirkland.com/2013/10/fingerprints-are-use...