Well, yes, you can change your behavior, and no, this is not a good idea.
We worked in biometrics about 4 years ago. It is trivial to defeat this "security mechanism".
We had snake oil people trying to convince us to invest in this (we are a software company), so we made a bet: if we could defeat their marvelous thing in a test, they would pay for a dinner for our whole test team (and go away and not bother us again).
It was as easy as building a prototype trainer. You record a person typing with a webcam, compute some statistics, and then in 10 minutes you can train ANYBODY to copy the same behavioral signature.
It took us about 2 hours to create the prototype trainer. After testing it we realized the webcam WAS NOT EVEN NEEDED; a microphone is enough.
So we used a single hidden microphone to record the people chosen by the snake oil team, and ALL OF US (10 people) were able to defeat the system. Really fun for us, extremely humiliating for the snake oil people.
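The trainer itself was nothing fancy. A minimal sketch of the idea in TypeScript, assuming you have already extracted keystroke timestamps from the recording (all values and names below are illustrative, not the original tool):

    // Hypothetical sketch of the "prototype trainer": replay the rhythm
    // recovered from a recording as audible blips so a trainee can
    // practice matching it.
    const recordedKeyTimesMs = [0, 142, 260, 455, 601, 733]; // illustrative timestamps

    // Flight times between consecutive keys: the "behavior signature".
    const gaps = recordedKeyTimesMs.slice(1).map((t, i) => t - recordedKeyTimesMs[i]);

    function click(ctx: AudioContext): void {
      const osc = ctx.createOscillator();
      osc.connect(ctx.destination);
      osc.start();
      osc.stop(ctx.currentTime + 0.01); // 10 ms blip per keystroke
    }

    async function runTrainer(): Promise<void> {
      const ctx = new AudioContext();
      click(ctx);
      for (const gap of gaps) {
        await new Promise((resolve) => setTimeout(resolve, gap));
        click(ctx);
      }
    }

    runTrainer();

You type along with the blips until your intervals match the target's; that is essentially all "training" means here.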
BTW: this man is proposing a complete keylogger of everything you do on the computer. What could go wrong?
I'm honestly more worried Google or an advertising company would be using this technique than its value as a security mechanism. It'd just be one more chunk of information in the browser fingerprinting process.
Changing your natural, habitual behavior is hard.
The sad part is, this would probably be pretty effective at catching bots, since they are likely to be largely repetitive and/or to skip mouse movement entirely and simply click a location.
Although you are of course right, there is something fundamentally interesting about what could be done to track us in order to build a unique "ghost" of us that can be used for many purposes beyond logging in.
But given that I am not an expert in this field, I would like to turn it around and perhaps ask you: what is the bigger vision?
Surely biometrics in all sorts of shapes and forms come with their own issues and shortcomings.
Isn't the idea to make it good enough, not necessarily 100% airtight (although I do understand that the scalability of the technology makes the requirements many times higher than between people)?
This concept was used in _The Dosadi Experiment_, by Frank Herbert. The Dosadis would input all available information on an adversary into a computer, and build a personality model of them. They would then run game scenarios against the model to find the optimal strategies to achieve some desired goal.
If you can assemble an AI copy of someone, you have an almost unbeatable weapon against him. You can make him do things that he thinks are his own ideas, just by adjusting the input parameters.
If you want someone to walk on one side of the street, and you know he avoids panhandlers, you put a fake panhandler on the other side. If you want him to slow down or stop at a certain point on that side of the street, and you know he likes motorcycles, you park a custom chopper there. And while he's gawking, you pick his pocket, or bag his head and shove him into a van, or stab him with a drugged needle, or whatever other spy movie crap you might have in mind.
If you have a detailed enough model to authenticate someone, you may also have a good enough model to impersonate them, or to influence their behavior for your own ends.
Agreed. The question, I guess, is whether there is any research into making the ghost "part of your DNA" in some way, so that it's tied to you and you to it?
I know this is probably naive sci-fi but I have heard crazier things. I guess at the end of the day it requires that the interfacing is not just digital but also somehow biological/genetic.
> Surely biometrics in all sorts of shapes and forms come with their own issues and shortcomings.
Indeed. A behavioral biometric like this is probably most similar to gait recognition (identifying someone by the way they walk). It's slightly better than a 'soft biometric' like skin color or gender, but not good enough for any type of large-scale deployment. Usually when researchers show this stuff it's more or less for novelty. With that in mind, there really shouldn't be a 'bigger vision', because recognition rates are usually not good enough to merit anything other than an occasional paper.
It doesn't even add another factor, really. If I can trick you into revealing your password, I can capture your typing pattern at the same time. Same goes for snooping or compromising other sites' auth DBs. At best this adds a bit of entropy that you don't need to remember, but it's probably not much, and you can't change it...
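To make that concrete: a sketch of how a login (or phishing) page could grab both at once. The field id and data layout are hypothetical.

    // Hypothetical sketch: a page that can read your password can also
    // record your typing rhythm for free with two event listeners.
    interface KeyTiming { key: string; downAt: number; upAt?: number }
    const timings: KeyTiming[] = [];

    const field = document.getElementById('password') as HTMLInputElement; // assumed field id
    field.addEventListener('keydown', (e: KeyboardEvent) => {
      timings.push({ key: e.key, downAt: performance.now() });
    });
    field.addEventListener('keyup', (e: KeyboardEvent) => {
      const t = timings.find((k) => k.key === e.key && k.upAt === undefined);
      if (t) t.upAt = performance.now(); // dwell = upAt - downAt; flight = gap between keys
    });
    // On submit, the attacker gets the password AND the behavioral template.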
What a huge nightmare waiting to happen. Sites already give me shit for changing my location, making me jump through additional hoops because my browser signature changed, refusing to let me purchase something because I don't access them from my home country. The last thing I need is a behavioral profiler that insists it has determined I'm not me and there is nothing I can do to prove it wrong.
One of the banks I use had a keyboard profiling feature on their login page for three or four years and discontinued it in the last year or so. The reason: customers hated it because it did not work well enough (i.e. it would reject your login 3-4 times before accepting it).
I think there's a whole industry being ignored here - "Are you sure you want to post this to Facebook? Your typing is all over the place." or "Maybe we'll just delay sending that text message for a few hours to make sure future you really thinks it's a good idea."
Presumably, such a system would escalate to heavier-weight authentication. We're already seeing something similar with sites trying to figure out whether you're a bot. For example, if you make edits on Stack Overflow, the site might decide to challenge you with a captcha from time to time.
The article doesn't make it sound like this would be a first line of defense; in fact, the author seems quite adamant about its fallibility. But even if such algorithms were used conservatively, it would still be a huge hassle. Imagine every major website nudging you with a popup every once in a while: "we don't like the way you're typing, please use our code generator mobile app to prove it's still you".
I suspect this would have a huge number of false positives for no discernible reason to begin with, but on top of that: I might be on a different keyboard, using a different mouse, using a trackpad, might be in bed lazily trying to log in with my laptop, might be injured, might be distracted, might be in another country using an unfamiliar keyboard layout or a different screen size; the list goes on...
Get your own personal assistant robot, Intel Jimmy-style, switch into a personality profile of your choice (I am in an Amazon mood today, next I want to shop on eBay etc.) and voilà! Pas de problème!
I hope these services have an opt-out. I know, I know, this will get infinitely more accurate at an arbitrary point in the future, and that I won't have complaints then.
But I get screwed constantly while travelling to other countries, getting repeatedly locked out of Gmail. Again, most users won't face these issues. But I don't want to live in a world where if you're not a nominal case, you're screwed.
The people who think passwords are hard will keep getting older and will be washed away. The generation coming in thinks paper is a broken iPad. So exactly why do we need to solve the problem of passwords, when even slightly savvy users can handle them? Is it so hard to see that, not too long in the future, you can expect all your users to be comfortably savvy?
Also, passwords are deterministic and make for a better UI. Android Lollipop's on-body Smart Lock, for example, is a pure non-deterministic headache. Haven't we gone through this with automatic sliding doors already?
Nope. Passwords used by them young folk just fit the requirements, and nothing else. Now, geeky XKCD readers probably use a variant of correcthorsebatterystaple, so we've got that going for us.
A temporary way to opt out of some of these things might be a good thing.
I am on the glass-half-empty side of whether user passwords will improve at the scale required. Even if you get to 90% of users using a good enough password, that still seems too low. For an average user, it is difficult to use a different password everywhere AND remember them all, and that barrier probably will not change much. Many users still aren't going to start using a password safe.
The article mentions but dismisses multi-factor as degrading the user experience. But I think with the dominance of mobile devices, that providing a simple multi-factor token has become easier than carrying an RSA dongle. I find Google's use of the SMS token to be quite convenient.
I agree on the multi-factor point. Recently, some apps even grab the SMS automatically and authenticate. I was surprised earlier this week when an app actually grabbed my MasterCard SecureCode OTP, filled it in automatically, and pressed submit. So the claim that OTPs degrade the user experience is not really true on mobile.
Moreover, in a mobile-dominant world, use of public computers is much less common. So typically an authenticated session would last months or years, rather than a few hours, and it is less of an annoyance.
After reading this post, I was theorizing about using this kind of system in an enterprise environment to help with intrusion detection.
It would be another thing an intruder would have to bypass, and it could be constructed loosely enough to not interfere with a normal work day. Essentially just a flag, rather than a lock-out if it detects a failure.
I imagine a suite of behavior heuristics would be something of interest to a big enterprise company.
Yes. It's annoying, because it taunts me with the effortless click once or twice a day, while wasting my time with the annoying image matching for the rest of the day.
That's a pretty dismissive attitude. We recently added reCAPTCHA to our sign up flow at Codecademy and it helped combat spam a lot. The site was harder to manage and moderate before we took that little step.
Assuming all websites using reCAPTCHA are not worth using seems ridiculous to me.
This is like the places that make me store my backpack behind the counter while I'm shopping. Yes, I totally get that it's one way to combat theft, but it's also treating me like I might steal something. From a UX perspective, it's hostile. I'm having to do work to solve a problem that I've never been the cause of. So if I have a choice, I don't visit those establishments a second time. They have chosen to put those extra roadblocks in place, and I've chosen to go somewhere where I don't feel like I'm getting punished for someone else's crime. Seems like a win/win to me.
To suggest it's a "dismissive attitude" to not want to be hassled due to some other bad actor implies looking at it from the business perspective, and not necessarily from the perspective of the effect it has on users.
I understand your point, but do you have a better suggestion to solve the spam problem? It's a really hard problem, and CAPTCHAs do a reasonable job of solving it at low cost to the end user.
Note that certain adware/spyware domains, like google-analytics (look at the tags <maybe-spy> and <maybe-ads>), are commented out, so edit the file as per your needs.
Google's reCAPTCHAs are served from google.com; to block them, add:
127.0.0.1 google.com
127.0.0.1 www.google.com
This may be a tough choice to make, depending on how integrated Google has become with your life (note how I phrased that relationship).
The observation that some usernames are changeable doesn't contradict the claim that passwords must be changeable, nor does it contradict the claim that usernames need not be changeable.
Fingerprints are usernames, not passwords, even if some people use them as passwords.
What's the point of a password you can't change? Once it leaks, you're screwed forever.
In the authentication realm, there are three main factors: a) who you are ("username"), b) what you know ("password"), and c) what you have (smartcard, various kinds of dongles). Biometrics of any kind only fit in the first category. The other two must be changeable, or there's no point to them, since they become aliases for the username. Any authentication system needs to assume the password or the what-you-have thingy leaks or is stolen. If they can't be changed, it becomes rather difficult to lock out an attacker while still allowing the legitimate user access.
> Fingerprints are usernames, not passwords, even if some people use them as passwords.
This doesn't make sense. You cannot "use a username as a password".
Fingerprints, retina scans, DNA samples, etc. are biometric passwords. They are unique identifiers tied to your identity, and cannot be changed for obvious reasons.
Please reread this thread. The reason 'liw and I are on the same page, and you literally denied that 'liw said what we can all read 'liw saying three inches above, is that you simply haven't thought deeply enough about this topic.
The entire concept of "biometric passwords" is flawed, because as you see, they "cannot be changed for obvious reasons". One of the most important things about passwords (and passphrases!) is that they may be changed at any time. Every time there is an unauthorized data dump, we get lists of thousands of passwords or hashes thereof. Therefore, anyone who protects important assets with passwords should change them regularly. Anyone whose biometric data is stored in a database will eventually have that dumped as well.
The day is quickly approaching when none of these biometric measures will be private anyway. With that in mind, they could perhaps be used as public identifiers, "usernames" if you will. In that sense they might be similar to the SSN, another datum that is clearly unsuitable as a password, even though hundreds of stupid organizations have used it as such.
There's nothing stopping you from registering a new user, using the "about" section at https://news.ycombinator.com/user?id=dhmholley to point to that new user, and perhaps even pointing back at 'dhmholley from the new user. Your name would be changed, and everyone would know it.
The only thing that wouldn't transfer would be your valuable internet points.
I wonder if it makes sense to disable some of that information in JavaScript. You couldn't disable it for js videogames, but I see no reason for most websites to be able to track your behavioral profile.
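A sketch of what that could look like from a browser extension content script; the bucket size is an arbitrary guess, and Firefox's privacy.resistFingerprinting already coarsens timers in a similar spirit:

    // Sketch: coarsen the high-resolution clock that keystroke profilers
    // rely on. Too coarse a bucket may break animation/game code, which is
    // why a per-site toggle would be needed.
    const BUCKET_MS = 100; // arbitrary choice
    const realNow = performance.now.bind(performance);
    performance.now = (): number => Math.floor(realNow() / BUCKET_MS) * BUCKET_MS;
    // Note: Event.timeStamp leaks the same information and would need
    // similar treatment, which is harder since it is read-only per event.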
The problem is that behavioral profiling will get better. How long you stay on a page, which links you prefer, and potentially a lot of the metrics that companies routinely use to A/B test their page would also reveal your behavioral profile.
It's a similar problem to rhetorical analysis. It's difficult to publish a paper anonymously if you have other publications because the rhetoric is likely to have your fingerprint plastered all over it.
Privacy is rapidly eroding and it's not clear the trend can be reversed.
These sorts of techniques have widespread applicability. Who needs facial recognition when you have kinematic behavioral analysis? Just imagine the trove of data you could pull from existing information sources if you had unlimited analytical time and computational power. As computing power becomes even cheaper and analytical techniques become better, our "effective privacy" window in our partially-anonymous society will grow ever smaller.
That's why I'm still entertaining the thought that we may be in a "privacy vs. progress of mankind, choose one" type of situation. "These sorts of techniques" are the first scratches on the Great Web of Causality. I don't see a way to prevent it short of banning general-purpose computation. But are we going to deny ourselves all the advances in medicine, disaster relief, energy efficiency, etc. to protect ourselves from some future governments that may get funny in their heads? Maybe it's time to accept that our "effective privacy" was only a temporary state of affairs, a historical aberration of the industrial age. I don't know if this is a good idea or not, but I suspect we'll learn to live with it, and proper social customs will develop around snooping on your neighbours.
By the way, it's funny how often these discussions turn into "we need to stop technology X because evil advertisers will use it to do their evil things". It's not technology X that is the problem, it's the evil advertisers who are assholes, and we need to find a way to get rid of the latter, not the former.
At this point we have to drag out the heavy philosophical tools and ask: what do we mean by progress? The "Whig view of history" is one of incremental improvement towards better states, but it's reasonable to ask what we mean by "better" and how the progress itself affects our view on what is better.
We also need to bear in mind that it's not just future governments but present governments in various parts of the world that will weaponise technology for control purposes. Behavioural analysis as part of the Great Firewall of China?
I think that the final destination here is going to be a near-complete lack of privacy in human relationships. Everybody is going to be able to look up everyone else's porn habits. Everyone is going to know who does drugs, and who's cheating on whom.
And as a society I think we're going to need coping strategies. Trying to protect privacy is only going to delay the eventual meltdown of privacy in our lives.
I think we are making good strides already though. There is less stigma around porn, around homosexuality, around fetishes. Religious tolerance also seems to be increasing. There are still problem points, but I think the trends are in the right direction. I'm happy to be proven wrong on this point though.
Also, when logging into my bank account and I'm anxious to see if that big deposit has been made, my behavior might be different than when checking at the end of the month if there's anything left.
Would that work if the user changed keyboards? I had used Apple keyboards of various types for the last five years, and recently bought myself a new gaming PC with a gaming keyboard. While the WASD feel is great, typing on it is a nightmare, and I feel my WPM is a third of what it is on keyboards I'm used to.
We already know that places like Facebook monitor our every keystroke and store it for posterity. Yes, they hang on even to the text you regretted, backspaced, and never published.
It would seem utterly unprofessional, and potentially detrimental to shareholder interest, for them to not also keep track of timings and typing rhythms.
Which makes me wonder how much mood analysis, lie detection, and other psychometrics they really have collected on us all over the years, given the right kinds of algorithms to run the lot through.
It was true at one stage, according to official acknowledgement from Facebook. There's a discussion somewhere here on Hacker News; I can't find the link right now.
That was actually my first thought. There was the story last year about replaying the typing of a Google Doc[1], and as it stores the time between keystrokes, it struck me that an attacker could feed a document you've written into an extension like this but in reverse, thus typing with exactly the same signature as yourself (at least with respect to the gap times).
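A sketch of that replay (recorded gaps and target are hypothetical). One caveat: synthetically dispatched events carry isTrusted === false, so this only fools a profiler that doesn't check the flag, or one that only sees timings server-side:

    // Hypothetical replay: feed the recovered inter-keystroke gaps back
    // through synthetic events so the forged session carries the victim's rhythm.
    interface RecordedKey { key: string; gapMs: number }

    async function replay(target: EventTarget, keys: RecordedKey[]): Promise<void> {
      for (const { key, gapMs } of keys) {
        await new Promise((resolve) => setTimeout(resolve, gapMs));
        target.dispatchEvent(new KeyboardEvent('keydown', { key, bubbles: true }));
        target.dispatchEvent(new KeyboardEvent('keyup', { key, bubbles: true }));
      }
    }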
If this doesn't get implemented in browsers as a default option, or the extension doesn't become popular, people using it are going to be easy to identify. It's like someone using just normal http and suddenly switching to https and Tor. You are going to stick out.
I tried NoScript for a bit. Obviously most social networks stopped working, but I liked how NYT became free again (since they track you by a cookie) and obviously HN remained solid. Essentially NoScript just means no online socializing, which I think I might grow to become ok with.
> Most (if not all) behavioral profiling systems check your mouse movements too. However in my experience, mouse movements do not provide sufficient metadata to accurately identify a user
I would love to know more about this. Online ad networks have access to mouse movement patterns on web pages, but users (usually) don't enter data through keyboard on such pages. And I would expect they already use this mouse movement data to catch fraud... I wonder if it can be used to identify users?
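As a sketch of the kind of features such a profile could contain (these are guesses, not what ad networks are known to compute):

    // Hypothetical mouse-movement profiling: sample positions on mousemove
    // and derive per-user statistics such as average pointer speed.
    interface Sample { x: number; y: number; t: number }
    const samples: Sample[] = [];

    document.addEventListener('mousemove', (e: MouseEvent) => {
      samples.push({ x: e.clientX, y: e.clientY, t: performance.now() });
    });

    // Average pointer speed in px/ms over the recorded trail.
    function averageSpeed(trail: Sample[]): number {
      let dist = 0;
      for (let i = 1; i < trail.length; i++) {
        dist += Math.hypot(trail[i].x - trail[i - 1].x, trail[i].y - trail[i - 1].y);
      }
      const elapsed = trail.length > 1 ? trail[trail.length - 1].t - trail[0].t : 0;
      return elapsed > 0 ? dist / elapsed : 0;
    }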
I might not be getting everything right, but aren't they proposing "kind of" a keylogger as a security solution? Even the idea of using a technology that others build products to fight against seems a bit strange.
Now, assuming that everyone will use it in good faith, I do like the fact that, by itself, it does not add another thing you as a user need to do to authenticate. But, as Paul said in his article, I only see it used as a trigger for other security measures.
I don't see why you find the idea that strange, an anti-virus is already running on your computer, hooking all your programs, and sniffing on your connections. Offense & defense have always used virtually the same techniques.
Imagine the behavior profiler learning enough to notice trends and even changes in groups. Imagine if it could identify enough to determine people who had previously been sexually victimized. I wonder how much that type of data could sell for.
My problem with all of this profiling/biometrics is an extension of the problem with retinal scanning. It isn't a question of whether it works or not, but of what information other than identity it leaks.
It sounds great and much more protective than passwords. You can't copy/imitate behaviors.
However, I am wondering if the system still works if you are tired or sick. Your behavior might change in this case and therefore the system would not recognise you.
I think you misunderstood the intention of this feature. The goal is to identify and/or profile users that themselves use just a regular log-in. This can be then used to improve targeted marketing, selling that information to third-parties for example.
Note how the article mentions that the gender can be determined after a few keystrokes, even though the user never entered that specific information. This is certainly not the only metric that can be identified. The point of the article is to develop a solution to prevent leakage of private/personal information.
> Note how the article mentions that the gender can be determined after a few keystrokes, even though the user never entered that specific information.
The research got a median 88% accuracy testing subsets of 98 males and 35 females.
Note that I got 74% accuracy on that data set by guessing male, male, male, male, male... (98 / (98 + 35) ≈ 73.7%).
The original researcher knows in advance what the ratio is, yes, that's my point. I'm illustrating that the research is not very good. They couldn't even identify women to take part in the study. Given the numbers involved, it certainly isn't Facebook-ready.
In general, I don't believe it is possible to distinguish male and female typing patterns.
What you might be recognising is how people learned to type combined with the size of their hands - that might partly but not exactly break along gender lines. Bucketing people on that basis is just a recipe for awkwardness.
Fabricating facts and using ad-hominem is not a very good way of backing up your arguments.
Quote from the paper:
We use the public GREYC keystroke benchmark database for this work. It is one of the largest databases (in terms of number of users and sessions) in keystroke dynamics. To our knowledge, no existing database contains more individuals. In order to reduce the bias due to this high quantity of male information, we only kept the first n male samples (where n is the number of female samples).
(Don't bother with your response, I won't be reading it.)
>We use the public GREYC keystroke benchmark database
Yes. That's their own database which they're talking up, the one that they made to do this research. That's what I was talking about.
>In order to reduce the bias due to this high quantity of male information, we only kept the first n male samples (where n is the number of female samples).
It happens that I didn't read this part.
On reflection, what I understand now is far worse than what I originally understood:
- They have 35 females and 98 males, and they take many typing samples from each.
- Since the participants provided many samples, these samples appear both in the training set and in the test set.
- I use the training set to figure out whether I can recognise the typing of the 35 female participants.
- Then I look through the test set to see if I can identify those participants again.
Basically, what you've shown is that you can identify the typing of 35 people if you've already seen it - 88% of the time.
Splitting groups into 'female' and 'male' is a red herring. This method would presumably work even if I split them into two random groups.
That's the first thing I thought of, tired or sick. Or how many times I've used one hand to type in my password because I had a drink or food in my other.
How would mobile work with this? Sometimes I use both fingers, sometimes just my thumb on one hand. Would it just create multiple behavior profiles for me that are accepted?
In combination with the right password, the behavior match seems like it would actually be pretty strong. I'm looking forward to trying to break it tomorrow with a friend.
How is the profiling data supposed to be used, theoretically? I hope not as a full login. I'd count it as a "what you are" type of item, like a fingerprint, and would only want to use it as a username.
I think session expiration could actually be an interesting use case. Instead of/in addition to "session expires after X minutes" you could expire the session after the behavioral delta is big enough.
But I'd assume a different login mechanism.
Could be good session hijacking protection, especially for applications that require regular interaction anyway.
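A sketch of what that expiry check might look like; the distance metric and threshold are invented for illustration:

    // Hypothetical behavioral session expiry: compare live typing intervals
    // against the profile enrolled at login and expire on large drift.
    function meanAbsDelta(enrolled: number[], live: number[]): number {
      const n = Math.min(enrolled.length, live.length);
      if (n === 0) return 0;
      let sum = 0;
      for (let i = 0; i < n; i++) sum += Math.abs(enrolled[i] - live[i]);
      return sum / n;
    }

    const DRIFT_THRESHOLD_MS = 40; // illustrative tolerance per key interval

    function shouldExpireSession(enrolled: number[], recent: number[]): boolean {
      return meanAbsDelta(enrolled, recent) > DRIFT_THRESHOLD_MS;
    }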
Love that there's countermeasures already. Well written article, too :)
There are a lot of approaches around (big data, profiling, machine learning, ...) based on the assumption that people usually behave in the same way. And they really do.
I wonder how effective this would be on a mobile keyboard. I can type fairly well on a desktop keyboard, but mobile is more complex since you can have plug-in keyboard enhancers.