> iTerm2 has a feature called Triggers, which can execute actions based on text matching a regex in your terminal. So we could write a regex to listen for “Yubikey for” and have it run the same script, eliminating the need to press buttons altogether.
Don't do this.
Actually seriously, don't do most of this.
The fact that your computer cannot induce the yubikey to provide its key material (or evidence of the key material) is where it gets "security" from in the first place. As soon as someone can convince your computer to do something there's an increased chance they can get it to do something else.
Some suggestions:
- Wire the F14 key up separately to "the finger" (and not to wifi)
- Use a yubikey simulator[1]. If your sysadmin won't trust you with the key material inside the yubikey so that you can use a simulator, they definitely won't trust you to emulate the simulator with the finger either.
> Before we go any further, I’d like to acknowledge the reasons for this. If a remote attacker were to compromise your laptop, being able to trigger the YubiKey from software on the computer defeats the whole point of using the YubiKey.
> If you work in tech, you probably have a YubiKey
That is a gross overstatement. As someone who works for a pre-IPO startup and has been at various startups in the Bay Area for a number of years, I'd hazard that only 5-10% of the engineers had a YubiKey, let alone everyone who "works in tech".
As a software engineer in a bank (I'm gonna call that tech), my password has to be 8 characters and capital letters don't matter. I still use a Yubikey personally.
I think he means the password complexity policy is only measuring length, and doesn’t distinguish between upper and lower case. Not that the case doesn’t matter in the password. Just in the policy.
Don't know about employee passwords, but for Wells Fargo the online login passwords are case insensitive. I assume this is due to some legacy system somewhere, but for their system password = PASSWORD = PaSsWoRd. One of the many reasons I no longer bank with them.
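That behavior usually comes from a legacy system case-folding the password before comparing it. A hypothetical sketch of the difference (not Wells Fargo's actual code):

```python
def legacy_password_check(stored, supplied):
    # Case-folding before comparison is exactly what makes
    # password == PASSWORD == PaSsWoRd all "match".
    return stored.casefold() == supplied.casefold()

def case_sensitive_check(stored, supplied):
    import hmac
    # Constant-time, case-sensitive comparison (of plaintext here for
    # brevity; production code compares salted hashes instead).
    return hmac.compare_digest(stored.encode(), supplied.encode())
```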
I've never had a YubiKey, but lots of other places use smart cards (ISO 7816) for secure authentication. I suspect they are far more common than YubiKeys or newer tech, especially in the financial industries.
YubiKeys can pretend to be a PIV or OpenPGP smartcard ;=)
But yes, in e.g. banking the security systems were created long before there were really good USB-based security keys, so most of the time it's probably actual smartcards.
But then it also turned out that many smartcard drivers are just REALLY bad and complex, potentially making your system more vulnerable, so I can totally see companies switching away from them.
Amazon may use keys from a variety of sources, but the ones I’ve seen were packaged differently from anything I’ve seen from an actual Yubikey.
If only people were allowed to bring their own Yubikey for U2F and OTP, then they wouldn’t have to wait on whatever official procurement processes are in place from their approved suppliers.
I've got a couple of branded ones from Amazon IT -- a big one and a tiny one. Currently they're giving out some other one, but I think you can bring your own too.
I used it for testing smart meters in Norway, so we did not need to run to the lab to trigger events. The best part is that the whole menu can be driven with just one physical button, a great job for a Bluetooth button pusher + Python.
It is also capacitive, so it works on phone screens, and on YubiKeys (?)
I like the concept but the price is steep IMO. $90 for two actuators in the starter kit then $40 for each additional module.
I wonder if one could design a similar solution without requiring a bridge. Although of course given the power consumption adding WiFi on the actuators might not be a great idea.
That may be the case but it doesn't change the fact that I can't think of a use for these that is worth $40. I think the value these would add to my life would be worth closer to $5/each.
Now, if they existed for $5 each, I would be all over it and buy 10
I mean, for a power supply, Bluetooth receiver/chipset, actuator... I don't think I could cobble together something half as good for twice as much, and that's not factoring in labor.
This is some cowboy engineering and I love it. Totally shooting from the hip, but still finishing up with a nice long-form post.
Like other comments mentioned, this is super impractical, but that's not the point. Building a robot finger to push a button at the push of a button is a hilarious Saturday afternoon project.
It's one of those projects that lets you flex your problem-solving muscles in a low-stakes environment. Taking "I wonder if I could..." and running with it.
Security isn’t binary. This mechanism is more secure than no 2FA, because off the shelf malware isn’t gonna prod around your local network and look for a self built finger contraption. Furthermore, if someone does trigger it, the thing will move and you’ll hopefully realise you’re haxx’d.
By adding some security (protecting against some threat) to another security (protecting against same threat) you gain no security, after all: 1+1=1 in binary, so security is binary in this way as well.
By protecting the same thing with two different security mechanisms, you have multiplication, and in binary 1×0=0 so security is binary in this way.
And so on.
Security is about identifying threat-actors and devising cost-based challenges that exceed the value to others of compromise. In that way, it is absolutely a binary thing -- you are either secure from those specific threat-actors or you are not.
It's a real problem that without perfect knowledge, you don't actually know if you are secure from those threat-actors: Someone can discover a cool factorisation trick, or your computer might make weird noises when multiplying certain numbers, or it might allow authenticated users faster responses than unauthenticated ones. Threat-modelling in the face of those kinds of thing is nearly impossible, but even against basic stuff (the stuff we already know) it can be really hard. For these reasons and more, weakening some security in what you may perceive as a small way can actually be absolutely catastrophic to the security against the intended threat-model. So don't do that: Start from the other side, decide what you're trying to protect and from whom, and convince yourself that they really can't gain anything with what they've got.
Script kiddies using a ten year old version of metasploit? The finger is probably safe for all the reasons you're thinking, but if they find a way in, someone else is going to strace/gdb/dtruss all the things and find you've got a lot of secrets in RAM - if any of those belong to an even higher-value target, you can bet that is automatically harvested, collected, and shipped back to "home base" for use.
> This mechanism is more secure than no 2FA,
You can't meaningfully say more or less secure without saying who the threat-model is.
For threats I worry about, this is much less secure. I also believe that's true for most yubikey users, including the ones with the technical ability to do something like this.
> the thing will move and you’ll hopefully realise you’re haxx’d.
If the yubikey cannot be triggered by my PC because there isn't a wire connecting the two together, then there is zero risk from a remote attacker who does have access to my PC -- unless you believe the airgap grants you nothing in the first place.
I mean, I hope the airgap means something, but I don't hope that I will always be awake and in front of the finger paying attention to its gyrations and undulations.
My YubiKey requires me to enter a PIN as well, which expires after a period of time. I don't see any significant benefit to requiring a tap after I've entered the PIN.
The PIN can be entered remotely, or indeed supplied by software independent of a human's presence. The tap, on the other hand, cannot be synthesised without a bunch of extra malarkey.
If your Yubikey had a PIN terminal, it could treat the entry on its own PIN terminal as presence indication, but it just discerns the PIN via CTAP from the host computer and that might be caused remotely.
I'm not going to provide a better response than geocar, but I have two things to say:
1) I really wanted to give out the free advice that people should plug their yubikeys into their monitors. Get two so you can have one in the monitor and one in the laptop (or laptop bag). Also, you don't need a USB-C key for the monitor.
2) there's the specific question of "what is the surface area of attack?"
With a yubikey, you limit that surface to "people who have physical access to your device"
I didn't make the case that security is binary. I simply pointed out that they are severely compromising their security posture by re-adding remote users as an attack surface.
If someone compromises their machine and watches what steps they take to access eg a production network, the attacker will trivially see the yubikey being triggered. They don't need to know what it is or why it's being run. They'll just know that after you ssh you run this script.
When I was at Google around 2012, the company had a custom 2FA dongle that detected motion rather than touch. An engineer who had remotely ssh'd into their workstation needed to 2FA and realized that they could send an SMS to their phone, cause the phone to vibrate, and trigger a false 2FA event on the dongle. (Or maybe they got their computer to play a loud noise. I forgot the specific details.)
Similar to this fake finger, it was a cool hack at the time but defeated the purpose of 2FA.
More on defeating 2FA: during my internship at Amazon I created a Greasemonkey script that would store 'n' Yubikey codes and paste them automatically whenever the browser asked for a Yubikey code. This worked flawlessly because afaik Yubikey codes have no expiry... they just have to be used in the order of their generation... I highlighted this no-expiry issue but no one took it seriously...
Correct - just like an evildoer who had your yubikey could generate and save a bunch of yubikey key strings, they could also generate and save a bunch of time-based codes for times in the future by changing the host clock.
You can use a bidirectional challenge-response between the yubikey and a trusted server - that's what U2F does.
But honestly, if an attacker has both your password and physical possession of your 2fa token, it's already game over.
Sure you can! (For yubikey OTP key strings at least) Just have yubikey sign the current time, you're already trusting them to correctly verify the key string.
U2F is a different animal though. The question is then: does timestamping the response reduce the attack surface enough compared to the downsides? I'd argue yes since the described attack can offset a failed login and the actual attack after a MITM. Also, it is probably possible to get the time-stamp within the kernel. If your root is compromised you're also done for.
> Just have yubikey sign the current time, you're already trusting them to correctly verify the key string.
By "them" you presumably mean Yubico not the Yubikey. But these OTP strings are generated by the Yubikey, not by Yubico so there's no way for them to be "signed" in this way. The verification process takes place at authentication so that would just tell you the current time, something you already know, it's useless.
> U2F is a different animal though. The question is then: does timestamping the response reduce the attack surface enough compared to the downsides? I'd argue yes since the described attack can offset a failed login and the actual attack after a MITM. Also, it is probably possible to get the time-stamp within the kernel. If your root is compromised you're also done for.
For WebAuthn (and its predecessor U2F) none of this is correct.
The Relying Party (a web site you want to authenticate to) sends a random challenge. A correct authentication in part signs that challenge, so timestamping is irrelevant here, your answers are either fresh or they're invalid anyway.
Because a physical FIDO authenticator is independent of the computer, you are not necessarily "done for" if the computer is compromised. Unless you've outfitted your computer with a finger, it cannot, for example, press the button on the key, so there is no way for the compromised computer to obtain signatures from the authenticator with the UP (User Present) bit set; checking UP in the signed response is part of WebAuthn.
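For concreteness, the UP check is a single bit in the authenticator data that accompanies every signed WebAuthn response: after the 32-byte RP ID hash comes a flags byte, whose lowest bit is User Present. A minimal sketch of just that check, on synthetic bytes, not a full assertion verifier:

```python
# Authenticator data layout per the WebAuthn spec: bytes 0-31 are the
# SHA-256 hash of the RP ID, byte 32 is the flags byte, bytes 33-36 are
# the signature counter. This sketch covers only the flag check.
UP_FLAG = 0x01  # bit 0: user presence (the physical touch)
UV_FLAG = 0x04  # bit 2: user verification (e.g. PIN or biometric)

def user_present(authenticator_data):
    if len(authenticator_data) < 37:
        raise ValueError("authenticator data too short")
    return bool(authenticator_data[32] & UP_FLAG)

# Synthetic examples: 32 zero bytes standing in for the RP ID hash,
# a flags byte, then a 4-byte counter.
with_up = bytes(32) + bytes([UP_FLAG]) + bytes(4)
without_up = bytes(32) + bytes([0x00]) + bytes(4)
```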
Urban myth: somebody taped a hotdog to the CD drive tray of their workstation, and put the yubikey right in front of it. Then, whenever they needed to touch the YK while not physically in front of the workstation, a quick `eject /dev/cdrom` did the trick ;)
Well, considering that you don't have to actually be at your desk anymore, you can let it rot for a couple of weeks between replacements. Or use some other material with similar conductivity and capacitance characteristics.
Why not just keep the wire attached to the Yubi-key, but leave it electrically floating (high-impedance state) and have the board ground it whenever it needs to be pressed? No need for mechanical triggers...
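On a microcontroller that is just toggling a GPIO between high-impedance input (floating) and driven-low output (grounded). A MicroPython-style sketch of the idea, with a stub Pin class so the logic also runs off-device; the pin number and hold time are assumptions:

```python
import time

try:
    from machine import Pin  # real MicroPython on the board
except ImportError:
    class Pin:  # desktop stub so the logic can be exercised anywhere
        IN, OUT = "in", "out"
        def __init__(self, num, mode=None):
            self.num, self.mode = num, mode
        def init(self, mode):
            self.mode = mode
        def value(self, v):
            self.last = v

TOUCH_PIN = 4  # assumed GPIO wired to the YubiKey's touch pad

def press(pin, hold_s=0.5):
    """Ground the floating wire briefly to simulate a capacitive touch."""
    pin.init(Pin.OUT)   # drive the line...
    pin.value(0)        # ...to ground: "finger present"
    time.sleep(hold_s)
    pin.init(Pin.IN)    # back to high-impedance: electrically floating

touch = Pin(TOUCH_PIN, Pin.IN)
```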
Google won't let you set up 2FA without adding a phone number, which kind of sets you up for a sun swapping attack by design...
My biggest beef is the lack of NFC in the MacBook. I want a key in a card form factor, because who the hell has keys these days. Maybe add a hardware button on the card. It would work on mobile and laptops. Banks could use their own credit cards for logging in...
If you've got adversaries doing a sun swapping attack you are in a Rick and Morty episode not the real world.
I can't swear Google has never known one of my phone numbers in the many, many years I've had an account, though they don't have one recorded now. However I can tell you with certainty I have three WebAuthn authenticators, and no SMS-style 2FA authorised on my Google account now.
> If you've got adversaries doing a sun swapping attack you are in a Rick and Morty episode not the real world.
It happened to Jack Dorsey. And attacks tend to become easier over time. Any employee of an AT&T store could do it to you right now.
The reason we know Dorsey was the victim of a sim swap attack is probably that he's important enough that when he was hacked he couldn't be dismissed with the "You probably messed up and leaked your password" dismissal.
No, Jack Dorsey suffered a sim swapping attack. Those happen here in the real world, but the post my joke was aimed at wrote sun swapping. Swapping suns isn't a thing outside of fiction.
As to me, since you made it personal, I'm sure somebody at an AT&T store could attempt SIM swapping, but they might have trouble because the system won't give them my number from a completely different numbering system (different country) without a code they don't have.
If you socially manipulate your way into getting a transfer out code (good luck with that, but I'm willing to accept it could happen) then the big problem is I don't use SMS 2FA, as I wrote in the comment you're replying to, so it's a dead end.
This reminds me of back in the olden days (when SMS MFA was still an "OK" thing to do) when we needed shared MFA for IT admins of various SaaS apps. We set up a dedicated phone, duct-taped to the wall, to get these codes and push them to HipChat (via Tasker & NodeJS).
This reminds me of something that happened at a company I worked at maybe ten years ago. An employee was fired for setting up a webcam that pointed at his 2FA key generator so he could log in remotely without having to carry it around. Hacker mentality, but not in a way that won him the respect of the security team.
If the only phishing protection you find meaningful for 2FA tokens is domain matching then any extension-based password manager like Bitwarden will work with far less hassle than needing a physical token or your phone.
A slight aside, but so many of these keys seem to have the touch point / button applying force perpendicular to the direction of insertion, I wonder if there is any long-term potential to cause damage to the USB interface.
I have one of those; you don't need to _press_ it, it seems the conduction of the finger is "key" (haha, sorry). But yeah, I agree that if you are pressing it _hard_ it will likely damage the USB port, especially considering it doesn't have a housing around the contacts, which I would say might protect the port a bit better by directing the force away from the contacts.
The obvious next step is to plug this into a server and control it through USB over IP. Call it "remote, centrally controlled 2FA" and your manager will love it!
I remember a story about how our (third party) security operations center doesn’t allow phones on the floor, but most of its customers use Duo Push. So there is a table in the middle of the floor with all the 2FA phones bolted to it.
The threat actors are SOC employees or visitors who might (maliciously or unwittingly) use their smartphones to record sensitive data.
The risk is data exfiltration. A selfie in front of the SOCs giant screen wall; a compromised phone that keeps recording audio.
The problem is that a third-party SOC will generally need a way to connect to their customers' systems. Sometimes that gets properly implemented as a site-to-site VPN with isolated jump hosts and session recording. In other instances, the SOC gets to use normal employee VPN access, and usually a handful of VPN tokens.
And now you have a fun conflict: One customer insists that no mobile phones are carried inside the secure SOC area. Another uses a VPN solution that requires a smartphone (and, e.g. Duo Push) as the second factor. How do you satisfy both? You take a set of mobile phones, possibly add some measures to stop them from being used as recording devices, and bolt them to a table so they can't leave the secure area.
Threat is that the mobile devices could be used to 1) photograph proprietary systems, 2) exfil data over mobile networks or potentially introduce 3) malware via usb ports.
I don't really get how bolting the devices is a solution for enabling 2FA, unless the access console is also at the same location. But it would prevent 1) and 3).
As long as the table is bolted to the floor, you're replacing the possession (of a phone) factor with a location (in the SOC) factor. Keeps both clients happy, and the security architect sleeping soundly.
Considering many services only allow one YubiKey or only one TOTP authenticator ... I might actually need a short term solution like this to beat the 2FA on those services. Otherwise what happens if I lose my key on the road?
The 2FA services that allow >1 YubiKey are good, I can have a backup key locked up some place and use them as intended.
You could generate the key(s) on a (airgapped, if so inclined) computer, push to multiple Yubikeys (though other brands are available, let's not let it become a 'google') and then delete the private key(s) from computer.
Of course, it depends what you want to defend against with your backup - this works fine for a broken OpenPGP smart card (;)) but in the event that it's lost or stolen.. well the best that can be said is that it gives you some window to create a revocation cert, login, and change the single registered FIDO device to a (third) newly provisioned one (or your second one, the backup, provisioned with a new key after logging in).
Or you could use a different method as your backup (IME if they only allow one they do at least also have backup codes, app-based, etc.) in order to login and change the device to the backup provisioned with a different key. (So it can be generated on the device in this case.)
Some policies will disable your Yubikey/U2F key if it goes unused for N days. Usually low enough that it's annoying to keep a backup key.
We've used https://rsc.io/2fa to share TOTP keys between multiple individuals. We store the secret key in a shared password store that's also behind a separate 2FA login.
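For context, the TOTP codes such tools share are deterministic: HMAC-SHA1 over a 30-second time counter (RFC 6238), so any party holding the secret can mint codes, including codes for future times, which is both what makes sharing work and what makes a leaked secret dangerous. A standard-library sketch:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, digits=6, period=30):
    """RFC 6238 TOTP with HMAC-SHA1, as most sites deploy it.
    `at` lets you compute a code for any point in time."""
    pad = "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int((time.time() if at is None else at) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation, RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC 6238 test vector (secret `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`, time 59), this produces the published code 94287082 at 8 digits.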
With TOTP it's very understandable to limit you to one authenticator as each additional authenticator makes it easier to attack you (meaningfully more guessed codes are now correct at any moment) and the UX is awful because there's no good way to discern one TOTP authenticator from another.
WebAuthn / U2F are explicitly designed to allow multiple authenticators. The W3C WebAuthn spec. explicitly calls out that you should allow users to register more than one authenticator, and might want to provide a nice way for your users to label them, e.g. "Yubikey", "iPhone", "Greg's key" or whatever. Every site I've used that offers WebAuthn does this correctly except AWS and you'd have to take that up with Amazon.
> ... each additional authenticator makes it easier to attack you (meaningfully more guessed codes are now correct at any moment) ...
That isn't really an issue with TOTP either (if properly implemented), as each authenticator has a unique identifier and seed.
I've got multiple authenticators set up on a PayPal account. When logging in, I simply select which of the authenticators I want to use and enter the code it gives me.
Holy crap. I've avoided buying a U2F/FIDO2 device (eg. YubiKey) for a few years thus far, as I'm "happy enough" with TOTP. I know the differences in terms of implementation/security, but not enough services even offer TOTP, let alone the U2F/FIDO2 protocols. It's not worth the investment right now for how generally unsupported it is.
Do most services that support such devices really only support a single key/device? Do they offer the same one-time recovery codes that are generally offered with TOTP (though the storage/security of such codes server-side is dubious), for the eventual failure of the Yubikey (not if, but when)? Surely a tiny hardware device reaching the end of its lifespan doesn't mean you lose access to your account.
I have a pair of hardware FIDO authenticators, and my phone and laptop are both platform authenticators.
I have personal accounts with Google, GitHub, GitLab, Facebook, and DropBox that use at least two of those four authenticators. I also have Login.gov (US government) and Gov.uk Verify (one of two UK government authentication systems, hooray for needless duplication)
Most of them offered one-time recovery codes which I hand wrote in a book of one-time recovery codes, but without fetching that book I can't tell you it was all of them.
At my previous job I used a physical authenticator with AWS and that was indeed restricted to just one authenticator, on the other hand there's an account "administrator" for that AWS account so if you lost your authenticator the admin can get you back in and I assume larger companies have multiple people in the administrator role.
The WebAuthn specification explicitly says that Relying Parties (ie web sites) should support multiple keys.
And yes, if you lose access to all methods of authentication in some cases you lose the account. I believe GitLab explicitly flagged their intent to act this way for accounts that don't pay them money, and I would prefer this. As I wrote back then, if it's not worth an hour of my time to somehow try to prove my identity to you after locking myself out of your service (which if I'm not paying you, it probably isn't), then I don't want it to be worth an hour of some social engineer's time to steal my account.
I'm pretty sure every site I've set up to do TOTP only allowed one authenticator. I got burned using Google Authenticator when I had to replace my phone, because there was no way to transfer the auth data to a new phone.
Maybe the Google app has changed now, I have no idea. I've had much better luck storing TOTP in 1Password and Bitwarden, which allow you to sync across multiple platforms. So now device upgrades are a non-issue.
Most sites give you some static backup codes for TOTP - definitely store those somewhere safe, they can be a lifesaver.
Those backup codes defeat the entire purpose of 2FA and are like storing passwords in plain text. It only takes 1, maybe 2 of those codes for an attacker to add another security key to your account for future unlimited access.
Supporting multiple keys is a good idea but it solves a different problem. People want peace of mind.
Backup codes are not like passwords in at least two important ways:
* The site picks them, not you, so they're random nonsense different for each code, rather than inevitably being password1234 and being the same on Instagram, Twitter and your bank account.
* You don't need them usually, so there's no reason you'll have them to hand, which then makes it harder to steal them. Even for a social engineering attack, you increase the friction because now to help the attackers a user needs to go find their backup keys which is a hassle.
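A site minting codes with those properties is a few lines with a CSPRNG; a sketch (the code format is hypothetical, and a real server would store only hashes and burn each code on use):

```python
import secrets

def mint_backup_codes(n=10, groups=2, group_len=5):
    """Mint one-time backup codes the way a site might: random nonsense
    from a CSPRNG, never user-chosen. The 'xxxxx-xxxxx' format is a
    hypothetical choice, not from any particular service."""
    alphabet = "abcdefghjkmnpqrstuvwxyz23456789"  # skips look-alikes i/l/o/0/1
    def one():
        return "-".join(
            "".join(secrets.choice(alphabet) for _ in range(group_len))
            for _ in range(groups))
    codes = {one() for _ in range(n)}
    while len(codes) < n:      # collisions are astronomically unlikely
        codes.add(one())
    return sorted(codes)
```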
I think the parent’s point is that if you’re going to allow backup codes you might as well just add “second password” as a form of 2FA and enforce some basic complexity requirements.
It would be better to use a software TOTP authenticator with backups. You could also store multiple encrypted copies of the TOTP secret encrypted with different Yubikeys. I don't know if there's any software that does this automatically and momentarily decrypts the TOTP secret into a secure memory location when you need it. That preserves... most of the benefit of 2fa.
"better" is a strange word to use when discussing security, everyone's situation is different™. You're making a set of trade-offs between availability (backups) and confidentiality (only using hardware tokens which are tamper evident) which absolutely do not generalize to every case.
I'm saying if your hardware token is plugged into a machine that you connect to with USB-over-IP your hardware token is basically security theater and your actual security is whatever secret you use to protect the machine the token is plugged into. So if you're worried about availability but want something like 2fa software TOTP secrets make a better tradeoff.
Storing copies of a TOTP secret is as good as just having 2 high-entropy passwords and saving multiple copies of one of them in clear text, which is not more secure than having 2 high-entropy passwords and not storing them anywhere, and which is equivalent to just 1 doubly-high-entropy password not stored anywhere.
The fact that you can store copies effectively defeats the purpose of 2FA.
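In guessing-work terms the equivalence above is simple arithmetic; a back-of-envelope sketch (the 64-bit figures are illustrative assumptions, not from the thread):

```python
# Back-of-envelope entropy arithmetic for the equivalence claimed above.
pw_bits = 64      # assumed entropy of the password
otp_bits = 64     # assumed entropy of the TOTP secret

both_secret = pw_bits + otp_bits   # independent secrets: bits add
copy_leaked = pw_bits + 0          # a cleartext copy the attacker read adds nothing
one_double = 2 * pw_bits           # one password of doubled entropy
```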
One of the reasons to have multiple YubiKeys is that if I lose one on the street I can just login to all my services with my backup key, disable the lost/stolen key, and buy and register a new key.
Whereas if someone got a hold of your TOTP secret, ehhh ... you might not necessarily know for a while.
One reason to have multiple copies of the TOTP secret is to not be locked out of accounts should one lose their 2FA token. For example, if one has two copies of the TOTP secret, one of which is in a secure location, and one of which is used for daily purposes, as long as the secure location is reasonably secure, it's not much different than storing backup codes in that secure location.
I do agree with you that having multiple copies of the TOTP secret that can be lost and not noticed isn't a good idea though.
There are these RSA-brand tokens that show a new TOTP number every few minutes. Occasionally people find an unsecured webcam pointed at one of those somewhere on the internet...
I guess a ~10kohm resistor in series couldn't hurt, should something go wrong. After all your finger is not exactly super conductive in the first place...
I got into robotics and have been able to do it professionally for several years precisely because it's a useful way to automate real world end-to-end testing.
Yeah... isn't one benefit of a yubikey that a secret must be acquired by some very physical and intentional means? If my laptop/password is compromised, then they still can't log in because they need my secret token from the yubikey. Well, if having that secret token is just one curl call away for anyone on the same network, then it's no longer a very physical and intentional safeguard.
I know... layers of unlikelihood.. but I'd probably opt for a physical "good button" gapped from my computer as sort of a closed electrical extension of my finger.
> Congratulations, you've defeated the purpose of having a YubiKey.
Even a virtual 2FA button is useful. It prevents people from using your stolen credentials to log in to websites unless you click the button, even if it's just a virtual button.
Sure your computer can be compromised, but it's probably still more secure than sms 2fa.
I'd hazard saying that the purpose of a YubiKey is to provide two-factor authentication. A YubiKey acts as an item, possession of which implies identity. When you allow the YubiKey to be activated without human interaction, it moves from the domain of possession into the domain of knowledge: the identifying party needs to know where to knock, not to possess the key. It's no better than appending the URL at the end of your password.
If you allow for a YubiKey, or any other physical artifact in that matter, to be remotely invoked it negates its utility as an authentication factor in the physical domain.
It depends on what protects the key. If the problem is being unable to duplicate it, you could protect remote access with a different YubiKey or some other second factor.
And the setup in the article isn't even remote access. If the only way it can be triggered is a local button press, you're golden.
Exactly. At one of my workplaces, we needed 2FA to log into a vendor portal. So we stuck the username, password, and TOTP in Vault, which is protected by a corporate AD password only.
Surely an authenticator app like Authy is more secure than a hardware key like Yubikey.
To access my account with the former an attacker needs my phone and me to log in to it for them.
To access my account with the latter an attacker just needs the hardware key.
I usually have my phone on me whereas I don't want to have to keep track of a tiny USB device and am likely to just leave it plugged into my laptop. My laptop is the most valuable item in my home and so most likely to be stolen, along with the attached key.
> Surely an authenticator app like Authy is more secure than a hardware key like Yubikey.
It is difficult to assess a choice as "more" or "less" secure without a threat model.
You've focused on the threat from attackers willing to use a mixture of a physical attack (stealing the phone or laptop, perhaps mugging you for it) and a digital attack (accessing online accounts using credentials they stole) but those are very rare.
On the other hand Phishing and other purely online attacks are extremely common. I probably see two or three attempts per week. Most of them are crude but not all, and they work.
Authy emits TOTP codes, so those can be phished. The phishing site gets you to enter your TOTP code, which it passes over to the genuine site, signing in the attacker with your 100% authentic working codes.
But a Yubikey (and dozens of cheaper alternatives including Yubico's own Security Key) can also be used with WebAuthn, which cannot be phished.
I'd like 3FA(!), which would be password plus OTP (e.g. a code from Authy) plus a hardware key like Yubikey. Wouldn't increase the work for me to log in since the Yubikey would always be in my laptop, but would avoid the phishing problem.
Depends... Your phone runs millions of lines of code and you likely browse the web on it which means that any moment an exploit could take over your phone. (or the regularly scheduled bluetooth vulnerabilities).
Bam, someone now has the ability to authenticate as you without even needing physical contact and without you ever noticing; this could run for years without any trace. With a yubikey you will notice that it is missing.
There is a yubikey with fingerprint sensor that is supposed to come soon as well.
In my case, the biggest case against a phone app is that the most likely disruption would be either that my phone was stolen (though not specifically to get my credentials) or that it just broke from a fall or something.
And until there is a decent fallback for that, passwords are the better choice for me. (YubiKeys aren't much better in that regard either.)
Nice build but over engineered. You could achieve the same result by taping a piece of aluminum foil, or maybe even a wire to the capacitive sensor and connecting it to ground through a relay. Use the ESP8266 to toggle the relay when you want to simulate a button press.
"Now that we have that shell script, we can call it from other places as well. iTerm2 has a feature called Triggers, which can execute actions based on text matching a regex in your terminal. So we could write a regex to listen for “Yubikey for” and have it run the same script, eliminating the need to press buttons altogether."
But isn't the whole idea that it shouldn't be possible to trigger it from software?
"So... you built a button that you press that will press a button? Why not just press the button?" which was a bit infuriating because they clearly missed the whole point. "Don't you get it? This button BAD, but this button GOOD. Me want to press GOOD button."
I’ve used various yubikeys in personal environments over the past 7 years and found them to be a gimmick rather than a useful tool.
They are quite versatile and can be used for many different use cases, which is part of the problem in my opinion. While not a total dummy, I found the YubiKey software and documentation difficult to use and configure, and it was a pain to figure out how to set up the key for common scenarios. This brings me back to the ideal users: probably corporate environments, where a dedicated team can support users for the specific use cases.
The Bloomberg terminal uses a piece of hardware that generates 2FA tokens, but requires you to scan your fingerprint each time. So we need a fake finger that also has my fingerprint, and a webcam pointed at the 2FA hardware, so I can just get my auth keys remotely and not need to carry around another dongle.
"Why have an Applescript call a shell script? I found that when I launched the shell script directly from Karabiner Elements, it opened a new instance of Terminal.app and took focus away from the window that is prompting for the YubiKey. This causes everything to run in the background."
A little off topic: Does anyone know of a way to get the results of a yubikey press into a remote desktop session? I frequently remote desktop into laptops that are in arms reach. If I need to use the yubikey, I have to remove it and plug it into my desktop and press it, since it acts as a local keyboard.
Wouldn't a local keyboard type into a remote desktop session? If a local keyboard can't type into a remote desktop session, the remote desktop session sounds mostly useless.
I would place the YubiKey on top of a small squared base made out of Sugru, elevating it and making it easier to press. If you are concerned about the stress on the USB port, then use one of those short "right angle USB cable" cables available on Amazon.
It makes me laugh that such a small problem (pressing a YubiKey at an awkward angle, which sometimes doesn't register properly) can be solved with such a delightfully over-engineered solution.
The one thing I don't understand with Yubikeys: doesn't leaving them plugged in at all times in your computer (which the form factor highly encourages you to do) completely defeat the purpose?
Security keys protect against phishing, in addition to account takeover.
Say you click on a link that looks like Google but it's not. You enter your credentials -> these are now in possession of the attacker. If you have 2FA enabled AND you use a security key, the key digitally signs the hostname of the site you're browsing. This second factor won't be valid on the real google.com site because it was created on the phishing site.
Phishing protection is a core feature unique to security keys, and it's completely independent of whether you keep the key in your laptop or bring it with you.
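The origin binding described above can be sketched server-side. This is a minimal illustration, not a full verifier: the function name is made up, and a real deployment would use a maintained WebAuthn library (e.g. python-fido2), which also checks the RP ID hash, the authenticator's signature, and the sign count.

```python
import hashlib
import json

def check_client_data(client_data_json, expected_origin, expected_challenge):
    """Sketch of the server-side origin check on a WebAuthn assertion."""
    data = json.loads(client_data_json)
    if data.get("type") != "webauthn.get":
        raise ValueError("not an authentication assertion")
    if data.get("origin") != expected_origin:
        # An assertion minted on https://evil.example carries that origin,
        # so it can never verify against the real site's expected origin.
        raise ValueError("origin mismatch: %r" % data.get("origin"))
    if data.get("challenge") != expected_challenge:
        raise ValueError("challenge mismatch (possible replay)")
    # The authenticator signs hash(clientDataJSON), so an attacker cannot
    # rewrite the origin field after the fact.
    return hashlib.sha256(client_data_json).digest()
```

This is why a relayed WebAuthn response is useless to a phisher, while a relayed TOTP code works fine: the TOTP code carries no information about which site it was typed into.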
WebAuthn (and its predecessor U2F, you should not roll out new U2F deployments but old ones are slow to upgrade) sits on top of FIDO, which has a protocol CTAP (Client To Authenticator Protocol) for this purpose.
At the extreme case, when you're asked to sign in with a Security Key you could have an authenticator with dedicated flash storage, screen and fingerprint reader so it can display like:
"Site news.ycombinator.com is prompting you to authenticate as blueblisters [484D2A8BBF] blueblisters@example.com - You last used this credential 16 hours 41 minutes ago. Touch the fingerprint reader to continue"
But in the real world the cheapest options have zero flash, zero display, just a push button and an LED. The LED flashes to indicate that you're being asked to press the button, your pressing it means you signify that you're present. All the data is still sent to them, but they can't display it, you have to trust your browser to validate what was sent.
This means it's unsafe to "press the button" when plugged into a general-purpose computer unless prompted by an application you trust with your credentials, like a web browser you're using to sign in to sites.
If the authenticator has no storage it can only really act as a Second Factor this way. But a device with storage can replace all steps of logging in if you want. No need to enter a password, an email address, anything, just one tap to get in. Apple is promoting this for the new iOS. A current Yubikey does have storage, and so it can do this, but the storage is very limited, unlike an iPhone with gigabytes of Flash memory.
Why would one always leave the YubiKey in their laptop? Isn't one of the security features supposed to be physical separation of the key and the system when the owner isn't around?
I thought the article was going to be about how there was a serious real vulnerability in the hardware and some remote attacker could spoof the Yubikey being touched.
For the newer YubiKeys with FIDO support, I would recommend disabling OTP; it massively improved the user experience for me. Unlike TOTP, OTP has some fundamental problems.
And, for reasons that don't affect many people, OTP is implemented by pretending to be a keyboard, which is just annoying in many cases.
All the other operation modes (FIDO, FIDO-U2F, PIV, OpenPGP) do not have that problem.
So where possible I use password manager + FIDO(-U2F), and where not, it's password manager + TOTP using the YubiKey (I plug the USB-C YubiKey into my phone and access the key's TOTP functionality through the authenticator app).
It's an entirely useless device. All you need is a pw manager that saves you from non-malware attacks (email compromise aside). Yubikey cannot save you from persistent malware, which makes it useless in almost all scenarios. The only hardware device that makes sense is the one with a screen (like trezor). Simple click-to-use devices carry no protections that you wouldn't otherwise get with a pw manager.
To a robot, the act of moving meat into physical proximity with a switch might be considered to be an equally obscure action. As a component of a 2-factor system to prove token possession, either seems quite adequate.
The use of U2F tokens as single-factor auth seems to have spread thanks to this implicit 3rd factor keeping the situation moderately at bay. I posit that this is largely the same reason that keys and locks remain relatively secure despite the fact that almost all of them are trivial to bypass or duplicate. The physical-access bit is just so damn inconvenient for the typical modern white-collar criminal.
Used to have some (actually, I still do, in a drawer) but gave up on them several years ago after random mysterious failures - just going dead after a few months.
I hope for their customers' sakes they have solved their reliability problem.
I have one that I had on my keyring for over a year, including during a swim in the ocean (I forgot I had it on me). It still works. Perhaps you got a bad batch?
At work we used to use the old YubiKeys that were nice and long and had a good contact area. Then they switched to the Nanos and wouldn't reprogram the old ones (or even order the larger ones of the same generation, or let us pay for them ourselves).
You can do OTP entirely in software. Just be sure that the location where you store the secret is encrypted:
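As a sketch of what "entirely in software" means here, this is a standard-library implementation of RFC 6238 TOTP (the same algorithm Authy and authenticator apps use); the function name is mine, and in practice you'd feed it a secret pulled from an encrypted store:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    # Pad the base32 secret to a multiple of 8 chars before decoding.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if now is None else now) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" (base32
# "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"), T=59, 8 digits -> "94287082"
```

Which also makes the point above concrete: anything that can read the secret can mint valid codes, so where the secret lives is the whole game.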
Personally, I just use KeePass since it can do TOTP very easily. It's not the best 2FA since it stores the second factor alongside the passwords, but you could fix this by having two databases.
I don’t really care about this one too much because if you got a copy of my pw database and knew the password, you’d have enough access to remove the 2FA on my stuff.
Like there is literally 0% chance I’m going to let myself get permanently locked out of my accounts if my keys and phone get stolen.
Don't do this.
Actually seriously, don't do most of this.
The fact that your computer cannot induce the yubikey to provide its key material (or evidence of the key material) is where it gets "security" from in the first place. As soon as someone can convince your computer to do something there's an increased chance they can get it to do something else.
Some suggestions:
- Wire the F14 key up separately to "the finger" (and not to wifi)
- Use a yubikey simulator[1]. If your sysadmin won't trust you with the key material inside the yubikey so you can use a simulator, they definitely won't trust you to emulate the simulator with the finger either.
[1]: https://github.com/sstelfox/yubikey-simulator