The Effectiveness of Publicly Shaming Bad Security (troyhunt.com)
580 points by MandieD on Sept 11, 2018 | 285 comments



https://login.swissid.ch does this too: disallow password managers from filling out the login. Upon asking them to fix it: "Autofill completion is not allowed by us for security reasons. First, if that's the case, if someone gets to your PC, we can stop a hacking attempt and that's one of many reasons. For other questions, we are at your disposal."

They also support only SMS for two-factor authentication.

The idea of this SwissID is to become a nation-wide identity service, yet they manage to do everything wrong. Yeah, this annoys me to no end :(


Hah, that page actually allows you to test whether or not a certain email address is signed up for the service, which seems like an even worse idea given what they're to become.


How do you actually prevent this? It seems impossible to prevent that information from leaking via timing attack.


You should validate all credentials at the same time. In general, you only fail a login at the end of the process, not halfway through. Also, every login failure, regardless of reason, should be accompanied by a short, random server-side sleep before returning (e.g. random between a couple hundred milliseconds and a second).


Account enumeration is impossible to fully prevent, and as far as security vulnerabilities go, the risks associated with it are usually pretty irrelevant. It's the sort of thing you'd see on a penetration testing report when the testers didn't find any actual security vulnerabilities.


I generally agree with that, but it's worth mentioning that there are exceptions where being able to tell if an individual is registered can be sensitive information (Ashley Madison, Grindr, etc.)


Yeah, there are some unique threat models where determining that an account exists would be a sensitive information disclosure. In those cases users would be more willing to endure the potentially heavy-handed UX trade-offs required to adequately prevent it.

It's the idea that knowing an account exists somehow represents a compromise of the account's security posture that I generally reject.


I still think that's vulnerable to a timing attack. Timing attacks are the voodoo that should keep us up at night; a clever attacker can extract information in ways that seem impossible.


With unlimited logins, it is indeed vulnerable to timing attacks. You could measure the mean time of a certain email against the mean time of another email. This effectively gets rid of the random delay.

Perhaps you could measure the time of the login process and adjust the random delay based on how long the process has taken. If you can get this to average to a <1ms difference between the "email exists" and "email doesn't exist" you could probably defeat any timing attacks over the network.

That, or just limit retries and have different random timeouts for every login. Now you can't try enough times to get a good estimate of the mean for each login path, and you can't use other logins to help you refine your estimate because each has different timeouts.


Once upon a time I thought this was a pretty serious avenue of attack and wrote login forms to always run on the server in constant time -- starting a timer at the beginning and only returning output after a fixed number of ms.
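The pattern is roughly this (a reconstructed Node.js sketch, not the exact code): note when the request started, do the real work, then pad with sleep so the total wall time is fixed whether the work succeeded, failed, or bailed out early.

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Run `work`, then sleep whatever is left of a fixed time budget, so the
// caller observes (at least) `totalMs` regardless of the outcome.
async function withFixedDuration(totalMs, work) {
  const start = Date.now();
  let result, error;
  try {
    result = await work();
  } catch (e) {
    error = e; // errors must also wait out the budget
  }
  const remaining = totalMs - (Date.now() - start);
  if (remaining > 0) await sleep(remaining);
  if (error) throw error;
  return result;
}
```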

I mostly don't bother anymore, because an effective timing attack for account discovery against something that's doing everything else correctly would take so many attempts that it should trip whatever brute-force protection sites ought to be running these days anyway.

Given the number of dumb automated brute force probes against just about anything with a login, you can't just allow an infinite number of requests from a single IP (or a handful of IPs).


Oh sure, didn't mean to come across as someone who worries deeply over account discovery. I just saw an opportunity to remind folks that foiling side-channel attacks is very non-trivial and you can put forth a good effort and still be surprised at the information you're leaking.


Random sleeping isn't super effective. See: https://eprint.iacr.org/2015/1129.pdf and its references.


From what I read, they can tell differences when the randomness is in the hundredths-of-seconds range (specifically, they said they patched to go from 10s to 10,000µs). The randomness I mention, assuming N is the maximum tolerable system time for a successful login, should be 2N + random(2N, ~100N). Or just store a time at login start and force every request to hit the same deadline (via a sleep for the difference from start), then add randomness on top. Of course, additional brute-force detection/protection is ideal for repeated failures.

The random sleep is to prevent obtaining enough samples and provide reasonable noise at this small sample size. Given enough samples until the end of time, patterns can be obtained. This is not breaking TLS here, this is login, and seconds of sleep vs microseconds makes a difference.


Randomness inserted like that can be filtered out statistically with a decent sample set - it's a gaussian distribution, so you'll still see the same timing differences. This is why things have to be constant time, rather than random return time.
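A quick toy simulation of that filtering point (uniform noise rather than gaussian, but the averaging argument is the same): a small constant per-path difference survives a much larger random sleep once you average enough samples.

```javascript
// Simulate the mean observed response time for a code path that takes
// `pathExtraMs` extra milliseconds plus a random 0-500 ms "sleep".
function meanResponseMs(pathExtraMs, samples) {
  let total = 0;
  for (let i = 0; i < samples; i++) {
    total += pathExtraMs + Math.random() * 500;
  }
  return total / samples;
}

// With enough samples, the difference of means converges to the true
// 5 ms path difference, despite the 0-500 ms noise on every request.
```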


Usually you do it by giving the same response for an invalid password and a non-existent email. I wanted to see how that particular page was leaking but the site wouldn’t load for me.


That won’t really help in general because you can go try to sign up with an email address. It has to tell you if the address is already taken or not.


Not quite - best practice is to continue the initial setup - ie, "we've sent you a link, please click to activate your account". Except if the email address is already in use, you email the address and let them know that. That way they only leak that info to the owner of the email address - and they can include a password reset link too.
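Sketched out, that flow might look like this (`sendEmail` and `existingUsers` are stand-ins for a real mailer and user store); the key property is that the HTTP response is identical whether or not the address is already registered:

```javascript
function handleSignup(email, existingUsers, sendEmail) {
  if (existingUsers.has(email)) {
    // Only the owner of the mailbox learns the address is taken.
    sendEmail(email,
      'Someone tried to sign up with this address, but you already have ' +
      'an account. If that was you, you can reset your password here: ...');
  } else {
    sendEmail(email, 'Welcome! Click this link to activate your account: ...');
  }
  // Identical response either way, so nothing leaks to the requester.
  return "We've sent you an email - please click the link to continue.";
}
```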


How about for websites that give you some functionality without a verified email address? At that point, you can't let a user dink around if the address is in use.

Granted, this doesn't apply to eg banks, but there's plenty of websites where this could apply.


Don't ask for their email address then. What's the point of collecting an email address if you have no idea whether it's correct? You might as well ask them to put in a random string of characters.


Absolutely. It's really more relevant with services you can't directly sign up for, such as an internal service in the company where user enumeration helps you find a target when the error messages are different.


Requiring a timing attack and having throttling on any such endpoints raises the bar quite a bit by itself.

As to preventing timing attacks you can add a delay to give a uniform response time.


> As to preventing timing attacks you can add a delay to give a uniform response time.

You have to be very careful with how you implement the delay to prevent the timing signal from still propagating to the attacker.

https://blog.ircmaxell.com/2014/11/its-all-about-time.html#A...


What's the vulnerability?

You find out that an email address belongs to a Swiss citizen?


No, but given a list of random addresses of Swiss citizens scraped from wherever, you can validate each one by checking them against this portal. If it shows up, it's an active address.


I too have a bad feeling about SwissID. Only SMS as two-factor authentication, turned off by default. No other methods planned, according to support.

The fact that the Swiss Post requires it as login method makes me uneasy.


Can someone explain what security does forbidding pasting provide? My brain just can't comprehend it.

Also, would a fix be a password manager that just ignores the forbidding or is it done with JS somehow?


Forbidding pasting provides the security of having to click "Inspect elements" before the hacker can proceed.

(Most PW manager plugins I know already ignore pasting properties, though a bank I was a customer of circumvented that; the textbox was actually a DIV and they had coded the functionality of a textbox into it to prevent pasting.)


Yea, I ran into a bank that did that too. Then they proceeded to ask me a half dozen "security" questions from a massive list I could choose from. Most of which I didn't know the answer to.

I answered all of them (and put my answers down in my pw manager) with something like "X bank has terrible security, I hate this bank" - hoping that one day I'll have to answer those by phone haha.


I always answer the security questions with gibberish that I also save with my password manager. I now use a method like correct-horse-battery-staple to create answers, but I used to use long alphanumeric strings. I switched methods because, yes, one day I had to read the answer over the phone.

Rep: "Tell me the answer to this question."

Me: "Ok, let me see what I set it to..."

Answer: F^O9dA66@wUPpK5$lTXBbrQ#yLP1EGl$

Me: "Oh... Oh no."


I'm a bit worried about social engineering there. "Oh, it's a bunch of gibberish" may pass muster with a support rep (in both of your approaches), leading to compromise.

Lately, I've been making up a seemingly correct, but random response (and different each time). My favorite vegetable? Sea cucumber! I store that in my password manager.


> I'm a bit worried about social engineering there. "Oh, it's a bunch of gibberish" may pass muster with a support rep (in both of your approaches), leading to compromise.

I can confirm that this is the case. I provided a gibberish answer to a security question for Blizzard. I didn't bother to write it down, relying on not forgetting my password.

I never forgot my password, but Blizzard shut down my account anyway because I was making payments with a card that was not listed as the account's "primary payment method". (The card I was using was listed on the account, but another card was the "primary payment method".) When I had to call support and answer my security question, the answer I'd filled in just meant that I wasn't required to provide the correct answer.


I've found it's better to give them plausible-sounding answers that are entirely fake. The make of your first car is an Aston Martin. Your nearest sibling lives in lunar colony 1.

This way "it's a bunch of gibberish" doesn't get past their security.


I use something like this: “the secret password is tango-seven-alpha-romeo-zero-zero-victor-sierra-foxtrot-quebec".

Never had to use these for real yet, but it should be a bit harder to be seen as a “a bunch of gibberish”.


"Oh shoot, it was a bunch of random words. I'm so sorry, I had it written down but I can't find the paper..."

Remember, an attacker can call support hundreds of times, getting a different rep every time. There's a good chance it'll work eventually.


Seems to me like that’s not really a criticism against using random answers for secret questions.


Clearly random answers are a problem. You're going to find support reps inclined to accept "oh it's just something random", which means you're guaranteed to get compromised if you're a big enough target to be worth spending some hours on.

Random but outwardly appearing valid ones are fine (but you'd want to avoid using the same answer on different sites). One site's "first car" could be Porsche 911, another's Aston Martin. Both aren't true, but the support rep doesn't know that.


I've had the same situation before, and I don't think I've ever had to read them the entire thing. Usually we did something like this:

Rep: "Tell me the answer to this question."

Me: "Ok, let's see.....ah. So, it looks like a random string of gibberish, right?"

Rep: "Um, well...(unsure if he's allowed to say Yes or No)"

Me: "Yeah, I use a password manager for all my stuff, so all my passwords are randomly generated. I didn't think I'd ever have to read it over the phone. Sorry about that! I can read it out for you, but it might take awhile. If I read you the first three characters and the last three characters, is that sufficient to demonstrate for you that I know the answer?"

Rep: "Yes, I think that would be fine."

Me: "Alright, then! First three, 'F', 'caret', 'capital O'. Last three, 'capital G', 'lowercase l', 'dollar sign'."

---

As I said, I've never had anyone challenge me to read the full thing out. When I explain why it is that way and give them the bookends, they are usually convinced that I'm me.


There's a security issue there in that you don't want them in "I think that would be fine" territory.

Some of those reps may have been fine with you saying "oh no. I didn't think I'd ever need that and just mashed the keyboard".

Better to use something that's still made up, but is plausibly true.


"I didn't smash this F^O9dA66@wUPpK5$lTXBbrQ#yLP1EGl$"


As is demonstrated time and time again, the weakest point in any security system is usually a human being.


If the operator gives up after the 10th random character given on the phone, it's still quite secure.


I am reminded of https://xkcd.com/1181/.

Which garbage string is there doesn't matter. Just as long as it is recognizably garbage and you know it.


I use GPW strings for this use case. GPW has weaknesses that make them somewhat poor as first-line passwords, but they're still really good for passwords that you need to read to someone over the phone.

In my most recent experience with them, the company allowed both the question AND the answer to be set. So, they had to read a random string to me, and I had to read one back. It went quite well, actually.

https://multicians.org/thvv/gpw-js.html


I do the same thing, but I only use strings that are maybe four or five characters of letters only. Most of these answers are expected to be things like people and street names, so I think it's still vastly more secure without looking like the system had an internal error.


The only change I make to this process is my security questions are stored in a separate password manager to my passwords. That way if I lose access to my passwords and actually need the stupid (ahem, security) questions I can find them.


This is exactly what I do. Keep it easily human readable but it should still be nonsense that no one could social-engineer their way into getting.


I thought I'd seen it all until I had to set my United Airlines security questions.

Not only are the questions picked from a list, but the answers are. http://www.slate.com/articles/technology/future_tense/2016/0...


I did that at a certain corp, my security answers were all long sentences completely unrelated to the question (mostly things like "This question is useless" which made for some interesting phone conversations until they got it)

Basically if you wanted to reset your employee password, which gave you access to corp vpn etc, you could call a 24/7 support line and give them your security answers.

The problem with this is that most of the questions were not things that are inherently secure, things like "what was the name of your primary school" are easy to guess or research.


> The problem with this is that most of the questions were not things that are inherently secure, things like "what was the name of your primary school" are easy to guess or research.

They're also inherently unanswerable in many cases.

As an example, I went to two different primary schools. I don't have a favorite musician or sports team, and the answer to "where did you meet your wife" might be the school, the city, or "in class".

Last time I had to update my Apple security questions a good 80% of the questions weren't ones I felt I could answer in a way that'd be memorable a few years later.


Not to mention, these things are usually case sensitive. Sure, I can remember my childhood address, but how did I capitalize it? Did I abbreviate street? If I abbreviated street, did I add a period to make it "St." or "St"?

Fortunately, I don't notice too many services requiring security questions these days. Unfortunately, most of them are banks or other services that probably also have my SSN.


USBank?


Nope, a German bank; they stopped doing it around December last year.


A couple (bad) examples I can think of:

* you leave the password in the clipboard, and another website copies it (used to be a thing, I think it's patched now)

* same case, but now a coworker comes to your unattended PC and retrieves the password by pasting it somewhere

* allowing pasting would undermine the idea that you should never write your password down, and lead to a proliferation of files called "passwords.txt" on everybody's desktops

None of these arguments is really good, but I can believe they are the product of a world without widespread password managers (also known as "the 90s") and of tradition.


> * allowing pasting would undermine the idea that you should never write your password down, and lead to a proliferation of files called "passwords.txt" on everybody's desktops

"Never write a password down" has always been a bad idea. A file named "passwd.txt" on my desktop is still better than using a trivial password or the same password on all sites. It still requires a compromise of my machine, and it prevents the password from being recovered from a dump of the password hashes.


And if your PC has full-disk encryption and you lock your PC when AFK, then it's pretty close to a password manager.


No, there's a reason password managers should be preferred. For example, sometimes (browser) sandbox escapes grant reading of arbitrary files. Take, for example, the recently discussed malware scanner that sent off users' browsing history: it could read such a file and transfer it back.


If you have malware on your device you lose already. It can just read memory out of your browser process and steal your passwords.


"Just".

Modern browsers and OS kernels have extensive mitigations against this. Reliably extracting a password from a browser process's heap would be newsworthy today.


I think “just” is apt. If you have a web request to send the password, you will have a url or username string very close by in memory that can be searched for.


I specifically picked an example of a malware that was capable of reading arbitrary files, but not arbitrary memory because the authors found a simple way to trick users into granting them this permission set, but not another.

A sandbox escape that allows the attacker to trick the browser into sending arbitrary files back is also substantially different to having malware on your system that can read arbitrary memory.


Windows doesn't encrypt files on lock. You can tell this because your applications keep running...


But that's not the point. The point is they have to break past your login screen, or, failing that, pull data from your storage while it's "offline" (i.e. not booted). If it's encrypted, they can't pull data off your drive externally, and as long as they can't log in you're fine. Plus all the data is still stored encrypted. It's not like it decrypts the drive when you boot; it just enables a decryption algorithm that decrypts data on the fly (AFAIK).


The data is encrypted, but as long as the encryption keys are in memory, they could be retrieved via either an attack against peripheral ports that can read memory (thunderbolt has proven vulnerable and USB too, iirc) or via a cold boot attack, possibly using freeze sprays. Such attacks against FDE have been demonstrated. A good password manager purges the keys after a bit or on lock. pass ties into the gpg ecosystem and thus allows having the keys on a smartcard, a capability I’d like to see in other PW-managers.

MacOS has the option to purge decryption keys from memory on lock, but that effectively puts the computer to sleep on lock. It’s more secure, but annoying as hell since all network connections die (VPN, ssh, ...)


True, a couple of teams recently demonstrated proofs of concept for a cold boot attack on BitLocker, so I guess it's still not so secure. But unless you've got some crazy blackhat or a three letter agency after you, I'd argue you're probably not at risk ;)


If you have a fancy "USB" port which allows connection of graphics cards (so basically a PCIe port, although it also accepts USB), chances are that you can do whatever you want with unrestricted DMA through this port. It seems that letting Windows use the IOMMU is only allowed on the Enterprise edition, which is basically unavailable to the general public. So, facing determined and/or well-financed actors, it is as if the Windows login does not exist for tons of Windows users.


Using the clipboard at all for security related things like temporarily storing a password is a bad idea. The clipboard is a big public billboard visible to anything running on your computer.

The fact that password managers use it at all is simply because it is the only hack that reliably gets data into password boxes. Yes, it's a hack. The HTML5 spec should have exposed a mechanism to securely insert data into an element tagged for such a purpose - a one-way mechanism.


> Using the clipboard at all for security related things like temporarily storing a password is a bad idea.

(Emphasis mine.)

Well. The moment you have evil code running on your box, as you, then I'll naively assume you have a bigger problem to deal with anyway.

> The clipboard is a big public billboard visible to anything running on your computer.

And everything from client work to love letters in my home folder is available to anything that runs as me, unless I've gone out of my way to secure it - and succeeded.

Not saying the clipboard isn't a problem.

Not saying browsers shouldn't expose a carefully thought out API.

But the way I read your post it might scare people away from password managers and back to a single password or passwords written on papers stored within reach from the workplace.


> But the way I read your post it might scare people away from password managers and back to a single password or passwords written on papers stored within reach from the workplace.

Browser extension password managers are very much a step in the right direction. For most people, they strike the right balance between convenience and security. I guess I'm just a very paranoid developer who does not value that convenience as much as most.


> The clipboard is a big public billboard visible to anything running on your computer

So is your keyboard buffer. If someone's already in your computer watching your clipboard they're probably also watching anything you type too


On X11 and Windows (except UWP apps probably?), yes. On Wayland, random apps can't listen to global keyboard events.


A number of people dislike Wayland because applications can't watch the screen, keyboard input, clipboard etc outside of their own window. Really, that's one of its great strengths over X11.


I'm curious, how does an application such as OBS Studio (https://obsproject.com/) work with such limitations in place?


Through some API that checks authorization.

Eventually everyone should be using https://github.com/flatpak/xdg-desktop-portal/blob/master/da... (which is based on https://pipewire.org )

For now, e.g. https://github.com/fzwoch/obs-gnome-screencast uses org.gnome.Shell.Screencast


Keepass tries to mitigate this, as well as keyloggers, by splitting auto-type into parts and sending some of the password via the clipboard and some as simulated keystrokes. An even better solution is probably one-time passwords with 2FA.


Thanks, that's good to know!


Does any password manager use a virtual keyboard to type the passwords in? That would avoid using the clipboard, but it wouldn't work with one of my banks, which doesn't even have an input box. They show a keyboard on screen and you have to click on the letters to type your password.


You have to type in your password WITH YOUR MOUSE??? Wow. Sounds like a great way to make sure everyone uses the minimum allowed length for their passwords...


One of mine has this mouse-to-type feature coupled with numbers only and max length of 6...


Keepass can "auto-type" the password by emulating keyboard events.


passmenu --type


> you leave the password in the clipboard, and another website copies it (used to be a thing, I think it's patched now)

Even if it were not patched, how does a site disabling "paste from clipboard" remove my password that I have already copied to the clipboard?

Note that I would copy the password first and then I would realise that the site is not allowing me to paste it.


You only realize that the field doesn’t support pasting once and don’t attempt it ever again. If it allows pasting, the password will be in the clipboard every time you log in, which arguably could be more times than one.


No, the next time I try again, and after it fails again I remember they didn't allow it and curse them for not fixing it already. Then the process repeats.


If the issue is that it stays in the clipboard, the site could just remove it once pasted, or when sending the form.


I am not familiar with JavaScript. Could someone share some sample JavaScript code that shows how a page can modify my system clipboard through the browser?


Check out clipboard.js.
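For a sense of the underlying browser API, a minimal sketch using the standard asynchronous Clipboard API (browser-only; it requires a secure context, i.e. HTTPS, and reads are gated behind a user permission prompt):

```javascript
// Write a string to the system clipboard.
async function copyToClipboard(text) {
  await navigator.clipboard.writeText(text);
}

// Read the clipboard's text content (prompts the user for permission
// in real browsers).
async function readClipboard() {
  return navigator.clipboard.readText();
}
```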


1Password clears your clipboard after a short period of time, and you can customize it. Is this not a standard feature of all password managers?
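The behavior amounts to roughly this sketch (the clipboard object is injected here so the logic is self-contained; a real manager talks to the OS clipboard, and only clears it if it still holds what the manager put there):

```javascript
function copyWithExpiry(clipboard, secret, ttlMs) {
  clipboard.write(secret);
  setTimeout(() => {
    // Don't clobber the clipboard if the user has since copied
    // something else.
    if (clipboard.read() === secret) clipboard.write('');
  }, ttlMs);
}
```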


So does pass and KeePassX.


And pwSafe (based on some of Bruce Schneier's work), FWIW.


I think it's "Cargo Cult Security". My hypothesis is that someone once realized that having a password stored in the clipboard could be bad, so let's just ban pasting passwords. The logic here is obviously completely backwards, but once a meme is out there it's hard to stop it.


I believe that the argument they rely on is that if you're pasting it then you must have it written down somewhere so that you could have copied it.

It's not a completely left-field position - it's definitely wrong in a modern context - however, previous years of security advice did focus on not writing passwords down.

I have also heard that they believe the removal of the option to paste removes the ability of attackers to exercise brute force attacks against their site. This betrays a lack of understanding of multiple technologies, though.


It's just as worrying to parrot advice that hasn't made sense for years as if they'd dreamt it up themselves. Whoever is in charge of security should have updated their knowledge at some point in the last decade or so.


It's my experience that at some organizations, there is effectively no learning anything new beyond the hiring date from any outside source. They may hold very general training for uselessly shallow/fad stuff like "how to be an innovator" or "what the cloud means for our business" but those are generally not substantive efforts to improve the effectiveness of employees. They often set no goals and have no consequences for anyone. They're check boxes/busy work for upper management.


Indian SBI Card does this: it prevents pasting and has set autocomplete=off to prevent the browser from remembering the password. I went into Inspect Elements, removed autocomplete, typed in my password, and then Chrome offered to remember it. Now on subsequent logins, clicking into the UserID field does not trigger the saved id/pw dropdown from Chrome, but clicking in the Password field does.


For pasting of passwords, there is no security benefit. Password managers ignore it.

"They" often also disable pasting for other duplicated fields, like bank routing number or email address. That shows why it is done (but misapplied to passwords): it's so you don't just copy and paste a wrong value that is hard for them to verify and leads to support calls. By forcing you to retype from scratch, the theory is you will either get it right twice or the error will be flagged.


This is so silly, I am more likely to get it wrong if I type it in than paste it in.


But you aren't likely to make the same, matching typo twice.


As a naive non-security expert, I'd consider it a risk to put passwords in the clipboard - what if there's something that can read your clipboard? JS on another site for example? What if you accidentally paste it on a 3rd party site?

That's copying it though. I know a decent password manager will clear the clipboard after an X amount of time too, mitigating the risk somewhat. But that's copying, not pasting in a field.


How does a site disabling "paste from clipboard" protect my password that I have already copied to clipboard?

Note that I would copy the password first and then I would realise that the site is not allowing me to paste it.


If a browser allows websites to read the clipboard, that's a security bug in that browser.


There is no security benefit to forbidding pasting. https://www.troyhunt.com/the-cobra-effect-that-is-disabling/

Why do people do it? The same reason people believe /dev/random provides security benefits.

It's important to ignore the correct things. (But it's often hard to figure out what to ignore.)


Of course, because everyone should be using /dev/urandom instead. /s


Whenever I see messages of this kind, there is some doubt light that turns on in my head. Wondering if it’s possible I missed something, but I always have to conclude that, no, there isn’t anything wrong with my thought process.

They really are just that clueless...


Safari does it best. It does not fill the box until you click on it; then it gives you a list of entries to choose from.

That prevents the page's JavaScript code from reading autofills before user action.

Regarding whether you can trust Troy, check this out: https://news.ycombinator.com/item?id=17398821


Me too, as we already had an attempt called SuisseID (10 years ago now?) and Mobile ID, both of which cost money, limiting adoption greatly.

Personally I like Mobile ID, which makes logging into PostFinance or Swisscom easy, but not all phone providers are able to offer it.

Sucks that for the Post and SBB I need yet another system, SwissID...


Notably, SwissPass did not allow me to set a randomly generated, very long (>20 character) password.

It also didn't notify me that it didn't allow such passwords, it just went right ahead and created an account it was impossible to log in to.

Thankfully, I was able to reset the password to one which is far less safe.

Well done everyone.


The Indian government's official website for managing the public retirement fund (NPS/eNPS) does this. The password needs to be changed every 90 days and has a maximum length of 14 characters, but this is documented nowhere. On the password-change page, it will silently accept any password longer than 14 characters and truncate it to 14. Then you try to log in with your actual password and it gives an error: Wrong Password.


That's just stunningly bad (both the 90 days reset, and the silent (!) truncation to 14 characters...)


In the U. S., Washington state's initial rollout of their ACA site did that. Gave it a big, long >20 char password, it created the account, go to log in and...

How did I figure out what was going on? They would happily email your password in plain text. </facepalm>


The first thing I would try in that situation is to input the first 16 characters of that randomly generated password.

I've stumbled upon login fields that just dismissed everything after the 16th character a few times.


Troy and Krebs should team up to create a security hall of shame and only remove companies when issues are fixed.

We have security vendors who have SSLv2 enabled and can't understand why that's an issue.

We have huge Fortune 250 companies that we exchange full credit card data with that have TLS 1.0 enabled, with Symantec certs and only two weak ciphers. I sent them an SSL Labs report and they accused me of breach of contract for hacking the site.

The list of security- and finance-related vendors that are double-facepalm worthy is just astonishing.


I'm doing this in the community where I live and I have discovered that it is super effective. I send an email to each company explaining a security problem with their site (currently focusing on simple lack of HTTPS for form data, and not mentioning the public disclosure because I want to see who fixes things because they care vs. those just avoiding negative publicity) and if they haven't resolved it or replied within a week, I list them publicly on https://www.insecure.org.je.

The site isn't winning me any design awards and needs expanding of the advice articles, but dozens of local companies are immediately spurred to action when they appear in the "Sites requiring extra caution" section. Thousands of local users have directly benefited by the added security, even though they are completely unaware of why it was upgraded.

The reaction from some businesses has been very predictable, a mix of hostility, threats, confusion, and outright lies, but enough respond politely and want to fix things, and I go out of my way to help those who want to learn.

Source is public and if you want to try this locally, I highly recommend it: https://gitlab.com/tombrossman/insecure.org.je


That is a great idea! Do you reach out to companies after adding them to the public list of shame letting them know, or do they eventually discover they are on it? I may have to implement this...


No, I just list them and I do not send any follow-up message. Many do find out immediately though, because others tell them about it. I seem to get a lot of referral traffic from LinkedIn after doing updates, so I guess someone is posting about it over there.

I had someone well known in the local tech community call it "The most unprofessional thing I have ever seen" but later he was using it as a sales tool to persuade one of the companies listed to hire him to upgrade their site. I don't condone this but once the info is public I can't control what people do with it.


There's also Plain Text Offenders, for those who clearly store passwords in plaintext.

http://plaintextoffenders.com/


Freedom of the press foundation does this for HTTPS for news sites: https://securethe.news/sites/

Though, imo, they should be sorted with the worst offenders on top.


http://attrition.org/ does this, though it doesn't look like they've kept up with companies.


attrition.org doesn't even provide HTTPS.


Wow...how can a site about security not support HTTPS?


FedEx just sent out a message last Friday:

"FedEx will renew the security certificate with Symantec for the following FedEx Web Services servers at 11 pm CDT on September 15, 2018. Please note that these certificates are valid for two years and will expire on October 4, 2020."

Heh.


Excel 2010 only supports TLS 1.0. Can't update SSL until Excel is updated.


2010 was almost a decade ago.


> You see, they knew this process sucked - any reasonable person with half an idea about security did - but the internal security team alone telling management this was not cool wasn't enough to drive change. Negative media coverage, however, is something management actually listens to.

I could not agree more with this statement.


I have compassion for these people who made these ludicrous comments -- they clearly aren't cryptographers or digital security experts. Let's separate the people making these comments from the corporations they represent.

Public shaming of a person is never the right response, in my opinion -- live and let live.

Public shaming of a corporation, on the other hand, may be the only way to get the attention of the decision-makers.

Let's be kind and compassionate to individuals.


As noted in the article, they're a public face of the company. If they don't know the actual answers to questions being posed, they should reach out to someone in their organization who does, not make stuff up.


I agree, but after seeing a lot of these I think some of their presented facts came from someone higher up - though maybe informally, maybe pasted from an internal FAQ.


Then something is internally broken at that company, and -- again -- the people who act as the public face of that company run the risk of having the resulting ire directed at them. But it's not personal -- it's their job to act as a proxy for the company in the public sphere. If they're being endlessly attacked by people because the company is doing something wrong, they need to send that information up the flagpole to their higher-ups.


My compassion dies when their humility does. You cannot change attitudes towards disclosures by accepting the existing attitude. Compassion is bidirectional. A "we will investigate and get back to you shortly", or an "our internal team has investigated and here is their answer: '...'", is much more appropriate. Several companies' social media responders are courteous. Excusing the mouthpieces of corporate condescension just encourages the practice.


So an eye for an eye, huh?

The thing is, publicly shaming someone is a very lazy way to effect change. Sure it might work for corporations, but if there's resistance it may be necessary to try reaching out to the decision makers via a different communication channel. I don't think it's appropriate to ratchet up the shaming and outrage.


This is addressed in the article. There were numerous instances of internal security teams trying to effect change, but were completely ignored by the powers that be.


> The thing is, publicly shaming someone is a very lazy way to effect change.

Effective, too! Sounds like a win for me, a human being who isn't paid to ensure that other companies keep their systems secure.


To be clear: do you count shaming an official corporate account as shaming a corporation, or as shaming the person who actually writes the tweets? Because of course corporations will always put a person in front, so that you feel bad for speaking the truth.

That's why official corporate tweets often carry a signature with a person's first name - it reminds me of Pennac's Malaussène, who by trade is a professional scapegoat.


There's a subtlety that people seem to be missing.

Okay: "that is a dumb policy." "Big Corp's security wicket is hilariously wrong."

Not okay: "You are a moron." "This customer service rep should go find a fast food job."

In the examples he posted, Troy seems to be staying strictly on the side of shaming the companies. Yes, he's doing it by interacting with a customer service rep, but that's their job: to represent the company. He's not attacking the CS reps directly, and I couldn't quickly find any examples from his Twitter account to the contrary.


I think the author agrees with this. And they see the employee behind the customer service account as acting as the mouth of the company. Responding to what they say in that role is shaming the company, not the individual. Bringing in the employee's personal life or questioning their individual abilities would be unkind, however.


Yeah, in many of those tweets you can almost feel that the person writing it is trying to reiterate something they don't fully understand.


You clearly did not read the article. This isn't youtube comments; these comments are supposed to be for informed discussion. Please read the article before commenting next time so we can all have a solid place to discuss these issues.


This is addressed in the article, but your comment does not seem responsive to the article. (To summarize: shame works; social media reps function as the face of the organization.)


This is about shaming companies, not individuals. Most of the time you don't even know the individual behind the generic customer support channel anyway.


and remember that corporations are by definition a collection of individuals... is public shaming really the only way to get through to the decision makers? I hate the thought of regulations... but maybe there is a $$ solution here?


I really dislike websites that prevent the use of password managers by disabling the ability to paste. Recently[0], I discovered that you can stop websites from doing this in Firefox by setting dom.event.clipboardevents.enabled = false. This has already improved my quality of life slightly.

[0]: https://gist.github.com/0XDE57/fbd302cef7693e62c769


While true, it’s a bit of a blunt force solution as it prevents all clipboard events (e.g. no ‘click here to copy’ anymore).

That may be a reasonable compromise for some, but I’ve seen some weird broken behaviour as a result of people using this FF toggle.


Could you have an extension with a toggle that turns it off temporarily only for password entry? Or even better UX, integrate it into the password manager extensions such that the temporary action is invisible to the end user? It would toggle the switch off during login, and toggle it back on after.

Perhaps it's not a good idea to code too much browser-specific behavior into this stuff on the other hand.


There's an extension already that does that. It's called Don't Fuck with Paste or something like it.


Isn’t this the default behaviour in Safari though? That’s a browser that’s used by a lot of mobile phones, and it doesn’t seem to be problematic enough for it to be changed.

I think this is what Safari does at least, because the Google website tells me to hold and copy the links instead of just holding it like on my Android phone.


There's some sort of counter to this: I have it enabled, but Facebook still prevents me from pasting (sometimes pasting deletes the rest of the comment and closes the reply box, super annoying). Some sites still prevent password pasting too, but I've found you can drag-and-drop in both situations to work around the restriction.


On Chrome, you can install the 'don't fuck with paste' extension.


I already set this to false so that news websites can't hijack my copy functionality to insert "Read more: <tracking link>" at the end of anything I copy from the article.


An add-on/extension I've found useful for combating this behavior on certain financial websites is Don't Fuck With Paste:

https://chrome.google.com/webstore/detail/dont-fuck-with-pas...

https://addons.mozilla.org/en-US/firefox/addon/don-t-fuck-wi...


I can think of one exception to Troy's claim: companies that hold data of people who aren't their customers. No amount of shaming Equifax would have fixed their practices, because we can't choose to not have our data collected by them.


The worst offenders are max password length limits, especially tiny ones like 8 characters. It's a guarantee that the service does not properly hash and store passwords.
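For context on why a length cap is so suspicious: a proper password hash has a fixed output size regardless of input length, so correct storage never needs a limit. A minimal sketch using Python's standard library (the iteration count here is just an illustrative choice, not a recommendation tied to any particular service):

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256: the derived key is always 32 bytes, whether
    # the password is 8 characters or 8000 - storage imposes no limit.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

salt = os.urandom(16)
short = hash_password("hunter2", salt)
long_ = hash_password("x" * 8000, salt)
print(len(short), len(long_))  # 32 32
```

So when a site caps passwords at 8 or 14 characters, the likeliest explanation is a fixed-width database column holding the password itself rather than a hash.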


Just as bad - the ones that enforce an uppercase, lowercase, number and special character. Except that they don't tell you that when they mean "special character", they don't mean all of them, and don't tell you which ones are invalid.

I can't tell you how many times my 1Password generated password was disallowed because of a special character that I ultimately had to delete by trial and error.


If you think that's bad, my water utility (Severn Trent Water), at the time, accepted a password of arbitrary length, but only processed 20 bytes of it at login time.

This means when I created my account with a 63-character password, which it let me do, and then I logged in for the first time, literally minutes later, with the same 63-character password, it told me it was invalid. Of course, at the time, it didn't tell you what the length limit was, and the login and signup forms didn't limit it either. If I didn't have the bright idea (and prior experience elsewhere) to try "maybe it's only the first n characters of the password", starting at 8, and finally succeeding on 20 (thanks to no rate limiting), I would have never figured it out.
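That discovery trick generalizes: if a site silently truncated your password at signup, you can find the cutoff by attempting logins with successively longer prefixes of your own password. A hypothetical sketch with a stub login function standing in for the broken site:

```python
def find_truncation_length(full_password, try_login, max_len=64):
    """Probe a login whose signup silently truncated the stored password.

    try_login performs one login attempt (a stub here); against a real
    site you would also have to mind rate limiting and lockouts.
    """
    for n in range(8, max_len + 1):
        if try_login(full_password[:n]):
            return n
    return None  # no prefix worked: the problem is something else

# Stub standing in for the utility's broken login: it silently stored
# only the first 20 characters of the long signup password, but compares
# the login attempt in full.
full = "v3ry-l0ng-generated-password-with-lots-of-entropy-in-it"
stored = full[:20]
cutoff = find_truncation_length(full, lambda guess: guess == stored)
print(cutoff)  # 20
```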


One time I used a system that, when setting my password, it converted all letters to lower case, but it didn't do the same when trying to log in. I figured out that's what happened when I called them and they told me to give them the password over the phone (yep) and then started giving me hints about what was 'wrong' with the password I gave them. This was a financial services company.


www.flashseats.com does this as well... lets you type in a long password, but only saves a small portion of it.


Another fun variant on this is where you're allowed to set a password with special characters that the password validation code later rejects.


That usually happens on banking sites because their system is backed by an old mainframe that can only handle 8 characters. The second thing you said is right though -- they probably have terrible security on that mainframe.


NatWest goes a step further and asks you to enter (for example) “the 1st, 5th, 8th and 9th character of your password.”

I saw a comment once explaining why this might make sense to prevent replay attacks. But it seems awfully absurd.


Not to mention it means they are storing the password in plaintext.


They could still hash the long password, without trimming, and store 64 bits of that hash.
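Right: the truncation can happen after hashing instead of before, so the full password still matters. A hypothetical sketch (to be clear, 64 bits is far too short for a real credential store; this only illustrates fitting a legacy fixed-width field without trimming the password itself):

```python
import hashlib

def legacy_field_value(password: str) -> bytes:
    # Hash the untrimmed password, then keep only the first 8 bytes
    # (64 bits) to fit the legacy fixed-width column. Every character
    # of the password still affects the stored value.
    return hashlib.sha256(password.encode()).digest()[:8]

print(len(legacy_field_value("a" * 200)))  # 8: field width is fixed regardless of input length
```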


True, but then you'd have to make sure that all software that accesses that field is aware of the hash, and chances are if you have a legacy system like that, you have legacy software too.


Why does the accounts system have to use that mainframe?


In most cases it's because the mainframe holds all the customer data and is the system of record, and it's just easier to do it that way.


It's quite scary to think that a bank only fixes obvious security issues after public shaming. There are a lot of internal services in a bank that are not exposed to the scrutiny of security researchers and will therefore never get patched.


What sucks is that for every breach you hear about, odds are somebody “on the inside” has tried to raise the alarm and been ignored, or marginalized, for it. Then after the breach they often take it very hard.

To the extreme, a former employer suffered a major breach (after I was gone) and from what I heard a really great dev was distraught to the point I worried he might be suicidal.

All made worse because the organization as a whole didn’t learn, there were no obvious consequences for the management at fault, so he took it even more personally, when it truly wasn’t his fault a breach happened.

We need a way to give these people a voice before bad things happen, not just therapy after something they saw coming goes bad.


Yes! What does it say about internal security if a bank leaves its mailbox unlocked? (The postal equivalent of not using SSL.)

For those who would prefer other methods because public shaming doesn't fix everything, let's look at what (successful) shaming of publicly available services could do:

1. Lead to an audit and discovery of the issues in the internal services.

2. Bolster the argument of internal people pushing for security improvements.

3. Reset priorities to fix issues in public instances instead of fixing internal ones.


It's not comparable. A mailbox is in a physical location and you have to be next to it to use it.

An email box is specifically meant to be opened anytime from anywhere on the planet. It's insecure by design.


Agreed but I wanted to compare an unlocked mailbox at a bank to not using SSL on their website.


Right, and that applies to probably all but a small percentage of organizations.

Not malice, just management that doesn't properly account for the negatives involved in poor security policy. Banks are no exception to the rule that all the managers are humans, and may not have the best information / mindset.

Fix the lowest hanging fruit as fast as possible, and raise the mean incrementally.


Meanwhile 90% of the banks in France use a 6-digit password that you can't paste because you have to enter it by clicking on a super-secure-random grid... https://imgur.com/a/q91JDXi

Maybe Troy Hunt can publicly shame them into more secure practices but I'm not hopeful.


My bank's (Lloyds TSB) idea of extra security beyond a password is ... yet another password, but one that you have to select 3 random digits of, in dropdown boxes.

I asked them for TOTP 2FA, but never got a response...

Why is it always banks who have some of the worst security posture out there? I fully expect my bank to eventually implement 2FA ... by SMS! Thinking that it's a good idea!

sigh


A common work around is to make them all the same: that way you don't have to think which ones to type.


It's secure. It defeats keyloggers and it prevents the password from being remembered by the browser and enumerated.

It's usually a PIN that is set by the bank rather than the user, to prevent people from using 123456 or their date of birth.


For the 2 banks that I use, it is set by the user. And for BNP you have to change it every 80 uses, so you end up using something really easy to try to remember it...


It only defeats keyloggers that don't record the mouse.


Recording the mouse alone doesn't let you extract the password. You need to record the display too.


that actually is secure ... if you have a secure display


In the same sense that a password manager is secure if you have a secure keyboard.


A question: I know of some companies and banks with such issues. I am no Troy Hunt - what would you do about this? Public shaming doesn't really work when you aren't a public person. Also, this is in a non-English environment, so there are no such public figures...


Most of those Troy Hunt examples were submitted to him by non-public people. You could probably just email/tweet him.

smaller scale: http://plaintextoffenders.com/


"Now, keeping in mind that the username is your email address and that many among us like cake and presents and other birthday celebratory patterns, it's reasonable to say that this was a ludicrous statement."

This is when I lost it. Bloody good read.


the worst of all are those that allow you to paste the password when you setup the account, but not on login.

this leads to a situation where I have a 100-character randomly generated password I have to type in by hand.

usually I just create a new account, preferably somewhere else.


I created an account for a 401k vendor that our company recently switched to. During the registration process I used a password manager to generate a 14 digit random password.

Imagine my surprise when I went to login to the newly created account only to find out that the login screen enforced a character limit of 8 characters (both with a textfield attribute and js). This limitation was not enforced during registration!

I had to edit the page in developer tools so I could actually paste my full password to login. The limitation was purely client side.


haha whaat? they had a limit of 8 chars MAXIMUM? that's usually the minimum limitation!

at least they had implemented the first rule of it-security: perform all checks client-side only ;)


Ouch. So the server still accepted the 14-char password, even if the login page didn't allow it?


In macOS I use Keyboard Maestro, which allows you to create a macro that pastes the current clipboard by typing. Very handy for these no-paste text boxes.


It would be interesting to ascertain how many times users on social media flag a security problem to a company's social media team that isn't actually a security problem. In other words, how many false positives get caught too?

Troy Hunt's post is really told from the victor's perspective (likely bias rather than intent or arrogance), but to form a well-rounded view, understanding how many false positives there are would likely help...


If the flagging is respectful and the user is mistaken, either they will apologize for the mistake or it's up to the community to lower that user's influence. Troy Hunt is retweeted and upvoted and made famous because he has built a good reputation.


I wrote about non secure contact pages of various banks in Australia back in 2014 [1] and sent them all private messages. Didn't hear back from a single one saying they were working on it. Haven't checked lately to see how or if they changed.

[1]: https://blog.oxplot.com/non-secure-contact-page/


I had a bit of a rant in a job interview about how storing plain text passwords was something only idiots do. I got the job and lo and behold the main feature being worked on when I started 3 weeks later was "encrypt passwords field". So even inadvertent direct private shaming can effect change!


Didn't browsers take a stand a few years ago by ignoring "autocomplete=off" on password fields?

I think it’s about time browsers start ignoring any onpaste events on password fields. I’m curious what Chromium folks think - have there been tickets about this? It would be a great way to end this dumb practice.


Firefox has the dom.event.clipboardevents.enabled pref which can be used to prevent a lot of web naughtiness however it does break a couple of legitimate use cases.


Firefox also has signon.storeWhenAutocompleteOff (default true) which ignores such hints.


It's sad that what Troy is arguing for sounds like the best we can expect right now, and he is probably right. Ideally, there would be mandated processes for any commercial company: any serious security vulnerability must be fixed within x hours/days (if a fix is available), i.e. the excuse can't be "we don't have the means or money to fix that right now". And there are certain things that are accepted knowledge in the security community (like password managers) that random companies are somehow allowed to circumvent just because they decided so.

The real question is: why isn't there a mandated list of global best practices for web app security that can direct the acceptance test of any website? Can't people like ISACA and ISC2 agree on something and make it the gold standard?


And every manager (but for a few well informed) will scream every time that gold standard has to be adjusted for new zero-days and completely new vectors etc etc, forcing rewrites over and over.

Never underestimate the power of inertia.


> But the hesitation quickly passed as he proceeded to thank me for the coverage. You see, they knew this process sucked - any reasonable person with half an idea about security did - but the internal security team alone telling management this was not cool wasn't enough to drive change. Negative media coverage, however, is something management actually listens to.

Sounds like justification for bad-security whistle blowing too. The downside of encouraging it is that you are easily deanonymized if you had attempted to bring it up internally before. We need a, possibly crowd funded, corporate bad-security whistle blowing foundation that contacts your own company on your behalf and then publicly shames them if they don't fix the issue.


The Swedish Bank ID also disallows copy/paste of passwords. When I contacted the company that builds the solution I got more or less the same response, "it is safer for normal users" which I didn't really understand. Highly annoying.


I have heard the argument that regular users often believe that copying and pasting passwords makes them immune to keylogging, so allowing that will cause some of them to keep a copy of their password on a plaintext file on their desktop where otherwise they would just type from memory.

Not sure if that's what banks are thinking about.


Probably a dumb question, but doesn't copying & pasting protect against keylogging? The only key events being sent is CTRL+C and CTRL+V (or the mouse equivalent), and not the password keys themselves.

Obviously this is an extremely bad way to "protect" yourself (since you keep your password in plaintext on your PC), but it does protect against keylogging, right?


Maybe keylogging in the strictly literal sense, but I think most software "keyloggers" log the clipboard too. I suppose it would protect you from a physical keylogger.


To clarify, it is not a maybe, keyloggers definitely monitor the clipboard. It's one of the most basic features of a key logger.

Another basic feature is logging the active window/process to know where the user is currently writing to.


Correct, it does foil keylogging. However, in general, installing a keylogger is harder to pull off than just copying all files from the machine in question. Of course it depends on which known unpatched security holes exist on the system at the time, but in general a program is more likely to be able to read arbitrary files.


This is a deterrent against storing a plain text password in a file on your desktop. Frankly, if you are already keylogged, your plain text passwords are already stolen.


It's not. What it does do is make your average user feel safer, because they're told this is for their safety and aren't equipped to evaluate the claim.


I tried [0] to stir one of these up for coral.co.uk which also forces (via a downgrade redirect) insecure login over http. Unfortunately my tweet didn’t get much attention, although I did get the “Don’t worry! You can login without any doubts!” reply from Coral.

Not sure if it’s still the case (think they recently did a redesign and hopefully fixed it; I’ll check tonight). If it is, I would appreciate some more attention on this issue!

[0] https://mobile.twitter.com/milesrichardson/status/1017195538...


I can imagine it's a really shitty situation for e.g. smaller banks here in the US. Most of them don't have the resources to build their own custom back end services, so many outsource the tech. And then shit like Fiserv getting hacked the other week happens, and not only is Fiserv's reputation hurt but so are all their clients'.

I'd love to "bank local", but when shit like that happens, the only way I'd feel safe with my finances is by going with the Bank of Americas/Fidelity's/etc. over Mom & Pop Credit Union.


That calls for startups to build good tech for these companies to all use.


Very, very difficult in fintech. Very high barrier to entry. The Paypals/Venmos/Acorns of the world have been increasingly getting into the traditional banking realm, offering checking account/debit card products, but their back ends are still the same old slow companies. Polishing a turd essentially.


> You see, they knew this process sucked - any reasonable person with half an idea about security did - but the internal security team alone telling management this was not cool wasn't enough to drive change. Negative media coverage, however, is something management actually listens to.

It's even better when the internal person tips off the press to initiate the public shaming they can then take to management. It never happened in an organization I worked in, but I know security people in other orgs who had to resort to these tactics.



Ego, embarrassment, lack of taking someone seriously, ignorance, and laziness all contribute to not taking security reports seriously, and results in creating rationalizations to prevent fixing them.

At that point, you need pressure to create change. Shame definitely works, but it isn't professional, and it isn't nice at all. Many people may be open to help if you can get off of Twitter.

You can send a positive letter through a private channel that details the problem and offers help in fixing it. You can make an automated, impartial test that clearly proves the problem and provides links to fixes. Failing these, you can start a petition. You can have famous, well respected, and powerful people sign it. You can add carrots and sticks, like an award for security response, a hall of shame entry, or the nuclear option, a PR release and interview with national media about the dangers of the problem.

Even just sending a list of these problems as they have happened in the past and how they resolved to executives at the company is probably all the visibility needed to get the ball rolling.


A big takeaway for me is how the internal security teams knew it was bad and couldn't make a case for the change.

Once a process is in place, it takes a well delivered argument to make a change. Not allowing paste into password fields is a prime example of something that seems like a good practice, but isn't. If anyone is reading this and needs to convince others to allow paste, NIST says to allow pasting passwords, and that it improves security:

> Verifiers SHOULD permit claimants to use “paste” functionality when entering a memorized secret. This facilitates the use of password managers, which are widely used and in many cases increase the likelihood that users will choose stronger memorized secrets. [1]

People will no doubt still say it's bad, but it's a fact that the National Institute of Standards and Technology says to allow paste, and that it improves security.

[1] https://pages.nist.gov/800-63-3/sp800-63b.html


Companies use social media to improve their public perception, but we can't 'shame' them for their bad practices? No way. And if the community managers spread lies, are we supposed to shut up as well? Respect always, but the 'shaming' is not the CMs' fault - security decisions are not taken by them but by other people, so they shouldn't feel bad about the public response.


The trouble is, it won't scale. If an announcement like that comes out several times a day, it's not news.


There was the example of the company that saw the BBC article and was scared into compliance. Perhaps if he keeps up the approach of keeping the shaming events rare and brutal, it can be enough of a psychological effect to corral other companies into compliance until there are only a few cases remaining, at which point you could just go after them individually without creating a news burnout.


I'm mildly surprised no-one has so far drawn the similarity to full-disclosure. Because let's be fair, this practice is full-disclosure.

Of bad practices.

And as we've seen over the past 2+ decades - FD works. Once you draw attention to an insecure system, resources to fix it will be found. (Enough of the time, at least.)


This is different from full disclosure because (1) in many cases the information is directly visible to the common user or publicly advertised by the company already, and (2) the insecurities aren't immediately directly exploitable (like plaintext passwords), just a risk factor in case of a hack.


Law suits seem to be the standard tactic for dealing with other types of corporate negligence (for better or worse). Why aren't they employed more for these types of obvious security vulnerabilities if you can demonstrate harm?


Considering Equifax faced basically zero consequences: legal action is incredibly expensive and ineffective unless someone gets seriously injured or dies.


This reminds me of the fiasco regarding Virgin Media over Twitter: https://twitter.com/virginmedia/status/595135419152474112

I think companies need to mandate some kind of improved protocol for responding to these sort of tweet storms. It always seems to be people who are technically inclined talking to a social media representative with little to no knowledge of standard security practices, which leads to attempts to calm the masses, but ultimately backfires.


So the question is: why don't companies have infosec individuals seriously looking into issues raised by users? It's in their best interest to avoid the bad PR and public shaming that will be the logical next step if they ignore them.

Is the shaming necessary? Do corporations only respond to an issue when it blows up? Seems like bad stewardship.


Public shaming definitely works, but there are only so many times you can play this card. It may give you quick small wins, but organizations which downplay or do not understand fundamental security concepts carry a lot of other serious security/technical debt, which eventually leads to a breach.


This suggests to me a more democratic approach to corporate culture / action.

Let's say we have a giant backlog of work, each item ranked by employee vote.

Now the security team adds their "Stop sending passwords in plain text" item.

Which companies will it get voted up in?


Why was the situation with the page that does not ask for credentials and does not use https considered unsafe? Is the worry that a man in the middle could change the Login link to a different website?


Yes, exactly.


Let's take a step back and ask ourselves this: when did banks end up with some of the least secure websites?

It's so common we've become oblivious to how frightening this is.


Could there also be a thing where you go out of your way to praise? The combination of carrot and stick seems to be good for persuasion.


[flagged]


Why would that result in loss of respect/trust? k-anonymity works great and is perfect for application into password managers and on-registration password security checks.


The combination of 'Baking HIBP into Firefox' and 'Cloudflare did some great work' and 'sends this to the API' just sounds bad.

We are talking passwords here, for god's sake. I want nothing sent anywhere; leave my browser and me alone.


OK, calm down for a minute. You need to read how k-Anonymity works before you continue to comment on this:

https://blog.cloudflare.com/validating-leaked-passwords-with...

It isn't sending a plaintext password, not even a full hash of the password. This is entirely fine..


> This is entirely fine..

... until it isn't. Any update on whether this HIBP 'feature' made it into Firefox?


You’ve yet to tell me when it’s not fine. Can you please provide some citations or technical reasoning besides just putting forward unsubstantiated claims? Below you've told jgahramc you refuse to read how it works; if that's the case, you probably shouldn't make claims about it.


Bugs can happen.

One of the first scenarios that comes to mind - some implementation queries API for every key press. Suddenly you are sending your password out.

I guess time will tell, but human-generated passwords have proven their ineffectiveness anyway.


That’s still not a problem with Troy/Cloudflare/k-Anonymity. Bugs happen, sure, but I’d hope your password manager that stores all of your passwords, reliably, has unit tests to prevent that kind of failure scenario.


You are arguing with yourself and haven't looked into how Troy's API service works. Worth reading: https://blog.cloudflare.com/validating-leaked-passwords-with...


Point is, I trust Firefox, and it may lose my trust when it makes third-party API calls involving my passwords by default.

Should I read how Troy's API service works? No.


I was very uncomfortable with Pwned Passwords until I read how it works. And even then, the true comfort blanket for me was writing a script that talked to it myself, so I could see exactly what info I was sending, and what info I was receiving back.

I'm now entirely comfortable with this product/service. Read how it works.


> Should I read how Troy's API service works? No.

Yes. Until you do that, you are arguing in utter ignorance - and you’re wrong, to boot.


To Moderators: I claim Frivolous flagging on https://news.ycombinator.com/item?id=17958620

From FAQ ... flagging a story that's clearly on-topic by the site guidelines just because one personally dislikes it— eventually gets an account's flagging privileges taken away.


[flagged]


Please stop doing this or we'll ban the account.


[flagged]


Why did you need to post 5 top-level comments to say that?


Because my first one, with 12 posts in it, was flagged. And second one too.


Not password checks, but rather "does my email appear on haveibeenpwnd.com".


The very passwords:

"k-anonymity" model which works like this: when searching HIBP for a password, the client SHA-1 hashes it then ... sends this to the API.

https://www.troyhunt.com/were-baking-have-i-been-pwned-into-...


According to the description in the linked article, only the first 5 characters of the hash of the password are sent to the API (and that API is not publicly available in the first place, apparently, but can only be accessed via Mozilla or 1Password's own APIs).
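To make the scheme concrete, here is a minimal sketch in Python of the client-side half of a range query (the endpoint URL matches Troy's publicly documented API; the network call itself is omitted):

```python
import hashlib

def hibp_query_parts(password):
    """Split a password's SHA-1 into the 5-char prefix that gets sent
    to the Pwned Passwords range API and the 35-char suffix that
    never leaves the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_query_parts("password")
# Only `prefix` goes over the wire, e.g.:
#   GET https://api.pwnedpasswords.com/range/5BAA6
# The API returns every breached hash suffix sharing that prefix, and
# the client checks locally whether `suffix` is among them.
```

Many breached hashes share any given 5-character prefix, so the server cannot tell which password (if any) the client was actually asking about.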

What exactly appears to be the problem?

The reasoning for this feature is clearly laid out, and the underlying "ethics of running a database breach search service", while controversial, are also something Troy has thought about very carefully:

https://www.troyhunt.com/the-ethics-of-running-a-data-breach...


> What exactly appears to be the problem?

My trusted browser should not send out any sensitive information.


Yeah, but that's not sensitive information.


You'd be surprised ... :)


... yes, I would be surprised if the first 5 chars of the SHA1 of a password were sensitive. If I'm missing something, please share.


This implementation has no value to potential attackers. Anything it could (if it could) help you with, you can already do without it.


The only truly safe password manager is one that stores its database in a local, master-password-encrypted file, like the open-source KeePass(XC).

Anything else is trusting a third party.


Troy's stance makes me uneasy. I'm sure publicly shaming these companies is effective, but where does the line get drawn? In my experience the infosec community strongly believes security is the most important thing, full stop. So they feel no qualms about using every trick in the book to get their way.

But two things about this trouble me. The first is, have we decided that the ends always justify the means? It seems in other domains public shaming is unacceptable. Troy himself despises Donald Trump, but one of the things most heinous about 45 is his use of Twitter to publicly ridicule and shame companies and individuals. And now Troy is engaging in exactly the same behavior. But it's "OK" this time? What is different?

The second thing that troubles me is security "best practices" today may not be best practices tomorrow. Some of his example companies are using practices that were "best" 10 years ago. What happens in another 10 years when Troy's current advice is outdated? More pitchforks?

The security treadmill is hard for even the most modern tech companies to stay on. I think the infosec community could go a long way to helping itself by making easier to use tools, and writing canonical guides for common scenarios. Setting up HTTPS to get an "A" from SSLLabs is non-trivial. Securing SSH with perfect forward secrecy is near impossible for a mortal. There's no reason it has to be this complicated.

Edit: To the people who couldn't get past the first sentence of my last paragraph. I'm not saying that because the security treadmill is hard the companies get a pass. I'm saying the infosec community has a responsibility to make it easier to stay on it. If the barrier to securing something appropriately was so low you could trip over it, we wouldn't have this many problems.


You're right, we should be totally OK with a company that makes profit off of us using 10 year old outdated security techniques - we wouldn't want to be rude.

Imagine you went to an amusement park and saw the rides were being operated unsafely. No seatbelts, no safety protocols, not even making sure passengers were seated first. Do you stay quiet because after all, the teenager running the ride probably didn't invent the safety protocol? Sure, someone might get beheaded, but you don't want to be rude.

> The security treadmill is hard for even the most modern tech companies to stay on

That is nonsense. If you have the resources to track me across the web, hold vast amounts of my private information and profit off of me, you'd best damn invest in basic security practices like SSL and encryption at rest. If you don't have the in-team expertise, bring in a consultant. These aren't Mom & Pop ice cream shops, these are major international companies and banks - stop cheaping out on security to save a few pennies.


>You're right, we should be totally OK with a company that makes profit off of us using 10 year old outdated security techniques - we wouldn't want to be rude.

Nowhere did the person you're replying to even hint at, suggest or in any way imply this. He/she just said that shaming people into doing whatever the $good_thing_of_the_day is is a case of assuming the ends justify the means, and that's kind of ethically murky.

I don't have a problem with putting a bank on blast for bad security. Security is part of their core value proposition. That's why most people don't hide their savings under their mattress.


There's a difference between shaming a company because they didn't give orphans free food, and shaming one for failing to secure the data they opted in to collect. We have a duty to treat people as ends, but no such duty applies to companies nor their paid mouthpieces (during their duties).

If the company didn't want the burden of security, then don't keep secured content. They opted in.


It's this kind of morally righteous fury that bugs me. You compare web security to unsafe carnival rides? Please.

Equifax had one of the worst breaches imaginable. So did Target. Far as I know, no one died. You know, I don't think anyone even got injured. Did some people have to call their bank and dispute charges? Maybe.

Not really a life or death situation was it? I don't think Tesco or Betfair are really life or death either? Sure they should have better security, but is it worth becoming an angry mob about it?


My mother spent two months without a functional credit or debit card because her identity was stolen and it took a marathon of paperwork and disputes. Thank goodness she has family who was able to give her a sizeable amount of cash during that period. I suspect many are not so lucky.

What about people fleeing abusive spouses or other folks who have a very real reason to keep things like their address under wraps? There was a training school that recently leaked the addresses of its participants - several of whom were undercover police officers. Don't get me started on the OPM breach.

Just because a breach doesn't affect you very much doesn't mean there aren't serious consequences. When every single business has horrendous amounts of information on me, any minor breach becomes a major problem.


>My mother spent two months without a functional credit or debit card because her identity was stolen and it took a marathon of paperwork and disputes.

She was lucky. I know someone for whom the process took over a year. Because of that he had to wait before he could buy a house - his mortgage wouldn't be approved until all the mess was cleared up.


Yes, people do actually die from compromises, you just don't hear about it.

For you, somebody snatching your CC might mean a hassle due to having to call the bank and dispute the charges. For somebody with little money to spend, it can mean the difference between feeding their family for 3 days and not doing so.

For you, somebody leaking your private messages on a social network might mean some inconvenient messages that have to be explained to a friend. For somebody else, it can mean that suddenly their homosexuality is well-known in a country where that carries the death penalty, and gets them executed.

Software is infrastructure, plain and simple. You don't maintain it, people die. You don't make it safe, people die. It doesn't matter whether you personally see it causing problems for yourself. This "morally righteous fury", as you put it, is absolutely justified.

Developers (and the companies managing them) need to take some goddamn responsibility for the infrastructure they build.


>Software is infrastructure, plain and simple. You don't maintain it, people die. You don't make it safe, people die. It doesn't matter whether you personally see it causing problems for yourself. This "morally righteous fury", as you put it, is absolutely justified.

Not all software infrastructure is critical infrastructure. It's like the difference between a company that makes bottom-dollar kitchen sink sprayers and washing machine hoses that are always leaking, and mine tailings making a town's water supply undrinkable. Both are water problems. Only one is a big enough problem that people not affected by it should care.

You have to take both likelihood of harm and severity of harm into account. Just because someone somewhere might get hurt using a bottom dollar washing machine hose to transfer dangerous chemicals doesn't mean we should regulate all things related to water the way we regulate waste disposal near rivers and wetlands.


> Not all software infrastructure is critical infrastructure.

This might be a valid argument if we had a standardized way of classifying critical software vs. non-critical software, that took into account corner cases. We do not.

Instead what happens is that people arbitrarily classify what they consider 'critical' based on gut feelings and rarely considering the situation of people-who-aren't-them, and that the resulting potpourri of judgments produces critical systems where the authors of individual parts thought that it wouldn't be as critical as it ends up being. Then it breaks, and now all hell breaks loose.

In practice, people build critical software systems with off-the-shelf components that really weren't designed to be used for such critical cases. This isn't specific to open-source, either; it happens just as much with proprietary components. There are no certification processes worth a damn, no clear idea of where and why this matters.

And until we get to a point where those processes and standards do exist, there is only one safe assumption to make: all software is critical infrastructure, because you have no idea what it's going to be used for in practice. Hence the 'fury' being justified.

EDIT: Addendum... I've had the critical-systems discussion with many developers. The overwhelming interpretation of "critical" is "I can see a way in which the software can directly kill people". Think drones, airplanes, and so on.

Conversely, almost nobody considers the indirect consequences that might lead to that same outcome (no access to money, no access to healthcare, murderous stalkers, etc.). Let alone serious consequences that don't result in death.

Realistically, barely anybody in the software industry has the first clue as to what constitutes a 'critical' system, or exactly how much damage their software can do. A similar issue exists in the design industry[1].

[1] https://www.youtube.com/watch?v=J0ucEt-La9w


>Addendum... I've had the critical-systems discussion with many developers. The overwhelming interpretation of "critical" is "I can see a way in which the software can directly kill people". Think drones, airplanes, and so on.

>Conversely, almost nobody considers the indirect consequences that might lead to that same outcome (no access to money, no access to healthcare, murderous stalkers, etc.). Let alone serious consequences that don't result in death.

That's how things work in every other industry that doesn't specifically build things to operate in hazardous environments. Things are designed to not kill and/or harm in normal use, not to not kill and/or harm people in exceptional circumstances or if grossly misused.


Think again: having to deal with identity theft has seriously derailed some people's lives. I wouldn't be surprised if there were a few suicides that could be directly tied to a specific breach.

There might be a line to be drawn; for example, I don't expect a small shop to adhere to all the latest security recommendations.

But a big company handling highly sensitive information at scale? It should be a crime to have a leak caused by a well-known issue. Unfortunately, there was barely any consequence, as far as I know.

Those companies certainly have the means to keep up with security requirements, and it should be mandatory considering their business. Until they act properly, shaming it is.


Just because it's not a "life or death" situation doesn't mean people shouldn't say something. What kind of attitude is that? I mean, is the rule for when we should speak up against negligent practices or stay silent really "if no one will die, it doesn't matter"?

There are other types of harm people can suffer other than just physical harm. And those other types of harm are no less significant or noteworthy, at least in my opinion.


>Not really a life or death situation was it?

It very much can be. Look at some of the largest cases of embezzlement/fraud in history where many people's life savings/retirement were completely lost. In the short term, some people commit suicide. In the longer run, they die of poor health because they don't have much money when they're old and sick.

Although I don't know anyone who committed suicide, I do personally know people impacted by what at the time was the largest case of embezzlement in history (back in the 90's). I've seen the impact it has. And I can assure you, shaming a public company is nothing compared to what those people go through. If we had a better hammer to coax the companies to fix these things, I would advocate for it. For now, public shaming is not effective enough an approach.


What's infuriating is the claim from those companies to be secure when they aren't.

If, as you suggest, they should be given a pass on data safety, then they should be barred from making promotional statements about data safety. They violate the social contract by making false or misleading claims.


Beyond the identity theft example, Ashley Madison wasn't that long ago, and explicitly without passing judgement, I'm pretty sure that was highly disruptive to a number of peoples' lives.


I think you should re-examine your premise / first principles.

Why should we automatically view shaming in and of itself as bad?

How about instead of calling it shaming, simply label it "holding companies accountable"?

Why should "a painful feeling of humiliation or distress caused by the consciousness of wrong or foolish behavior" be something people are prevented from ever feeling? Is it not sometimes merited?

Bad feelings, pain, suffering... if these should be avoided at all cost, what about the people whose private information can be stolen and is then put at risk by bad practices... some which are clung to even in the face of overwhelming evidence presented?

In other words, someone might have to suffer to some degree whether mistakes are acknowledged or not; it's just a matter of whom. Should it not be the companies and, to a much lesser extent (some slight embarrassment), those representing the company and arguing with people providing helpful advice, instead of innocent customers paying a business to secure their information, wealth, livelihoods, etc.?

Or avoid all shaming, consequences be damned?


> The security treadmill is hard for even the most modern tech companies to stay on.

It's not hard. It just requires you to give a damn and actually treat it as an important aspect of your process, rather than as an afterthought. And that's where most companies go wrong.

> I think the infosec community could go a long way to helping itself by making easier to use tools, and writing canonical guides for common scenarios.

That I can agree with, and behind the scenes, I've been arguing with infosec people about this for years now. But it's not an excuse not to get your security in order.

If anything, that should be driving more investment from companies into security, to collectively produce more usable mechanisms and documentation... but that doesn't happen.


> What happens in another 10 years when Troy's current advice is outdated? More pitchforks?

YES! All companies storing data should continually stay up to date with best practice.


> Troy's stance makes me uneasy. I'm sure publicly shaming these companies is effective, but where does the line get drawn?

Troy's use of the word "shaming" makes me uneasy, because that's not what he is doing. To shame is to dishonor or make ashamed. He isn't dishonoring anyone by having a conversation with them in a public forum. The companies and their employees are dishonoring themselves when they fail to address complaints and continue to speak out of ignorance.

There is a marked difference between Troy and 45: Troy is speaking directly to these companies. That the conversation occurs in a public forum is of no consequence; the companies willingly entered the forum for that very reason, and there is nothing private about the conversation to be had.

On the other hand, 45 does not shame companies by engaging them directly; he shames them by denouncing them to the public. He has no interest in having a conversation, and so never speaks to them, only about them. This allows him to conveniently sidestep any attempts to refute him, since there was never a debate in the first place.


At what level do we call shaming a bad thing?

When I have a bad experience at a business, I post a negative review on Yelp - sometimes with pictures to show how bad it is. What Troy is doing isn't fundamentally different.

>The security treadmill is hard for even the most modern tech companies to stay on. I think the infosec community could go a long way to helping itself by making easier to use tools, and writing canonical guides for common scenarios.

Many of the scenarios he points out are easy to fix and well documented (e.g. not using HTTPS, not storing passwords in plaintext). Although banks are better now, about 7-8 years ago, I often would say that a teenager creating a web site following the Django tutorial would create a more secure site than many financial institutions out there (you know, the kind that limited passwords to 8 characters, or only allowed numbers, and didn't salt their passwords).
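(For anyone wondering what salting buys you, a minimal sketch using only Python's standard library; the iteration count is illustrative, and dedicated schemes like bcrypt or argon2 are the usual production choices:)

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    # A random per-user salt means identical passwords produce
    # different digests, defeating precomputed rainbow tables.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                    salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

Store only the salt and digest; the password itself is never written anywhere.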

If a teenager can find and follow docs to make secure sites, then I don't think it is unreasonable to expect better from people who are paid to do it. And it is reasonable to complain publicly when they don't. Some of these institutions have people's life savings.

If you walked into a bank and saw that they kept all your money behind a glass door with a single pickable lock, and that they don't request your ID when you make a withdrawal, would you complain about someone publicly shaming them?


> The second thing that troubles me is security "best practices" today may not be best practices tomorrow. Some of his example companies are using practices that were "best" 10 years ago. What happens in another 10 years when Troy's current advice is outdated? More pitchforks?

Absolutely. To expand on your comment, some of his example companies were using practices that were never secure to begin with. What happens in 10 years when they still aren't being secure at all and more information is in the wild because no one held them accountable?

It's best practice for every data-holder to adjust their defenses when any security situation changes, ever, not just Troy's advice.

Anything less is just letting laziness rule.

(Bringing Trump's tweets into this is a total non-sequitur. His 'shaming' has nothing to do with the real world, only his moment-to-moment delusions.)


>>> The security treadmill is hard for even the most modern tech companies to stay on. I think the infosec community could go a long way to helping itself by making easier to use tools, and writing canonical guides for common scenarios. Setting up HTTPS to get an "A" from SSLLabs is non-trivial. Securing SSH with perfect forward secrecy is near impossible for a mortal. There's no reason it has to be this complicated.

Agreed on this. Security is too hard to do right and too hard to check. No two guides or two people agree on the same recommendations.

SSL labs is a perfect example where the top rating is not realistic to obtain.


Are you kidding about ssllabs? Every site I host gets an "A" rating, with practically zero effort. What are you running where you can't manage that?


Yep. I set up a few traefik instances a while ago with the integrated Let's Encrypt support. All of them have at least an A on ssllabs - all of this without actually configuring any of the SSL/TLS settings myself.


I might be confusing SSL labs with another verification tool that always give B and C.


> Edit: To the people who couldn't get past the first sentence of my last paragraph. I'm not saying that because the security treadmill is hard the companies get a pass. I'm saying the infosec community has a responsibility to make it easier to stay on it. If the barrier to securing something appropriately was so low you could trip over it, we wouldn't have this many problems.

Security should be everyone's responsibility and working with InfoSec to make your treadmill easier is your responsibility too.


A lot of the posted examples include basic copy/paste blocking and password rules. The barrier to better security in these reports is so low as to be non-existent. The better question is why these companies can't even do that, let alone aim higher when they are at the size they are in a world where everything is digital.


The thing about Trump isn't the shaming - sparing the very many appropriate snarks for him. The trouble there is the position of authority and implied threat of him and his following.

There is a fundamental difference between, for instance, "Apple stores their password data reversibly, in the clear, in 2017!" and "Apple won't create backdoored encryption/create jobs in the heartland instead of in China!" One is promoting security by calling attention to an objectively bad practice (storing a recoverable password when only the hash needs to be kept). If people disagree with it, no harm done; people will just roll their eyes if there is no technical merit, e.g. complaining that passwords allowed o, O, and 0, or emojis.

The other is an attempt to cudgel and bully for personal gain. Regardless of merit, it is an attempt to do harm. Even if he is right in the circumstance, it is an inappropriate use of the office, since if there is actual wrongdoing, actual action should be taken by the relevant departments. Perdue apparently has a food poisoning scandal, for instance? Form a team to investigate the issue and call on the Justice Department if it was illegal, or Congress if it was technically legal or should have been uncovered or enforced way earlier.

Really, passwords are a bad security practice to be avoided, in my opinion; high-grade keys are the way to go. We tried phone and email 2FA, and it was an inconvenience and another attack vector, and differentiating passwords across enough sites requires a password manager anyway to track everything, so it may as well hold keys instead. We have been trying to avoid it, but it is inevitable, and right now it is shameful that an empty Amazon EC2 instance is more secure than my bank account.


> The first is, have we decided that the ends always justify the means? It seems in other domains public shaming is unacceptable. Troy himself despises Donald Trump, but one of the things most heinous about 45 is his use of Twitter to publicly ridicule and shame companies and individuals. And now Troy is engaging in exactly the same behavior. But it's "OK" this time? What is different?

Intent has a lot to do with it, but to me, I see one as part of business. As another comment said, some of these companies carry enough information to ruin our lives. It makes sense to hold their feet to the fire in a public forum when they put security on the back burner.

>The second thing that troubles me is security "best practices" today may not be best practices tomorrow. Some of his example companies are using practices that were "best" 10 years ago. What happens in another 10 years when Troy's current advice is outdated? More pitchforks?

Security is notoriously hard, and anyone who has ever tried to get into the space understands that. But just because it's hard doesn't mean that you get a pass. It is the duty of the business in question to protect customer data. I believe that it is reasonable to expect a company that has my account and personal information to do everything that it can to protect that data.

You said elsewhere that if a breach occurs, a customer only has to dispute a few charges with their bank. You're right. It is much easier for a customer to dispute fraudulent charges. But why do I need to spend my time filling out forms and cooperating with a minor investigation because a business (through negligence or otherwise) lost something that I trusted them with?


I generally agree, but would you place your security in the hands of some 3rd-party provider such as a password manager? If you do, then their problems instantly become your problems.

I am not saying that password managers are without any merit and/or usefulness. All I am saying is that although security decisions are often made without much of a thought process, there are situations with hard requirements based on a multitude of factors which are not publicly discussed.

It is easy to brush it all off as a 3rd-party observer, but in reality nothing is perfect, and considering that a lot of the people on this forum are actual developers, you must be well aware of the countless compromises you had to make that go against security best practices... :) Come on.

What Troy is discussing are trivial matters of negligence - hardly breaking the bank - and I can assure you that in many circumstances the security budget required to implement some of the necessary improvements outspends the annual fraud budget. That hardly makes for a compelling argument when you need to justify your department's running cost.

Yes, shaming kind of works, sometimes... but honestly these companies have much bigger internal problems than reconsidering their whole stance on the usefulness of password managers and the risk levels they are willing to accept.


> would you place your security in the hands of some 3rd-party providers such as password managers?

I don't know if I can think of a better solution than an open-source password manager which encrypts and stores everything locally. Unless you have an incredible memory.

> but honestly these companies have much bigger internal problems than reconsidering their whole stance on usefulness of password managers

I guess Troy is just focusing on what he knows best.


> I don't know if I can think of a better solution than an open-source password manager

Sure, KeePass(XC)

> I guess Troy is just focusing on what he knows best

No, KeePass(XC), as you said, stores everything locally, no Troy needed.


Not sure if you're agreeing with me or not. Yes, I think KeePass is the best way and I use it myself with SyncThing to sync between machines. That's still trusting a third party to some degree: you're trusting that the code doesn't have many bugs that someone could exploit to bypass the master password. But it minimises the risk.

The part about Troy was in response to the OP's claim about public shaming.


> Not sure if you're agreeing with me or not. Yes, I think KeePass is the best way

I do agree about KeePass. Its code survived in the open for that long. About "Troy is just focusing on what he knows best" - security is hard, be precise :)



