- Glad they're recommending a stop to the pointless "password must be no longer than (16, 20, ...) characters". Aren't you storing a constant-length hash anyway?
- Why do some logins restrict which ASCII characters can be used? When I see that I can use any symbol from '%!#&' or whatever list they provide, I can only imagine it's a really naive SQL-injection defense. Is there any valid reason for this?
- And glad to see they're recommending against "security challenges". Half the time I'm forced to pick a security question, either none of them apply, a bunch of them are ambiguous ("what's your favorite movie?" - uhh, I'll give you a different answer depending on my mood, etc.), or they're easily searchable ("where did you go to high school?")
Unfortunately I doubt the bad actors will pay too much attention to this. I know Google is planning on dinging sites that don't use HTTPS, is it possible they could ding sites for poor password policies?
The issue was actually that we support many different protocols (not just mail) and some combinations of clients/protocols have had issues in the past (it might have been some FTP clients I think, but can't remember right now.)
Anyway, this restriction no longer applies as we now require server-generated app passwords for 3rd party apps: https://www.fastmail.com/help/clients/apppassword.html. So feel free to use as many spaces as you like in your password!
My suggestion: use a nonsense answer and use it for all of them. I don't reuse passwords but for exactly the reasons you state the answer to all my security challenges is something like "Because a kipper doesn't red the blue."
Alternately have a few nonsense phrases for stock challenge questions (first car, first pet, favorite <thing>, etc.) It's better than using the real (googleable) answer.
Finally, as Dale Carnegie would have loved: the more absurd it is, the easier it is to remember. While you won't remember whether your favorite movie is The Matrix or Titanic, you WOULD remember "I clocked blithely cookie everywhere." as the answer when you see that question.
Here's what I use to generate answers to secret questions:
< /dev/urandom tr -dc a-z0-9 | head -c 16
This leads to things like:
> "What is your first pet's name?"
"q1ry9nftmxb1gmag"
I haven't had it happen yet, but I wonder what a customer service rep's response will be when I spell out "yrlmduihhyju5il0" when asked what my favorite color is.
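For anyone without the coreutils handy, a Python equivalent of the tr/urandom one-liner above, using the stdlib `secrets` module (the function name and default length are mine):

```python
import secrets
import string

def random_answer(length: int = 16) -> str:
    """Random lowercase-alphanumeric string, like `< /dev/urandom tr -dc a-z0-9 | head -c 16`."""
    alphabet = string.ascii_lowercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_answer())  # e.g. "q1ry9nftmxb1gmag"
```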
Be careful with it. Was dealing with credit card stuff (raising limit to go on travel) and when they were verifying my identity the policy was evidently to combine my questions with a "background check"
Said background check basically being they googled my name.
Which you would think isn't a problem until you get to "What school did you graduate from?" and have to go through four levels of reps to explain that (not actually what I typed) "Omelette Du Fromage" was my way of making it harder to social engineer my account.
The woman on the phone at the utility company that was messing me around didn't laugh when I said the answer to my security question "what do you think of customer service" was "f*cking retards" :-D
I just use the security questions as another password, like my favorite color is JyQ|l[Duc-I6KrU-0k and I went to elementary school at ?YfBW+Yurh@m$lml":.
I do this. I told the CS rep that my password hint was "just random characters mashed on the keyboard" and she accepted this and moved on. I'm not sure what to think of the security implications.
Worse, if reps can see the answer, then this is equivalent to not hashing the passwords at all since you have a password-equivalent stored in plaintext.
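The fix is the same as for passwords: store only a salted hash of a normalized answer, and give the rep a yes/no check rather than the plaintext. A rough sketch of what that could look like (normalization rules and KDF parameters are my own assumptions, since answers are typed from memory and shouldn't fail on capitalization):

```python
import hashlib
import hmac
import os

def normalize(answer: str) -> str:
    # Fold case and collapse whitespace so "Omelette  Du Fromage" still matches
    return " ".join(answer.casefold().split())

def store_answer(answer: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", normalize(answer).encode(), salt, 600_000)
    return salt, digest

def check_answer(candidate: str, salt: bytes, digest: bytes) -> bool:
    probe = hashlib.pbkdf2_hmac("sha256", normalize(candidate).encode(), salt, 600_000)
    return hmac.compare_digest(probe, digest)

salt, digest = store_answer("Omelette Du Fromage")
print(check_answer("omelette du fromage", salt, digest))  # True
print(check_answer("cheese omelette", salt, digest))      # False
```

The rep types what the caller says and sees only pass/fail, which also kills the twenty-questions attack described below.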
>> they were annoyed about people being able to see part of their SSN
Part of? I worked for AT&T back when they merged with Cingular. We only asked for the last 4 over the phone, but the entire 9-digit SSN was shown in the app. Every single low-level employee had (has?) the entire SSN in front of them. Never dared tell a customer that little fact when they made a fuss over my having access to their last 4.
That's what you get when the reps can see the answers. The only working solution is to have the reps "log in" to the users account by entering the security question answer.
If the reps can see the answer, it's far too easy for the attacker to turn the verification process into a game of twenty questions.
I've had this before with my bank, when I've had to authorise a large card payment (for a car). I was asked various security questions about monthly recurring payments from my account (in the UK, so standing orders and direct debits), but I've so many I can't keep up, and I change savings accounts and health, car, home, pet insurer every year to get a good deal.
The rep on the phone kept prompting me when I was unsure. She'd mention an amount, and when I was still unsure she'd say something like, "maybe it's for your mortgage...? Maybe the company begins with the letter 'N'?"
It was all a bit silly, security theater at its finest.
I had the opposite recently. Trying to log into my alma mater's website to get a copy of transcripts, but my account had long ago locked out. They asked me questions over the phone to reset it, but I couldn't answer any of them.
"What is your phone number on file?" Shoot, I don't know, it was an old number that I changed maybe 6 years ago...
"What is your address on file?" I've moved maybe five times since then? I tried "was it in another state?" to narrow it down, but the answer was "I can't say that".
"Okay, we can verify you by classes you took..." Great, now we're getting somewhere! I took Intro to Ethics. "We need to know what term." Okay, this is tricky, it was like 10 years ago... Fall of 2006? "We need to know professor's name." Um. I think I have the book here, I know he wrote it... Professor McLaughlin? "I also need to know the day of the week the class was held and what time the class was."
Are you effing kidding me? I wish I was joking. I ended up just calling my old advisor and he "verified" me with an email to the helpdesk.
I was literally just about to post this. I refuse to make my password less secure though - and security questions really are just "alternate passwords".
I've never had a problem, but I have had a few reps who were trying not to act really surprised. I've had one instance of someone trying to stifle laughter (of the "You can't be serious" kind). Taking security seriously is a rare thing. :(
They usually stop me after the 8th or so character. I'd be concerned but if any potential social engineer has the first 8 characters they likely have the full string anyways so stopping early makes both our jobs easier.
On several social engineering calls I've had reps who were happy with just "it's just a bunch of random characters, would be a little silly if I tried to read it out"
Perhaps I should say that my first pet's name was "mellower retry audited grieves" rather than "esrhciaiyzhkj". (Both are random, and both have very close to the same information content given the dictionary I used.)
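The "same information content" claim checks out arithmetically. Assuming a ~40,000-word dictionary (roughly what a spell-check wordlist gives you; the exact size is my assumption), each uniformly chosen word carries log2(40000) ≈ 15.3 bits:

```python
import math

DICT_SIZE = 40_000  # assumed dictionary size
bits_per_word = math.log2(DICT_SIZE)

# 4 random words vs. 13 random lowercase letters:
print(round(4 * bits_per_word, 1))   # 61.2 bits
print(round(13 * math.log2(26), 1))  # 61.1 bits

# With a real wordlist loaded (path varies by system), you'd pick words with
#   answer = " ".join(secrets.choice(words) for _ in range(4))
```

So "mellower retry audited grieves" and "esrhciaiyzhkj" are interchangeable security-wise; the words are just easier to read out to a rep.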
> When I see that I can use any symbol from '%!#&' or whatever list they provide, I can only imagine it's a really naive SQL-injection defense. Is there any valid reason for this?
One reason I've seen for this is that the website is just a front-end for some older mainframe system that has password rules from 1987. Banks and insurance companies are frequently culprits here.
They should provide proper guidance when creating a new password. I normally use passwords around 80 characters long, generated by a password manager. If I paste one of these into a password field and then submit, I get a warning that it's too long. Why didn't it say so when I pasted it? And why set a limit below 256 characters? Does that really matter nowadays? It's not a 5MB selfie I'm uploading.
The limitations on the password should be built into the HTML of the form, as a regex pattern or something simpler.
Couldn't Google just auto-register an account with a known good password, and penalise the site if that password is rejected? Someone else mentioned improving the http authentication 'api', which would definitely help in this regard; until then there are drawbacks to the auto-register approach, including things like captchas. Sites would have to explicitly allow the Google bot to bypass them.
How do you check whether the password is salted and hashed correctly afterwards? There's not much use in strong passwords if they're stored in plaintext anyway.