Mistakes were made, and there are definitely lessons to be learned, but if we want to improve the state of security, we really need to change the way we react to these types of bugs.
If a service has an outage and a company posts a postmortem, we all think: "wow! that was an interesting bug, let's learn from this".
We shouldn't be treating security issues differently.
People who make security mistakes aren't idiots. They aren't negligent. They're engineers just like us, who have tight deadlines and blind spots, and who make mistakes.
Shaming people and companies for security bugs will only cause less transparency and less sharing of information - making us all less secure.
This is a really cool bug. Kudos to the researcher for finding it and responsibly reporting it, and to PayPal for fixing it in a timely fashion.
Hopefully - this type of bug changes some internal processes and the way the company thinks about 2FA.
As for security questions - these are obviously insecure, and should really never be relied on. If you can opt out of security questions - do so. If you can't - just generate a random password as the answer. "I_ty/:QWuCllV?'6ILs`O12kl;d0-`1" is an excellent name for your first dog / high school. Just don't forget to use a password manager to store these.
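If you want to script the "random answer" approach, here's a minimal sketch using Python's secrets module (the length and alphabet here are arbitrary choices, not a recommendation from the comment above):

```python
import secrets
import string

def random_answer(length: int = 30) -> str:
    """Generate a random string to use as a security-question answer."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Store the result in your password manager alongside the question.
print(random_answer())
```

secrets (rather than random) is the right module here because it uses the OS's cryptographic random source.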
I disagree. Your "lets be super nice to everybody" strategy has come to an absurd conclusion. Is there no one who can be held accountable for the competency they claim, when it comes to computer stuff?
PayPal doesn't write on its websites "We're some enthusiasts with no software or security experience. Let's see how well this works, together!" No, like everyone in this industry, PayPal claims its security experts have your money and financial information super secure. It's one of the first in this space, and has almost two decades of experience.
This wasn't a tricky subtle bug, this was obvious. This should have been caught in code review and tests. PayPal should be afraid of rolling out slick easy-to-use features without code review and tests. It is many years too late for PayPal to be learning the basics.
>I disagree. Your "lets be super nice to everybody" strategy has come to an absurd conclusion.
You and I must have read a different response, because I saw nothing in there about "being super nice to everyone." What I saw was a reasonable request not to commit the Fundamental Attribution Error, which can be paraphrased as: when I screw up, there were extenuating circumstances; when you screw up, it's because you're a moron.
A company comprised of otherwise reasonable people can behave in shockingly dumb ways. The only way to make companies learn is to impact their bottom line, and that means not-nice words need to be said.
> If you can't - just generate a random password as the answer. "I_ty/:QWuCllV?'6ILs`O12kl;d0-`1" is an excellent name for your first dog / high school. Just don't forget to use a password manager to store these.
Be wary of social engineering attacks though.
- <support on the phone> I'd also need you to provide me an answer to your security question. What was your first dog's name?
- <me> Oh, you know, it's a long string of random characters I generated, I'd have to give them to you one by one...
- <support> (looks at the answer) uh, right. I see. Let's continue then.
I always fill all social-engineering-vulnerable questions with nonsense, especially when it is a banking site. I like when they let you set the question yourself, so you can put something like "Why would a secure financial institution allow such a horrible security hole in its system?" To which the answer is Tyrolese4Tokyo_Beulah!Papuan.
I fill them with nonsense words unrelated to the question. Mother's maiden name? Fire truck. First car? Air conditioner.
If I have to call a company they always ask me why. The explanation is anyone who has me as a Facebook friend can figure out who my first girlfriend was, my maternal grandmother's first name, my mother's maiden name, where I was born, my first car, etc. And if every company has the same data, a data breach at one makes the entire system fall apart.
Same here. But recently, United Airlines changed their system to only allow selecting from a list (your favorite dog breed? Choose one of 8. Your favorite movie genre? Choose one of 12). I picked a random set and wrote it in my password stash.
And the answer is "because, by and large, it works just fine". Yes, people fall afoul of these kinds of questions, but the general public cannot handle proper security hygiene - and educating them takes so much effort on both sides that your customers will just go elsewhere. Proper security procedures would also lock a great many more people out of their own accounts than would be lost to fraud. Can't satisfy security questions? Well, take Monday morning off work and bring in several forms of identification...
It's why ATM PIN codes are so short - it's easier for the bank to just reimburse losses in case of fraud than to properly/strictly control security access.
Any time I see someone talk about how dumb general banking security procedures are, it tells me that they've spent no time in tech support for the general public :)
But it must be said that, given the evolution of GPUs, and given that password-cracking software developers are naturally going to go where the passwords are, this type of simple password design does NOT work anymore.
How so? The point of a random-four-words password isn't that it won't be hit by existing brute force software, it's that it's easy to remember but impractical to brute force with any software - with a 60,000 word dictionary there are more than 2^63 possible passwords.
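The arithmetic here checks out; a quick sanity check (using the 60,000-word dictionary size assumed in the comment above):

```python
import math

dictionary_size = 60_000
words = 4

combinations = dictionary_size ** words
bits = math.log2(combinations)

# Four random words from a 60,000-word list give just over 63 bits of entropy.
print(f"{combinations:e} combinations, ~{bits:.1f} bits")
```

The key caveat, as the thread notes, is that the words must be chosen uniformly at random; a memorable phrase you invent yourself has far less entropy.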
That's true, but the whole point of the strip was that you use words that evoke an easily-memorable scene in your head.
That will probably mean you can confine your list to words that most people know, which reduces the search space significantly. "correct", "horse", "battery" and "staple" are all very common words.
Is it really an easily-memorable scene or has the strip just been referenced in every HN and reddit discussion about password security? There is no way I'm remembering some random story for an account I login to once a month. The point is to have a password that is easy to see in a password manager and then type on a different device. Seeing D8hsegfw_#7Ax42 and then trying to type it into a hidden password field is painful esp. on a phone. Seeing Dynamo-Stench3Player and typing it in is very doable.
Irrelevant. It works fine for passwords too. The security of "correct horse battery staple" method is (nearly) optimally resistant to GPU (or any other) brute force attack.
Generally what I do is put something tangentially related to the question.
For example, "What's the name of your high school?" would be answered with something like "Khan Academy" (the name of a site that helped me) or "Mr. Jefferson" (A teacher, or best friend)
Mine was Rainy Purple Road. Then I get to educate the person on the phone to, in her personal life, never give the correct answer to anything googleable for a security answer. That usually involves a discussion of Sarah Palin...
At least with one of my bank's customer support centres this wouldn't happen: if you stumble for a split second they shut down the call and tell you to go into a branch to verify your identity. This is pretty annoying...
That's terrible, because it makes using password managers impossible (while on your phone for example, or you simply don't have it open that instant because you didn't know when/if they would ask).
While I strongly agree with the thrust of your comment, I'd like to chime in and say that this is not a cool bug. On the scale of web security bugs, this is the kind of thing you expect an intern to find.
I actually think the post was written in recognition of that fact, and was amused by the thudding, abrupt conclusion it had; it was like the author was sharing a joke. "Yup, it was that easy".
People who do this kind of security work (check out the rest of the author's posts) tend to be running their browsers piped through a local interception proxy. Once you develop the habit of mind to look for stuff like security parameters, it's hard not to notice these kinds of things. I think more developers should tool up the same way and learn the same habits.
The open source tooling here is getting better but the gold standard, used by virtually every professional application security worker in the industry, is Burp Suite. Lots of people have tried to make modernized, open source versions of Burp, but at this point cloning it is like cloning Microsoft Word.
If I was your director of security, one of the first things I'd do is build a plan to get all your developers trained up on Burp. It's useful for more than just security testing.
In addition to burp that's already had a mention, I'd recommend looking at OWASP ZAP. It's fully open source, which is nice and has had a lot of new features over the last couple of years.
It can also be integrated into CI pipelines for automated security testing.
All great points and true! The problem is PayPal hasn't been a great company to so many people; their practices are abysmal. I've had my company account frozen more than once, and it was a terrible experience, and it's happened to lots of people. This is a company that makes a lot of mistakes and has bad judgement. They don't deserve my understanding. They haven't earned it. Other companies have.
But otherwise you are right. Less scrutiny and more understanding, so companies will be open and honest when they screw up.
Indeed - I've long since given up on security answers/questions as being secure. Kind of defeats the purpose of unique passwords if all the answers are common knowledge...
Had to laugh at one instance where I actually had to read out the 30 character secret answer on one support phone call :P
The problem is in PayPal's case, 2FA has been terrible for years. I've even been locked out of the account for a whole week because of their shitty SMS sending service. This prompted me to disable 2FA on Paypal, because weirdly enough that makes me feel "safer" (as in safer from losing my money due to Paypal's stupidity by being locked out of the account).
So in this case I'm certainly not one to say "hey, mistakes were made - let's give them another chance." They've been getting reports about their 2FA system for years. So there's no excuse at this point.
PayPal's 2FA broke on me when it started locking my account every time I attempted to use it, because I'd previously made it send too many SMSes (poor signal).
I was thankful that support let me disable it, but it was worrying they didn't try to verify that I actually controlled my device first.
It's weird - don't all services that enable 2FA give you reset codes? Shouldn't they ask you to use those, or at least ask you to provide one so they can help you disable 2FA on your account? Kind of odd.
The simplicity of this exploit demonstrates something profound. The most dangerous things in life are not hidden deep in the weeds. Rather, they stare us in the face in the most obvious spots. It isn't the unknown that presents the biggest threat. It is the known that we never gave a second look.
The cardinal rule of security is: you never, ever, trust anything the client sends.
This bypass is a perfect example. Although the author doesn't mention which interception proxy he used, I'm 99% sure it was Burp. Replaying modified content is trivial.
>you never, ever, trust anything the client sends.
The author likely wrote code that correctly validates "for all security questions a correct answer is given" and just forgot about the part where "for-all propositions are trivially true of the empty set."
It's easy to read a for loop for what it's intended as - a loop - and not think about "what if we never enter it at all?"
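A minimal sketch of how that failure mode looks (hypothetical names, not PayPal's actual code): a loop that verifies every submitted answer passes vacuously when no answers are submitted at all.

```python
# Hypothetical stored answers for one account.
STORED_ANSWERS = {"q1": "rex", "q2": "springfield"}

def check_answers_buggy(submitted: dict) -> bool:
    # A for-all over the *submitted* answers: vacuously true if submitted is empty.
    for question, answer in submitted.items():
        if STORED_ANSWERS.get(question) != answer:
            return False
    return True

def check_answers_fixed(submitted: dict) -> bool:
    # Also require that every expected question was actually answered.
    if set(submitted) != set(STORED_ANSWERS):
        return False
    return check_answers_buggy(submitted)

print(check_answers_buggy({}))  # the bypass: passes with zero answers
print(check_answers_fixed({}))  # rejected
```

The fix is exactly the rule stated elsewhere in this thread: validate the shape of the whole request, not just each element that happens to be present.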
It's not the number of casualties that scares people, but rather the nature of the threat.
Fires have existed for several millennia. Our ancestors who built and lived in the very first settlements suffered from their homes/stores occasionally burning down. We know what types of conditions increase risk of fires and we know how to minimize those risks and put the fires out when they occur.
Bombs on the other hand are unpredictable. They also cause their damage instantly and there is no way to minimize or prevent it. You can escape from a burning building, or if stuck, wrap a piece of wet cloth around your mouth to minimize the amount of smoke you breathe while you wait for rescue. You can't outrun an explosion.
That's why people are a lot more scared of bombs than they are of fires (or car accidents, for that matter, which kill many more people than both fires and bombs combined).
Availability bias is definitely one aspect, but I think a big part of it is also how easy it is to tell a story that separates oneself from the victims (this often takes the form of victim blaming, but not necessarily). It's easy to tell yourself the story of how heart attacks happen to people with different lifestyles or genetics, or how car crashes happen to drivers who are less attentive, or how violent crime happens to people who live in other neighborhoods. It's a lot harder to tell yourself the story of how you'll avoid the plane with the latent mechanical fault or how you'll never be at a gathering place that would make an attractive terrorist target.
One of my PayPal 2FA phone numbers is listed twice and both cannot be removed (errors when I try). Their support can't help with the situation because their side wasn't able to see the duplicate.
Is 17 days an acceptable TAT here? I know investigation and fixes can be a challenge, but with the severity of this exploit+PayPal being a serious financial service, I kind of would hope for a faster fix. Maybe I'm off base...I really don't know; curious what others think.
How much time would've had to pass (without PayPal doing anything) before the author is ethically obligated to post to HN/media/etc about the hack? I believe publicizing an (unpatched) exploit like this crosses into criminality, but it would be essential to demonstrate some kind of proof, for credence and gravity. I'm guessing the community has some standardized guidelines for this sort of thing, but I'm not aware of them.
Just to be clear, it bypasses any of their 2FA codes, not just SMS-based codes. The security questions bypass "feature" also appears on my account for which I use a VeriSign 2FA dongle.
Notice that 17 days is basically what is needed to add the issue to the next sprint, complete its development along with everything else for that sprint, and deploy to a live site. To me that sounds fair.
The "standardized guidelines" sometimes vary -- mostly dependent on the nature of the vulnerability -- but 90 days seems to be a pretty common timeframe. That's what Google gives others before they publicize the details, for example.
I've seen equally ridiculous web bugs: computing prices browser-side in JavaScript, credit card numbers encoded in REST API endpoints, financial websites not supporting 2FA at all or mixing plain-HTTP requests into their sites. We're solidly in the dark ages of web security still.
When I went to set up my online account for my old bank, I entered a randomly generated 16-character key and got an error: "Maximum password length limited to 6 characters...only alpha-numeric"
I called to inform them that their account creation was broken, because obviously that was a bug. They told me that sometimes people have a hard time remembering their password, so they "need to balance between ease of use and security". My jaw dropped and my head rolled off my shoulders.
It seems standard practice for German banks to limit online passwords to five alpha-numeric characters. Fortunately, you need a TAN number (generated by a device or from an SMS message) to actually make a transaction. I have no idea why they limit the password length like this.
I'm guessing it's five characters so people don't just use their four digit PIN. I don't have any explanation for why they would limit it to five characters though, or why it has to be alphanumeric.
That said, Comdirect seems to offer regular passwords or six digit PINs and Bank of Scotland (in Germany) seems to also offer regular passwords.
But there are plenty of other offenders. For example my energy provider E-wie-einfach requires a mix of alphanumeric characters but forbids pasting and autofill (the latter of which luckily Chrome simply ignores).
I don't know what idiot ever came up with the idea that disabling paste makes logins more secure (the only justification I've ever heard was about preventing brute-force attacks, which proves an utter lack of understanding of the technology involved), but sadly it's still a thing, and it still leads to people using trivial, easy-to-type passwords.
Sure, except then it would intercept the copy, not the paste. And it basically trades clipboard vulnerabilities for keylogging vulnerabilities.
A more realistic exploit is a Flash banner on another tab intercepting the password in the clipboard. This is why offline password managers automatically expire the clipboard though.
The danger of discouraging complex or long passwords is far greater than either of these two attacks, both of which rely on the user's system already being compromised.
Heh, both my banks (Banco do Brasil and Santander) are worse. 6 characters, numbers only! "For my safety" they recommend not using my birthday - how thoughtful.
It's the personal identifier (kind of like a social security number, I guess? You write it on basically every contract you sign) and a 4-digit PIN here in Spain. Stupidly insecure.
Attacker's first attempt has a nonnegligible chance of success. Attacker can just do one attempt against one account and move to attacking a different account after each failure.
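To put illustrative numbers on that (the figures below are my assumptions, not from the comment): with a 6-digit numeric password, each guess succeeds with probability one in a million, so a single guess against each of 100,000 accounts is quite likely to crack at least one, while never tripping any per-account lockout.

```python
p_per_guess = 1 / 10**6   # 6-digit numeric password, uniformly chosen
accounts = 100_000        # one guess against each account

p_at_least_one = 1 - (1 - p_per_guess) ** accounts
print(f"P(at least one account cracked) ~ {p_at_least_one:.3f}")
```

In practice the odds are even better for the attacker, because people disproportionately pick birthdays and other guessable digit strings.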
It's been a looong time ago but I remember when some instant messenger application was found to be performing authentication client side -- i.e. "Hey server, I'm $user. I promise!" and you were in.
I want to say it was Yahoo Messenger but my memory could very well be lying to me.
WhatsApp used to use your device's MAC address for authentication. A quick screenshot of the victim's settings page would be enough to send and receive messages in their name. Since WhatsApp does not store messages after they have been delivered, the victim would never see the messages sent from their number (except when looking at the recipient's phone). You could, however, realize that your account had been hacked when you noticed that some messages were not arriving (they would arrive at the attacker's client only, and WhatsApp will not transmit an already-received message again).
The only fix was to buy a new phone and hope nobody would take a screenshot of your settings page again (or to spoof your MAC address, which would not always work).
What exactly is wrong with offering SMS 2FA? I don't have a smartphone, but I have a great little prepaid phone. Why should I get no features just because they are not necessarily as good as it gets? Also, as far as I'm aware, all of the major "attacks" on SMS 2FA just exploit the fact that a smartphone can be compromised in many ways. I have much less attack surface: an attacker would need to reprogram my undocumented exotic-architecture phone via a bug in a parser that is probably too small to contain bugs of that nature.
The other way is SMS MITM, which on some networks is demonstrated feasible, but requires basically setting up an SDR near the victim, a lot more complicated.
With my prepaid provider, customer service is shoddy but would need considerably more to do a number port than just the number.
By removing SMS 2FA you gain nothing, and I lose my only viable second factor.
All the major 2FA attacks I have read about involved social engineering of the phone provider's customer service to number port. The thing is, since it is not a software system but depends on humans, attackers can keep trying until they get a CS rep they can manipulate.
> ... hackers can bypass the encryption protections by exploiting SS7 to create duplicate accounts that receive all the messages intended for the target phone.
> This is done by tricking the telecoms networks into believing the hacker’s phone has the same number as the target’s. That means they can set up a new WhatsApp or Telegram account with the same number and will receive the supposedly secret code that confirms they are a “legitimate” user. From there, they can impersonate their target, sending and receiving new calls and texts.
There is a TOTP/Google Auth 2FA application for J2ME, which will run on many feature phones: http://totpme.sourceforge.net/
In addition, 2FA systems are not limited to devices the consumer already has -- Paypal could easily send you a device that generates a one-time password, or that uses a challenge-response protocol to do so.
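For reference, the TOTP algorithm that those apps and hardware tokens implement (RFC 6238) is small enough to sketch with just the standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time 59s -> "94287082"
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

Nothing here requires a smartphone; any device with a clock and HMAC-SHA1 can generate the codes, which is why the J2ME app linked above works.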
They should be fixing the service providers and not blaming Google, as in the article from a couple of days ago. (Of course, if a nation state is trying to hack you, good luck!)
The attack angles are slightly different but it's not really any safer. The classic scenario is to call the provider to say the SIM card was "stolen" and either have them send the new SIM card to an address you control, or if that is not an option, snatch the new card as it arrives.
As I just mentioned elsewhere on this thread, SMS isn't the problem here. I use a VeriSign dongle for PayPal 2FA but PayPal still offers the same option of using security questions instead. I was previously under the reasonable assumption that the security questions form was at least handled correctly, but apparently not.
I have both a YubiKey and an auth app but can't seem to find a way to use them with PayPal. Do you have some kind of special account, or is that a feature bound to a certain market?
I don't think there's anything special about my account. It was a "personal" account when I created it, probably almost 15 years or so ago, then upgraded to a business account maybe 8-10 years ago.
I don't have my password handy right now so I can't login to check, but look for settings related to their "security key". I don't know if they still do or not but at one point they offered a hardware OTP generator (similar to the old RSA SecurID key fob) for a one-time $5 fee. Alternatively, you could use an existing one you already had just by entering its "ID number"; I used the IDs of my Symantec VIP Yubikey and also the app.
Sorry I can't be more specific or give you better guidance. I know that the option does exist, though; perhaps just explore the available options and maybe you'll stumble across it. Good luck!
This seems like a good time to rant about PayPal 2FA and its poor usability.
Every time I open the PayPal app I have to wait for a text message and type a code across. That should not be necessary! PayPal should count the app as the second factor and only ask for the password. I am happy to use 2FA with Google because I only have to use it on a new device, or once a month or so in the browser.
Second, support 2FA apps like Authy already. SMS based 2FA is both insecure and unreliable.
This is scarily simple. Profit indeed for a black hat. Coupled with a recent post about Gmail on how phone carriers are the weakest link, I just don't feel safe with anything but a dongle based 2fa these days.
Unless the master key is compromised allowing anyone to generate authenticator codes, as I seem to recall happened a few years ago with a major provider.
Am I the only one who found it odd that the author had internet access, but there was no phone signal? Maybe it's because I'm Kenyan, where phone penetration is much higher than internet penetration, and where internet access over GSM has the biggest share of the internet access pie chart.
This often happens when I'm travelling internationally. If I plan on buying a local sim card instead of purchasing a roaming plan - I might not have access to my SMS until I get back home.
Get a next-gen phone; they should all do Wi-Fi Calling now. This causes your phone to tunnel the cellular service via the internet link, and you get full call and SMS coverage.
Of course, 2FA via SMS is a bad and deprecated pattern and needs to die! But you can use your phone overseas without roaming, which is pretty neat.
Not really. If you're American international roaming fees are usually pretty steep so many times if you want phone service you get a local number. WiFi is ubiquitous, especially hotel wifi.
Just looping through input arguments from the client, validating them, and then acting on them gives the client control of the code execution.
It's not enough to validate each input argument. You must also verify that all parameters are really there and that no extra parameters can slip into the system. The whole combination must make sense. Enumerating all used parameter combinations in a record that can be changed easily is one way to solve this.
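One way to sketch that rule (field names here are hypothetical): validate the parameter set as a whole, rejecting both missing and unexpected keys, before validating individual values.

```python
# Hypothetical: the exact set of fields this endpoint accepts.
EXPECTED_FIELDS = {"user_id", "answer_0", "answer_1"}

def validate_request(params: dict) -> bool:
    # Reject both missing and extra parameters: the key set must match exactly.
    if set(params) != EXPECTED_FIELDS:
        return False
    # Only then validate each value individually.
    return all(isinstance(v, str) and v for v in params.values())

print(validate_request({"user_id": "u1", "answer_0": "a", "answer_1": "b"}))  # ok
print(validate_request({"user_id": "u1"}))  # missing answers: rejected
print(validate_request({"user_id": "u1", "answer_0": "a",
                        "answer_1": "b", "debug": "1"}))  # extra key: rejected
```

This is essentially a whitelist on the request shape; most web frameworks offer schema validators that do the same thing declaratively.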
I'm assuming that the relevant code is simply an if statement checking for the existence of the URL parameters, not even checking whether the security question answers are correct.
Or they designed it to show a variable number of security questions (so management could come along and say "we need 4 questions now" without causing havoc). Then they'd iterate through the responses, verifying them against the appropriate question. Simply forgetting to enforce that the number of questions asked has to equal the number of responses sent would cause the described vulnerability.
Nowhere in the article does it say that the POST data was in the URL. As I understood it, he was editing the request body before the request was sent to PayPal's server.
isset() does not handle all the corner cases: it returns true for empty strings and false for NULL. You should use a framework helper like Laravel's Input::has('key').
By design, the type of security challenge should not be a client-supplied option. The API endpoint should not check for $selectedOption == SECURITY_QUESTION; with that check you are still vulnerable to the same attack.
You should always return something; having just return; is bad.
Finally, you should use something safer than PHP, since a mistake can cost you money.
    def validate_security_questions(question_0, question_1):
        if not question_0 or not question_1:
            raise AuthException('Invalid security questions')

    try:
        validate_security_questions(question_0, question_1)
    except AuthException as ex:
        # TODO: present the error to the user
        pass
I'm happy to see that the article doesn't have any BS that I have to ignore. It's a simple page that only tells the 'required' story. As a reader, I want more people to cut the crap about 'blah blah blah' and get to the subject.