I'm usually the first person here to jump to 37s' defense. I know some of their people, they're hometown heroes, and I use and like their products.
This is hard to defend, guys.
It is literally the-simplest-thing-not-to-fuck-up. Nobody's asking you not to have security vulnerabilities. In fact: nobody's even asking you to fix vulnerabilities. We just need a reliable way to communicate with you about them.
If you're selling accounts on a web app, you need:
* A security page
* With a PGP key
* And an email contact
* of someone who will write back
* who knows what a security vulnerability is
* and who will write back quickly.
That's it. Do that, and you're not a punch line. If someone dumps zero-day about you onto Twitter, you're already two steps ahead in the PR war, because you had a reasonable process, and the researcher ignored it.
Bonus points --- things that are trivial to do, but that nobody's even asking you to do:
* You can assign special issue numbers to vulnerabilities, to make the researcher feel like an XSS disclosure isn't the same thing as a bug in your online help.
* You can thank researchers privately, and let them know that you'd really like them to keep disclosing things to you --- you could even give them (wait for it) a phone number.
* You can do what every vendor with a real security team does, and keep a public web page thanking people who have discreetly disclosed vulnerabilities to you.
You're exactly right, Tom. We dropped the ball on having the security@37signals.com account set up before this issue, so reports went to our normal support team. A specific email address with an associated GPG key has since been added to our security page, and a person is tasked with responding. This was added on August 23rd, when the problems with the process around the previous XSS issue were discovered.
I never used to have travel insurance, but after my first intercontinental flight my suitcase broke and I learned the importance of having it.
In a perfect world your support team would be able to distinguish between someone rambling about bytes and an actual security issue. That's hard without a lot of technical knowledge.
I believe that's why the Rails security team responded rather quickly and the 37signals support team didn't. I'm sure they will do better in the future.
The problem isn't that 37S tech support doesn't know how UTF-8 works. The problem was that security reports were routed to tech support in the first place. Again, the solution to this problem is a single web page with just a couple pieces of information on it.
It would be easy to defend if being an airline passenger were even vaguely analogous to being an application service provider who charges money and promises your data is secure.
Brian, I'm on the receiving end of security@37signals.com and @rubyonrails.org. I read your post with great dismay, to put it mildly. You're understandably pissed: we whiffed on our response to you by changing venue to Rails security without keeping you in the loop.
This is my fault. I identified it as a Rails issue and requested that you forward your findings to the Rails security team so we could investigate in concert.
Craig here at 37s narrowed down a root fix with Michael, Rails' security ombudsman, who then enlisted Manfred's help to track down and repair the root cause. What you see today is the end result of those efforts. The security process worked, but you only saw the Rails arm of it. The apparent 37signals arm of it amounted to runaround. Completely not OK.
There are still a couple of issues Brian brought up that you haven't addressed.
The main one being the hubris of the copy on your security page. Declaring users' data to be uncompromisable, then justifying this by listing mostly physical restrictions at the datacenter, seems to ignore the rather larger security issues for web-based applications. A firewall and the latest security patches do not make one immune.
His other peeve seems to be the perceived bullshit of your support team saying they had replied to his initial complaint when it appears (at least to him) that they had not, and then putting the blame on his spam filters.
Don't ask 37s to meet a standard that nobody else meets. It's just muddying the real issue, which they're clearly trying to address.
This comment, btw, isn't about 37s. It's about the singularly bad advice that web startups should have a fully-transparent conservative "security" page that talks about cross-site scripting and CSRF attacks, when their competitors have pages about "state of the art firewall security". To normal people (ie, customers), the "state of the art firewall security" people sound like they know what they're doing.
I can assure you we're looking into this entire busted chain of communications. The way this was handled (by us) was completely unacceptable. I am not a happy man this morning.
Yes, and that's a good page. Now tell me how to navigate to it on the OS X site, and note how much security marketing fluff you'll see before you ever find it.
I don't even think Apple is a bad example of the form. I think it's entirely reasonable for them to market security on their main pages, and leave the researchers to find their support page on Google. There are tens of researchers, and millions of customers.
Apple has a lot of really smart people working in security research and software security. Some of them are friends of ours. And some of those people are frustrated with Apple for any number of reasons. But none of them --- in fact, nobody I know that works in software security --- is particularly upset about http://www.apple.com/macosx/security. It is what it is.
Funny, but I've always thought the 37signals team was bipolar.
On one hand, they're pretty much "our way or the highway" about any support concerns/feature requests.
But, once caught out in public about any issue, they're all over it.
I suggest that a very strong dose of self-administered humility would go a long way for their founders. That doesn't mean being un-opinionated; it just means being realistic about human nature and its vagaries as it applies to themselves.
Microsoft is actually decent at security now. The AV they released for free was on par with a few commercial products out there (all AV at the moment is pretty bad, though, if you're curious). The real difficulty with Microsoft and security is countering the reputation for bad security that they earned over the past several years.
That's because Microsoft is good at anything that they throw resources at. Unfortunately, we haven't thrown enough resources at determining what to throw resources at.
Here's a hint: throw resources at supporting web standards in Internet Explorer. Seriously, that is the main reason I hate Microsoft, and around here I'd wager I'm far from alone.
IE's fuckedness is inexcusable. If you actually fixed it, 90% of the reasons I hate Microsoft would just melt away that instant.
And until you fix IE, it's just "DOS Ain't Done til Lotus Won't Run", Web Edition, as far as I'm concerned.
Well, Microsoft has no strategic interest in helping the web, as that would interfere with their desktop products. This is well established. IE was just a big Trojan to slow down (yes, slow down) and mess up the Web.
Well, yeah, obviously. And the trojan is still doing damage.
Which is why I can't take astroturfing douchebag shills like snprbob86 et al and their promises of a happy happy fun fun friendly new MS seriously. A leopard can't change its spots. And until I see IE change - MS is the worst company in tech and everyone who works for them is culpable.
Regarding astroturfing: I work for Microsoft. It's no secret.
Regarding promises of a happy happy fun fun friendly new MS: I have no illusions. All I'm saying is assume incompetence, not malice. I'm also going further to say: assume there are lots of smart, well-intentioned people working hard on things that involve problems and complexities beyond the understanding of an engineer who has never worked on anything the scale of the Windows ecosystem.
For the record: I don't have Windows installed at home. I run Ubuntu on my desktop and Snow Leopard on my MacBook. I have an iPhone and primarily use Google web services. Hell, I was an intern at Google.
Personally, I'm at Microsoft to work on Xbox/Gaming. Among my co-workers I'm known as that guy who won't shut up about open source software, startups' web services, and Apple's great taste. You can call me culpable if you want, but I'm part of the solution, not the problem.
He was fucking linking to the IE site - how much more blatant do you want it?
Paid shills are douchebags and I am utterly unrepentant in saying that. Just because someone has an account here doesn't transform them into a paragon of integrity and virtue. You work for the bad guys, you take the criticism, that's how it works. Or are you suggesting there should be no social penalty whatsoever for contributing to the crippling of the WWW? How much damage have they done? How much is 5 years of progress worth?
I will never tone down my attacks on these pricks. It's my honest opinion and I'm going to say it. I feel no need to maintain a "professional" demeanour or keep up appearances. I hate those motherfuckers, and I want them to know it.
Despite the somewhat inflammatory ad hominem remark (I had to look up "astroturfing".. yay urbandictionary), I do agree with what is likely a controversial sentiment:
Well, good enough at least. ;-) Honestly the AV was a good move. Give stuff like that away for free for long enough and the public perception of Microsoft and security may change.
On the other hand, over the years people have hated them and filed antitrust claims against them for including free stuff in their releases. Norton used to make a file manager before the Windows one was any good... now their AV product faces a built-in competitor.
Hrm, interesting. I didn't know that Microsoft was leading the industry in security. If we're talking about the commercial OS market, I suppose that makes sense because you're pretty much only comparing them with Apple.
Well, it's tough, right? SE and Trusted both have lots of kernel hardening features that Win7 doesn't have. And even base Linux and FreeBSD are "simpler" (until you add OpenSSH, Apache, and the NFS stack). Win7 has baggage; MSFT is still paying for the DCOM mistake.
You have to counter that, though, with the hundreds of thousands of dollars Microsoft spends on security testing every functional unit of the shipping product. It is not unpossible that they paid someone at Leviathan, iSEC, or IOActive to spend a week auditing Minesweeper (they haven't paid us to do that).
They don't just audit their code. A couple times a year, they do a little internal conference called "Blue Hat" (pace Black Hat), which, as the beauty pageant for all their consulting vendors, tends to get the best researchers from those firms as speakers. They highlight trends and findings for execs, and try to get some of the benefit of the audits spread across multiple projects.
There's also an entire layer of researchers, testers, and project managers on top of the security tests. Some of those people (like LeBlanc and Howard) are actively turning the results into curricula for training, or into new code standards, or even into changes in the shipping VC++ config. Other people develop automated testing tools. Still others develop better, more secure APIs.
When you think of the resources Google has, you assume that the best developers there all have access to a MapReduce cluster that will run their "hello world" test programs against the corpus of the entire Internet as of, I dunno, 3 weeks ago. Only Google has that resource. Microsoft has more ongoing security test results than any other company in the world --- even more so because they had so. much. catching. up. to. do. from the late '90s. That has to be a killer resource for them.
So, we'll see. I wouldn't run a Microsoft OS as a server, for a lot of reasons. But I have more respect for the work they're doing --- and the intentionality of that work --- than I do for a lot of Unix security projects.
Everything OpenBSD did to fix NetBSD's security in the '90s, Microsoft adopted on a massive scale, and then spent tens of millions of dollars to improve.
Sorry for the long comment, I just don't want to come off like I'm sniping at you, or trying to start an OS war.
Actually, the long comment is much appreciated. It's a very interesting subject for me. I wasn't trying to say that I doubted you, just that I'm no expert. :)
As I understand it, Rails' string escaping would treat an invalid byte sequence (eg, 0xFF, 0x3C) as a single multi-byte code point, and thus not filter it, even though 0x3C (which is '<') should have been escaped.
The browser, however, would correctly treat 0xFF as an invalid initial byte, and then interpret the next byte, 0x3C ('<'), independently.
So, you could pass arbitrary characters through Rails' string escape functions by prepending an invalid initial byte, and thus cause the browser to interpret arbitrary JS/HTML.
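To make that concrete, here's a minimal sketch (hypothetical code, not the actual Rails escaper) of that class of bug: an escaper that walks the input as UTF-8 "characters" and trusts any byte >= 0x80 to begin a valid two-byte sequence.

    # Hypothetical escaper exhibiting the bug class described above.
    def naive_escape(input)
      out = "".b
      i = 0
      while i < input.bytesize
        b = input.getbyte(i)
        if b < 0x80
          # ASCII byte: escape '<' (0x3C), pass everything else through.
          out << (b == 0x3C ? "&lt;" : b.chr)
          i += 1
        else
          # Bug: assumes a valid two-byte sequence and copies it verbatim,
          # so a '<' hidden behind an invalid lead byte is never inspected.
          out << input.byteslice(i, 2)
          i += 2
        end
      end
      out
    end

    payload = "\xFF<script>alert(1)//".b
    puts naive_escape(payload)
    # The '<' survives unescaped; a lenient browser drops the bogus 0xFF
    # byte and happily parses the tag.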
I don't think taint mode is helpful for this. All data comes from outside of your program, but you somehow still have to display it. As we've seen recently, this is hard to get right. Escaping HTML is one thing, but what if you want to let a user type in a URL? Make sure to exclude javascript: URLs. Even if you whitelist only "http://..." URLs, how do you know that a browser bug won't allow an attacker to inject JavaScript, compromising any account used by a user of that web browser?
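Even that whitelist has to be done carefully. A hedged sketch (plain Ruby, nothing Rails-specific): parse the URL and check the scheme, rather than substring-matching for "javascript:", which mixed case or embedded whitespace can slip past.

    require "uri"

    # Allow only http/https URLs from user input.
    def safe_link?(raw)
      scheme = URI.parse(raw.strip).scheme
      !scheme.nil? && %w[http https].include?(scheme.downcase)
    rescue URI::InvalidURIError
      false
    end

    safe_link?("https://example.com/")   # => true
    safe_link?("JavaScript:alert(1)")    # => false
    safe_link?("java\tscript:alert(1)")  # => false (fails to parse)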
Basically, web browsers need taint mode. The programming language that produces the web page is a whole other issue.
I think what we've really learned from this is that the current JavaScript security model is not good for what people are using the web for these days.
We really need something like a "<without-scripts>" block, where anything inside could never use JavaScript (including links that are to javascript: URLs). This would make life a lot easier for web developers.
Then XSS attacks would insert </without-scripts> before <script></script> =)
What we need is literal separation of control statements (eg, <script>) from content such that neither can be easily misinterpreted, but that would be a significant departure from existing design.
A modernized, preferably static, version of Perl's taint mode might work. You need two types of strings, one for trusted and one for untrusted data; all your output functions accept only trusted strings, all your input or request parsing functions return untrusted strings, and all naive string manipulation functions return untrusted strings if at least one of their arguments is untrusted. Then the possibly vulnerable code is limited to a few statically identifiable routines that take untrusted strings and return trusted ones. They may still be buggy, but at least you know which parts of your code may introduce vulnerabilities and need special attention.
And with modern type systems, these types might even be phantom types incurring no runtime overhead. Although things like the difference between "safe for passing to a browser" and "safe for passing to my SQL server" might complicate the architecture.
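A runtime sketch of the scheme (class names are hypothetical; a phantom-typed version would enforce the same thing at compile time with no wrapper objects):

    require "cgi"

    # Output helpers accept only TrustedString; the sole way to make one
    # from user input is the escaping routine, so every trust boundary
    # is statically identifiable and auditable.
    class TrustedString
      attr_reader :value
      def initialize(value)
        @value = value
      end
    end

    class UntrustedString
      def initialize(raw)
        @raw = raw
      end

      # The one auditable place where escaping happens.
      def escape_for_html
        TrustedString.new(CGI.escapeHTML(@raw))
      end
    end

    def render(fragment)
      raise TypeError, "untrusted data in output" unless fragment.is_a?(TrustedString)
      print fragment.value
    end

    comment = UntrustedString.new("<script>alert(1)</script>")
    render(comment.escape_for_html)  # prints &lt;script&gt;alert(1)&lt;/script&gt;
    # render(comment)                # raises TypeError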
Ruby has a taint mode for String too. I think it's used in Rails, or maybe they rolled their own, but the concept is definitely there.
Problem is, at some point you have to be able to display user-entered data. You indeed mark it as tainted, or equivalent, then escape it as best you can. The issue here was a bug in the escaper. Tainting was working as planned.
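For reference, the API looked roughly like this (illustrative only; the taint mechanism was removed entirely in Ruby 3.2, and propagation details varied by version):

    require "cgi"

    user_input = "<b>hi</b>".taint    # simulate data read from IO,
                                      # which arrives tainted
    user_input.tainted?               # => true
    escaped = CGI.escapeHTML(user_input)
    escaped.tainted?                  # => true; escaping alone doesn't
    escaped.untaint                   #    clear the flag -- you vouch

Which is exactly the point: tainting tells you where to look, but the escaper it routes you through can still be buggy.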
That would require escaping the HTML on the server prior to output, which is exactly what we already have (and regularly fails, even when people think they're doing it right).
The real analogue to SQL query parameter binding would be an output format that keeps the two distinct.
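Something like this is what that might look like (bind_html is a hypothetical helper, not a real library): markup comes only from the template, and data can only enter through a channel that always escapes it.

    require "cgi"

    # Substitute :name-style placeholders, escaping every bound value.
    def bind_html(template, values)
      template.gsub(/:(\w+)/) { CGI.escapeHTML(values.fetch($1.to_sym).to_s) }
    end

    bind_html("<p>Hi, :name!</p>", name: "<script>x</script>")
    # => "<p>Hi, &lt;script&gt;x&lt;/script&gt;!</p>"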
Ironically, applications that bypass HTML entirely and use JavaScript to build up the DOM (e.g., Cappuccino apps) are largely immune to XSS attacks (as long as you avoid innerHTML and the like).
It's probably better to disable the JavaScript engine based on certain heuristics, for instance when there is invalid character encoding in attributes.
You found a security exploit, feel special. Finding an exploit isn't a voucher to rant against the people responsible for it. The bottom line is that nobody can be 100% sure that their data is secure after they've put it in the hands of a third party.
The author did not claim anybody can expect or provide 100% security. The write-up was (among other things) about something more important - how do companies respond when presented with an important security issue. 37signals responded fairly poorly and that's useful information. Interestingly, this is not the first report of a somewhat strange attitude they seem to have regarding possible exploits -
I didn't read the post as a rant at all. If anything I think that the author has articulated an important factor in evaluating any third party provider's security infrastructure: attitude.
"Web application security is still an immature field, and many of the layers are sufficiently poorly designed that issues like this will pop up for a good long while. Just like buffer overflows have been a weak spot for C security as long as the Internet has been around, escaping issues will continue to be a weak spot for web security for as long as we're afflicted with this particular architecture."
It seems like a field not only in its infancy but also oddly unglamorous and under-reported. There's no repository (that I know of, at least) of vulnerability reports of major web apps, for instance, yet it's easy to look up an exhaustive history of Flash vulnerabilities down to the seventeenth decimal sub-version. And yet the various XSS/CSRF/etc vulnerabilities are easily as dangerous and as exploitable. Twitter's dreams of a billion users and a new internet were not exposed by a buffer overflow, after all.
That's possible, especially since I'm not a 'security practitioner' and I'm essentially talking about a subjective personal impression - that it's taken less seriously, is less reported, and instances of specific vulnerabilities or exploits in specific apps are not tracked the way they are for operating systems and major applications. This may, in part, be because in the case of web apps fixes are immediately available to all users. On the other hand, you can head to the RoR download page right now and click your way to downloading the current vulnerable version of RoR. At no point will you get a suggestion to check for recent security advisories or patches.