If everyone is really going to take the route of "My X Framework is fine b/c nothing's been reported" then I'd like to contribute these links showing vulnerability breakdowns...
You are shooting your own feet with these links, you know. According to your data, Django had -ZERO- SQL injection & code execution reports; now compare that to RoR, which had 6 SQL injections & 3 code execution reports since 2009. Even if you went by just the numbers, RoR had way more vulnerabilities, and if you also take into consideration the kind of vulnerabilities, I can tell you I feel way safer on Django than on RoR.
How many times did you have to stay up late at night to patch your framework?
That seems a little unfair on PHP if taken at face value. I don't know PHP, but doesn't it come with things like database client libraries and templating? That's not really comparable with the core Python distribution.
Presumably a fairer comparison would compare (Python + Django) with (Ruby + RoR) with PHP?
Would it ever be possible to have a self-updating framework?
As in, I use Rails to develop web applications. In the past months, I've had to painstakingly go back to every single app I've ever worked on, and manually update it in whatever minor way it needed updating. Now, I'm going to do that again.
If you consider that I'm going to continue to build Rails applications, the number of apps I will have to update every time a security vulnerability comes out will be larger and larger. For a framework that prides itself on sane defaults, it doesn't seem quite sane to have to worry about taking down, updating, and then relaunching every app you ever had when one of these vulnerabilities comes out.
I don't mean updating a Rails 2.3 app to 3.2 automatically, just applying these security patches automatically, or prompting the user to do so. Our operating systems do it, our IDEs do it, our programs do it, why can't a framework? I'm not saying it would be easy, but I'm sick of having to be subscribed to these email groups just to start the manual process of fixing everything.
I don't know if I like the idea of a single point of failure; whatever service pushes the update would have a big, fat Wile E. Coyote target on its back.
I am currently trying to figure out something similar for a Django-based app. I include a version in the settings file that I plan on comparing against some sort of version array on a central website.
If a new security update is available, a conditional notification will show up for admins, but they will still have to update somewhat manually - I can't set up a cron job to trigger the whole procedure, especially because it might wreck their service, depending on how it works.
'I don't know if I like the idea of a single point of failure; whatever service pushes the update would have a big, fat Wile E. Coyote target on its back.'
This is a solved problem though, right? Antivirus companies deal with this when they push out definition updates that render backdoors ineffective, and yet while they probably are targeted by the virus makers, they have been quite resilient.
Not sure about the AV servers themselves, but a common thing for malware to do is to MITM or rebind DNS entries so that AV software updates are served from the attacker's server.
That still doesn't automate the process, and there is no telling whether 1.0.1 is a small update or a security update. Having said that, I really want a service like this, and I wanted to make one for Python (PIP) at one point, but hopefully someone more competent will do it or has already.
A different version notation is needed to know whether the version delta between your installation and the most recent contains vital security updates.
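For what it's worth, here's a minimal Ruby sketch of that gap (the version numbers are made up): RubyGems can tell you that a newer release exists, but nothing in the number itself tells you whether the delta contains a security fix.

```ruby
require 'rubygems'

installed = Gem::Version.new('3.2.10')  # hypothetical installed version
latest    = Gem::Version.new('3.2.11')  # hypothetical newest release

latest > installed
# => true, so an update exists...
# ...but "3.2.11" alone can't tell you whether it's a cosmetic bugfix or the
# patch for a remote code execution hole; that information lives only in
# release notes and security announcements, which is exactly the problem.
```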
You could easily automate this (cap, puppet, chef) if you have a lot of installs. If you genuinely don't want to test updates, you could run it on a schedule.
What it doesn't do, and can't do, is guarantee that security updates will never break your app; the maintainers do quite a good job of isolating them, but you still have to do some testing. There is possibly an argument for LTS releases which receive few new features and focus on bug fixes, but what you're complaining about here are really the complexities of running multiple web apps/servers, not something a framework can really help with.
I don't think you have to worry about this update if you have already updated to the latest anyway (which you should have done if on 3.2.x).
That assumes you know about the vulnerability, though. If you aren't subscribed to an email group, you might not find out for some period of time. The majority of Rails programmers, or beginners even, aren't a part of these email groups, so they might leave vulnerable code up on their servers indefinitely -- not good.
It might be better to have a cron job that runs the bundle update command nightly, then restarts the server. But again, this depends on you being smart enough to realize this, which, since we are talking about sane defaults, isn't something we should assume.
The whole issue is that Rails has tried to package what's 'probably a good idea' into the framework by default for quite a while. Unless every guide ever tells you to subscribe to the rails-security group, only those 'in the know' will know to do so. What's better: to have access to these updates by default, or to have half the Rails apps out there potentially compromised?
As long as you're using a reasonably good deploy script (capistrano, moonshine, Heroku, etc) you shouldn't have to worry about updating gems on your server.
Most deploy scripts will look at your Gemfile.lock and update if necessary during the deploy (or just run bundle install every time, which won't give you a locked version error if your Gemfile.lock matches your Gemfile).
Definitely, and that's the approach I personally would take. I think jarin was looking for a way to quickly update all his servers without having to redeploy for every app.
Still doesn't fix the fundamental problem though.
Let's say you automatically install updates on a staging server and automatically deploy to production if all tests pass.
What do you do when you're faced with a choice of deploying an app with a few failed tests (for perhaps not totally clear reasons) or leaving an old version up with a vulnerability?
I missed this earlier. I think it's a good question and point about tests. More and more companies are relying on having good tests, but the situation you propose certainly can arise. It would be a technical and business decision at that point. What is the exposure? What is the risk? What is missing? How long would it take to replace it? What kind of PR do you want to send out? If I got a message that said -- hey, we know this feature is broken, but your security is more important to us -- I probably would accept that as a customer.
Because you've built a ton of custom stuff on top of it that may be relying on various quirks or be designed around bad design decisions made in earlier versions.
You still get this problem with browsers too, for example: the number of times a Firefox update has resulted in half of my plugins breaking.
Agreed, there is a small possibility that the patching might break the app in a major way, but for the majority of apps (especially those developed by one person just for fun), the benefit of being patched will likely outweigh the chance that the app breaks.
For those with larger organizations and larger apps, it would be the same process that big companies follow whenever a Windows operating system patch comes out: they test, test, test, and once they are sure the update will work OK, they deploy the patch.
Stop installing the latest, shiniest things manually and start using the default package manager of your Linux distribution. Then all the security maintenance you will probably need is to do the equivalent of "apt-get update; apt-get upgrade" once in a while. Yes, it was that easy before all these people who knew better came along and decided that they really needed the latest Ruby installed via rvm, the latest gems, etc. All these Ruby/Rails deployment "best practices" strike me as advice given by people who have close to zero experience maintaining a mission-critical application over a long period of time.
This is the worst possible option and I strongly recommend you minimize your dependencies on your OS package manager; if possible, build your web server from source too.
You need to be prepared at all times to deploy workarounds to newly disclosed security problems. Package managers can and do delay fixes to accommodate the lowest common denominator of users. You cannot run a high profile application and be at the mercy of whoever is administrating your OS package manager.
While I usually agree with your advice, I think this approach, while theoretically correct, is actually damaging to the majority of your audience here. The closest analogy I can think of is that it's like requiring users to change passwords every 30 days: great in theory, but in practice it's a disaster.
The problem is that it is simply untenable for all but the highest-profile sites. Finding good ops people, even in the bay area, is extremely hard. Most sites are only going to realize that a security update has been released when their package manager tells them it has, and updating that package is usually a 30s process.
The companies I've seen have a hard enough time keeping track of security updates with package management. For a small to medium team with two, one, or even zero dedicated ops people, asking them to custom-compile (for example) a webserver, ruby implementation, and other critical libraries (openssl, glibc, etc.), subscribe to the relevant security mailing lists, and follow along with updates and security patches is tantamount to having them leave unpatched vulnerabilities on their systems for months or even years.
If you ask your users to change passwords every 30 days, there are inevitably going to be a few who take it seriously and generate and remember secure passwords every single time. But the vast majority are going to use weaker passwords than they otherwise would have, and duplicate those passwords across accounts as much as they can figure out how. Likewise, if you ask already overworked ops guys to manually compile and keep track of security vulnerabilities for their webserver and dozens of libraries, a few are inevitably going to keep on top of things and release fixes minutes after vulnerabilities are announced. But the vast majority are going to simply give up after a month or two and be significantly worse off than if they just use Ubuntu's automatic security package updates.
I don't know what to tell you other than that this is something you actually have to get good at if you're going to run a high-profile app or hold sensitive information of any sort.
I'm not saying you need to be able to find your own nginx vulnerabilities or even write your own patches. But reinstalling nginx or Apache from source shouldn't be a science project for your team; you should know that you can get your prod servers running on a from-source build.
Sink in the time to make sure you can do that now, so you aren't caught totally flat-footed when an emergency happens.
I agree with your intent. I do. If you're a high-profile site, or you're storing extremely sensitive customer data, this is absolutely something you need to be doing.
But your audience here is startups. Almost always, these startups are cash-strapped, time-crunched, and have zero dedicated ops guys. Ideally, even these types of businesses would prioritize security to the level you're asking.
In reality, as I said in my previous comment, almost none of the startups I've worked at have had the ops capacity necessary to handle this. Even with package management, servers go months without having critical security patches applied. Asking these types of companies to do something that increases the ops overhead necessary to apply patches is going to result in a worse outcome. Keep in mind that it's not simply compiling from source and having infrastructure to apply security patches across multiple boxes. It's also keeping an eye out for reported vulnerabilities — and not all projects have dedicated security mailing lists. Not all projects even report this information via mailing lists.
I wish it were different. I understand where you're coming from. But the incentives are set up ass-backwards, and until companies start having serious liability for data breaches, protecting customer data simply isn't going to be a priority. In the meantime, encouraging them to set up their infrastructure in a way that requires even more ops effort when they're already struggling to keep up is going to have an adverse outcome.
I'd like to think that if a company doesn't have the resources to properly maintain security patches on their deployed applications, they would use some app hosting platform (like Heroku), but I suspect that there are indeed many companies who are set up like you describe.
That doesn't change the fact that relying on your OS package maintainers to properly update packages results in the same "having them leave unpatched vulnerabilities on their systems for months or even years" (at least the "months" part if Ubuntu is any indication).
This would seem to be a "damned if you do, damned if you don't" scenario.
Also, the 30 day password change thing isn't even something that's "technically correct". It's a "why do we cut the ends of the roast off" vestige from old DoD recommendations.
Hot-off-the-presses updates often contain serious bugs themselves as well. I lost count of the number of Drupal patches-to-patches-to-updates I've had to deploy, each coming within days or sometimes hours of the prior one. It got to the point where unless it was a critical remote vulnerability I would never apply a fix that had not been out for at least a week with no subsequent serious bug reports.
And this common refrain is the shrill cry of someone who hasn't had to manage a Ruby on Rails application. Application libraries are the domain of the developer, not operations. Your job is to give me a stable system for me to deploy on top of, not to dictate what I can and cannot deploy.
Rubygems and Bundler make updating your application stupidly easy. I blogged about it here: http://bottledup.net/2013/01/10/bundler-and-gemfile/ but if you have a good Gemfile and automated CI, then all you need to do is `bundle update rails` and then deploy your code. This is at least as easy as using apt to update your stuff.
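For illustration, a Gemfile along these lines (the exact constraint here is my assumption, not something taken from the linked post) is what makes `bundle update rails` safe to run for security releases:

```ruby
# Gemfile -- sketch only; the constraint below is an assumed example.
source 'https://rubygems.org'

# "~> 3.2.11" allows 3.2.12, 3.2.13, ... but not 3.3.x, so `bundle update rails`
# picks up patch-level security releases without an unplanned minor upgrade.
gem 'rails', '~> 3.2.11'
```

Then let CI run the suite against the regenerated Gemfile.lock and deploy.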
Not if you want prompt fixes to the current spate of Rails bugs. As I write, Ubuntu currently has the fix for the last remote code execution vulnerability in Rails marked as being of "undecided" importance, and still "unassigned", even though rails-core shipped a patch weeks ago. This is remote code execution against any vulnerable app, which is pretty much as bad as it gets.
Debian has a fix for the version of Rails 2.3 that they're shipping, but 2.3 is years obsolete, and officially unsupported upstream.
(comments in the Ubuntu issue reference the Debian fix.)
There are things that the Debian and Ubuntu maintainers do well, but their packaging for Rails and the underlying Ruby language have both been problematic for quite some time, and their use is generally not recommended.
This is not a good idea for certain packages: for example, anything under seriously active development, such as Rails and Ruby, and in other cases SBCL Lisp, and often even Emacs.
Up-to-date security is one seriously important reason.
I have remote code execution working when the syck YAML parser is being used. I've also got RCE working when psych is used (the default in 1.9.x), using a similar trick to the syck one.
I know you're not supposed to share these things, but I found it really useful that last time there were public PoCs we could use to test our patches. Any such thing this time around?
I don't know why this is being downvoted - I don't think this comment is sardonic. It IS good that they're surfacing because that means we can patch them. Better visible than invisible.
The vulnerabilities here only affect versions that have been maintenance-only for almost eighteen months. The last two major versions of Rails (since 3.1, released in August, 2011) are unaffected.
In my opinion, the fact that someone bothered to comb through 2.3.x and 3.0.x to find exploits similar to the last one points to a very good process, not a broken one.
Hardly the same. Let's not kid ourselves into thinking any of the popular software we use is bug-free; that's a nigh-impossible feat for any large codebase. If this Rails bug had not been discovered, it would still be there.
When a particular bit of code gets audited, you tend to find a bunch of holes. And any time you find a conceptual mistake that was made once in code, odds are that a careful audit will find it repeated.
Everyone thinks that they are different, until it happens to them.
Django has had its fair share of patches. This is no reflection on the quality of the project, as it shouldn't be for any sufficiently complicated framework (like Rails).
I also think the vulnerabilities are similar enough to suggest that the first one gave valid reason to ensure the same issue doesn't occur elsewhere. Rather than sweeping these issues under the rug or silently fixing them, they're being responsibly reported, with patches and updates provided at the same time.
No, it points to a project with a larger user base and much greater degree of scrutiny. No web framework is entirely secure - some have just had more of their insecurities made public.
> Similar projects with a similar user base, like Django, don't have vulnerabilities of this severity with this frequency.
Not all Django vulnerabilities have been discovered or reported yet. Just because no one has found or reported a vulnerability doesn't mean that it doesn't exist.
I did not mean this sarcastically. It is much better for bugs, especially security bugs, to surface and be dealt with ASAP. Rails is a fantastic framework, but when you have so many dependencies on external libraries, all contributed by different people, things like this are going to happen. I don't think it has anything to do with Rails in particular.
This simple command line tool to execute arbitrary code on your server works, kids. I'm also north of 90% probable that I could weaponize it to turn any image tag on the Internet into "roots your local machine", and 100% certain I could do so for any page I could coerce to execute JavaScript.
> I'm also north of 90% probable that I could weaponize it to turn any image tag on the Internet into "roots your local machine"
Definitely not saying you're wrong, but I'm not convinced this is doable. Every exploit I've seen requires a request body -- how would you do that with an IMG tag?
Go on a Rails security safari, armed with the knowledge that any YAML parsing is victory, and pay very careful attention to code paths involving Rails/Rack route/parameter processing, especially anything which smells of magic. To clarify: I haven't actually done the work yet.
I'm actually going on a Rails security safari later, though not particularly looking to widen this/these vulnerabilities. I figure I've gotten enough out of the community over the years to contribute part of a workweek and get one more hole plugged.
Not unless nginx/apache routes the request directly to public/. There will definitely be more code-paths to YAML.load, but so far ActionDispatch::Http::Parameters has been the entry point.
I mean "This will also let the adversary root your Macbook, Rails developers, if e.g. localhost:3000 is running an unpatched Rails app."
One would think this is strictly less important than "root your server", but that hasn't been true for 100% of Rails developers I've recently spoken to, so if losing your Macbook is the inducement you need to drop everything you are doing and patch, I will supply that inducement liberally.
My website gets a handful of single-page visits, referred from some real sketchy domains, every day. They are very regular and appear to be automated. I wonder if it's part of a broader scam to get website owners to visit sites which root their dev machine via 0-day browser or server vulnerabilities?
Ah. I wonder if there isn't some JSON that YAML doesn't support, because the convert_json_to_yaml method does a bit of munging. If you take a previous PoC and feed it in raw, without any modification, it corrupts it and makes it invalid YAML; for example, convert_json_to_yaml will put spaces around the ':' character.
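As a rough illustration of why that munging matters (this assumes nothing about the exact convert_json_to_yaml implementation, only YAML's grammar): whitespace after ':' decides whether a line parses as a mapping or a plain scalar, so inserting or shifting spaces can silently change, or break, a hand-crafted payload.

```ruby
require 'yaml'

YAML.load("a:1")   # => "a:1"       -- no space after ':', so it's one string scalar
YAML.load("a: 1")  # => {"a" => 1}  -- with the space it becomes a one-entry mapping
```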
The only meaningful take-away from these continued security vulnerabilities is you shouldn't ever let a rails project you maintain ossify to the extent that you can't easily/safely run "bundle update", commit, and deploy.
(Didn't expect to post this comment twice today, JFC)
Well, there's also an issue of trust that I think is being overlooked.
We now need to ask ourselves, "Can we trust the Ruby community, and can we trust software written in Ruby?"
Before these recent exploits, there were a lot of us who would have already answered "no" to both parts of that question. Now there may be many more people who answer them the same way.
The warning signs have been there for a long time. The general attitude of the Ruby community is one of these warning signs. The smugness, the emphasis on "best practices" (which usually aren't very good, in reality) and the drama and semi-religious worship surrounding certain members of the community (DHH, Zed, and _why) are what I'm talking about. This kind of attitude promotes an environment where bugs can happen in the first place, then go undetected, and in many cases also go unpatched once discovered.
At this point, I think it's necessary to scrutinize the Ruby community and their software much more closely than has been done in the past. The complacency of the past is not acceptable any longer, given what has happened recently.
Can someone explain the security issue in more detail? Is it that I can supply Symbols (and other Ruby objects) in my YAML, which normally can't be included in JSON? That seems to be the basis of it, but I'm looking for more info if available.
The Rails YAML parser allows execution of arbitrary Ruby code, by design. If an attacker can figure out how to get input into the YAML parser, they win.
YAML does not allow execution of arbitrary Ruby code. Some YAML tags allow specifying a custom class, on which the Psych YAML parser (the default in Ruby 1.9) will call the initialize or []= methods. If you can find a class that eventually eval()s input passed to initialize or []=, then you win.
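Here's a minimal sketch of that mechanism, using a deliberately benign, made-up class (AuditLogger is hypothetical; run it against a Psych of that era, since current Psych versions refuse to instantiate arbitrary classes from load by default):

```ruby
require 'psych'

# Hypothetical stand-in for an app or library class. In a real attack you would
# hunt for an already-loaded class whose []= (or initialize) eventually
# evaluates its input; here it only prints, so the call is visible.
class AuditLogger
  def []=(key, value)
    puts "[]= called with #{key.inspect} => #{value.inspect}"
  end
end

# The !ruby/hash tag lets the document name an arbitrary loaded class;
# Psych instantiates it and then calls []= once per key/value pair,
# with attacker-controlled arguments.
payload = <<YAML
--- !ruby/hash:AuditLogger
injected_key: injected_value
YAML

Psych.load(payload)
# Old Psych: prints the []= call. Current Psych: load is safe by default and
# raises rather than instantiating the custom class.
```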
http://ronin-ruby.github.com/blog/2013/01/09/rails-pocs.html
Thanks for the link, but it looks like that's for CVE-2013-0156 and CVE-2013-0155. This HN post is about CVE-2013-0333. It does look like there's a newer blog post about this issue though: http://ronin-ruby.github.com/blog/2013/01/28/new-rails-poc.h...
Took a while to write a new blog post. Still, CVE-2013-0333 relies on the same YAML deserialization technique as CVE-2013-0156, so all the previous information is still relevant.
There have been a lot of Rails bugs coming up lately; why are so many being found at this particular point in time? Who's finding them, and what's spurred their interest?
This is related to YAML loading (I believe) so it is somewhat related to the major bug from a couple of weeks ago. My guess is someone went through the Rails code combing for any interaction with YAML that wasn't caught last time.
Hmm, I'm curious about the story behind this vulnerability NOT existing in 3.1 and 3.2. How the heck did it get fixed in 3.1 and 3.2, but still exist in 3.0 and 2.3? Was it accidentally fixed in 3.1? Was it fixed on purpose in 3.1, but it didn't occur to the fixer to backport it to 3.0 and 2.3? Eh?
I wonder if it's worth someone putting together a Kickstarter so that all of these Rails-dependent startups can crowd-finance hiring some penetration-testing firms to do a thorough audit of the entire Rails codebase?
The smell of late-night coffee, having to update Ruby on Fails yet again; or better, the colder, more bitter coffee in the morning, when having to take offline and rebuild a compromised server because of this framework.
It started with such promise, and now, we are looking to migrate all Rails apps off to alternative frameworks at the first opportunity. It really is a shame.
The usual fallacy: "the more vulnerabilities found in a piece of software, the less secure it is."
That just doesn't logically follow: your chosen alternative framework just might have not been thoroughly audited (yet.) Or not audited by the right people, etc.
Actually, I see it more as indicative of a problem with the design of Rails. We keep getting told we should like the magic and convention over configuration, but maybe it would be better to have a config file where we can turn the magic off in Rails.
But please, no more YAML files - it seems parsing YAML has already caused enough holes in Rails this quarter.
This is good advice in general, but Rails used the wrong kind of parser for this kind of data. JSON is not YAML. A YAML parser does not provide the same guarantees as a JSON parser about its output. It doesn't matter if both parsers are completely correct if you use them inappropriately.
My understanding is that YAML is supposed to be highly general-purpose and is not supposed to be resilient to malicious input. These aren't YAML parser bugs, these are framework bugs that pass malicious data to the YAML parser when that parser should never be exposed to it. And that in turn appears to be entirely due to an extremely overly-helpful and magical nature of the framework.
Nope, the problem is not due to the magical nature of the framework.
They implemented the JSON decoder with a JSON to YAML converter, passing that through the YAML decoder.
The fix involves using an actual JSON parser and skipping the trip through YAML. So it does qualify as a JSON parser bug, IMO (which is what I clumsily attempted to imply with my "(or JSON)" clause above).
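A simplified sketch of the two approaches (illustrative only, not the actual ActiveSupport code): the vulnerable pattern hands what is supposed to be JSON to a full YAML parser, so any YAML-only constructs that survive the conversion are interpreted with YAML's much larger feature set; the fix parses JSON with a parser that only knows JSON.

```ruby
require 'yaml'
require 'json'

# Risky pattern: treat the JSON text as (roughly) a YAML subset and hand it to
# a full YAML parser. Anything YAML-specific that sneaks through, such as
# !ruby/... tags, is then interpreted by the far more powerful YAML loader.
def decode_via_yaml(json_string)
  YAML.load(json_string)
end

# The fix: a parser that only understands JSON keeps the attack surface at JSON's.
def decode_via_json(json_string)
  JSON.parse(json_string)
end

decode_via_json('{"a": 1}')  # => {"a"=>1}
```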
What if I told you that there is a new web framework similar to Rails, called Fortran on Fails? It has zero vulnerabilities reported against it; ergo, it must be 100% secure.
> Symfony applications are not vulnerable to this attack
And it definitely doesn't have the same track record. For example, it was audited by a security company in one of its first versions. So far I have only seen minor security vulnerabilities, nothing like what Rails brings every week.
Anyway, that's not why I changed. I found it to be much better architected, 100% decoupled (as opposed to monolithic). You can change anything you want if you have to, or if you find better vendors and want to try them. It was actually designed from day 1, and not by someone who read about design patterns shortly before creating the framework and calls himself 'the master chef'.
Coming from an OOP background, I thought (I don't remember why) that the Ruby/RoR community was some sort of elite, and that everyone had a much higher minimum level (as opposed to PHP, where most people are noobs). I was disappointed: most of them didn't even understand what interfaces were for (no wonder Ruby hasn't added them yet), let alone know the most basic design patterns. They also seem to enjoy laughing at developers from other languages. All this was a pain to watch, but I didn't care as long as Rails was perfect, and that's how I felt about it at first. But it wasn't, so I moved to Symfony2 and found out not only that the framework was superior, but also that the community was awesome.
* Rails: http://www.cvedetails.com/product/22568/Rubyonrails-Ruby-On-...
* Django: http://www.cvedetails.com/product/18211/Djangoproject-Django...
* CodeIgniter: http://www.cvedetails.com/product/11625/Codeigniter-Codeigni...
* Top 50 Products (Better stop using these too! /s): http://www.cvedetails.com/top-50-products.php