I remember reading this when I was the sole Infrastructure Engineer for Reverb.com. I knew we were being attacked and I knew we had issues but I didn't have any idea where to start. This article sparked my interest in Cyber Security and helped me find a bug in the website that allowed me to set the CEO's credit card as a primary card on my account in production. That was an amazing day.
All I had to do was modify a post parameter in flight and the backend would accept it. Turns out this is what is known as an "unscoped find". More info here: https://brakemanscanner.org/docs/warning_types/unscoped_find...
Thanks to the author of the article for inspiring me to dig into the Rails codebase and find vulnerable patterns that I could exploit. Thankfully I was able to pivot into a cyber-security-focused career and I credit this article for starting me down that path.
Rails has a few things going for it that other languages and frameworks don't, but it still lets you shoot yourself in the foot if you're not careful. I ended up writing a blog article about preventing XSS in Rails, directly inspired by the OP's article:
https://product.reverb.com/stay-safe-while-using-html-safe-i...
Just because this article is old doesn't mean it's not useful. Thanks for posting!
The unscoped find issue is fairly easily solved by using devise's current_user in combination with something like cancancan. Let them send anything as a param but have the controller blow up if the user doesn't have permission to access it.
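A minimal sketch of that pattern, assuming Devise plus CanCanCan and a Message model with a user_id column (the model and attribute names here are made up for illustration):

    # app/models/ability.rb
    class Ability
      include CanCan::Ability

      def initialize(user)
        user ||= User.new                       # not logged in
        can :manage, Message, user_id: user.id  # only rows the user owns
      end
    end

    # app/controllers/messages_controller.rb
    class MessagesController < ApplicationController
      before_action :authenticate_user!  # Devise
      load_and_authorize_resource         # raises CanCan::AccessDenied for anyone else's record

      def update
        @message.update!(message_params)
        redirect_to @message
      end

      private

      def message_params
        params.require(:message).permit(:body)
      end
    end

The user can send whatever id they like; the controller blows up before the action body ever runs if the loaded record isn't theirs.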
I suspect an insane number of websites are validated only by the frontend and can be exploited like this.
At some point before that, it was known as "forced browsing", though that name took on a more particular meaning and then fell away. It is by far the most common software vulnerability.
In practice, XSS is largely mooted by the rise of front-end frameworks like React, Vue, and Angular, which are the modern norm for delivering UI (I don't think we have a single client that uses server-side-templated HTML anymore). The front-end framework approach is better here than the Rails/Django approach: I'm very unlikely to find XSS in a simple React app, but not at all unlikely to find it in a Rails app, because people always dip out of the XSS protection to do programmatic tags.
I think the "unscoped find" mistake that "atom_enger" was referring to is just as easy to make when writing a REST backend (to support a SPA front-end framework) as when using a traditional non-SPA web framework.
It's tempting, when writing a REST backend, to respond to e.g. "PUT /message/:id" by just executing "UPDATE ... WHERE message_id=?" from the parameter, without checking that the message belongs to the user whose credentials were used to make the call.
That's possible with a non-SPA web framework, and it's also possible when writing REST backends.
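In Rails terms, the fix is the same either way: scope the lookup through the authenticated user instead of trusting the id on its own. A sketch, assuming Devise's current_user, a Message model that belongs_to :user, and the usual strong-parameters message_params method:

    # Tempting but unscoped: any authenticated user can update any message.
    def update
      message = Message.find(params[:id])
      message.update!(message_params)
      render json: message
    end

    # Scoped: the same request 404s (RecordNotFound) unless the message is theirs.
    def update
      message = current_user.messages.find(params[:id])
      message.update!(message_params)
      render json: message
    end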
Authz bugs are the most common bugs in every application, and no framework has a particular edge on stamping them out. I'm just responding to the claim about Rails having a security edge due to XSS protection, which it does not; in fact, it's become somewhat the opposite.
> I'm very unlikely to find XSS in a simple React app
I do offensive security. A lot of developers are ignorant of when/how React apps tend to be XSS vulnerable. Since it has a reputation as being 'safe' from XSS, devs often assume it's just something they don't have to worry about.
This has led to a small renaissance of XSS bug bounties on sites like hackerone, where you see a lot of specialists who just go around finding obvious, common XSS vulns in eg Angular apps.
I do offensive security. My direct experience, in my audit workload of web applications, which I've had pretty steadily (with some short gaps) since 2005, is that XSS vulnerabilities are far rarer in React applications (regardless of backend) than they are in Rails apps. It's not hard to see why: Rails begs you to disable XSS protection to get anything interesting done in HTML (basically, any time you dip out of Erb or Haml); there aren't a similar number of use cases for overriding React's control of the DOM (because the whole point of React is to give you safe control of the DOM); and when you do, in React, you do it (almost all of the time) with "dangerouslySetInnerHTML" (the same holds for Vue and "v-html").
It's not my argument that there's no XSS in React apps. I've definitely found React XSS. But I assume any Rails app I test will have it somewhere, and, based on experience, I do not have that assumption about React applications.
I completely agree with your overall point, I was just making an aside about the tendency of more inherently secure systems to sometimes lead to dev laziness. The same thing can happen to Rust devs, but I wouldn't argue that it isn't way more secure generally than C.
Thanks for sharing this. I'm new-ish to web dev and have considered a similar pivot. Also a customer on Reverb so that's cool to hear your story and read that write up you linked.
Don't get me wrong: I'm really glad that you caught the bug in your code before anyone lost an eye.
However, a critical summary of your situation could be "improperly used an advanced method that exists for the specific purpose of marking a string as having been certified safe, potentially allowing an XSS that would have otherwise been successfully filtered out by Rails' extensive anti-XSS mechanisms".
I was super upset when one of the drives in my RAID 0 set went, but it was still my bad for not learning that RAID 0 isn't mirroring. (Hint: the 0 is the amount of information you can recover in case of failure.)
My point is just that you can't really claim that Rails dropped the ball here. If there are footguns installed, it's because you installed them (and didn't read the manual for html_safe).
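For anyone who hasn't hit this before: html_safe doesn't escape anything, it just asserts that the string is already trusted. Roughly (user_input standing in for anything attacker-controlled):

    # Dangerous: marks attacker-controlled markup as trusted.
    "<b>#{user_input}</b>".html_safe

    # Safer: escape the untrusted part first, or let a helper escape it for you.
    "<b>#{ERB::Util.html_escape(user_input)}</b>".html_safe
    content_tag(:b, user_input)   # ActionView helper; escapes user_input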
If you're using Ruby on Rails, you should be using a static code analyzer to look for vulnerabilities. Please take a look at Railroader, an OSS tool I maintain that does this (and contributors are wanted!):
https://railroader.org/
I recommend that you also use a web application scanner like OWASP ZAP and something to scan your dependencies for known vulnerabilities (e.g., bundle audit or GitHub's scanner).
That is in addition to normal software development tools like a style checker (like Rubocop) and a test suite with good coverage (e.g., minitest).
If you develop software, it's going to get attacked. There are some pretty straightforward ways to help resist attacks, but you have to use them.
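In Gemfile terms, pulling the Ruby-side tools in might look roughly like this (a sketch; double-check the gem names, and note that OWASP ZAP runs against the app from outside rather than from the Gemfile):

    group :development, :test do
      gem "railroader",    require: false  # static analysis for Rails code
      gem "bundler-audit", require: false  # checks Gemfile.lock against known advisories
      gem "rubocop",       require: false  # style checker
    end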
Static code analyzers don't seem to find much of interest in my experience. They can basically only find string interpolation in queries and uses of constantize. Ruby is just too dynamic to find any really interesting issues.
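To be concrete, the kind of thing they do reliably flag (a sketch, as it would appear inside a controller action; AdminReport is a made-up class):

    # Flagged: string interpolation straight into SQL, and attacker-chosen constants.
    User.where("name = '#{params[:name]}'")
    params[:report_type].constantize

    # The safe equivalents.
    User.where(name: params[:name])
    { "admin" => AdminReport }.fetch(params[:report_type])  # explicit whitelist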
Really? I find that brakeman is a pretty amazing tool which finds a number of surprising issues. Of course, these days the vast majority of Rails apps already have brakeman set up, so it's used more as part of the commit process and less of a "wow, here's a few dozen potentially high-impact web vulns". I wouldn't hesitate to say that it's the most high-signal SCA tool I've used across any language/framework.
(source: a few years of webapp pentesting and Rails app dev)
I ran brakeman on our app and it found a bunch of things that were almost vulnerabilities, because it was unable to work out the source of some data going into a potentially unsafe function; after I inspected all of them, none were actual vulnerabilities. Meanwhile, a bunch of real issues went undetected that could have been spotted in a language like Rust.
The bar is low for such tools regardless of programming language. In a language as dynamic as Ruby it's several miles into the Earth's crust. The tool won't be able to tell you much of anything you shouldn't already know. "Potentially high-impact web vulns" is a next to useless metric when provided by such a tool. The rate of false positives is high. A distraction such as this when your application surely has more serious vulnerabilities is not helpful.
Railroader and Brakeman compensate for this by not being generic analysis tools for Ruby, but instead focusing specifically only on Ruby on Rails. Because Ruby on Rails has a lot of additional conventions, it's much easier to build a specialized tool to look for violations of those conventions.
I suppose this is good, since this document was written for Rails 3 and Ruby 1.x; the author mentions that most of the attack vectors stopped working by Rails 4.1.
We're now well into Rails 6 and Ruby 2.x is in its last year before 3.x rolls out. So far the sky hasn't fallen.
Pretty sure anyone working in plaintext is using something better than ed or similar. emacs, vim, and many others are plaintext editors. It's not hard to imagine we can come up with even better tools for it as well.
I figure, if you ever want to attack a Rails app (white hat on), go through the CVE list and try every vulnerability. There have been so many with exploit code that it's doubtful every single service is patched.
I figure, if you want to attack literally any app, go through the CVE list and try every vulnerability.
I don't think it's defensible to claim that Rails itself is inherently more vulnerable than other similar systems. If you disagree, feel free to cite references.
I never claimed whatsoever that Rails itself is inherently more vulnerable. I made my comment having maintained Rails apps for many years and watched the CVE list.
The question isn't whether a framework is or is not more vulnerable than other similar systems.
It is whether using that framework encourages or discourages developer behavior that produces more or fewer vulnerabilities.
Ruby in general almost certainly does encourage dangerous developer patterns; however, I doubt that's the case for Rails in particular, as it has effectively been a DSL for nearly a decade.
As a corollary of "convention over configuration" and the dominant patterns in popular accessory frameworks, however, this only holds so long as you don't try to be too clever.
I would genuinely appreciate cited references to back up your assertion that Rails almost certainly does encourage dangerous developer patterns.
I'm not trying to be pedantic; I do hear this sort of claim a lot but when pressed people rarely have anything more than pearl clutching about various metaprogramming capabilities that rarely get used outside of blog posts.
In other words, just because you can redefine + doesn't mean everyone is doing that in production code. :)