Absolutely. At theconversation.edu.au we run a content site—we publish the news, which is the same for everyone. This means we can cache the front page and all the articles as static HTML, and then annotate the page with user info for signed-in commenters, editorial controls for signed-in editors, and so on.

(We have a separate cookie that is present for signed-in users, so the frontend knows whether it should fire the annotation request.)

The result is that we can serve a sudden influx of unauthenticated users (e.g. from Google News or StumbleUpon) from nginx alone, which gives us massive scale from very little hardware. It's likely that the network is actually the bottleneck in this case, and not nginx.


I'm interested in what you mean by annotating the page after caching it. Do you have any more info on this?


The cached page contains content suitable for everyone, so it looks as though the user is logged out.

An extra AJAX request grabs the user's logged-in status, CSRF token and similar data as JSON, and then modifies the page so the user sees what they expect (a logout button, a comment form, etc).
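
Roughly, the client-side piece can look something like the sketch below. The /session.json endpoint, the signed_in cookie and the element IDs are made-up placeholders, not our actual implementation:

    // Sketch only: a hypothetical /session.json endpoint, signed_in cookie,
    // and element IDs stand in for the real ones.
    interface SessionInfo {
      signedIn: boolean;
      userName?: string;
      csrfToken?: string;
    }
    
    async function annotatePage(): Promise<void> {
      // Only fire the request when the signed-in cookie is present.
      if (!document.cookie.includes("signed_in=")) return;
    
      const response = await fetch("/session.json", { credentials: "same-origin" });
      const session: SessionInfo = await response.json();
      if (!session.signedIn) return;
    
      // Swap the generic logged-out chrome for user-specific bits.
      document.querySelector("#login-link")?.remove();
      const greeting = document.createElement("span");
      greeting.textContent = `Signed in as ${session.userName}`;
      document.querySelector("#user-nav")?.append(greeting);
    
      // Keep the CSRF token around so the comment form can post with it.
      if (session.csrfToken) {
        document
          .querySelector<HTMLInputElement>("input[name=authenticity_token]")
          ?.setAttribute("value", session.csrfToken);
      }
    }
    
    void annotatePage();

The cached HTML stays identical for everyone; only this small request is per-user.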


Doesn't that cause content movement?


You're essentially talking about Edge Side Includes [http://en.wikipedia.org/wiki/Edge_Side_Includes].



Great post. I didn't understand the role of controllers and directives but this explained them well with a thoughtful walk through.

The idea of wrapping distinct chunks of logic in separate directives that can then be sprinkled around the DOM declaratively is great.
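
For anyone who hasn't seen it, here's a rough sketch of that idea, assuming AngularJS 1.x; the directive name and behaviour are invented for illustration:

    // Rough sketch, assuming AngularJS 1.x; the directive name and behaviour
    // are invented for illustration.
    declare const angular: any;
    
    angular.module("demo", []).directive("clickCounter", () => ({
      restrict: "A",   // applied declaratively as an attribute in the markup
      scope: {},       // isolate scope keeps this chunk of logic self-contained
      link(scope: unknown, element: any) {
        let clicks = 0;
        element.on("click", () => {
          clicks += 1;
          element.text(`Clicked ${clicks} time(s)`);
        });
      },
    }));

The markup then just reads `<button click-counter>Count me</button>`, and the behaviour travels with the attribute instead of living in hand-written wiring code.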

In my view this is a welcome alternative; in my (maybe limited) experience, many other JS frameworks have you writing substantial amounts of glue code (i.e. bug-magnet code) to achieve comparable things.


You can configure `git push` to push just the current branch to its configured remote, by setting 'push.default' to 'upstream'.

`git config --global push.default upstream`

The relevant docs (droplr'd, since you can't anchor-link into the online manpages): http://d.pr/Rlqn


"Hope" was definitely a bad choice of word. By that I meant the sort of tight query you'd hope an ORM would deliver.

Anyhow, it's a matter of taste, but what may appear at first as hieroglyphics is actually straightforward. It's just that concision here means some packed meaning and some assumed knowledge, so you have to know how to read it. In this case, that's an easy trade-off for me.

I'm not familiar with SQLAlchemy, so I find your example equally hard to read, compounded by there being much more code to spelunk through to understand.

Different strokes and all that, though. There's room for plenty of frameworks :)


> I'm not familiar with SQLAlchemy, so I find your example equally hard to read, compounded by there being much more code to spelunk through to understand.

If you remove a few column definitions and some setup code, it actually boils down to just:

    class Article(ArticleColumns):
        @property
        def users(self):
            return self.collaborations.join("user").with_entities(User)
    
    class User(UserColumns):
        pass
    
    class Collaboration(CollaborationColumns):
        user = relationship("User", backref="collaborations")
        article = relationship("Article", backref=backref("collaborations", lazy="dynamic"))
    
        @hybrid_property
        def editorial(self):
            return self.role == 'editor'

In which you can now do:

    some_article.users.filter(Collaboration.editorial)

which generates a similar SQL query to #merge.


It's just a little friendly/competitive poke! Don't take it seriously; I just couldn't resist.

I mostly wanted to demonstrate that the functionality of Rails' merge() can be expressed in other ways that are just as succinct.


Good point, #merge isn't itself an arel method. I say arel because it's the core sitting behind the #merge / #where / #joins frontend -- but AR::Relation deserves credit too :)


The frame showing Facebook vs Greek debt is particularly good. We ran a piece in a similar vein last week: http://theconversation.edu.au/how-david-beckham-caused-globa...


Good explanation, thanks.

The part I don't understand is the POST hitting the victim app. I don't know Django, but Rails apps require an authenticity token to be included in all non-GET requests. How does the attacking app satisfy this token check?


The bug was that Rails didn't check for the authenticity token on requests that were labeled as XmlHttpRequest (i.e., Ajax), and the redirect-from-Flash trick allows the attacker to forge that label. The fix makes Rails check in all cases; this is why it comes with stuff you're supposed to patch into your layouts and application.js to put authenticity tokens into all your Ajax calls.


Rails WAS configured to accept the token OR a custom header, relying on the fact that custom headers can't be created cross-domain. The patch fixes this by requiring BOTH. Hmmm... So how do they know what the custom header is? Are they typically static?


The CSRF token is generated and stored in the user session, so rather than just X-Requested-With: XMLHttpRequest, you now also get an X-CSRF-Token: <some token stored in the user session>.

Rails doesn't check for XRW anymore; it just cares that you passed a valid CSRF token through, either as a POST variable (normal POST/PUT/DELETE) or in the X-CSRF-Token header (AJAX).
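
For concreteness, here's a sketch of what the AJAX side of that looks like in a client: the token Rails renders into the csrf-token meta tag gets sent back in the header. The /comments endpoint and payload below are just placeholders.

    // Sketch only: read the per-session token Rails writes into
    // <meta name="csrf-token"> and attach it to a non-GET AJAX request.
    // The /comments endpoint and payload are placeholders.
    function csrfToken(): string {
      const meta = document.querySelector<HTMLMetaElement>('meta[name="csrf-token"]');
      return meta ? meta.content : "";
    }
    
    async function postComment(text: string): Promise<Response> {
      return fetch("/comments", {
        method: "POST",
        credentials: "same-origin",
        headers: {
          "Content-Type": "application/json",
          "X-CSRF-Token": csrfToken(),  // validated server-side, unlike the old static header
        },
        body: JSON.stringify({ comment: { body: text } }),
      });
    }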

In case you're wondering, yes, this does make caching with Varnish a bitch and a half.


I want to know how the exploit works. The Flash app has to write a custom header. How does it know the value to put in the header, unless it's always the same for all sessions across all users?

P.S. You cache POST/PUT/DELETEs?


That's the point of the fix. The header is custom per user now. Before, the presence of the "X-Requested-With: XmlHttpRequest" header was enough to let Rails assume the request was legit. Since Flash doesn't respect the victim's crossdomain.xml, this is no longer a valid assumption, and you have to use a unique header per session.

This means writing this unique value out into the page somewhere, to be included with any AJAX requests. It also means you cannot cache these pages as you might have before, since AJAX calls would fail for everyone except the person who populated the cache.


If you cache the page from which a user submits a POST/PUT/DELETE, how are they going to get their CSRF token?


That's the point of the fix: X-CSRF-Token requires the CSRF token, which is per-user, as its value. X-Requested-With, the old way of doing things, just had to be present in the request.


The exploit SWF just adds the header that marks the request as an Ajax call:

X-Requested-With: XMLHttpRequest

Since that header's value is static, it is easy to use in an exploit.


Worked a treat, cheers.


Agree to disagree :)

The only Flash component is the embedded screencast, which is hosted at Vimeo and downloadable as an mp4: http://vimeo.com/6782671


Thanks for submitting this, speek :)

For those who are interested in trying out babushka, here's a 30-second crash course. To install:

    bash -c "`curl -L babushka.me/up`" # If you're on a Mac
    bash -c "`wget -O - babushka.me/up`" # If you're on Ubuntu

Some good examples to start with:

    babushka rubygems # Installs, updates, or adds gem sources as required for your system
    babushka homebrew # Sets you up for sudoless `brew install`s
    babushka Cucumber.tmbundle # Clones the latest, installing / restarting TextMate as required
    babushka Chromium.app # Pulls the latest Chromium nightly to /Applications

If you want to see what will happen without making any changes, use `--dry-run`:

    babushka Transmit.app --dry-run

If you're on a Mac, you can follow along in TextMate too using `--track-blocks` — babushka points out each piece of each dep as it runs them. (This also works with `--dry-run`, so you can inspect the code a tree of deps would run.)

    mate /usr/local/babushka
    babushka 'Ruby on Rails.tmbundle' --track-blocks

Any questions, get in touch with @babushka_app on Twitter, #babushka on Freenode, or email hello@babushka.me.

Cheers — @ben_h

[edit: fixed the babushka.me/up links.]


Thanks ben_h. I did two VPS deploys with babushka just yesterday... Saved me a load of time. Keep up the good work!


    bash -c "`wget -O - babushka.me/up`" # If you're on Ubuntu

Needs a closing tick before the last double quote there.

This project will no doubt save a significant amount of time. Thanks for sharing your work.


I should have copied and pasted instead of typing it out :) Thanks for the correction, fixed.

