Diaspora re-writes its front to Backbone: why and what it means (joindiaspora.com)
152 points by plunchete on Jan 10, 2012 | 58 comments



The way that this post describes porting an existing interface from server-side/Ajax rendering to client-side rendering is particularly interesting -- I imagine there aren't many apps that have made that jump after first being built around server-side templates.

The speed benefits that Diaspora is seeing from distributing their HTML rendering are probably only the first step. There are so many more interesting interactions you can accomplish once your data is being modeled in the browser.


It's not that revolutionary anymore... Twitter has been doing it for quite a while now; see http://news.ycombinator.com/item?id=3236820 for more discussion.

Personally, I've rewritten a bunch of code at work to go from server-side rendered templates (in Jinja) to client-side rendered templates (using a Jinja-to-JavaScript compiler I wrote, https://bitbucket.org/djc/jasinja). Add some WebSockets magic and we now have a very fluid real-time application page.
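
Roughly, the client side ends up looking something like this -- a minimal sketch only; the template function, endpoint, and message shape here are made up for illustration and aren't jasinja's actual output:

    // Hypothetical compiled template: takes a context object, returns an HTML string.
    function renderStatusList(ctx) {
      return ctx.statuses
        .map(function (s) { return '<li>' + s.author + ': ' + s.text + '</li>'; })
        .join('');
    }

    // The server pushes raw data over the socket instead of rendered HTML.
    var socket = new WebSocket('wss://example.org/updates'); // placeholder endpoint

    socket.onmessage = function (event) {
      var data = JSON.parse(event.data);
      document.getElementById('statuses').innerHTML = renderStatusList(data);
    };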

The model is obviously very powerful. The one problem I have with it is JavaScript-the-language. Using it makes me love Python so much... I know it's not that bad and there are good parts, but it's still nowhere near Python.


CoffeeScript might be worth a look. I know my enjoyment of writing JavaScript is much improved since starting to use CS.


IIRC, foursquare.com's client side was recently rebuilt in Backbone and now they retrieve all data using their API. It's an interesting trend that I think more and more large websites will follow.


It's a very pure model which is extremely appealing from a last-mile-developer standpoint. Actually building everything that you need to get there is the challenge, but it's a very worthy goal.


Anyone using GWT has been doing this since... well, since GWT has existed.

There are challenges to this approach.


IMO, GWT is decent for writing something like internal reporting apps. Even then, I think it's pretty dumb.


Internal reporting apps and _dumb_? Hmm... interesting observation and... lack of truth.

PS: Lots of downvotes when it comes to Java and GWT, eh...


lots of downvotes when it comes to lack of substance in a comment... or defining things as "a lie", "dumb" or "it sucks"


Like what? Fluid layouts?


Live re-rendering of data -- imagine if the (now) "24 minutes ago" timestamp next to your handle was always accurate, instead of just reflecting when the page was loaded.

Easier implementation of otherwise difficult features. Doing a client-side autocomplete of people's names is trivial when you already have all of the other accounts in your organization modeled in JavaScript... but not as fun when you have to make an Ajax request for server-rendered HTML.

Live updates of pushed data. When another reader +1's a post that you're currently looking at, if the server pushes that data to you, and you have a model for that post -- it's easy to increment the counter. If the server has to push the re-rendered HTML for the entire post, it's much more difficult.

... and those are just the tip of the iceberg.
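
The +1 case, as a minimal Backbone sketch (the attribute names and push payload are assumptions for illustration):

    // A post modeled client-side; only its data crosses the wire.
    var Post = Backbone.Model.extend({
      defaults: { likes: 0 }
    });

    var post = new Post({ id: 42, likes: 3 });

    // Re-render just the counter whenever the model changes.
    post.on('change:likes', function (model) {
      document.getElementById('like-count').textContent = model.get('likes');
    });

    // When the server pushes { id: 42, likes: 4 }, updating is one line:
    function onServerPush(payload) {
      if (payload.id === post.get('id')) {
        post.set('likes', payload.likes);
      }
    }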


This can be done without the radical switch to modeling everything client-side. I mainly do Java web apps with JSPs (which everyone seems to think are stodgy and past their prime), but it's rather simple to do something like what you're describing with keeping things up-to-date. I've simply broken my project down into "widgets" that self-load, so a page is actually made up of many widgets that use JavaScript to call various server-side controllers. You can then set them to do whatever you want (such as refresh every 30 seconds), and it doesn't require some massive client-side data model.
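
Something like this, say, using jQuery (sketch only; the selector and controller URL are placeholders):

    // One self-loading "widget": fetch a server-rendered HTML fragment from a
    // controller, swap it in, and repeat every 30 seconds.
    function refreshWidget(selector, url) {
      $(selector).load(url); // the server-side controller returns an HTML fragment
    }

    $(function () {
      refreshWidget('#recent-comments', '/widgets/recent-comments');
      setInterval(function () {
        refreshWidget('#recent-comments', '/widgets/recent-comments');
      }, 30000);
    });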


Your solution is more like VNC, whereas the Backbone solution is more like X11. Both can live side by side and be used in different situations.


I agree, there are cases where you would want a more robust client-side data model, but IMO people are a bit too quick to jump to using this, and it can make more work in the end and make a project more costly to maintain. I think the cases where it is useful are mainly highly interactive sites like those that would previously have been done in Flash, with animations (such as games, etc.). For just regular CRUD webapps, I feel it's a bit of overkill.


What did you just say!?


Since Diaspora is based on federation, you might even be able to do some neat tricks having your browser bypass your current servers and talk directly to other Diaspora servers.

There are probably tons of security issues, but for some read-only cases it might be useful.


>There are probably tons of security issues

I don't think there necessarily are, since CORS allows cross-domain communication to be done in a safe, controlled way.


Ok, maybe not tons of security issues, but maybe some pitfalls.

My imagined use-case is where your friend on a different server posts an event, it gets put into your feed, and when you view your feed you're able to see an updated list of people attending by directly querying the friend's server. Would be pretty slick.
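
Something along these lines, perhaps -- a sketch only; the endpoint and response shape are invented, and the friend's pod would have to send an Access-Control-Allow-Origin header for the browser to allow it:

    // Read-only, cross-pod request straight from the browser.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'https://friends-pod.example/events/123/attendees.json');
    xhr.onload = function () {
      if (xhr.status === 200) {
        var attendees = JSON.parse(xhr.responseText);
        console.log(attendees.length + ' people attending');
      }
    };
    xhr.send();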


This is the biggest win for Diaspora? Really? Sounds like they've been deluded by all those NYU CS professors who think knowing how to add 32-bit floating-point numbers is the key to success.

Just dump everything, spend 1% of your time rewriting everything in PHP, and the rest actually doing some marketing, not imaginary work.


I'm also getting a "premature optimization" vibe from this[1].

[1]: https://en.wikiquote.org/wiki/Donald_Knuth#Computer_Programm...


They're running a fairly large-scale app, which has a lot of users and recently experienced serious capacity issues (after Ilya's fairly well-publicized death). Optimizing it doesn't seem premature at all.


Maybe this is a really stupid question, but wasn't Diaspora supposed to be decentralized? In the sense that anybody could set up their own hub? (so people would not have to accept features pushed by some big corporation and things like that)

Then what is this whole database backend for? Is it like the "main" Diaspora hub? And could anyone set up their own secondary one? Would they run into the same difficulties they're trying to solve here?

Though I get the feeling I'm probably misunderstanding the entire Diaspora project, here.


Each hub has its own database to store the data about its own user(s). Then the hubs can communicate with each other so you can "friend/follow" users from other hubs.


Okay, then I did understand it correctly (phew!).

But doesn't that mean that the problems the Diaspora team has to solve now will also pop up for other hubs as they grow in scale?

Or is the solution to create loads of smaller hubs and have them communicate? Then why don't they do that?


The problems Diaspora is solving will be solved for the other hubs too, since they all use the same codebase.


That makes sense, thanks!


Sorry, I don't know what "Backbone" is. I see links to New Relic and Pivotal Labs, but no links to anything called "Backbone".

I guess this blog post was maybe not meant for purely technical people, but it would be nice to understand what "Backbone" is, and exactly why it would solve their problems.


Here's a link: http://backbonejs.org/

Perhaps more importantly, here are some examples of what sites are doing with it: http://backbonejs.org/#examples
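
For a quick feel of what it looks like, here's a toy model/view pair (not Diaspora's actual code, just an illustrative sketch):

    // A model holds the data; a view knows how to render it into the DOM.
    var Comment = Backbone.Model.extend({
      defaults: { author: 'anonymous', body: '' }
    });

    var CommentView = Backbone.View.extend({
      template: _.template('<b><%= author %></b>: <%= body %>'),
      initialize: function () {
        // Re-render automatically whenever the model's data changes.
        this.model.on('change', this.render, this);
      },
      render: function () {
        this.$el.html(this.template(this.model.toJSON()));
        return this;
      }
    });

    var view = new CommentView({ model: new Comment({ author: 'someone', body: 'hi' }) });
    $('#comments').append(view.render().el);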


Not sure why you got down-voted.

It's entirely possible that you don't get the same search results on Google as I do when we're logged in, because of what Google 'remembers' as our browsing habits and interests.



Backbone.js


I hate to be That Asshole, but https://www.google.com/search?q=backbone

The first link is to a javascript library widely known in the web development community.


I actually did a google search for https://www.google.com/search?q=backbone+web

Which gave me the first hit of www.backbonemedia.com/

In fact, backbone.js is nowhere on the first page for that search.

It's amazing that it comes up first for "backbone". I did actually find that link eventually and tried out the "example" todo list, but that gave me no information about how this would fix a garbage collection problem. Does anyone know how it fixed their GC problem?


The article was a bit confusing about the garbage collection, so I'll try to explain.

1. Diaspora is slow, why?

2. Requests take a long time on the server, why?

3. Most of the time is spent in ruby doing processing.

4. Why is ruby slow?

5. Ruby is slow because of the garbage collector.

6. How can we get around Ruby being slow?

7. Reduce calls to the server side by replacing server-rendered HTML with JavaScript rendering of templates and whatnot.


It fixed their garbage collection problem by reducing the amount of server-side processing they had to do (and hence, reducing the amount of garbage that required collection). Rather than send completely rendered HTML to the browser, they're using javascript in the browser to turn the data coming from the server into the rendered page.


Ha. Well, that's why I'm the asshole. ;)


you would only be an asshole if you linked it like so: http://lmgtfy.com/?q=backbone


500ms+ for rendering a template seems ridiculous. Were they just Doing It Wrong, Ruby-wise?


If I had to guess, they're running with the stock Ruby GC parameters, which are fairly horrible for a Rails app, and can result in multiple GC cycles per request. Ruby's GC is painfully slow, so if you're kicking into it, you're going to slaughter your response times.

REE and Ruby 1.9.3 both offer GC tuning parameters, which let you instruct Ruby that you're going to shove a huge app at it and to not garbage collect every time someone sneezes, which can have a pretty massive impact on response times.


It sounds like they were jumping to conclusions regarding the templating being responsible for creating a large number of objects.

ERB, Haml (which is what Diaspora uses), and any other templating engine I've seen use either concat or << when rendering a template. These never create a new object; they mutate (and perhaps resize) the original string.

Maybe next time they should profile better before following their gut feeling and rewriting their front end ;)


They got a 3-fold speed up. Seems like they were doing something right.


I'm coming from a Python perspective; no idea if this is easy/possible in Ruby.

Maybe this is a really stupid idea, but what if we *gasp* disabled the GC during the course of each request and did a collection run after the request is completed and before the next one is accepted? With a sufficient number of workers, wouldn't this solve some of the problems they were having?


I just remembered that Unicorn actually has an option for out-of-band garbage collection. It works best when you're at < 50% CPU utilization, and improves response times with a possible overall performance decrease.

http://unicorn.bogomips.org/Unicorn/OobGC.html


So you'd only handle one request at a time?


Rails usually has a request lock, so most Rails servers end up forking off a number of processes that each only handle one request at a time.


This post made me curious to learn more about new relic and their architecture. I found this post pretty interesting: http://highscalability.com/blog/2011/7/18/new-relic-architec...


Does it mean they won't blow away all their user accounts when they get bored and start again? No? Well, that is why Diaspora is a joke.


I don't understand the "blow away all their user accounts when they get bored" statement. I'm guessing I'm missing something in Diaspora's history?


Diaspora has always been a joke, ever since that first awful naive release.

It's a good idea (and older than the diaspora project) and kudos to them for pushing forward, but I really hope someone else implements this idea correctly.


Maybe they're just encouraging you to run your own server.


Is there a main deployment of Diaspora somewhere? Or is this a purely academic project?


It's the main domain: https://joindiaspora.com/

Public content is indexed by Google, like the 'cake' tag: https://joindiaspora.com/tags/cake


Since when do all pieces of server-side software require a hosted version to qualify as not "purely academic"?


Here's a list of deployments: http://podupti.me/


okay http://i.imgur.com/jHu1N.png

This is from WebKit nightly after I clicked a user name, got a "you messed up" 404 page, and went back.


Rewriting the front end using JavaScript and Backbone won't necessarily fix the backend problems. How about reducing object creation and tuning the backend too?


They discussed this in the post. They found that the templating was the source of many of the objects, as Ruby creates new objects when performing string concatenation.


Rad! That graph is awesome!

Also cool that the source is public on GitHub. Thanks for that. Always fun to peek in on the source.



