Ruby on Rails and the importance of being stupid (law.harvard.edu)
120 points by lackbeard on May 18, 2009 | 92 comments



Yawn. Yet another misinformed "Ruby/Rails can't scale" article.

Seems to me the main problem is that his "MIT trained" friend had no experience building a scalable web service. He would've botched it up in PHP, Python, Java, whatever. There's nothing about his main mistakes - running your database on a shared server, naive (ab)use of SQL - that is unique to Rails.

And this bit: "pull entire tables into Ruby, the most beautiful computer language ever designed, and filter down to the desired rows using Ruby and its “ActiveRecord” facility" is completely incorrect and makes it obvious Philip Greenspun knows less than nothing about what he's ranting about.


I didn't read "Rails can't scale" at all. Did you even finish it? He recommended still using Rails at the end.


If this wasn't a RoR hatchet job, why put it in the title and sprinkle sarcastic jabs all over the article?

Regarding his recommendation at the end, if someone suggested a server with 32GB of memory to run a site that has "About one user every 10 minutes", I would think "damn, this doesn't scale" :)


It was in the title because many of the snooty programmers who think they're so much smarter than the .NET guys, and who go off and build overly-complicated things that don't work just for the sake of building something complicated, are in love with Ruby on Rails. It doesn't attack RoR so much as a subset of the RoR community.

It pretty much abstains from the topic of RoR as a language and framework.


I think you're spot on. And I'm a .Net developer (a bumbling fool no doubt). But really, you grasp the point best so far.


Well, it would be bad advice for the programmer to simply throw away all of his work at that moment and start from scratch, regardless of what you think of the technology he used.

"Rails can't scale" was implied


I don't think it was implied at all, especially since he recommended using it at the end. It's implied only if you read the title and nothing more.

What was both implied and directly stated was that a cloud-based architecture is often not the best idea for a lot of people, despite the modern mania for it.


Phil specifically addresses the idea that he is dissing RoR in a comment (emphasis mine):

"Angry Rails Enthusiasts Whose Comments I Deleted: A lot of the comments were of the form “Your assertion that it is impossible to build a responsive Web site with Ruby on Rails is wrong. Rails is in fact great if programmed by a great mind like my own.”

The problem with this kind of comment is that I never asserted that Ruby on Rails could not be used effectively by some programmers.

The point of the story was to show that the MIT-trained programmer with 20 years experience and an enthusiasm for the latest and greatest ended up building something that underperformed something put together by people without official CS training who apparently invested zero time in exploring optimal tools.

Could some team of Rails experts have done a better job with mitgenius.com? Obviously they could have! But in the 2+ years that our MIT graduate worked on this site, he apparently did not converge on an acceptable solution.

My enthusiasm for this story has nothing to do with bashing Ruby or Rails. I like this story because (1) it shows the fallacy of credentialism; a undergrad degree in CS is proof of nothing except that someone sat in a chair for four years (see http://blogs.law.harvard.edu/philg/2007/08/23/improving-unde... for my thoughts on how we could change the situation), (2) it shows what happens when a programmer thinks that he is so smart he doesn’t need to draft design documents and have them reviewed by others before proceeding (presumably another set of eyes would have noticed the mismatch between data set size and RAM), (3) it shows that fancy new tools cannot substitute for skimping on 200-year-old engineering practices and 40-year-old database programming practices, and (4) it shows the continued unwillingness of experienced procedural language programmers to learn SQL and a modicum of RDBMS design and administration, despite the fact that the RDBMS has been at the heart of many of society’s most important IT systems for at least two decades."

That is exactly what I understood from the article.

I don't see any Rails bashing in the original article and you would have to cherry-pick phrases to get that idea. I read the HN comments first and thought Phil had gone off on a rant against RoR, to judge from some comments here.

That will teach me to read HN comments before reading the original article!


It doesn't show any of that because it's all made up. He just slapped together a story that would appeal to someone like you based on your preconceptions, but there is no actual argument. The whole thing could be reduced to "idiots can't write software" and it would lose no substance.

Even some of these points that are supposedly common-sense engineering wisdom are specious. Do you need to draft design documents to build a workable product? Of course not! Is the first thing you should do when you start a new website to buy $20k worth of hardware? No! Do you need enough RAM to hold your entire database? Maybe it's the best optimization you can do, but it's far from a foregone conclusion.

Why am I so vitriolic? Because the article is not truthy. The quote above says "MIT-trained" in the same sentence as "without official CS training." Uh, it doesn't get much more official than MIT. Suggesting that a programmer with 20 years of experience couldn't get a single web page to load faster than 5 minutes is a flight of fancy, plain and simple.

I might as well write a long-winded story about how Microsoft hired a chimpanzee to program the next version of Word and failed; therefore mammals make terrible programmers, and furthermore the chimp decided to use C based on ill-informed simian whimsy.


Critical reading skills here are a lot lower than you'd think, especially for comments with dozens of upvotes.


I think the only reason it's still in there at the end is just so the site doesn't have to be rewritten. He's just changing the hardware to something beefier instead of optimizing it.

It's sad that Rails, by default (though that may not be the case anymore), makes n+c requests to the database when you load a page that lists n objects. It can be fixed in one line of code per page.
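
For reference, a minimal sketch of what that one-line fix tends to look like in Rails 2-era ActiveRecord (the model names here are hypothetical):

  # Without :include, this issues one query for the posts plus one per author (the n+1 pattern).
  @posts = Post.find(:all)

  # With :include, ActiveRecord eager-loads the association instead.
  @posts = Post.find(:all, :include => :author)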


"Using Rails doesn't prove your smarts" was more clearly stated.


Using something that's good and not trendy suggests intelligence. Using something that's good and trendy does not suggest anything meaningful.


Well, actually, that's not true. You use the best tools for the job. The front end can be done in PHP/Ruby/Django quickly while the entire back end can be written in Java or Lisp. It all depends on what you need and what is the fastest, most scalable way to accomplish it.

That is true intelligence.

In my eyes Good and Trendy vs Good and Not Trendy is irrelevant. They are both good, so solve the problem.

Oh and "rails does not prove your smarts" should be: "Using technology X does not prove your smarts." because there will always be another X. Today its Rails, tomorrow its Open Sails, and the day after we all go back to Lisp.


My point, put more directly, is that using something good only because it's good suggests intelligence.

My previous post only applies to the perspective of an outside observer who must guess at the reasons for a given technology choice.


"pull entire tables into Ruby, the most beautiful computer language ever designed"

I read that as sarcasm. He is a Lisper, after all. I think his point is that he does know less than nothing, but he still came up with a solution, at least for the time being.


This is Greenspun we're talking about. He has been writing large scale web apps since most of you were in your Turbo Pascal diapers.


I recall having an email conversation with him about PHP vs his TCL home grown stack years ago. His response said only amateurs would use PHP (Are Facebook and Yahoo amateurs?).

Anyway I use Rails now and he long ago lost my respect for being too narcissistic and closed minded.


What's your point? That he's a dinosaur and totally out of touch with current technology? If so, I totally agree!

edit: Thanks for the downvote(s), but seriously, what does it matter that Greenspun wrote this? The article is still seriously flawed.


He might be out of touch with the gadget side of web applications (can phillg make rounded corners with Processing.js or Raphael.js or Shoooes? I don't think so.) but his server architect credentials are untouchable. The fucker started a social networking site for photographers in 1993. He taught web application development at MIT and the book he coauthored is still used today.

And if you think there is anything "modern" to scaling web applications, you would be wrong. When push comes to shove, everybody reaches out to their Unix system call manpages and pulls out 20+ year old profiling tools. It's the front-end that's sassy ;-)


I agree some tools never go away, but I hope you're not literally saying that no advances have been made in scaling techniques in the past decade or so.

Almost a decade ago I worked on a large scale site that ran on PHP3, MySQL 2 (before it had replication), and depended on a hardware load balancer. Today, between RoR, Memcached, Amazon AWS, etc. etc. I can build and scale out an app 10x faster and cheaper than I would've been able to a decade ago. (Hey, isn't this the fundamental rationale behind YC?)

Even ignoring new software, the availability of cheap memory and gigabit networking alone informs different architecture decisions than in 1993.


Server performance hacks age the best out of all the performance optimization techniques I can think of. The same socket descriptor multiplexing tricks that were used to make news servers pump gigabytes of alt.binaries.* a day, in that age of "megabit networking", can be used today to push HTML and a few PNGs very comfortably. Why? Because network capacity, memory size, and processor speed are growing orders of magnitude faster than network service consumers are being created, making the old network performance hacks far more powerful today. (I'm focusing solely on human users here, though even software "users" can be accommodated with more intelligent "push" architectures that allow the server to deliver content to its subscribers at its earliest convenient time.)

Also, changes in kernel architectures and the addition of faster and faster system calls only make it better, not different.

Unix (mostly BSD) FTP servers were at some point in the not-so-distant past the only places to get software; Simtel.net ran on ONE server with no load balancing. Email? anon.penet.fi scaled really well. Not too long ago, single Unix servers were household names and their admins and systems hackers were Gods.


"single Unix servers were household names and their admins and systems hackers were Gods"

Back in those days, akebono.stanford.edu was as famous as Britney Spears, Segways dominated the roads, and women outnumbered men in CS departments.


Indeed. And the trolls were tenured PhDs.


With programming pop culture being so large today, why aren't the greybeards (and/or their ideas) a larger part of it?

I would love to be reading more oldschool stuff on sites like this, but "ruby + cloud" style stuff is news.

And for me, at least to some degree, what I read is what I use.


Because there is no $ in hyping the tried and true.


There is if you rename it and market it correctly.


Yeah, okay, but this article should be titled "Virtualization and the importance of being stupid"

This has next to nothing to do with Ruby or Rails. He's right that you should go with the options that work, but the key lesson here isn't that mongrels don't work, it's that shared hosting sucks for their purposes.

Also, ActiveRecord isn't a replacement for SQL. It is a convenience layer on top of SQL. I love my ORMs (and i've enthusiastically moved to DataMapper), but guess what i was doing today? Yep, that's right, i was writing SQL :P


When anyone says "XYZ technology can't ABC" it's usually not a fault of XYZ technology but a lack of that anyone's understanding of the technology. Ruby can scale. Java can be fast and Microsoft can be secure.

It's external vs internal motivation. In the first case, it's the world's fault if it doesn't work. In the second, it's due to a lack of understanding of how the world works.


I wouldn't consider "at most 1 user at a time" a service that needs "building a scalable web service" skills!!


"Bad programmer writes bad code"

This is hardly a new idea. The fact that a bad programmer used Ruby on Rails, or Django, or PHP, or C++ to write bad code and implement it on a shoddy system is no reflection on anything. This is essentially a story of someone who took good advice for a hosting environment, and someone who took bad advice for a hosting environment.

Learn your tool, don't buy into the hype. Make sure you are aware of the reason behind everything that you do (because "I read it on a blog" is not a reason). Don't be a bad programmer.


Simply knowing particular languages has been pointed out on sites not too far away from here as a sign of a Good Programmer. This counts as evidence to the contrary. But not absolute evidence; I guess we all still have to think for ourselves.


I could understand an argument that knowledge of certain languages is correlated with being a Good Programmer.

This guy seems to have heard about the cool tools of the day, jumped in, and failed mostly due to a lack of experience.

One great lesson of experience is that there is often a simple solution to complex problems.


"Configure the system with no swap file so that it will use all of its spare RAM as file system cache."

That will do EXACTLY the opposite of what he wants. Give it a swap file and unneeded parts of memory will be swapped out to disk, freeing memory for use as a file system cache.

There is _never_ a reason to configure a system without a swap file - except if it's a laptop and you don't want the disk to spin up.

Don't want the system to use swap space? Don't allocate more memory than you have (or buy more memory), but disabling the swapfile never helps. In some cases disabling it doesn't hurt anything, but it never helps.


Why should it ever be acceptable for someone to sit down in the morning to a machine with 4GB+ RAM and have things like a volume OSD take tens of painful seconds to swap in because the night before the machine ran 'updatedb' and the system decided to swap out a few bits of "unused program memory" with a useless cache of the entire disk index?


Then set /proc/sys/vm/swappiness to 0 and be happy.

But actually Linux never ejects pages to swap just for the cache. It will only eject pages under memory pressure, and the memory freed can then leave extra room for cache; it won't eject pages in the first place just to grow the cache.


Since you know it's useless, why not just turn off the updatedb cron job?


Because updatedb isn't inherently useless; the point is rather that file system caching (especially for pages read once) shouldn't eject application pages from RAM to disk.

Being able to run locate and have it quickly return an accurate result is useful at times. I just don't want it paging my entire session out to disk every night in order to do that.


"I just don't want it paging my entire session out to disk every night in order to do that."

It doesn't.


Is there any case where not having swap hurts?

Assume the RAM alone is enough for the heaviest load, no memory leaks, no exhaustion... that is, the server is never out of RAM.

Hmmm, does the HN server use swap?


http://www.openbsd.org/faq/faq14.html

14.4.1 - About swap

Historically, all kinds of rules have been tossed about to guide administrators on how much swap to configure on their machines. The problem, of course, is there are few "normal" applications.

One non-obvious use for swap is to be a place the kernel can dump a copy of what is in core in the event of a system panic for later analysis. For this to work, you must have a swap partition (not a swap file) at least as large as your RAM. By default, the system will save a copy of this dump to /var/crash on reboot, so if you wish to be able to do this automatically, you will need sufficient free space on /var. However, you can also bring the system up single-user, and use savecore(8) to dump it elsewhere.

Many types of systems may be appropriately configured with no swap at all. For example, firewalls should not swap in normal operation. Machines with flash storage generally should not swap. If your firewall is flash based, you may benefit (slightly) by not allocating a swap partition, though in most other cases, a swap partition won't actually hurt anything; most disks have more than enough space to allocate a little to swap.

There are all kinds of tips about optimizing swap (where on the disk, separate disks, etc.), but if you find yourself in a situation where optimizing swap is an issue, you probably need more RAM. In general, the best optimization for swap is to not need it.

In OpenBSD, swap is managed with the swapctl(8) program, which adds, removes, lists and prioritizes swap devices and files.


This is a slam on virtualization, not Ruby/Rails. Bad title, but the overall point remains: When you buy a "slice" of something, you have no idea what you are really buying. If you buy a piece of hardware with 32GB of RAM, then that's what you get. And if you know what you are doing, it is going to be much cheaper than buying "slices." In other words, the whole pizza is always cheaper than if you were to pay for 8 separate slices.


Most virtualized infrastructure providers give you some guarantee of how much RAM, CPU, and diskspace you will have available. You may be able to exceed the limits from time to time, but the minimums should be clear when you sign up.


That's not the point; the point is that if you buy 32 1GiB 'slices', even if you are getting completely fair slices that really have 1GiB of RAM, they are going to cost more than buying one server with 32GiB of RAM. (Now, in some situations, 32 1GiB slices are going to be more useful, but one 32GiB server is going to be cheaper than 32 1GiB slices.)

I sell slices myself, and this is something that should be clear to everyone: the reason you'd want to buy a slice from me (or someone else) is that a 1GiB slice from me is a lot cheaper than a dedicated server with 1GiB of RAM.

Now, some VPS providers have really cool provisioning systems that add a lot of value (I'm still working on mine... but I'm not there yet), but those are not unique to virtualization. You can build really cool provisioning systems to work with hardware servers as well. The win from virtualization is that 8 cores/32GiB RAM is the most economical server configuration at the moment; with virtualization you can slice that up into smaller servers for people who need less than 32GiB RAM / 8 cores, and save them a bunch of money.


I haven't seen any that guarantee disk access latency.


Probably because it would be impossible without having a dedicated physical disk for each slice on the machine, or some kind of really fancy network storage array.

Maybe that will change once SSDs become the default hardware. Without the bottleneck of seek time, latency should be much easier to quantify. Also, since you wouldn't be penalized for "context switching" (I know this term doesn't really apply to disk IO, but I mean switching disk jobs often, which requires a head move on HDs), you could maybe someday slice up the SSD time like a CPU and guarantee it directly. (For instance, if the SSD is capable of 200mbps, your slice could be guaranteed 10mbps. Or something more technically realistic; I am but an amateur.)


Judging by some of the comments here, it seems people are giving Greenspun a free pass because he's apparently getting at a deeper point. However, when I read this article, it is chock full of straw men. The comparison between a competent Microsoft programmer and a complete bumbling fool labeled as an MIT Genius is at best intellectually dishonest. I wrote a lengthy response which I'll post here in case his moderator decides he doesn't like it:

I read the moderation policy where it's suggested that reviews of the post are not valued. However, I feel an obligation to point out the factual errors in this post. There are dozens of nonsensical assertions, and they could potentially be very misleading to anyone who doesn't understand Rails or web development in general.

My first general critique is that there is no real comparison going on here. It says the business guy called up Microsoft and they recommended buying a bunch of hardware, but there's no discussion of who developed the site or how they got up and running. There's no discussion of the price of the hardware, which clearly looks to be well into the 5-figures, or the price of the fiber connection at home, system administration, backups, etc. To get into some specifics:

The programmer, being way smarter than the swaptree idiot, decided to use Ruby on Rails, the latest and greatest Web development tool. As only a fool would use obsolete systems such as SQL Server or Oracle, our brilliant programmer chose MySQL.

This is a caricature of an "MIT Genius" that doesn't jibe with reality. Anyone who was actually that smart would know better than to dismiss Oracle in favor of MySQL. They may prefer using Ruby on Rails and be more productive than if they used .NET, but they wouldn't go around calling people idiots for such superficial reasons. Therefore you're not describing an actual genius, just someone who thinks they are a genius but is actually a fool. Using such a person as the basis for an argument that Microsoft's recommendations are better than Rails is intellectually dishonest.

How do you get scale and reliability? Start by virtualizing everything. The database server should be a virtual “slice” of a physical machine, without direct access to memory or disk, the two resources that dumb old database administrators thought that a database management system needed.

The reason that virtualization is done in the web deployment world is so that you can get access to fast and reliable hardware, at less than the cost of the full machine, when you need only a fraction of its resources. A degenerate example would be that if your capacity requirements could be met by a 250MHz processor, you would get better throughput by using 1/8th of a 2GHz server. The reasoning for this is that the vast majority of sites don't need dedicated hardware, which you seem to imply is cheaper, but clearly it is not if you are leasing server capacity.

Ruby and Rails should run in some virtual “slices” too, restricted maybe to 500 MB or 800 MB of RAM. More users? Add some more slices!

I'm going to assume you are talking about EngineYard here, since that is the managed Rails hosting provider I am most familiar with and is somewhat in line with your pricing figures below. First, the 500 or 800 MB is just a base amount of RAM that is good for most small Rails apps. When that starts to run out, the solution is NOT to add more slices; you simply commission more RAM. EY can do this without even restarting your slice. Incidentally, you can also commission more CPU if you need it. The reason they start with two production slices is for redundancy. One of your slices goes down for some reason? That's okay because there's a backup.

The cost for all of this hosting wizardry at an expert Ruby on Rails shop? $1100 per month.

What you described above is a very poor description of what you are paying for at a managed hosting provider like EngineYard. I will describe managed hosting in a minute. But to compare to your unmanaged Microsoft example, I currently pay $8/month for 256MB of unmanaged hosting that is plenty to serve significant traffic on a well optimized app. This is an order of magnitude less than the Verizon FiOS line alone, and provides much better network connectivity (i.e. multiple tier-1 connections, lower latency to more endpoints).

With managed hosting at EngineYard, you are not just paying for the server. You are basically paying for a full-time system administrator. They have people all over the world ready to help you at a moment's notice any time of day or night. They proactively monitor your server and contact you if they notice any abnormalities. They provide a large suite of finely tuned recipes and standard software installations that they can install on a moment's notice, and will tie into their monit-based server monitoring setup. The individual machines in the cluster are optimized for their specific tasks. The network hardware and topology is optimized for real world usage scenarios. They continuously tune the machines for throughput and move clients around to avoid bottlenecks. They will even take significant steps towards helping the client tune their own application, above and beyond their contractual obligations for server administration. In short, you've completely ignored 95% of what they do, and painted it as extremely expensive without even providing a comparison against the overhead costs of buying and managing your own servers.

For the last six months, my friend and his programmer have been trying to figure out why their site is so slow. It could take literally 5 minutes to load a user page. Updates to the database were proceeding at one every several seconds. Was the site heavily loaded? About one user every 10 minutes.

If a request on an unloaded server takes 5 minutes to load, and the programmer cannot figure it out in 6 months, then that programmer is incompetent, plain and simple. Laying this at the feet of Rails is just plain ridiculous.

I began emailing the sysadmins of the slices. How big was the MySQL database? How big were the thumbnail images? It turned out that the database was about 2.5 GB and the thumbnails and other stuff on disk worked out to 10 GB. The servers were thrashing constantly and every database request went to disk. I asked “How could this ever have worked?” The database “slice” had only 5 GB of RAM. It was shared with a bunch of other sites, all of which were more popular than mitgenius.com.

Are you implying that you need enough RAM to keep the entire database in physical memory? That is patently false. In a worst case scenario, yes, it could take performance down quite a bit, but disk access is not nearly as slow as implied above. I've served tons of sites on pure shared hosting (not even virtualized) with much higher load and orders of magnitude better performance than you are describing here.

How could a “slice” with 800 MB of RAM run out of memory and start swapping when all it was trying to do was run an HTTP server and a scripting language interpreter? Only a dinosaur would use SQL as a query language. Much better to pull entire tables into Ruby, the most beautiful computer language ever designed, and filter down to the desired rows using Ruby and its “ActiveRecord” facility.

This is nonsense Philip. Please don't take this as an ad-hominem, because there's no other way to put this. What you described here is 100% pure nonsense. ActiveRecord, like any ORM component, abstracts away some SQL in order to simplify common database interactions. The lion's share of ActiveRecord code is all about constructing efficient SQL. When you are developing with Rails it shows you all the SQL running in the development log, and you can quickly spot n+1 errors. If you need something more efficient, it offers plenty of levels of access right down to pure SQL.
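
For instance, a rough sketch of that lowest level of access (Rails 2-era API; the model and column names here are hypothetical):

  # Hand-written SQL, still returning ActiveRecord objects.
  recent = Item.find_by_sql(["SELECT * FROM items WHERE created_at > ?", 1.week.ago])

  # Or bypass object instantiation entirely for a simple aggregate.
  total = Item.count_by_sql("SELECT COUNT(*) FROM items")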

In reviewing email traffic, I noticed much discussion of “mongrels” being restarted. I never did figure out what those were for ... What am I missing? To my inexperienced untrained-in-the-ways-of-Ruby mind, it would seem that enough RAM to hold the required data is more important than a “mongrel”. Can it be that simple?

I'm shocked that a programmer would speculate so wildly as to say something like this. A mongrel is an application server. I don't understand what you seem to think it is, but it's simply the process serving up Rails requests to the web server, which are then passed through to the client. Typically you run more than one so you can serve multiple requests concurrently, but for a well-optimized app usually no more than 3 or 4 are necessary. Rails uses a non-threaded share-nothing architecture which means you can scale horizontally across unlimited servers. Note that I am not talking about virtualized servers. I'm talking about when you have more traffic than the biggest server in the world can handle: Rails will let you scale out painlessly at the web server level until your database cannot be served by a single box. At that point you need to look at database sharding, or alternative data stores using Map-Reduce or some other scalable database solution.

None of this is to say Rails doesn't have its warts. Ruby is memory hungry, leaky, and relatively slow. Deployment has traditionally been very complicated compared to something like PHP (although it's much improved with Phusion Passenger aka. mod_rails for Apache/Nginx). There are many reasons why you would be well-advised not to use Rails, however this article doesn't touch on any of them. Rails, just like Oracle, .NET, Java or many other technologies is a proven platform with pros and cons. In this article you pit an apparently competent programmer developing swaptree.com against what can be described as nothing less than a complete bumbling idiot using Rails. You insist the cost of Rails is high without any justification or direct comparison against the costs of swaptree.com.

I've read your blog in the past and found it to be pretty interesting, which is why I've taken the time to write this response, and suggest politely that you retract this article.


You are glossing over a lot of sarcasm, I think.


Fair. However it's really hard to isolate the sarcasm because there's no real factual meat to the article.


My short reaction: Good lord, you know all that and still feel like the article is targeted at you? Or at Rails as a technology?

Long reaction: You missed the point of the article, which is that keeping on top of the latest and greatest technologies is almost never necessary, and it is never sufficient under any circumstances. You don't have to know what a mongrel is. You do have to understand the orders-of-magnitude difference between different levels in the memory hierarchy. (RAM is much better than disk -- a simple, stupid fact that people ignore all the time.) There are lots of people running around with credentials and hot technologies who don't know what they're doing, and there are lots of young people who worship those guys and spend their time running after trendy stuff because they haven't yet figured out the difference between learning technology and deciding what to wear. (Which might not be as bad as relying on engineering principles to choose your wardrobe. Hmmm, personal food for thought.)

Sure his article isn't particularly original in intent or execution, but the need for this article is perennial. You have to keep updating it because it's aimed at people who only pay attention if you talk about the current latest and greatest. That's why Rails was the perfect victim -- that's where his target audience is right now. (And the Microsoft stack is the perfect frumpy foil to Rails.) Not that the Rails community doesn't contain other kinds of people; it evidently does, or posts like yours wouldn't exist. But it is also The Trendy Thing and is therefore cursed with attracting the naive my-favorite-band-is-better-than-yours types who think "follow the buzz" is the successful strategy for all domains of life.

Fast-forward ten years, and I'm sure he'll have written the same article with the blanks filled in with another hot technology. Which is a good thing.


Agreed. To get a feel for his style: http://philip.greenspun.com/careers/


So first of all, the article takes two degenerate cases and stereotypes and generalizes them to the extreme.

Part of the problem is that if he was dealing with EngineYard, they definitely have some issues with their infrastructure. After having hosted with them, I don't recommend clients use them any more. They are more marketing than technical substance. One of the critiques that he makes is that their database infrastructure is on a shared architecture - which unfortunately is true. They separate all of their front end slices out, but all of their databases are on a shared architecture. Unfortunately with most apps that have scaling issues, those issues are related to database access - which makes that the exact worst part of the system to be shared. Without going into too much more EY bashing, they sell you on the idea that you have a full time admin working on your site, but the reality is vastly different. Personally, I've had much better luck using SliceHost than EngineYard, but YMMV.


I've been working with EY for two years. I've also had a few SliceHost slices over the years. I've also worked with a dozen other hosts ranging from managed Rackspace servers through Xen and Virtuozzo VPSes down to shared hosts, administered my own Linux server on Internet2 at a university, worked for a full service agency that resold white label hosting to over 1000 clients, and I've been building websites since 1994.

The fact that you would compare SliceHost to EngineYard is very very fishy. These two companies are both excellent, but they are not providing services that can even be compared to each other. What I said about EngineYard is not based on "marketing", it's based on considerable experience working with them. I have put them to the test and they have never come up short. I've worked with techs in 10+ countries at all hours to solve problems, and solve them quickly. I think it borders on libel to say they don't have support on hand 24 hours a day; they absolutely do. Rackspace is the only company I've used that matched that level of service and expertise. Try asking SliceHost for help with your server admin; it's just not available. Ezra Zygmuntowicz, one of the founders of EngineYard, literally wrote the book on Rails deployment. They have significantly sponsored Rubinius and Passenger development. They made TextDrive (aka Joyent, originally the "official" Rails host) look like amateurs. Their cap recipes gem and stock monit scripts are more comprehensive and reliable than anything I've seen anywhere else.

As far as the database sharing issue is concerned, the purpose of sharing is to allow you to save money by using only the resources you need. It's not true that all database machines are shared. They have spec'ed them out and refined them based on the resources needed by the actual usage of their clients. If you get to the point that you actually need a dedicated database server then they will gladly provide that to you. At that point you will probably need multiple dedicated app servers as well, though of course that depends on the particulars of your app. In any case, I can't think of anyone that can build you a better Rails cluster. Sure you pay a premium for their expertise, but that expertise is currently second to none.


My comparison of SliceHost to EY is also based on actual experience deploying large scale sites. EY claims to offer all the services in the world, but at the end of the day the technology they delivered was sub par. I understand all of the things you say about Ezra writing the Rails deployment books and their support of the community. None of that changes the fact that their offerings are vastly overpriced for what they deliver.

I've been building web solutions since 1994 as well, and have built dozens of web sites for Fortune 50 and better companies. I fully understand what it takes to scale out an architecture to 30,000 transactions per second at 30% average CPU utilization, or to deal with 70 terabytes of text and images in sub-second response times. I've built arbitrarily deeply nested hierarchies with hundreds of millions of items that have to return their result sets in sub-second response times. I don't mention this to brag, but to illustrate that I have substantial system engineering and architecture background.

I'm not sure how you can say that comparing SliceHost to EngineYard is "very very fishy". It depends on your perspective I guess. EngineYard seems to cater to companies that don't have strong in-house database talent, or who aren't comfortable with certain things like creating a Capistrano deployment file or basic sysadmin tasks. If you are comfortable with those things though, when you compare what they do with what they claim, they fall very short imo.

The thing with SliceHost is that they focus on one thing, which is delivering virtualized resources. And they deliver them fast. You can have a new slice up and running with SliceHost within minutes, where with EngineYard that same task takes weeks. On multiple occasions we had to escalate to one of the company owners before we got a new slice created.

So perhaps for my background, and for the needs of the companies I've been at, maybe EngineYard's services weren't a great fit. I'm willing to give them the benefit of the doubt. If you just need virtualized resources though and are comfortable building out your own architecture from there, EY may not be the best fit.

I do find it interesting that you mention Rackspace's service, considering they bought SliceHost some time ago.


If you have the manpower and talent to administer your own servers then clearly EY is overpriced. However the cost of acquiring said talent is significantly higher, and much riskier for smaller companies. My experience is not at as big a scale as yours, but I can see how there are increasing economies of scale of self-administration (and purchasing your own hardware, etc) as you get bigger.

However, my beef is with saying that you had better luck with SliceHost than EY, which to me is a nonsensical comparison. SliceHost doesn't offer any of what you're paying for at EY. However, your explanation clarifies things significantly for me. EY is not very good at SliceHost's core competency; I'll tentatively agree with you here since I haven't done a lot of slice commissioning on EY.


+10 great post. Would be interesting if he replies.


He did not reply. He deleted the comment (along with ostensibly dozens of others), and you can see his justification in comment #8. Clearly he did not address any of my points, and is not interested in an honest discussion.

The guy has lost all my respect. If he deletes this kind of comment then what else is he deleting? I thought a number of his articles were quite good, but I can't trust someone who deletes comments that were based on this much consideration and experience. As far as I'm concerned he's an intellectual hack and I'll be avoiding his site from now on.


Honestly, were you expecting otherwise?

The kind of guy who would sincerely reply to your post would not have written the original article in the first place. Intellectual honesty does not seem to be one of his priorities.

Disappointing, but not surprising given the things I've seen written about him.


I would add that the original article is the perfect example of why a top-down argument is not a valid argument. In general, you should never start from examples and work your way down to a bottom line - as this post shows, you can easily make correlations that are flat out wrong or can be explained in other ways that are more substantial.

If you want to learn how to properly draw conclusions from examples, learn statistics. Stats is all about making sure you have sufficient evidence (large sample sizes, small p-values) to back up a correlation.


"For the last six months, my friend and his programmer have been trying to figure out why their site is so slow. It could take literally 5 minutes to load a user page. Updates to the database were proceeding at one every several seconds. Was the site heavily loaded? About one user every 10 minutes."

I would have replied with "My staff sent me an internet last Thursday and it only arrived this morning", but you get points for restraint.


RAM is King for most workloads, and he's right, it's ridiculously cheap. If you need more than 16GiB of RAM, it makes a lot of sense to buy your own hardware and co-locate it.

Now, I disagree about keeping it in your basement, at least once you have users on it. Co-locating a 2-CPU box is going to cost you around $100/month, and you get much better connectivity than DSL at that price. DSL isn't much cheaper, if you get a reasonable uplink speed (at least around here), and I don't know about you, but the power in my house (I live in California) isn't exactly /enterprise grade/.

But yeah, I see a largely untapped market for renting high-RAM otherwise-cheap servers; I've rented out one 32GiB server with a bunch of drives (and some slow CPUs) to a guy for $1200 setup and $175/month... once my next load of servers is up and built, I'm thinking about chasing that business model again, if I can build the servers faster than I get new VPS signups, anyhow.


The problem is not that you need better hardware for this scenario, you just need a better programmer.

"Not helping matters was the fact that the sysadmins found some public pages that went into MySQL 1500 times with 1500 separate queries (instead of one query returning 1500 rows)."

Looks like someone forgot to use :include on some finders. Let's say you have 1000 users with an address each. This will produce 1,001 SQL queries (one for the users, plus one per address):

  User.all.each do |user|
    p user.address.street_name
  end
This, however, will only issue two queries:

  User.all(:include =>:address).each do |user|
    p user.address.street_name
  end


Well, if you know your way around RoR, the first thing you do after most development is done is to load the query_reviewer plugin to get automated profiling on each page, and take steps to limit the number of queries - either with memcached or by adding indexes. It's not unusual to go from a couple of hundred queries per page to less than 10 per page in a day of optimizing.
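
For the index part, a minimal sketch of what that typically looks like in a Rails 2-era migration (the table and column names here are hypothetical):

  class AddIndexOnCommentsPostId < ActiveRecord::Migration
    def self.up
      # Index the foreign key that the per-page lookups filter on.
      add_index :comments, :post_id
    end

    def self.down
      remove_index :comments, :post_id
    end
  end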


The classic ActiveRecord 1+N trap.


Ruby on Rails and the importance of being COMPETENT.


Interesting read. I work with Ruby on Rails myself but I'm not that knowledgeable about scaling it.

It seems to me they didn't really know what setup they were running if they are wondering what a 'mongrel' is.

I hope they weren't trying to serve the site on only a couple of mongrels.

My first thoughts,

Benchmark a bit, use a tool like fiveruns to find out what's really happening. I wonder what the real bottleneck is.

Of course they shouldn't use a shared database server, but I'm wondering if they are using caching? From what I'm reading about that site, I think they could cache the hell out of it. I've only used the built-in Rails cache methods, but a tool like memcached should help out on all the database requests.
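
As a rough illustration of the kind of caching I mean (a memcached-backed Rails.cache, available in Rails 2.1+; the model and key names here are hypothetical):

  # Inside the Rails::Initializer block in config/environment.rb,
  # point the built-in cache at a local memcached instance.
  config.cache_store = :mem_cache_store, 'localhost:11211'

  # In a controller or model: cache an expensive query for ten minutes.
  @popular_items = Rails.cache.fetch("popular_items", :expires_in => 10.minutes) do
    Item.find(:all, :order => "created_at DESC", :limit => 20)
  end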


Thankfully there are numerous parties out there that will handle the scaling issues for you (heroku.com, engineyard.com) so you can concentrate on what's important, the app itself.


If the application is performing idiotic operations such as "[querying] MySQL 1500 times with 1500 separate queries (instead of one query returning 1500 rows).", merely moving to a better provider or using more powerful servers won't help. A modern web-based application is very complex -- the programmer has to understand how the database and application communicate, how HTTP works, how to cache data, etc.


As a RoR developer I have often been hired in to optimize various services. One of the biggest issues I have seen is this exponential growth of queries.

I have routinely seen requests at clients with 2,000-3,000 queries (even with query caching). Most of them are small, but at that volume it doesn't matter how small and efficient each query is.

I love AR to death as an OR library, but it is extremely easy to get into these kinds of issues when you iterate over a large dataset and then call associations of associations without thinking too much about it. I don't think it is only an AR issue, conceptually I think it is true for all ORMs.

They can be a PITA to unravel and are often very hard to do in a clean, AR-like way in more complex data models. Normally you end up doing some fairly un-AR-like preloading, like you would do in a pre-ORM app, which, while ugly, works.


Agreed, I do all my dev with query trace in order to make sure I'm not looping back into individual queries.

To those that don't know: Rails doesn't load the children of AR objects by default, so if you do something like

Select * from books

and then iterate through the books and get books.author_name where the author data is a relational table, you're going to get a separate query for each row.


Usually the point where you could get into trouble would look more like: book.author.name in the given example.


This sounds like a simple eager loading fail. Viewing the log for any page hit uncovers these mistakes quickly.


If you're operating a top site/service, scaling is the most important issue you have. It's naive to think that you can get someone else to handle it for you flawlessly.


The article is partly about how a hosting service like that sucked.


Shouldn't the lesson to take away from this rather be that it's important not to be stupid? Clearly, the "MIT trained" programmer doesn't know what he's doing, at all.

I'd expect similarly disastrous results had he used the same 100% ignorant approach to any other language or platform.


Umm, it sounds like some tweaking of your ActiveRecord::Base.find calls with the ":include" parameter might improve performance by 10,000x as well.


This is a little funny. This isn't the first time Greenspun has been evangelizing the benefits of MS ASP (and now ASP.NET) vs. X technology (back then he was deriding Java). The structure of the post is the same: some sensationalism with some missing context (since he doesn't usually do his homework on what he is criticizing).

I guess some things don't change (much)


I think he is using the MSFT technology as the worst case or "caveman" approach and then contrasting it with situations where the person outsmarted themselves.


Classic case of overengineering the problem. I'd venture that it's a common enough class of mistake for those who have recently learned the hip/new/cool way of doing a thing. Thing = scaling a webservice in this example.


Seems to be about Ruby and Rails by coincidence.


This post spoke poorly about two concepts:

* query optimization
* robust hosting

Both of which are generic and have nothing to do with Ruby on Rails in particular.


I'd been puzzling why the name was so familiar and it's because this man is the source of the ultimate Smug Lisp Weenie quote...

Greenspun's Tenth Rule of Programming: "Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified bug-ridden slow implementation of half of Common Lisp."


ORM is rarely effective unless it is just used for CRUD. Most OOP programmers using ORM rarely use it efficiently.


ORMs are extremely useful. Many extensions/plugins/gems/whatever can be developed for an ORM. With ORMs not being tied to one database technology, extensions to the ORMs can reach a wide audience. Look at Rails' ActiveRecord. There are many extensions based on that ORM that tie into Rails, allowing developers not to have to recreate the wheel over and over. When using ActiveRecord I have seen how a simple oversight can lead to inefficient database use, but it makes that inefficiency pretty obvious and it can usually be easily corrected. ORMs can also take care of query caching and other optimizations for you. If you pay attention to what your ORM is doing for you, an ORM can be efficient and a giant time saver. BTW, what do OOP programmers have to do with things anyway? Do functional or procedural programmers use ORM more efficiently?


Troll bridge. Pay troll.


Trolls are usually anonymous.


Just built a service in PHP that can process 15,000 rows of data, with a complexity of about O(1000*(10|100)n), and it runs in 4.5 min. I was looking to beat 15 min. I expect to get that down further.

Right now I use a LIMIT of 100 to minimize SQL queries, and memory. I might try going to 1000 rows or about 10k in data, but in my experience when the DB has to return that much data you are not really gaining that much.

What do you think? Any other optimizations I should look to?


One optimization is to write big O notation in minimized form. It looks more impressive to write O(n).

And to try being helpful: do aggregation in the DB where possible, be sure indexes are good, play around with the number of rows you process at a time, use joins if appropriate to minimize queries, use EXPLAIN if your queries are expensive at all, and, depending on the database, look into using a cursor (maybe?).


Cursor?

I was using LIMIT and OFFSET; did you mean something else?

Update: I process 200k-300k rows in a few hours, and since the DB is on the same box as the PHP app, there is no need to make this more complicated.


Do all your work in the database...you should be able to crush that speed.


Hm. It seems the linked article is somewhat over-engineered for general consumption.


Choose appropriate tools for your task, not the task for available tools. MS fans, Delphi fans, Java fans, and now RoR fans think the second way - they love their tools.

The alternative approach is about using a mix of technologies and tools to complete actual tasks. Today's Linux distributions, which are actually a mix (if not a mess) of applications and tools written in every possible scripting language, are a good example.

In the area of web development the same approach works the same way - you can build different [sub]services and [sub]systems with different tools that are more appropriate for a particular task: REST-JSON key-value storage here, a classic SQL back end there, and so on.




