Web Framework Benchmarks - Round 8 (techempower.com)
174 points by curiousAl on Dec 17, 2013 | 165 comments



It's interesting to compare what the code looks like.

CPPSP (C++ Server Pages) which is putting up ridiculous numbers... here is the Single Query test:

https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...

It's quite different from the more typical implementations, where they all sort of look the same...

(Go) https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...

(NodeJS) https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...

(Gemini) https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...

Also interesting to compare it to C# / HttpListener... which would benefit from moving all the framework code out into a separate library;

(C#/HTTP.sys) https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...


Interesting observation regarding the differences between the EC2 and i7 results: the platforms at the top of the EC2 benchmarks are generally MongoDB + async IO Java, while the ones at the top of the i7 results are MySQL + heavily threaded (Go, Servlet, OpenResty). I think it's a pretty interesting result because it shows how much impact your choice of available hardware has on which platform would be best - and it's not a small difference either.

If you're going for an EC2/Digital Ocean setup with a lot of small instances, then you want to go with something like Vert.x or Node or whatever - while if you are deploying directly onto bare-metal, high-core/RAM servers, you'd be better off with something that is better at handling high thread counts - something like Golang.


Go has a low number of threads, usually the number of your cores or close to it. Goroutines are not threads.


If you have 8 cores, you'll get 8 threads and all your requests will be nicely distributed across the cores. This is why Go is up near the top for the i7 benchmarks. On the EC2 ones, there are far fewer cores, so the overhead of distributing them is much more pronounced. As you add more cores, you'd probably see Go pull further ahead of some of the competition. However if you're only ever going to be running Go on small instances as many people do, this advantage is actually a hindrance because of the added overhead. Not that it's necessarily a big issue or anything, it's just interesting to consider.
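
A minimal sketch of that model, assuming GOMAXPROCS is set explicitly the way benchmark implementations typically do (older Go releases default it to 1):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        // Run Go code on one OS thread per core.
        runtime.GOMAXPROCS(runtime.NumCPU())
        fmt.Println("cores:", runtime.NumCPU(), "GOMAXPROCS:", runtime.GOMAXPROCS(0))

        // Goroutines are not threads: many thousands of them are
        // multiplexed onto that small, core-sized pool of OS threads.
        var wg sync.WaitGroup
        for i := 0; i < 10000; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                _ = n * n // stand-in for per-request work
            }(i)
        }
        wg.Wait()
    }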

The point I was making is that your actual hardware and workload can turn this benchmark on its head. You may naively think you are upgrading performance by switching to a different framework/language, yet if you don't understand why each platform is getting the numbers it does you might end up rewriting your app and actually decreasing performance because of your server hardware.


I've been following these for most of the rounds, and Go has been improving impressively. Whether that's because of improvements in the language itself or a more zealous crowd sending pull requests, I don't know, but it made me want to try Go, so I did. It's not as comforting as the scripting languages (PHP, Python, JS) I'm used to. Having no REPL and having to think about types (arrays vs. slices, maps) takes a bit more getting used to than I thought. I find having a quick build script (mine's in vim) so you can compile+run and go back to the code quickly helps a lot. Also, http://play.golang.org/ isn't too shabby either.

It would be fun to see this project (https://github.com/TechEmpower/FrameworkBenchmarks) become more and more popular, with formidable developers squeezing out performance from their framework of choice.


Yeah, but I'm kind of confused, as it's my understanding that Go is not a web framework so much as a language. Is this just testing how fast Go can print out the string "{message: 'Hello World'}"? Or are they testing a specific component/library in Go? I mean, obviously having a language just spit out a line is going to be faster than having a full-blown framework such as Rails work through all of the query parsing, view building, etc., so it doesn't seem like a very fair or useful comparison.


It's included, alongside Go frameworks, for the same reason PHP/Ruby/ASP.NET are included -- so that you can see how much overhead the frameworks are adding compared to a minimal implementation in the language they're built on. The code behind every benchmark is available under the source code tab up top. The Go benchmark is using some JSON library, not just printing a string.
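
For reference, a rough sketch of what such a Go JSON test looks like using only the standard library's net/http and encoding/json (the actual implementation lives in the repository and may differ):

    package main

    import (
        "encoding/json"
        "net/http"
    )

    type Message struct {
        Message string `json:"message"`
    }

    // The handler serializes a struct with encoding/json on every
    // request rather than writing a pre-built string.
    func jsonHandler(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(&Message{Message: "Hello, World!"})
    }

    func main() {
        http.HandleFunc("/json", jsonHandler)
        http.ListenAndServe(":8080", nil)
    }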


The issue is that Go comes with enough included in the standard library that it could be considered a web framework. If you're just building a few REST endpoints or a simple site, you may not need to leave the standard library.

Compare that to a language like Python or Ruby, where you need something "extra" to make it easier to do a web application. You could certainly do with just the standard library in other languages, but very few would choose that option, because it would involve writing a lot of additional code.

I think it's fair to include Go, because it's a language/programming environment that comes with its own built-in web framework. A framework that's actually advanced enough that many don't need to look elsewhere.


Go is a language, but its standard library is very comprehensive for most things Internet related and you can build web applications quite easily using just that. You also have similar examples in the benchmark using nodejs and php, which aren't exactly frameworks.

I know there should be some overhead when using a framework, but sometimes the cost is too high and it's useful to know it (compare php and symfony2, for example).

You can look at the Revel benchmark, a web framework written in Go which did quite well in the benchmark.


Right, Go is a language. It does have a decent (built in) HTTP library, and is actually pretty fast as a webserver in itself. On the more conventionally robust framework end of things, I suppose Revel/Falcore would be a more appropriate comparison. (both of which have impressive performance of their own).


In the past I've noticed posters on HN picking on Rails by lazily linking to these benchmarks, but click over to the average latency tab and Rails looks pretty solid, with an average response latency of 1.8 ms. That's not at the very top, but it's far better than Django, which is a comparable framework and is near the bottom of the average latency table.

If anything to me this data confirms that Rails is an amazing tool because not only do you get to develop quickly, but you also get pretty good average latency (or at least the potential depending on what you add to your app in terms of 3rd party libraries). And what Rails isn't good at is throughput, which is almost never a problem for an early stage company.

Working at a startup it's a huge success if I ever have to handle a lot of connections to my app, but today and everyday, I want fast response times on a page load.


Fair warning: as far as I know, Wrk's latency measurement does not distinguish between 500s and 200s. For some frameworks, you will see unnaturally low latency because the front-end web server is providing a 500 response very quickly.


As bhauer pointed out, 500s are counted in those latency figures. If you look at the error count column, Rails does miserably in everything but the "single query" test, which is not a common use case.


Great point. I missed the tabs at the very top, there's a lot more information density here than I first realized. The errors in the multiple query case are worrisome.


That looks a lot like an error in the test setup to me, it seems the rails example hasn't been updated for a while.


I was shocked by the rails results, and the massive number of errors, so I looked into it a little.

The setup they're using is nginx serving 8 unicorn workers with a backlog of 256. They then throw requests at that with a concurrency of 20. The DB pool is 256 too. It seems to me quite likely that the unicorn queue fills up very quickly and it starts rejecting requests, which would show up as errors. It's hard to see how a maximum of 8 workers would ever get close to the 256 available DB connections.

At first glance the unicorn setup is totally inadequate for the amount of traffic being thrown at it. The first thing to do would be massively increase both the number of workers and the backlog, otherwise this almost instantly turns into an overflowing request queue and literally millions of errors.

There's no denying, though, that this kind of request flood is not exactly rails' strong point and if you're expecting massive numbers of fairly simple requests you're probably better off with something else.


Rails fares badly across the board. I'm afraid you're the one who's "lazily" linking in this case. Of course you can have good latency if you instantly 500, and process far fewer requests than the competition.


Did you miss the errors column? :/


> Working at a startup it's a huge success if I ever have to handle a lot of connections to my app, but today and everyday, I want fast response times on a page load.

Realistically, I doubt many humans can distinguish between 1 ms and 100-200 ms response times.


True, especially when accounting for Internet latency.

However, the purpose of this project is not actually to measure how quickly platforms and frameworks can execute fundamental/trivial operations. Rather, these tasks are a proxy for real-world applications. Across the board, we can reasonably assume that real applications will perform 10x, 50x, 100x, or even slower than these tests. The question is, where does that put your application? If your application runs 100x slower than your platform/framework, does that put your application's response time at 200ms or 2,000ms?

That's a difference users do notice.


This article by Jakob Nielsen goes into that a bit: http://www.nngroup.com/articles/response-times-3-important-l... (1993). He claims <100ms feels instantaneous.


HHVM and Dart seem to be the two new fast performers in town showing impressive performance in some tests. JS has been falling off the charts compared to some of the first rounds, but still a good option performance-wise. C# keeps sucking badly. I miss Nimrod/Jester, I always wanted to see it in the top 10.


> C# keeps sucking badly

I wonder why; the Fortune 500 sites we have built are handling the load quite well.


These benchmark tests for C# are run against MySQL or PostgreSQL on Linux. In the Fortune 500 setup you're probably connecting to SQL Server or Oracle in the back end for which Microsoft and DB vendors have optimized OLE-DB drivers.

That, and JSON serialization on .Net using default MS serializer is super slow. Everyone uses JSON.NET or another faster serializer in the real world.


We have SQL Server tests but they were not included in this round. They were last in Round 7 and we'll include them again in Round 9. Here is SQL Server in Round 7:

http://www.techempower.com/benchmarks/#section=data-r7&hw=i7...

Also, we have a JSON.NET test implementation thanks to community contribution. It's test #138 and named "aspnet-jsonnet" as seen on the following chart of C# tests:

http://www.techempower.com/benchmarks/#section=data-r8&hw=i7...


Why weren't the SQL Server tests included? Technical limitation, something else?


Just time. We were already two weeks late on Round 8 due to a host of other issues. We'd like to do one round per month if we can get our routine ironed out.

We'll make it a priority to get them run in Round 9.


Please don't be annoyed by my following comment, as I appreciate your effort and like the benchmarks very much (as a hacker - as a project leader I'd prefer the "enterprise frameworks" to perform much better :-) ): you should really avoid publishing incomplete benchmarks. They don't do justice to either the left-out or the included frameworks.


> You should really avoid publishing incomplete benchmarks.

If we had followed that advice, there never would have been a round 1. I was so uncomfortable with the idea that we'd be publishing surely-flawed benchmarks of frameworks we didn't really understand that I requested to be taken off the project (prior to round 1). It was only after seeing the post-round-1 discussions and the flood of pull requests that I realized I was wrong.

These benchmarks are always going to have flaws. I think it is better for us to regularly publish our best attempt than to try for perfection.


How about those results under the plaintext benchmark, run on Windows with no DB access to MySQL or whatever? Still very slow. The low-level libraries for HTTP/disk/etc. for C# on Windows or Linux are simply not set up for performance in general, and that's what this benchmark is reflecting. You're dismissing these results a bit too quickly and defensively, I think.


Those numbers might not match up to other frameworks, but they are by no means slow. ~29k requests per second (standard ASP.net) equals 2,505,427,200 requests per day.

That's far more capacity than anyone needs and if your site does ever reach the point where 2.5 billion people visit it per day then you can just put another box up and double your capacity to 5 billion requests per day.


This doesn't equal 2.5 billion people a day. One request does not equal a unique visitor. For most cases it doesn't even equal one page load. I don't know where the 29k/s number came from, but I guess it's a peak load from one of these benchmarks. It isn't realistic to expect a server to be consistently pegged at 100% 24 hours a day. The real number is going to be a tiny fraction of that.


Sure, as long as your website is just a tiny piece of plain text, and all your requests are distributed perfectly evenly over the course of the day, and every person only requests a single file and then leaves. But since none of that is even remotely close to realistic, your numbers are not either. Yes, 29k req/s is quite slow for a "serve a tiny static file" benchmark. No, you can not handle a billion visitors a day on one server with such low performance.


Yes, you guessed part of our setup, it is Microsoft all the way, except for the integrated SAP systems.

We only do mixed vendor stacks, in our JVM projects.


Probably because you threw Fortune 500 sized budgets and hardware at them. This is only really relevant to people who need to maximize limited hardware.


Might be, the servers are actually quite beefy.


Because performance can largely be displaced to outside the app code itself. Things like caching, load balancers and large numbers of front-ends make the optimization differences of these platforms largely irrelevant for large projects, and especially for large companies that don't mind throwing more money at increasing the number of front-end servers.

For smaller projects, or for companies/people with tight budgets, these performance tests matter more, though the biggest wins still lie in caching and load balancing, not in platform efficiency. This can depend on the nature of the application, though. Some have tons of cacheable content, some have tons of dynamic content.


>saving tens of millions of dollars is largely irrelevant

Why do people seriously say nonsense like this? Why do you think facebook spends so much money on hiphop/HHVM? Because yes, performance certainly does matter. Slow languages and slow frameworks cost tons of money.


Well, it's not irrelevant, but it's common practice. Of course it's bad practice. Seen it myself. Facebook and others might be able to choose the better approach, but a lot of badly managed companies/projects choose the last-resort solution of throwing millions of dollars at hardware because they're incapable of fixing the problems properly. Those projects are politically screwed, so they often sell their solution of buying expensive hardware as a success. Sounds ill, I know ...


Nice misquote. Put some numbers out.

The evolution of a project as I've seen it at large companies:

1. Build a slow app in a high-level language, build out your infrastructure with caching, load balancing, CDNs, etc.
2. See what is slow / can be optimized in the current language.
3. Re-implement critical pieces in a lower-level, more efficient language.

Facebook is building HHVM to eliminate step 3. You can have a bunch of people continuously re-implementing critical pieces, or you can have a smaller bunch of people make the higher-level language more efficient once and have all your projects, current and future, benefit. And you also save the money you were losing to the inefficient language.

I'd like to see the numbers on how much, say, using Ruby with all the optimizations - load balancers, caching, etc. - will cost you over using Java, also with optimizations. I'm not convinced by a random person's statement that it makes that huge of an impact, especially considering factors like finding programmers for optimized languages, productivity differences in programming in the different languages, etc.


Yeah, finding Java programmers is super hard. Not like there are literally hundreds of times more of them than there are Ruby programmers or anything. Facebook is building HHVM because it is too late to do it right in the first place. If they had started with a reasonably performant language they'd have saved millions on servers, plus millions on the developers writing HHVM. You are posting in a discussion about the very numbers you want. Go look at them.


Fortune 500 has little to no relation to a demanding website. Plenty of Fortune 500 companies aren't even in the top 1000 busiest sites.


The sites we have built have pretty high demand; I just described it like that because of NDAs.


That is a generally meaningless statement. You may have a hundred front-end and application servers to service a hundred users. Your users may be accustomed to tolerating very slow service times (e.g. most corporate systems inject several hundred milliseconds of delay for the most trivial of operations). Etc.

I've built plenty of .NET-based services, and generally it was very powerful hardware serving a relatively small user base, where expectations were much less demanding. And that's perfectly fine if the other benefits of the system (tooling, integration, etc.) work for the implementation.

For someone building a startup on a shoe-string budget, though, it has to be foreboding seeing such poor metrics when that directly translates into considerable additional hosting expenses.


I'd also like to see Nimrod in the results.


(Author of Jester here)

Unfortunately, I still have not had enough time to improve Jester (or this benchmark) so its performance is still at the stage that it was on in the previous rounds. Hopefully this will change soon. Of course help is always welcome, so if you want to see Nimrod higher in the results then please help us improve the benchmarks!


I'm currently deciding between learning Rust or Nimrod, having a good web framework would help Nimrod adoption.


Something to show how important the VM is: PHP managed to jump near the top simply because Facebook decided to pump money into it and wrote HHVM. Python has PyPy.

And yet Ruby has nothing.


Jruby?


These results are tempting me to do my next project in a modern lightweight Java framework. No Hibernate, bloated frameworks of yore, or weird complex build and dependency management. Play is ruled out - it's Scala (Java is a second-class citizen in Play).

Maybe something that ties together things like ebean ORM, Jetty, Jersey, Jackson, Guice. Dropwizard is the right idea, but is geared towards building REST backends.

Any suggestions on a pure Java framework that has critical mass and would fit the bill?


I had the exact same thought after Round 7, so I started Sparkler, to bring as much of the coolness of Rails to Java as I can. See https://github.com/tobykurien/Sparkler


Following up on my own question, there doesn't appear to be any that quite fit the bill right now, if we define the ideal framework as having the following characteristics:

* Java as a first-class citizen

* Strong core of basic web app functionality

* REST and Search engine friendly URLs

* Action oriented – basic framework for routes, MVC etc

* Stateless

* Good documentation, active community

If we look at action frameworks only:

* Play 2: Great except it's Scala. Ruled out.

* Spring MVC: Spring is bloated old-school Java with Hibernate. Out.

* Stripes: hasn’t had a commit in over a year… which is unfortunate because it looks interesting. Out.

* Spark: appears to be a one-person project. Out.

* Google Sitebricks – ditto

* Ninja: Ditto


> Play 2: Great except it's Scala. Ruled out.

The Play guys went to great trouble to ensure that both Java and Scala are fully supported. Perhaps consider being a bit more open-minded about your options. Scala is simply a more modern and flexible language, so I don't blame them for using it.


Grails? You just uninstall the GORM/Hibernate plugin. Controllers have to be written in Groovy, but everything else can be Java.


When Grails developers talk about the minimum of Groovy that must be used instead of Java in their Grails code, it doesn't paint much of a picture for Groovy's future. I've heard Gradle devs want to add Scala as an optional build language in Gradle 2, but is Grails thinking about moving away from Groovy as well?


Spring MVC with Spring 4 is decent; with the Java configuration, the amount of boilerplate is reduced to an acceptable level in my opinion (best case is a few annotations), and you are not forced to use Hibernate.

But in general I agree that at the moment there aren't a lot of web frameworks that fit your description in the Java world.


What did you find lacking in Java on Play? I've just started playing with it, and aside from the template engine (which I don't count) I haven't found any part of the Java support lacking vs. the Scala side.


In my brief try-out of it, it felt to me like if you want to do anything different (e.g. your own implementation of something), you pretty much need to switch to Scala. It's a Scala framework first and foremost.

A minor demerit was getting SBT and Play Java to work correctly in IDEA was enough of a pain to make me wonder how much overhead that was going to incur in the long run.

If someone who has used Play Java in production on a large project can weigh in on whether these points are true or not in real production use, I'd love to hear it.


We use Play Java in production, in fact it plays a central role in our backend. Our entire persistence layer is written in Java.

Going forward, our new code is all in Scala. Not because we ran into issues with Play + Java; we just got fed up with Java's verbosity when dealing with futures and actors.


Take a look at the Ninja framework (http://ninjaframework.org/). It's built on a great foundation.


You describe Dropwizard then rule it out; I don't follow why?


Dropwizard can do templates and views too. What are you missing?


I like the benchmark and I appreciate the work that was put into it, but Erlang is missing again.

If you don't even consider Erlang you won't miss it. But if you know it has some strengths for this kind of job and you don't mind the syntax, you'd like to see it compared to other solutions.


I just posted another response to this - we had some trouble with the package manager for Erlang after round 6. Additionally, I had been working on improvements for the suite specifically (better logging/reporting, etc) and did not get a chance to resolve the Erlang problems.

Rest assured, "get erlang running again" tops my 'todo' list for round 9.


While we're at it, any chance of you guys including some Elixir frameworks too? I'd love to see how Elixir performance is starting to take shape vs. native Erlang frameworks.


As far as I understand, Elixir compiles into Erlang code, so there should be no performance difference.


That's never been the case with other languages like Scala or Clojure that target another VM's bytecode; it's definitely not an assumption you can just make.


I have heard that string operations (and thus I suspect JSON parsing too) are slower in Erlang. Maybe it is not included because it was not built for raw speed but rather for stability, hot-swapping code, etc.?


String equivalents in Erlang are quite often binaries. There are also iolists, which are pretty efficient.

This blog post describes this in a lot more depth:

http://jlouisramblings.blogspot.com/2009/01/common-erlang-mi...


Erlang was included in round 6 of the benchmark.


Ah thanks, under Cowboy and Elli.


None of these numbers are significant! Give me something that tries hundreds if not thousands or tens of thousands of simultaneous requests. Then we have a real benchmark that will probably push a lot of these over the edge in terms of mean latency and especially tail/peak latency.


There has been a group of us consistently pushing for exactly this. The maintainers of the benchmark are exceptionally resistant to this idea... https://github.com/TechEmpower/FrameworkBenchmarks/issues/49 ... https://github.com/TechEmpower/FrameworkBenchmarks/issues/36 ... https://github.com/TechEmpower/FrameworkBenchmarks/issues/48 ... there are even more issues asking for a concurrency increase, just search for concurrency.

It is silly that such a rich and awesome set of benchmarks never pushes on concurrency, one of the major points of failure "in the wild" -- more common as you become the go-between for your users and some set of APIs -- users stack up on one side, waiting connections stack up on the other.


There is a very simple reason for this: we do not yet have a test that is designed to include idling. One of the future test types [1], number 12 on the list, is designed to allow the request to idle while waiting on an external service.

Until we have such a test type, there is no value in exercising higher concurrency levels. Outside of a few frameworks that have systemic difficulty utilizing all available CPU cores, all frameworks are fully CPU saturated by the existing tests.

With that condition, additional concurrency would only stress-test servers' inbound request queue capacity and cause some with shorter queues to generate 500 responses. Even at our 256 concurrency (maximum for all but the plaintext test), many servers' request queues are tapped out and they cope with this by responding with 500s.

The existing tests are all about processing requests as quickly as possible and moving onto the next request. When we have a future test type that by design allows requests to idle for a period of time, higher concurrency levels will be necessary to fully saturate the CPU.

Presently, the Plaintext test spans to higher concurrency levels because the workload is utterly trivial and some frameworks are not CPU constrained at 256 concurrency on our i7 hardware. As for the EC2 instances, their much smaller CPU capacity means the higher-concurrency tests are fairly moot. If you switch to the data-table for Plaintext, you can see that the higher concurrency levels are roughly equivalent to 256 concurrency on EC2.

For example, jetty-servlet on EC2 m1.large:

      256 concurrency:  51,418
    1,024 concurrency:  44,615
    4,096 concurrency:  49,903
   16,384 concurrency:  50,117
The EC2 m1.large virtual CPU cores are saturated at all tested concurrency levels.

jetty-servlet on i7:

      256 concurrency: 320,543
    1,024 concurrency: 396,285
    4,096 concurrency: 432,456
   16,384 concurrency: 448,947
The i7 CPU cores are not saturated at 256 concurrency, and reach saturation at 16,384 concurrency.

We are not against high-concurrency tests; we are just not interested in high-concurrency tests where they would add no value. We're trying to find where the maximum capacity of frameworks is, not how frameworks behave after they reach maximum capacity. We know that they tend to send 500s after they reach maximum capacity. That's not very interesting.

All that said, once we have an environment set up that can do continuous running of the tests, I'll be more amenable to a wider variety of test variables (such as higher concurrency for already CPU-saturated test types) because the amount of time to execute a full run will no longer matter as much.

[1] https://github.com/TechEmpower/FrameworkBenchmarks/issues/13...


Don't get me wrong, I am only annoyed because of the wonderful job you guys do... it seems like such a glaring omission... because IMHO, it is where stuff often actually "falls apart" in real life... and is some of the most useful information you can possibly have.

The "trapped between APIs" scenario is one of the concurrency stressing ones, as is slow clients with large content, as is websockets. As you tests show, A LOT of frameworks do a damned fine job with serving lots of requests quickly -- I think concurrency is a far more interesting differentiator.

Glad to see that most of what I want is "on the list": 11, 12, 15, 19. Would be nice to see an additional "slow clients" test with large content -- where the limit is how fast the clients can receive server data... meaning, the limit on the server is how many clients they can stack up and handle concurrently.


Great! Please feel free to join in the discussion about future test types on the GitHub issue if you want!

Based on your comment and some others, I am presently thinking we'll want to bump up the priority of adding new tests in the upcoming rounds. Tentatively, getting the caching test in is low-hanging fruit and may be next up. But the external API test is probably next after that.


> Give me something that tries hundreds if not thousands or tens of thousands of simultaneous requests.

Yeah I can see that being more useful.

If the server is not flooded with concurrent requests and there are only 20 concurrent requests, then you could serve a file with a raw TCP socket in Python and it would do the job. They should all be long-running, at least at 10k concurrency.

Longer or even persistent (websocket) connections should be looked at. Hit them all with 20k connections, some very long-lived. They don't have to come at the exact same microsecond, but they should come in pretty close and not just do a plaintext file read and close. They should be longer-lived. How about something as long as the "validating your credit card" spinner some shopping websites make you wait on when you click the "process payment" button? Then you don't know if you should refresh the page, or if you do, whether you will be double charged. That kind of stuff. Or say a story written by pg about startups fighting the NSA using Go hits HN and a flood of requests brings the server to its knees.

Why bother having nice benchmarks? What are they showing? CPU loading, so users can save money on compute time at Amazon? That's OK, I guess. But it can be made more interesting.


No, then you have a benchmark that is useless for 99.999999999999999999% of people whose website does not get hundreds of simultaneous requests, much less tens of thousands.


Ugh your comment is so stupid I don't know where to begin. For those "99.999999999999999999% of people whose website does not get hundreds of simultaneous requests" you know what? They don't need a fucking benchmark at all. They could write their shit in BASIC and get the job done.


This is a fascinating round for WFB, with drastically different results from round 7. I'm impressed with the strides Go has made, and also quite impressed with JRuby. I know the banking app Simple chose it as its language/runtime of choice, and they seem to leverage it well.

I'd still like to see a good showing from Django, maybe using uWSGI + Nginx. I might submit a pull request and see if I can't get that included in the next round. Gunicorn is great and incredibly easy to set up, but pales in comparison to other platforms when it comes to raw speed.


As far as Django goes, there hasn't been much tuning in general[0]. The only thing I see them doing is template caching. At the least they should be running 1.6 with persistent DB connections. Beyond that they have a lot of middleware enabled that isn't being used.

[0]https://github.com/TechEmpower/FrameworkBenchmarks/tree/mast...


I'd love to see how many lines of code each test required, but it's probably impossible to do in a fair way

edit: I meant in the chart, at a glance


In fact, we have some work in progress on that front, along with the number of commits to the test implementation directory. Combined, these will give a rough idea of code length and the amount of community input/review each test has received.


Thank you for your work, it's really interesting



I believe the tests are open-sourced on GitHub.


I'm rather surprised to see rack-jruby up as high as it was. I discounted ruby as an option for a very high performance http service, but I guess I'd be wrong to do that. Don't get me wrong, I love ruby and I use it every day. I just didn't expect to see it in the top performance contenders list.


That is principally thanks to TorqBox, the codename for Torquebox 3, which is built on Undertow. Undertow is the web server that is scrapping with Netty and Vert.x on the plaintext tests.

Also note that the particular Rack test that performs very well is running a very small amount of Ruby code. Thanks to these improvements, however, rails-jruby now consistently tops rails-ruby, if only by a small amount.

See more on TorqBox: http://torquebox.org/news/2013/12/04/torquebox-next-generati...


What always impresses me is just how fast raw PHP is. At times it seems PHP has been obsoleted by new platforms, but benchmarks like these make a case for its use. Especially because it is really easy for beginners to pick up.


The first PHP result for the JSON test comes in at 31.7% of the performance of the top performer. PHP also occupies the bottom 3 worst slots.


That's disappointing. One thing PHP should be really good at is serializing/deserializing JSON.


I know benchmarks should be taken with a pinch of salt, but by round 5 I was totally into Scala (Scalatra), trying to write my own framework, so I could get better bang for buck from my EC2 instances, which to be honest, aren't cheap when compared to say, Digital Ocean.

Around round 6 of these benchmarks, I ditched Scala altogether (and also my framework). The reason I ditched Scala was not because of its performance, etc. It was because I was the only developer in my company who knew Scala, having learnt it after reading a couple of books (one was around 800 pages). Obviously, I needed a language that any other developer would have no problem taking over, and Scala developers are 1) expensive, 2) not easy to find. Also, Slick (the database library for Scala by Typesafe) wasn't mature yet.

For this reason, around round 6, I started writing my own framework in GoLang and used it internally as an 'auxiliary framework'. I will explain more about this framework soon. In my company, we have about a handful of backend programmers and a couple of frontend devs. I found that GoLang was much, much easier to teach my programmers than Scala. Please note - Scala is a brilliant functional programming language, but if you are thinking switching from Ruby/Python/etc. would be easy, then you are wrong.

Now we have a workflow that allows us to deliver as quickly as possible without missing out on performance - we write our entire V1 in Rails. We implement all the UI/frontend related code and then port it to our GoLang framework. We have an internal generator where we just feed in our Rails app, and the code for our framework is 'ported'/generated on the fly, and we just deploy it. So far, we lose a little productivity handling the type conversions, bugs, etc. But it's totally worth it. Go outperforms Rails by a huge margin. I noticed that using something like Puma helps a lot, but it still is in no way comparable to our GoLang framework.

As for our framework, it's pretty simple - just organize all the files as you would in a Rails application (Models/Views/Controllers/Config) and everything just works without many performance hiccups. We use Gorilla components for stuff like routing and cookies. The rest of the stuff is slightly adapted from other frameworks (like Martini).
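
For illustration, a minimal sketch of gorilla/mux routing in a Rails-flavoured layout (not the actual framework described above; the handler name is made up):

    package main

    import (
        "net/http"

        "github.com/gorilla/mux"
    )

    // usersIndex stands in for a "controller action"; in a Rails-like
    // layout it would live under controllers/ and render a view.
    func usersIndex(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("users#index"))
    }

    func main() {
        // gorilla/mux gives named routes and method matching, roughly
        // analogous to routes.rb in Rails.
        r := mux.NewRouter()
        r.HandleFunc("/users", usersIndex).Methods("GET")
        http.ListenAndServe(":3000", r)
    }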

All in all, I love the ability to have JVM-like performance with the productivity of Ruby using a language like GoLang. And this round 8 benchmark is nothing short of impressive. If you haven't tried GoLang yet, you should try writing your own framework; not only do you learn about all the trade-offs behind the 'magic' that Rails performs under the hood, you also learn some new stuff and thus become a better programmer.

I think GoLang is pretty impressive if someone as average as me can write a framework like Rails, except with better performance. Give it a try, people, you won't be disappointed.


> If you haven't tried GoLang yet, you should try writing your own framework.

Why? Wouldn't time be better spent learning a language on the JVM, which has a whole array of stable, well-tested, production-ready frameworks, i.e. all of them?

Switching from the JVM to Go is like taking 1 step forward and 100 steps back.


That's what I don't understand about HN. 4 of the 5 fastest frameworks were Java, and the takeaway is to pick Go (granted, it is the fastest, though). Go is relatively new with few resources; Java is old hat with plenty of books, docs, tutorials, etc. and a global talent base of developers.

I can understand learning and using go for some things, but companies are moving major infrastructure to it with staff that are still learning it.


I have a bias against Java. I realize I may be deluded, but in my experience the vast majority of things written in Java are garbage. Here, 'garbage' is an intentionally vague term coming from my personal opinion of using a piece of software in a consumer and DevOps role. It may be that the Java language is conducive to writing bad code, that the JVM has problems, or that the 'global talent base' is so broad and Java is so 'easy' that talent is difficult to come by, or I might be plain wrong.

I realize that Java is incredibly useful for some things and that my reasons for labeling software garbage aren't always of primary concern; criticizing anyone for choosing Java is beyond me.

However, I would be very reluctant to ever choose Java for a project given the opportunity.


What would you choose instead and why?


Is it a toy? Is it a startup? Is it worth doing really well? Who is going to use it? Is it a tool for the ages? How much do I care about it? How big of a project is it? Is building software my only goal? Do I have the resources to do it right?

... there are so many questions with so many different answers.


It's not actually the fastest. Check out the tabs at the top of the benchmark. You are only looking at basic JSON serialization on an i7. A more realistic benchmark is the 'Fortunes' test, which actually hits a database and does some modifications on the results - which most of your requests will do. Go performs at only 50% of the more optimized Java/C++ frameworks in this case.


The benefits are just too big to ignore and the barrier of entry is really, really low. The Go language is so refreshingly simple and the standard libraries are very well documented. You can hit the ground running in days.


The benefits are too big to ignore? Would you mind letting me in on the secret then? The only benefit I see in go is fast compilation. That is hardly big enough to justify using such a primitive language.


The thing about Go is if the language appeals to you then the fact that there aren't 10 layers of legacy framework cruft between you and the actual app logic is actually a good thing, not a bad one.

And Go does have some great "batteries included" stuff where it counts. With a few notable exceptions I find the 3rd party web frameworks for Go don't really add much over the standard library's net/http and html template system.
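
For instance, a minimal sketch of serving an HTML page with nothing but net/http and html/template from the standard library (illustrative only):

    package main

    import (
        "html/template"
        "net/http"
    )

    // Parsed once at startup; html/template escapes values automatically.
    var page = template.Must(template.New("page").Parse(
        "<html><body><h1>Hello, {{.Name}}</h1></body></html>"))

    func hello(w http.ResponseWriter, r *http.Request) {
        page.Execute(w, struct{ Name string }{Name: "World"})
    }

    func main() {
        http.HandleFunc("/", hello)
        http.ListenAndServe(":8080", nil)
    }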


I used Scala, which was running on the JVM. The barrier to entry, be it Java or Scala, is quite high.

I completed the "Introduction to Programming in Go" in under 3 hours and in less than 6 hours I was able to code a full-fledged application. I cannot say/vouch the same for Java or Scala.

I would like to have an enterprise-level language inside my company without the complexities associated. I think goLang solves my problem and hence I use it.

I love the JVM; it's fast, sturdy, reliable. But throw more Java developers at it, no matter how good, and you end up with half-baked code, unused classes and unwanted complexity. I wish I could throw in more Scala developers, but it's not possible at the moment within my financial constraints.

Almost 100% of the developers we hire know C/C++ well, so it's much, much easier to teach them GoLang than, say, Java. And that is a lot of time and money saved for me.

>If you haven't tried GoLang yet, you should try writing your own framework.

I only say this because I want people to understand how incredibly simple GoLang is.

Hope this helps.


Interesting that you find C/C++ people to pick up Golang fast and easy. From my experience, it's the Python/Ruby crowd that tends to gravitate towards it. Most C++ programmers I know are stuck too much in the std::map<what<is<this<oh_god>>>> type of coding and refuse to touch Golang.


AbstractFactorySingletonFactoryFactory killMeNow = FactoryCreatorFactory.CreateSingletonAbstractFactory(…)

But of course, you have to first write about 300 lines of XML to wire up the various BeanInversionContainerFactoryDependencyInjectors.

Java is a needlessly verbose, death-by-pattern-programming monstrosity. Go is a fresh look on programming in general. The standard library is phenomenal, built-in concurrency is excellent, and it's an extremely productive environment to be in. It feels like driving a Mazda Miata vs a Ford F-250.

Different strokes for different folks, I guess.


Did you ever look at Clojure? If not, why not? If so, what didn't you like?

While Clojure surely has a bigger learning curve than Go, it's much simpler and more approachable than Scala. I've learned it recently and am an absolute convert. It seems perfect for your use case and you could even skip writing the prototype in Rails because you'll be just as productive in Clojure.

Note that I'm not trying to convince you to change; you obviously found something that works for you. But I am curious if there were obstacles to using Clojure (missing libraries? poor tutorials?) and if so, how that could be fixed.


Thanks! I will give it a shot :) The main reason I chose Go was for the learning curve for my fellow devs. But if clojure is only slightly higher in complexity, I would definitely give it a shot..thanks :)


A Lisp family language is hardly more approachable than Scala. I found Java -> Scala pretty smooth, but can't make head or tail of Clojure code as it looks completely different to languages I've used before.


Interesting perspective, thanks. I suspect it depends on whether you're more used to langs like Ruby, Python, JavaScript (which are more like Clojure) or Java (which is more like Scala). Coming from Python and JavaScript, and with minimal Java experience, I find Clojure more approachable... but then, Java-style OO is utterly vexing to me.


I love Clojure, but I'd rather use ClojureScript since I find it more useful for my needs and use case.


The numbers vary greatly, depending on which test you look at.

In the plaintext test Go only comes in 13th: http://www.techempower.com/benchmarks/#section=data-r8&hw=i7...


Go also suffers heavily when resource constrained - eg, run on EC2 or digital ocean which most people here are going to be running on. The large amount of garbage generated by Go paired with stop the world GC is the reason there. If you're running directly on high processor machines without much disk access then Go is a good bet, otherwise you'd nearly always be better off with Java - especially when you consider how many well tested OSS libraries are available for everything.


If you look at the Go standard library, it doesn't generate too much garbage. It's often avoidable: don't use .String, use .WriteTo -- the GC cost is not across all used memory, just memory with live pointers.
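
A minimal sketch of that .String vs .WriteTo point using bytes.Buffer (illustrative only):

    package main

    import (
        "bytes"
        "net/http"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        var buf bytes.Buffer
        buf.WriteString("some rendered response body")

        // buf.String() allocates a fresh string copy that becomes garbage
        // as soon as the response is written:
        //   w.Write([]byte(buf.String()))

        // WriteTo streams the buffer's bytes straight to the writer,
        // skipping that extra allocation:
        buf.WriteTo(w)
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }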


You also need to check the errors; most of the languages in the top 12 returned too many errors or failed during the requests.


This is what I've been exploring in the past couple of weeks. I love the frontside MVC framework position, and being able to build out a resource, hook it up as JSON, then just hop over to Go and recreate it with a clear goal in mind has been a lot of fun. I don't need the performance as badly as many - this is more me just trying to figure out 'my' language (I'm not long in the industry). Go is starting to feel like it could be it, but it's hard to deny how quick Rails is for getting up and running in record time.


"We have an internal generator where we just feed our rails app, and the code for our framework is just 'ported'/generated on the fly based on our framework and we just deploy it." - Well that sounds useful! Any thoughts on open sourcing it?


Right now it's a mess, I will open source it pretty soon :)


The language is called "Go", not "GoLang". Just pointing this out, not because I'm trying to be smart or anything -- it just irks me to read "GoLang".


You get into the habit of calling it golang because googling for "go" issues isn't very useful. Golang is the nickname that is (or at least was, last time I did something in Go) what the community tends to use for SO and blog posts. It's sort of become the language's unofficial name.

It's really frustrating that a search engine company would use such an unsearchable name for a new product.


> what the community tends to use for SO and blog posts

As a tag in the tag section, not as a name in prose.


I'm sorry, it's become a habit, because if I used Go here instead of GoLang, someone else googling for articles/forum posts might not come across this thread. It's with good intent that I always make sure to use GoLang instead of Go.


Practically everywhere I've seen it's called Golang since Go is so hard to search for on Google.


So, do you think that posters here should refer to it as "Golang" in order to boost search results for that term?


You forgot the #firstworldproblems hashtag.


Very cool! Do you have any solid numbers that speak to performance gains of the go version vs. the rails version of your app?


Thank you. I think when I measured it, the GoLang framework was roughly 25x faster than the Rails 4 version.


Looks like erlang frameworks are not represented...


We have been having trouble with Erlang frameworks since before Round 7. Unfortunately, I was still getting up to speed and improving the suite mostly for Round 7/8 and did not get to fix this yet. I do have it topping my todo list for round 9, with the hope being to get them all back in and working soon.


Cppsp (top of the i7 charts) is some mad science

http://xa.us.to/cppsp/index.cppsp


I started a conversation in #python on freenode and people were a bit outraged by the way frameworks are compared. Some implementations open database connections and never close them (example: Go) and others open and close DB connections for every request (example: Flask). The guys at TechEmpower should review every pull request and check that it is implemented in a fair way.
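
For context, the Go-side pattern being described is roughly this: open one *sql.DB at startup and let it pool connections across requests. A sketch below; the MySQL driver import and DSN are assumptions, not the benchmark's actual code:

    package main

    import (
        "database/sql"
        "log"
        "net/http"

        _ "github.com/go-sql-driver/mysql" // one common MySQL driver
    )

    var db *sql.DB

    func handler(w http.ResponseWriter, r *http.Request) {
        // No Open/Close per request: db hands out pooled connections.
        var n int
        if err := db.QueryRow("SELECT 1").Scan(&n); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.Write([]byte("ok"))
    }

    func main() {
        var err error
        // Opened once at startup; *sql.DB is a connection pool,
        // not a single connection.
        db, err = sql.Open("mysql", "user:pass@tcp(localhost:3306)/hello")
        if err != nil {
            log.Fatal(err)
        }
        db.SetMaxIdleConns(100)

        http.HandleFunc("/db", handler)
        http.ListenAndServe(":8080", nil)
    }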


I find the JSON benchmark a bit misleading. I posted this before, but I'll say it again: JSON serialization in Go is slow (2.5x slower than Node.js, for example [1]). The web server, however, is very fast. When they measure webserver+JSON, Go wins because of its webserver, not because it serializes JSON faster. If you want to parse a lot of JSON objects with 1 request (or 1 script), or if you have a large JSON object to parse, Node.js will outperform Go.

That said, I rewrote my app in Go and I'm very happy with the performance, stability and testability. The recently announced go 'cover' tool is very useful and a breeze to use.

[1] Here are my benchmarks: https://docs.google.com/spreadsheet/ccc?key=0AhlslT1P32MzdGR... (includes codepad.org links to the source for each benchmark)
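
For anyone wanting to reproduce that kind of comparison on the Go side, a minimal sketch of a serialization micro-benchmark using the standard testing package (an assumed setup, not the linked benchmarks; run with `go test -bench=.`):

    package jsonbench

    import (
        "encoding/json"
        "testing"
    )

    type message struct {
        Message string `json:"message"`
    }

    // BenchmarkMarshal measures raw serialization cost, independent of
    // any HTTP server in front of it.
    func BenchmarkMarshal(b *testing.B) {
        m := message{Message: "Hello, World!"}
        for i := 0; i < b.N; i++ {
            if _, err := json.Marshal(&m); err != nil {
                b.Fatal(err)
            }
        }
    }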


I optimized the Go JSON serialization in Go 1.2. See https://code.google.com/p/go/source/detail?r=5a51d54e34bb ... it went from 30% to 500% faster. It uses much less stack space now, so the hot stack splits are no longer an issue (also Go defaults to 8KB stacks for new goroutines now).


Regarding symfony2 at the bottom - I submitted a simple pull request to try and fix some issues with the setup, but it's been sitting and sitting there...

https://github.com/TechEmpower/FrameworkBenchmarks/pull/650


Hi mmucklo. We'll get that merged in for Round 9!


Benchmarks are fun but I'll stick with rails and its simple ways of letting you cache data.

I'm OK with getting out-the-door response times of 8-15 ms while serving 20,000 unique hits a day on a $5/month VPS. The server does not even break a sweat, and it's doing more than serving the app too.


What kind of response times do you get on a cache miss though?


80ms-350ms is normal under typical traffic conditions. It depends on the complexity of the page.

That's still not terrible though and it could easily improve by massive amounts with a stronger server. I have not gone crazy with profiling either. Just using fairly basic cache blocks when applicable.


It's amazing how well a young language like Dart and its frameworks performs in the multi-query benchmarks. There's still so much more optimization to go; at this stage it feels optimistically like the sky is the limit!


What's up with the number of Rails errors?


Could anybody explain what Gemini is? I've been to the Eclipse project home page, and I really don't see the link with a web framework benchmark.


Gemini is the private Java framework Techempower uses on their client projects. I believe questions regarding its performance relative to various open source and enterprise JVM frameworks inspired the first Techempower benchmarks.

http://www.techempower.com/blog/2013/03/28/frameworks-round-...


Gemini is our internal Java web framework; it has no relation to the Eclipse project (other than I use Eclipse when I work on Gemini) ^_^


Seconding this question. Extra points if you've done any development with Gemini. I'm really curious: in every category it was the highest-scoring full-stack framework, which is especially surprising given its apparent obscurity.

Edit: every category but the plaintext benchmark.


Am I the only one shocked to see Grails beat Spring? I mean, I think it's awesome, but part of me wonders if something went awry in the Spring code. I know a last minute (breaking) change kept Grails out of Round 7, so perhaps whatever that was made a big impact.


Spring has dropped quite dramatically in most of the tests, I wonder what changed.


The following was the last notable PR processed for Spring: https://github.com/TechEmpower/FrameworkBenchmarks/pull/606


Interesting to see go moving up there.

Curious - any reason why you guys don't have ASP.NET tests in Windows with SQL Server? I fiddled with the filters and found none.

Update: Never mind. I see it now. You don't have Windows tests on EC2.


I'm curious why ServiceStack.net has fallen so badly, since their own benchmarks show a lot higher performance than ASP.NET web applications.


I cannot find Flask in the list. Any specific reason?


The submitters didn't create a JSON test, but all the other tests are present. Switch to the Plaintext test to see Flask.


Python has never been a priority in these benchmarks


dumb question: are we sure these things are doing the same thing?

AFAICT some of the larger frameworks by default do a bunch of stuff (CSRF and IP spoof checks, session management, ETag generation based on content, etc.) that simpler solutions don't, but these things can usually be turned off.


Exactly the same things? No, of course not. The non-framework code is the same, but the framework specific code (and features/functions) is going to be very very different. A lot of pull requests have been sent that turn off certain features (like unnecessary django middleware).

Barebones frameworks of the same language are generally going to outperform heavier frameworks. Feature counts/matrices are not taken into consideration for these benchmarks.


I'm sorry, I do not understand how this was obvious, I'll see if I can send pull requests.

Of course barebones platforms will be faster, but doing unnecessary work is a different thing.


It's more obvious if you read the blog posts linked to each of the rounds (but not this one), since they describe some of the changes that were made to each framework test to bring them closer to parity.


The source code to all of the benchmarks they do is available on Github. You can look at/submit patches for any improvements that are needed.


I'm curious as to why Finagle has 0's across the board for everything.


I'm really surprised by the ASP.NET/C# results :-S


Also note that those tests have tons of errors, so they're probably not representative.


It's ASP.NET on Mono, and I am not surprised.


It's the Mono implementation, not .NET.


Why is Python doing so much worse than PHP?


Which version of Go is used?


Go 1.2rc3


I'm interested in who is financing these benchmarks. Really sorry, but to me it looks like a new way of doing SEO marketing.


This comment makes me dream of putting together an Indiegogo campaign for the project so that we can stop using our workstations and finally get some proper 10 gigabit Ethernet hardware. It sure would be nice if the JSON and Plaintext tests weren't network-limited.



