http_load - test your server for heavy load (acme.com)
12 points by mcxx on Jan 24, 2008 | 9 comments



This tool only does throughput tests, which are useful for very limited purposes. I am creating a load-testing tool at the moment, and I did load-test consulting for a number of years. Generally we would start with a throughput test to establish a ceiling on the pipe's fatness and the webserver's max throughput. That's really just step 1, though; the more important stuff comes after.
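For reference, a step-1 throughput run with http_load itself looks something like this (flags from memory; urls.txt and the numbers are just placeholders):

  # urls.txt contains one URL per line; keep 20 fetches in flight for 60 seconds
  http_load -parallel 20 -seconds 60 urls.txt

That gives you fetches/sec and bytes/sec at a fixed concurrency, which is about all it measures.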

In a web application, most bottlenecks are at the database and appserver layers rather than at the webserver. A good load test needs to be fully transactional so you can simulate login/logout, shopping, searching, or whatever real users of your application do. A good load test also needs error detection (content checking). Without content checking, I've seen demos where a "testing tool" ran, someone unplugged the DB machine, and everything still came up green. A real user would have seen the "data grid" section of the page replaced with an error message and found the web app unusable, but the testing tool was happily chugging along, showing green.
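Even a crude content check catches the unplugged-database case. A minimal sketch with curl and grep (the hostname, path, and marker string are made up; real tools let you script a whole login/search/checkout sequence and assert on each response):

  # count a request as failed unless the page actually contains the data grid
  for i in $(seq 1 100); do
    curl -s 'http://app.example.com/search?q=widgets' \
      | grep -q 'class="data-grid"' || echo "request $i: bad content"
  done

The point is just that "got an HTTP 200 back" is not the same as "the page worked."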


Be careful with this. I tried three different URLs:

  close static file: 750 fetches/sec, openload shows 800.
  php output: big stream of errors about byte count wrong
  other php site: 37 fetches/sec, openload does 135.
I'm not sure which is right, but the 37 vs. 135 could easily lead you astray. This run put 10-13% CPU load on the local machine, so it is likely not a saturation problem.

Not that I'm recommending openload (it could use some help), but it is useful as a comparison.


Why not 'ab', which you probably have installed already?


Looks like 'ab' is much nicer than back in the day.

But 'ab' only manages a load of 36 tps and is unable to saturate my web server (identical machines, hairy PHP+SQLite3 web page). 'http_load' has the same problem. 'openload' can saturate the web server at 135 tps or so.

'ab' does not present a significant load on its host, so I'm not sure why it can't drive the web server to saturation, but there it is.


Are you using the same number of clients? From the openload website:

Number of simultaneous clients to simulate. This is optional and defaults to 5.

I believe ab defaults to 1 client.


I was running both of them at 20 clients. More than enough to start pushing the milliseconds/page up.


Try ab -c 1000 -n 100000 <url>

That will give you 1000 concurrent connections and 100k total requests.

Good luck.


I've used this in the past. Just note that it only works well if you test from a colo'd box to your server; testing locally over your own internet connection doesn't do much, since you need a fat pipe.

However, this is just HTTP load. If you have JS making database calls after page load, I don't think this will help your testing.


http_load does not replicate heavy load; it replicates a DoS attack.

It generates N requests every second without waiting for the previous N requests to complete. Actually, it never waits for requests to complete; it kills them, so the HTTP server has nowhere to send data. With Tomcat, this causes hella errors in the logs. It's just not an accurate stress test.
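That sounds like the rate-based mode, which, if I remember the flags right, looks like this (urls.txt is a placeholder):

  # start 50 new fetches every second, regardless of how many are still outstanding
  http_load -rate 50 -seconds 30 urls.txt

Nothing throttles back when the server falls behind, which is exactly why it looks more like a DoS than like real users.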

ab is okay, but I found it more difficult to ramp up enough to really stress the server. I prefer siege (http://www.joedog.org/JoeDog/Siege) because it performed well, produced useful output (ab's output is pretty good too), and was convenient.
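For reference, a typical siege run is something like this (flags from memory; the URL is a placeholder):

  # 20 concurrent simulated users for one minute, benchmark mode (no think-time delay)
  siege -b -c 20 -t 1M http://app.example.com/

It prints availability, response times, and the transaction rate at the end, which covers most of what you want from a quick run.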

We used these three to test a new app stack for yellowpages.ca a couple of years ago; siege became the standard in the end. We also tried JMeter, which performed pretty well, but the GUI/XML configuration was inconvenient (it's a complicated development environment).



