How to improve the quality of your software: find an old computer (jacquesmattheij.com)
151 points by VeXocide on Aug 22, 2011 | 87 comments



For web stuff in particular, using a comparatively low-bandwidth and/or high-latency connection can also be useful, since not everyone has 10+ Mbit connections with a 20ms ping to a Bay Area datacenter. On Linux, you can use 'tc' to simulate that.
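
For example (from memory, so adjust the interface name and numbers to taste), something along these lines adds ~100ms of latency and then caps bandwidth at 384 Kbit on eth0:

  # add ~100ms of delay (+/- 20ms jitter) with netem
  tc qdisc add dev eth0 root handle 1:0 netem delay 100ms 20ms
  # then cap bandwidth with a token bucket filter chained under it
  tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 384kbit buffer 1600 limit 3000
  # undo it all when you're done
  tc qdisc del dev eth0 root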

I had a 384 Kbit, ~100ms-latency crappy SDSL connection until recently (was renting in the Santa Cruz mountains in a location that had poor connection options), and it was pretty amazing how two sites that looked very similar from a fast connection, would load much differently on the slow connection, often for reasons not really inherently connected to the site's needs (it's one thing if it's loading slowly due to streaming video, versus due to having a gigantic background-wallpaper image, or unnecessarily serialized roundtrips).


If you're on FreeBSD you can use dummynet to simulate packet delay and loss.
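
Something like this (a rough sketch, with the dummynet module loaded) simulates a 384 Kbit link with 100ms of delay and 1% packet loss:

  # create a pipe with limited bandwidth, extra delay and packet loss
  ipfw pipe 1 config bw 384Kbit/s delay 100ms plr 0.01
  # push all IP traffic through the pipe
  ipfw add pipe 1 ip from any to any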

It's quite handy. My partner has slow, unreliable, high-latency Internet at his house, and there is an entire class of performance problems that is extremely obvious when I work from his place but barely measurable when I work from a 100 Mbit line that's only a few milliseconds away from a major datacenter.


And if you are on OS X (and have Xcode installed), look into Developer->Applications->Utilities->Network Link Conditioner.


If you are in the UK you can simply choose Virgin as your broadband provider to achieve the same effect.


Or talktalk, I don't even get cable out here in the country...


Or if you’re on any OS with Java installed, try Sloppy: http://www.dallaway.com/sloppy/


I second Sloppy. I've found it very useful for debugging Flash errors that only happen occasionally. By slowing the loading, they sometimes become reproducible. It's also useful to see how a webpage loads on a slow connection - for example: is your copy legible before your background image has loaded?


Thanks! I didn't even realize that was there.


My favorite (hah!) site that demonstrates this is Twitter.[1]

While tethering, it often takes upwards of two minutes to load a page. That seems just a little obscene when the entire useful content of the page is 10 lines of text.

(Hmm, maybe if I disable JS it'll fall back to a more sane approach?[2])

[1] Ok, now I feel simultaneously stupid/vindicated, since I posted this before reading the article... :)

[2] Just tried this, it falls back to http://twitter.com/?_twitter_noscript=1, which, hilariously, still seems to rely on javascript. Good job!


I get that Twitter is the bee's knees and all, but it's got some pretty serious problems for such a simple service. For example, the last time I tried visiting it, it only showed the top banner and kept redirecting back to itself. And Google, perhaps on the opposite end of the complexity spectrum, loads before I can start typing my query. And when I do, results come straight away -- even on my shitty TalkTalk connection. But perhaps that's not a fair comparison, since this is Google after all.


On Solaris/Illumos you have Crossbow. With it you can simulate a whole lot of different kinds of network connections and starve your programs of bandwidth in the most imaginative ways, to test things like saturating the links to different parts of the application.
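
If I remember the commands right (treat this as a sketch; exact property names may differ between releases), you create a vnic and then cap a flow's bandwidth on it:

  # create a virtual NIC on top of a physical link
  dladm create-vnic -l e1000g0 vnic0
  # define a flow for HTTP traffic on the vnic and cap it at 1 Mbit/s
  flowadm add-flow -l vnic0 -a transport=tcp,local_port=80 httpflow
  flowadm set-flowprop -p maxbw=1M httpflow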

BTW, didn't know about tc. Thanks for the tip.


On windows you can use the testing/debugging HTTP proxy to simulate slow connections.


Ooops...I forgot to include the NAME of said tool - Fiddler!


"... For web stuff in particular, using a comparatively low-bandwidth and/or high-latency connection can also be useful, ..."

If you teamed low bandwidth with virtualised browsers of all flavours, this would make a pretty good testing service.


Fiddler can do the same for you on windows.


This is true. And it used to be false.

It used to be that you should develop and test on the newest, fastest hardware available, because PC sales were growing exponentially and by the time your software had sat in a shrink-wrapped box in a shop somewhere for three months, most of your potential customers would have more powerful PCs than you could buy now.

This scenario is now wrong in two respects.

First, PCs are now a mature market. Sales are flat. People and companies are going longer and longer without upgrading. So a greater proportion of the market is using older hardware.

Second, the supply chain is shorter. Obviously in the case of a website, the update is instant. But even in the case of Apple's app store, it takes less than a month to get your app in your customers' hands.


One scenario I can think of is investing in a tablet, if only to test how a web app runs on it.


And also on Macs for testing on Safari.


You may be missing the point. Sure, buy a Mac so you can test your site on it, but buy an old, slow Mac, not a top-of-the-range shiny thing.

Tablets -- that's more interesting, because they're all new. I don't know to what extent it makes sense to optimise for the iPad 2 vs. the original iPad.


True, after going through the comments again, getting an old Mac mini is going to make more sense than getting a new Mac.


What, really? Buy a $2k+ computer to test a browser you can download on Windows?


If there's any call to do that, you only have to buy a Mini for $600.


This is very good advice, but not only for the reasons that the author lists.

If you use a slower machine (or one with different performance characteristics, in general), you might discover race conditions in your code that would otherwise go unnoticed. I get this all the time with software on my Mac: run stuff on a loaded machine and discover applications aren't ready for things appearing in a different order than on the developer's machine.


We got this a lot when we made our nightly tests run on EC2.


I've been saying something similar for the last year or so. I can immediately tell whether your software is well-written by running it on my netbook. I can play Team Fortress 2 and World of Warcraft pretty well, Adobe Fireworks runs like a dream, the entire Office suite is snappy.

And then you try something like Windows Live Messenger or Skype, which both somehow find ways to use more RAM and CPU than half of the games on my system, let alone other applications. It is amazing to me how long it takes some websites to load, too - fast browser, fast internet connection, slow CPU all adds up to a pretty miserable experience if you like Facebook, Twitter, Google+, or GMail (is there ANY other website which has a loading bar?).


I've found Facebook actually loads pretty snappily on bad connections. (The text, anyway. Interactive stuff might hang for a while.)


I'm trying to think, what netbook can run TF2 well? I don't think I can even get that working on my 10" Eee w/ the ION2 chip.


Quite a few netbooks on the larger end of the scale can run games. I'm on an HP/Compaq Mini 311c, Atom 260, ION, and 3GB RAM (I upgraded it). I made a post on Reddit explaining what I did[1] - bear in mind this is definitely a compromise; while it boosted one person's framerate from 20 FPS to 60, it bears more than a passing resemblance to Minecraft and 1997. TF2 is on the absolute edge of what I can play on here, but many games run perfectly fine with very few issues. However, it's taken me 5 minutes to post this reply because every 20 seconds, textarea inputs in Firefox freeze and ignore anything I type for 10 seconds. Odd!

Running a netbook is very much a reminder that CPU speed is not a linear scale.

[1] http://www.reddit.com/r/tf2/comments/e43fm/how_to_play_tf2_o...


This is an argument I've had many, many times over, and it's nice to see a respected developer like Jacques on the same side.

Quoting from his article:

  A nice example of a website that could do a lot better
  in this respect is twitter.com.

  They've now forced all their users to the 'new' interface,
  but frankly imo it sucks. 

Absolutely true. Even on my fastest machines I've taken to using the mobile version. It's small, clean, fast, and you can do nearly everything on it that you can with the main site.


I guess the new version is running on that web design toolkit they just released to HN applause.


Except that the complaints about Twitter's new interface are all related to Javascript performance, and have nothing to do with CSS. Their toolkit release was all about CSS.


Lots of CSS written in the wrong way can actually be pretty processor-heavy.


Assuming you mean the twitter bootstrap, it's just a bunch of useful CSS classes. The hotlink is a single minified CSS file.

I highly doubt the CSS included in the bootstrap is having a massive impact on the performance of the new Twitter interface.


I've used this strategy for a long time, mainly because the target platforms were often low-powered PCs or DSPs. At one company I worked for not so long ago, I was asked on a few occasions whether I wanted a newer PC, and I repeatedly said "no".

I'm not advocating luddism, but if you use a computer which is a few years old and make some dumb mistakes, the performance penalty is big and immediately obvious at the development stage, whereas on the latest hardware with multiple CPU cores it might not be so obvious, and then it turns into a crisis during deployment.


Also watch out for that SSD sitting in your machine making file access "free". :)


Is there a tool to slow down disk access?

I'm especially worried about the UI thread getting blocked on file access to a sleeping HDD or an unreliable network drive (i.e. I don't want my applications to be as beachball-death-prone as Finder and iTunes).


Maybe use an actual network filesystem (NFS/Samba/whatever), and then throttle/shape the network traffic however you like (lots of suggestions upthread).

You could probably do it over a loopback device on the same machine if you don't have a network handy.
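
A minimal sketch of the loopback idea (assuming you already have a local NFS export; the paths here are made up):

  # mount a local export so file access goes through the network stack
  mount -t nfs localhost:/srv/testdata /mnt/slowdisk
  # then make the loopback interface slow, jittery and lossy
  tc qdisc add dev lo root netem delay 50ms 10ms loss 1%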


This has done wonders to show me just how buggy iTunes is, simply by trying to store all of my media on a remote machine.

Hm, I should add these mp3s to my library. Drag, drop, make coffee, hit the bathroom, chat with the QA guy, come back, read some HN comments, hey, it's done.


Can you move IO onto another thread, or use asynchronous/non-blocking methods? Blocking IO should never be done on your UI thread.


That's what I'd like to do, but there are non-obvious cases that I could have missed, e.g. seemingly innocent system calls, observers/event callbacks.

Also, when you make I/O truly async, you need to ensure UI behaves sanely while slow I/O happens in the background.


Use a USB stick?


Along the same lines, there are still some USB 1.1 hubs out there too. One of those should give you a VERY slow drive for a decent price. As an example:

http://www.amazon.com/Ho97958-Usb-1-1-Slim-4-Port/dp/B00030N...


The OS is too good at caching reads, so tests are not repeatable.


I completely agree with not allowing web developers to use the fastest machines to create the most bloated code, but I do hate suffering with absurdly underpowered machines in the office.

I like to be able to run a couple VMs at the same time without causing my system to grind to a halt. That being said, what I've done in the past is give myself (and my devs) fast machines loaded with RAM and such, but the environment they deploy onto is a low-grade commodity box.

The environment should be bare enough so that it causes developers and operations personnel to think twice before reading in a large file, or opening a tonne of file handles.

As a corollary, if you are deploying into a JVM environment (e.g.: Tomcat), DO NOT give the JVM a tonne of memory by default. Developers will write applications that just drink it up. Instead, start at the default (256MB) or sanely bump it up progressively as required by the application.
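
For Tomcat that can be as simple as pinning the heap in a setenv script (a sketch; adapt it to however you launch the JVM):

  # $CATALINA_BASE/bin/setenv.sh -- keep the heap deliberately small
  export CATALINA_OPTS="-Xms256m -Xmx256m"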


I generally use Virtualbox for this.


I generally use virtual machines for testing; however, you can sometimes run into problems with this. For example, I recently discovered a JavaScript/Flash interaction bug in IE that only exists in a VM environment. In the past, I've run into strange IE CSS rendering bugs in a VM as well. These are really few and far between, but they do happen, and they are documented issues with virtualized environments. If you are testing on a VM and you run into a bug that just doesn't make any sense, try running it on a real box with the exact same setup.


I was about to post the same thing. If you don't have an old PC lying around, running a VM is an easy way to test an application with limited resources. Another tip is to set up another user account on your system with large fonts to quickly test DPI scaling.


Is there a way to emulate varying processor speeds using VirtualBox or other VMs?


I have done this successfully in the past; Ubuntu boots very slowly when limited to 5% CPU, but once up it's very usable. Look at vboxmanage --cpuexecutioncap.
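
For reference, it's something like this (substitute your own VM name; modifyvm needs the VM powered off, controlvm works while it's running):

  # cap the VM at 5% of one host core
  VBoxManage modifyvm "OldSlowBox" --cpuexecutioncap 5
  # or adjust it on a running VM
  VBoxManage controlvm "OldSlowBox" cpuexecutioncap 5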


I liked Steve Wozniak's comment in his interview in the book "Founders at Work", saying that a real hacker/entrepreneur will get things done despite not having advanced tools available to them.


In my day job, I experience this almost daily. I spend a lot of time working on government computers with almost no useful data processing and/or troubleshooting software installed on them. They're basically stock XP machines with MS Office.

It's amazing what you can accomplish given those kinds of constraints.


Fun fact: Microsoft's Windows Mobile team always had reference hardware, which was a below-average phone from the previous generation. For example, WM 6 for 600 MHz/128 MB devices was dogfooded on 200 MHz/64 MB devices, and a lot of Microsoft people really used them as their personal phones.

BTW, that's why some old phones/PDAs are legendary (for example, the HTC Tornado) - because the next several versions of WM ran on them flawlessly.


The only problem with this post is that you gain as much information from the title as you do from reading the whole article.


That's why the Android emulator, even though it is so so so SO slow, proves a worthy tool to develop with. If your app runs quickly, relatively speaking, on the emulator, you've got a good app.


There is slow and then there is unusable. The emulator is the latter. If it had any decent performance it would run at the speed of the slowest Android device available, but it doesn't even do that.

Using the emulator you're conditioned to the slowness. You can't even set a valid baseline. Now your application is so slow you can't tell if the performance is due to the application or the emulator.

Seriously, the emulator is a complete joke.


Conversely, this is why the iPhone emulator is bad: even if your app runs slowly on the iPhone, it appears "speedy" on the emulator.


On a related note, I also keep an old pc around as a learning box. It's great to be able to easily wipe the hard drive, install a fresh *nix and have a development environment set up that keeps me fearless.

Where I might be concerned with muddying my main pc, I don't feel any problems with doing something potentially hacky or dangerous on the old box. Freeing up this mental space makes the first few steps of learning some new language or environment much easier for me.


You may instead want to try something like this:

http://www.amazon.com/gp/product/B000KS8S9W/ref=as_li_ss_tl?...

It goes in a 5.25" bay and allows you to easily swap out 3.5" SATA drives. You could then have two SATA drives: one with your normal OS, and the other with the OS you're happy to muddy.

There is also virtualization. Not sure if that is an option for you.


The advantage of a second box is that you can use the main and test machines side by side. If you switch drives, you're cut off from your main development environment while you're using the other system.

There are disadvantages too, of course, such as increased desk space, power usage, and heat in your office, and inability to work on the subway.

VMs are good, though VMs don't solve the original problem of letting you test on an older CPU. (Their IO is generally slower, but not in the same way as an older machine.)


You might be interested in virtualization. VirtualBox works nice, or XenClient.


Isn't this how Unix emerged? The team had to build an OS on some given hardware with minimal resources and speed.

I remember reading this but cannot remember where.


I agree that you should test on more limited hardware. But on the other hand, I use my high end development machine as the benchmark for how fast it will go with a production worthy machine (unless one is willing to throw really insane amounts of money around).

If the customer asks "How long does it take to import 10k documents?" I can give them two data points: a VM running on an older server over a slow DSL connection and VPN, and a VM running locally on an i7 with more RAM and an SSD.

Of course, the requirements change if you develop desktop applications vs. server applications: if you plan to deliver software to government accountants running hardware that is at least 5 years out of date, you better go and get yourself a matching setup - "It would be faster if your Boss bought you a faster machine" is not helpful.


Just ask me to test. I have a real crap PC and constantly scream about devs who think everyone has multi-cores and multi-GHz. Every day, this machine becomes slower due to website and browser bloat. And I'm not the only one. All the others think it's something wrong on their end.


Right, but, statistically speaking, you don't really matter. Doubly so if you don't buy software.

The target is "people who spend enough money to make writing code for them worthwhile," not "everyone ever."


Well if you're going to rule out people who have fallen into a pit of debt these days, where do you think they'll go when they're back on their feet? To your company that snubbed them?


The majority of the people who are on dinosaur computers are not, it's probably safe to say, people who are likely to show interest in your software even if they upgrade. It's also probably safe to say that if you win on features and quality, those who would be interested will buy it.

There's a cost/benefit to supporting old hardware, and very often the benefit is vastly outweighed by the cost. Econ 101. Sorry.


http://www.webpagetest.org can do dialup testing with waterfalls

You can also run Windows XP in VMware with just one CPU allocated to it and limited memory (and screen space).

If you have the paid version of VMware you can also do CPU throttling.


Any way you can simulate throttling of the storage layer? ie, slow or fragmented disk?



Excellent point. The first version of my software was released under Windows 3.1, and had to run acceptably on a 90 MHz PC. It's come a long way since, but the core algorithms are still in use, and they scream on today's gear.


Instead of getting an old machine, why not use a virtual machine with low settings?


There are special cases where this won't work. For example, I'm working on audio processing software and I can't test it in a VM because audio latency in a VM is too high, whereas an older and slower computer might have slow disk access, etc., but its audio latency is still near real time.


Are there any special kinds of emulation that you could use for this purpose? I mean, there are DOS and Game Boy emulators out there; it would be interesting if there were emulators for PCs with a particular hardware configuration that would emulate slow disk access and so on.


Before anybody uses this as an excuse to save a bit on the hardware: this machine should be used to test the product on, not develop it (are your users really going to have a huge IDE running as well?).


Depends if you're running a huge IDE, or a text editor.


Even then, the Selenium tests I use run about 3 times faster on my iMac compared to my MacBook Air.

To me, fast tests are good tests.


Well, if not an IDE, they might have Photoshop running in the background. Or some other resource-hungry software.


Or even worse, they might not have any huge software running, but the small updaters, toolbars, anti-malware monitors, hardware 'managers' and all that crap can add up pretty quickly.

It's not rare for me to see applications running faster on my meager single-core 1.6GHz laptop than on considerably faster machines, only because they don't have to compete as much.


I wonder if anyone out there does actually test in a browser with 19 toolbars, virus scanners which intercept every read() call, and bonzibuddy?


and 4 viruses concurrently trying to connect to their respective master nodes?


Like a virus scanner. Or a virus.


Great advice, especially if you interface with any device drivers. ThinkPad T42! All initial build/testing goes to an older ThinkPad T42 and we move up from there to Windows 7/64-bit.


Great insight. It never occurred to me to think fourth dimensionally about software until now. Thank you for saving me countless hours of dealing with support emails I don't understand.


GitHub is another problem site, though it's not just the JavaScript, it's all the random AJAX calls. I'm often browsing over a high-latency, low-bandwidth connection from a netbook. So just using the netbook also misses an important point: you need to see how badly your site breaks down when you're in the middle of nowhere connecting over an AT&T USB dongle.


Or use a netbook.


I've always developed software on machines that aren't high end. (Or tested on old devices ... I was still using an original iPhone when the 3GS was around.)

I guess some webdevs don't do this - at least I have to assume they have some maxed-out monster workstations when I visit websites that send my CPU usage to 100%.


Yes, I was still using my original iPhone, for at least, ooh, 6 hours after the iPhone 4 launched. To be fair, I did resist the 3G and 3GS.



