I don't really notice whether a page renders 0.1 seconds faster. But I do notice when a browser starts up quickly, and how smooth its responses are. And while I love Firefox to death, it is falling short of the mark in those areas of everyday use compared to Chrome and Safari. If it were not for the addons, I would have dumped it by now. It makes me sad to even type that :(
IMO it's usually the longest wait that is most irritating. It was never the 1-second load in games that annoyed the crap out of me; it was the 2-minute load to initialize everything. Every subsequent load in the session felt like the first, regardless of length.
I know full well that the 0.1 second by which Chrome may be faster is negligible, and largely irrelevant. It's that IE took forever and Firefox wasn't significantly better; Chrome, however, was a dream, and any load or delay from that point on was never associated with the original painstaking load.
I don't know if it's rendering or if it's disk access or what's going on, but I installed the Django docs locally expecting it to be pretty much instantaneous to access. With Chrome, it is. With Firefox there is a very noticeable and very annoying delay.
> If it were not for the addons, I would have dumped it by now. It makes me sad to even type that
I was with you for a long, long time. But I think Chrome avoiding addons for so long was a brilliant move. I haven't really missed them since moving exclusively to Chrome (which was a shock, to say the least!), and I think a big "performance boost" exists there too.
You think you don't really notice it. However, Google has measured that rendering even 100 ms faster increases the number of pageviews (and hence ad views). Moreover, it has been established that faster websites feel snappier overall, even if you don't notice the difference in each individual snap.
I consider this notion that speed is the only important factor pretty stupid. No amount of speed matters when a browser's UX sucks. Firefox is the only browser where it's actually _possible_ to get a decent UX (vertical tabs and a way to quickly search through tabs are absolute necessities).
I think it depends on what kind of "browser" user you are.
I spend most of my time online, and I only use my browser for viewing web pages and sometimes for testing during development. I rarely ever bookmark anything, and if I do, it's a link I "really" go to a lot, and I just put it in the bookmark toolbar. Basically, I use tabs and hot-keys.
In a nutshell, startup speed and browsing speed are most important to me. Take it with a grain of salt though, as I don't mind viewing documents online in lynx/w3m/etc.
I usually have around 200 tabs open, and any other browser is a nightmare to use. Firefox + Tree Style Tabs + Ubiquity is the only way I can maintain some degree of sanity. Since I keep my browser open all the time, I don't particularly care about startup speed.
Hell, forget about admittedly niche things like vertical tabs. Chrome gets so many basic things wrong, from scrolling to text selection, that it's just a poor joke of a browser.
If I may ask, why do you have 200 tabs open? Are you really using the tab bar as a random-access array of pages, or are you using it more like a queue?
I'm just in the habit of opening tabs and forgetting about them for a few days, then coming back to them when I have time, so I guess it's more like a queue. I'm not saying it's desirable or optimal for everyone -- it's just how I work, and I find it quite effective, and I treasure my workflow a lot more than I care about "speed".
I also usually have lots of tabs with API documentation open, so it's important to maintain state such as the position of the scrollbar. Keeping tabs open is a much easier way to maintain this state than creating bookmarks for every function in the API.
I've looked at them and tried to use them before, but the problem with any bookmarking technology is that I always get the feeling that bookmarks are permanent. (When was the last time you deleted a bookmark?) I really don't want most of the links I open to be permanent. With tabs, it feels good to know that there's always an out -- the red X in the top right corner.
I fear it is not the browser that is the problem: just that your workflow is unique.
As an aside, I'm surprised you can survive with Firefox and 200 tabs. Once I get above about 50, the memory usage and lag just get too much for it to be usable. Any tips?
I just use Firefox (currently 3.6b4) with a few addons, and don't do anything special at all. Memory usage tends to be 600-700 MB, which isn't enough for me to care. I'm not sure what you mean by "lag" -- I don't use any web apps, so there's very little background stuff running.
Aren't Firefox extensions based on JavaScript? Wouldn't faster JavaScript in Firefox directly benefit the extension developers who are trying to push the browser UI into new places? I agree that speed isn't the only important factor, but the Mozilla folks are working hard to keep up with WebKit-based browsers on performance, so Google's focus on speed will ultimately benefit Firefox users as well.
What makes Google Chrome fast? A solid JavaScript compiler, a fast page renderer with a high-performance DOM (WebKit), and a few additional tricks like DNS prefetching.
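To make the prefetching trick concrete, here's a minimal TypeScript sketch (the hostname is invented; the real candidate would be whatever third-party host your pages pull resources from) of asking the browser to resolve a name before it's needed:

    // Hypothetical third-party host -- substitute whatever your pages actually load from.
    const host = "//cdn.example.net";

    // A dns-prefetch hint tells the browser to resolve the hostname in the
    // background, so the lookup cost isn't paid on the first real request.
    const hint = document.createElement("link");
    hint.rel = "dns-prefetch";
    hint.href = host;
    document.head.appendChild(hint);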
... and lazy competition. IE is satisfied with incremental performance improvements every five years, Firefox is arguably still burdened with the baggage of AOL's Netscape rewrite from ages ago (though they've been making tremendous progress with each release), and Safari on Windows is an afterthought.
Lazy competition isn't what makes Chrome fast, it's what makes Chrome exist in the first place.
By shifting people's expectations of speed, security, stability, and capability in a browser upwards, they also move the lower bound that Microsoft can lag behind (in order to prevent the internet undermining their monopolies) without publicly embarrassing themselves.
It's like a technical version of the Overton Window.
While Safari has a fast rendering engine, and most of its performance benchmarks make it out to be competitive, I don't know what it is, but it simply /feels/ slower than Chrome on Windows: less responsive, from beachballs on link clicks to loading up the initial 3D eye candy.
Safari feels fast to me, but it's an open question how much our views are shaped by our preconceptions about our preferred browsers. Are you using it on Windows, or on OS X? I don't know what its performance is like on Windows (because I don't use it there), although as InclinedPlane pointed out upthread, Apple have less incentive to optimise there.
If it's slow on OS X, perhaps you view a lot of sites with Flash content. The OS X Flash Player is pretty awful (I use ClickToFlash [1] to block it), and a frequent harbinger of the beachball. I also tend to disable the "initial 3D eyecandy", because it does slow things down, and for me at least it's pretty pointless.
It would be interesting to do a study of which browser people think is fastest (based on a trial of all the browsers somehow perfectly skinned to look like other ones).
That was on OS X - I always ended up going back to Firefox, because it felt faster. I'm no longer using OS X as my primary, so it doesn't matter much to me anymore :) Just pointing out what I'd noticed.
I had a lot of problems with beachballs on Safari. Ten seconds of unresponsiveness were by no means exceptional. I upgraded my laptop HD to a SSD and these problems vanished utterly.
It's the same on OS X, imho. I've carried around my preferences and such for some years now, from an old PPC G4 Mac Mini, so I cleared out some Safari cache databases and that helped startup delays a bit, but Chrome's dev preview for OS X still eats every other browser's lunch. Chrome on Linux and Windows is just as fast in my experience, but I haven't used it on Linux as much as the others.
Since Snow Leopard, I found the 3D eye candy page extremely slow and frustrating, to the point that I disabled it in favour of just about:blank. I wasn't using the bookmarks anyway.
In my opinion, this model is not the best for optimizing performance. First, it's expensive: performance regression testing is tricky as heck, and to get accurate data you need to go to extremes. Second, it concentrates most of the effort on not falling behind, which can distract people from the idea that it's possible to actually exceed previous performance by significant margins, even while adding features. Third, sometimes it results in wasting effort tracking down and "fixing" tiny performance regressions without ever touching the code most responsible for slow performance.
In my opinion it's better to invest in a robust performance profiling infrastructure that gives you the ability to find the parts of the code that make the biggest contribution to the performance (or lack thereof) of your product. From there you can allocate a dedicated performance-improvement time budget and improve performance as much as possible. I suspect that with such a model you'd be more likely to end up with better performance, at less cost, and on a more dependable schedule.
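As a toy illustration of what I mean (all names invented, and a real product would use a sampling profiler, but the principle is the same), a TypeScript sketch that attributes wall-clock time to individual functions so the biggest contributors float to the top:

    // Toy profiler: wrap a function and accumulate its total wall-clock time.
    const totals = new Map<string, number>();

    function profiled<T extends unknown[], R>(name: string, fn: (...args: T) => R) {
      return (...args: T): R => {
        const start = performance.now();
        try {
          return fn(...args);
        } finally {
          totals.set(name, (totals.get(name) ?? 0) + (performance.now() - start));
        }
      };
    }

    // After running a representative workload, print the biggest contributors first;
    // that's where the dedicated performance-improvement budget should go.
    function report(): void {
      [...totals.entries()]
        .sort((a, b) => b[1] - a[1])
        .forEach(([name, ms]) => console.log(`${name}: ${ms.toFixed(1)} ms`));
    }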
One of the projects I work on has close to a zero-tolerance policy on performance regressions, but this is applied per launch and not per changelist. Basically, by the time you launch your feature, you have to show that you're not causing more than a percent or two performance regression (ideally zero, unless you add a lot of value); a toy sketch of such a gate is below. However, it's up to you whether you want to achieve this by optimizing the hell out of your own feature or by finding code elsewhere in the project that's slow and cleaning it up. You'll know your own code best, obviously, but sometimes the low-hanging fruit is in ancient stuff that hasn't been looked at in a while, and a profiler will find it for you.
Obviously this only works with collective code ownership, but you should probably have that anyway, since it makes refactoring so much easier.
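A toy version of that per-launch gate, written as a unit-test-style check in TypeScript (the baseline figure, the 2% tolerance, and renderPage are all invented for illustration):

    // Hypothetical feature under test; in reality a representative workload.
    declare function renderPage(): void;

    const baselineMs = 120;  // stored measurement from the pre-launch build (made up)
    const tolerance = 0.02;  // "not more than a percent or two"

    // Median of repeated runs, to damp down measurement noise.
    function medianRuntimeMs(fn: () => void, runs = 30): number {
      const samples: number[] = [];
      for (let i = 0; i < runs; i++) {
        const start = performance.now();
        fn();
        samples.push(performance.now() - start);
      }
      samples.sort((a, b) => a - b);
      return samples[Math.floor(samples.length / 2)];
    }

    const current = medianRuntimeMs(renderPage);
    if (current > baselineMs * (1 + tolerance)) {
      throw new Error(`Perf regression: ${current.toFixed(1)} ms vs baseline ${baselineMs} ms`);
    }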
Collective code ownership is a dangerous game, though; I've seen its pitfalls from many angles.
Anywho, I wish there were more data for this sort of thing. Performance tuning is still somewhat of a black art; even performance testing is, to a lesser degree. My personal opinion is that a generally reliable, fast, unit-test-like performance regression testing system, combined with solid profiling and making performance improvement (not just the absence of regression) a priority, is probably the best way to go about it. But there's so little data out there that it's hard to back up any opinion with anything other than anecdotal evidence.
I've noticed that Chrome is very fast at startup, but if I leave it open and forget about it for some time, when I get back to it switching tabs and repainting is very slow. Repaints take something like 0.5 seconds. Could it be related to how Windows handles the many processes?
In Chrome each tab is its own process. If you don't have much RAM, then leaving a tab idle for a long time will cause its resources to be swapped out to disk, so reactivating it might be slow. Same as any other application you leave idle on a machine with low RAM.
The timing of this is funny. On the one hand, Google introduces a faster DNS service. On the other hand, one of the videos shows how DNS pre-fetching in Chrome means that DNS lookup speed is not that critical.
It does matter if they are using OpenDNS or an ISP like Comcast (if they still do DNS interception?).
These organizations redirect failed lookups to their own search pages and sell the traffic on for a price or show their own ads.
The alternative is a clean failure which gets handled by the browser. I believe that most browsers send you to the default search in those cases - which is usually Google.
There's a kickback involved in that case too (searches from Firefox earn Mozilla a small fee, for example), but I wouldn't be surprised if it was cheaper.
Google is going for a belt-and-suspenders approach here. On the one hand, they are encouraging people to dump the DNS servers of crappy ISPs (of which there are many). This benefits everybody regardless of their browser. On the other hand, Google is implementing and promoting DNS prefetching in the browser in order to sidestep the problem of page-load overhead due to DNS lookups. Though neither method is perfect, they complement each other well.
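To put a rough number on the lookup overhead being sidestepped, a quick Node-flavoured TypeScript sketch (run as an ES module; the hostname is arbitrary, and resolve4 queries the configured resolver directly, skipping the OS cache):

    import { resolve4 } from "node:dns/promises";

    // Arbitrary hostname -- any domain you haven't resolved recently will do.
    const host = "example.com";

    const start = Date.now();
    const addresses = await resolve4(host);
    console.log(`${host} -> ${addresses.join(", ")} in ${Date.now() - start} ms`);

On a slow ISP resolver that can easily come out to tens of milliseconds per fresh hostname, which is exactly the cost prefetching hides.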