Safari has a fast rendering engine, and most performance benchmarks make it out to be competitive - I don't know what it is - but it simply /feels/ slower than Chrome on Windows. Less responsive, from beachballs on link clicks to loading up the initial 3D eyecandy.
Safari feels fast to me, but it's an open question how much our views are shaped by preconceptions about our respective preferred browsers. Are you using it on Windows, or on OS X? I don't use it on Windows, so I can't speak to its performance there, although as InclinedPlane pointed out upthread, Apple have less incentive to optimise for that platform.
If it's slow on OS X, perhaps you view a lot of sites with Flash content. The OS X Flash Player is pretty awful (I use ClickToFlash [1] to block it), and a frequent harbinger of the beachball. I also tend to disable the "initial 3D eyecandy", because it does slow things down, and for me at least it's pretty pointless.
It would be interesting to do a study of which browser people think is fastest (based on a trial of all the browsers somehow perfectly skinned to look like other ones).
That was on OS X - I always ended up going back to Firefox because it felt faster. I'm no longer using OS X as my primary OS, so it doesn't matter much to me anymore :) Just pointing out what I'd noticed.
I had a lot of problems with beachballs on Safari. Ten seconds of unresponsiveness were by no means exceptional. I upgraded my laptop HD to a SSD and these problems vanished utterly.
It's the same on OS X, imho. I've carried my preferences and such around for some years now, from an old PPC G4 Mac Mini, so I cleared out some Safari cache databases, which helped the startup delays a bit, but Chrome's dev preview for OS X still eats every other browser's lunch. Chrome on Windows and Linux is just as fast in my experience, though I haven't used it on Linux as much as the others.
Since Snow Leopard, I found the 3D eye candy page extremely slow and frustrating, to the point that I disabled it in favour of just about:blank. I wasn't using the bookmarks anyway.
In my opinion, this model is not the best for optimizing performance. First, it's expensive: performance regression testing is tricky as heck, and to get accurate data you need to go to extremes. Second, it concentrates most of the effort on not falling behind, which can distract people from the idea that it's possible to actually exceed previous performance by significant margins, even while adding features. Third, it sometimes results in wasted effort tracking down and "fixing" tiny performance regressions without ever touching the code most responsible for slow performance.
In my opinion it's better to invest in a robust performance profiling infrastructure that lets you identify the parts of the code that contribute most to the performance (or lack thereof) of your product. From there you can allocate a dedicated time budget for improving performance as much as possible. I suspect that with such a model you'd be more likely to end up with better performance at less cost and on a more dependable schedule.
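To make that concrete, here's a minimal sketch of the kind of profiling pass I mean, assuming a Python codebase; render_page and its workload are hypothetical stand-ins, and cProfile/pstats are just one way to rank functions by how much time they eat:

    # A sketch of profiling-driven hot-spot discovery; render_page is a
    # hypothetical stand-in for the code under investigation.
    import cProfile
    import io
    import pstats

    def render_page(n=10000):
        # Hypothetical workload standing in for the real product code.
        return sorted(str(i) * 3 for i in range(n))

    profiler = cProfile.Profile()
    profiler.enable()
    render_page()
    profiler.disable()

    # Rank functions by cumulative time, so the effort goes to the code
    # that contributes most to total runtime, not whatever was touched last.
    out = io.StringIO()
    pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
    print(out.getvalue())

The output table is exactly the "biggest contribution" list I'm talking about: you spend your improvement budget from the top down.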
One of the projects I work on has close to a zero-tolerance policy on performance regressions, but this is applied per launch and not per changelist. Basically, by the time you launch your feature, you have to show that you're not causing more than a percent or two performance regression (ideally zero, unless you add a lot of value). However, it's up to you whether you want to achieve this by optimizing the hell out of your own feature or by finding code elsewhere in the project that's slow and cleaning it up. You'll know your own code best, obviously, but sometimes the low hanging fruit is in ancient stuff that hasn't been looked at in a while, and a profiler will find it for you.
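As a rough sketch of what such a per-launch gate could look like in unit-test form (the workload, the perf_baseline.json store, and the 2% budget are all illustrative assumptions on my part, not details of the actual project):

    # A sketch of a per-launch performance regression gate. Everything
    # here (workload, baseline file, budget) is illustrative.
    import json
    import time
    import unittest

    def workload():
        # Hypothetical stand-in for the feature being launched.
        sum(i * i for i in range(200_000))

    def best_of(fn, repeats=5):
        # Best-of-N damps scheduler and cache noise better than one run.
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            times.append(time.perf_counter() - start)
        return min(times)

    class PerfGate(unittest.TestCase):
        BUDGET = 0.02  # tolerate at most ~2% regression against baseline

        def test_workload_within_budget(self):
            with open("perf_baseline.json") as f:
                baseline = json.load(f)["workload"]
            current = best_of(workload)
            self.assertLessEqual(current, baseline * (1 + self.BUDGET))

    if __name__ == "__main__":
        unittest.main()

The point is that the budget, not the individual diff, is what's enforced; how you pay for it is up to you.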
Obviously this only works with collective code ownership, but you should probably have that anyway, since it makes refactoring so much easier.
Collective code ownership is a dangerous game, though; I've seen its pitfalls from many angles.
Anywho, I wish there were more data for this sort of thing. Performance tuning is still something of a black art, and even performance testing is, to a lesser degree. My personal opinion is that a generally reliable, fast, unit-test-like performance regression testing system, combined with solid profiling and making performance improvement (not just the absence of regression) a priority, is probably the best way to go about it. But there's so little data out there that it's hard to back up any opinion with anything other than anecdotal evidence.