Yeah, I wasn't sure whether to link that instead. The comic is delightful, but not too discoverable.
Anyone remember what it was like to open Chrome for the first time in '08 or so? It was incredible. The design was the embodiment of fresh, wide-eyed optimism.
Yeah, it also felt blazingly fast. Chrome is still fast now, but sites and web apps have added so much bloat that they cancel each other out. Back then, on the other hand, Chrome was lightning compared to the standard experience with any other browser.
Maybe the browser should offer a 'lite' mode that kills JS execution after some 60 ms. If the content doesn't show up by then, an error is shown (rather than a broken page).
> A multi-process design means using a bit more memory up front. Each process has a fixed additional cost. But over time it will mean LESS MEMORY BLOAT.
Unfortunately, it hasn't quite worked out that way.
Someone pointed out that websites have grown by ~10x since 2008, so a lot of the memory usage may have been unavoidable. It'd be interesting to know what the hypothetical lower bound is.
I think the hypothetical lower bound would swap webpages out to the hard drive whenever possible, and therefore wouldn't really depend on webpage size...
As an IE fan I was annoyed that they stole porn mode (IE announced InPrivate before Chrome existed). I still know quite a few people who think Chrome invented incognito and that IE ripped it off.
Now that I look it up, apparently the idea started with Safari, but it didn't get much press.
I think the biggest innovation in Chrome was how rapidly they shipped. Chrome wasn't first to the ideas of hardware-accelerated rendering or porn mode, but it shipped first in both cases.
Every browser borrows from other browsers. But really, browsers are mostly technology, and Chrome was a few notches above all the other browsers in terms of performance, security and crash-recovery. The browser industry owes a lot to Chrome for really pushing the boundaries back then.
I remember having a discussion with a co-worker the day they announced, and he swore Chrome would go nowhere. While I definitely thought it would be an uphill battle, I pointed out that Google's home page is their greatest advertising opportunity (they've since very smartly moved that role to less valuable, but still insanely high-traffic, properties). So I thought they could definitely pull it off. Then again, a year before Chrome, I stupidly thought WebKit should just fold up shop and adopt Gecko, so obviously I can't really predict much.
But then I actually tried it as a secondary browser for about a week, and fell in love.
Not long after, I finally realized two things:
1. Speed is the killer feature, not just of Chrome, but of any software.
2. The auto-update feature, across every platform (such as iOS's App Store) and browser, was the more ingenious of the two. As a web dev, my biggest worry was the potential chaos, some of which has happened, but waaaay less than I could have ever hoped.
They truly made updates a first-class experience for everyone, from developers to every kind of user.
Wow, I remember the first time I used Chrome all the way back in 2008. It was a little buggy (it didn't support Flash properly at first, IIRC) but it sure was fast.
Now it has all the features, is still fast, and is just generally more awesome than the other Windows browsers, the other Mac browsers, and, on most of the Linux desktops I've seen, the other Linux browsers.
I know it's a privacy issue and I don't particularly like boosting Google's numbers, but I don't see myself switching away anytime soon.
> but as time goes on, fragmentation results -- little bits of memory still get used even when a tab gets closed
Don't you still have the same problem with multiple processes, just moved to the kernel for it to deal with? It seems like if you kept everything in one process, you could use custom allocators and a sufficiently smart architecture so nothing stayed around longer than necessary.
Not really, because the kernel maps memory in page-sized chunks. You do have variable page sizes on modern architectures, but it's still just a very small number of block sizes for allocations.
If pages are non-contiguous in the kernel, that's no barrier to making them contiguous in a process's virtual address space.