
> Internet Explorer uses processes but may opt to pack more than one tab into a single process to reduce system load by too many processes.

What is it about having too many processes that causes extra system load?




Every process consumes additional kernel resources for things like memory mappings, data structures for parent/child relationships, etc. There is also likely to be per-process overhead for libraries that allocate and manage global data structures. For example, the HTML parser likely has some global data structures that can be reused between threads but don't get shared between processes.
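To make that concrete, here's a toy Python sketch (not IE's actual code, obviously) of why per-process copies of "global" library state add up: threads inside one process all see the same module-level cache, while worker processes each build their own copy.

    # Toy illustration: a module-level cache is shared by threads in one
    # process, but every worker process ends up with its own copy.
    import multiprocessing as mp
    import threading

    CACHE = {}  # stands in for a parser's global lookup tables

    def fill(tag):
        CACHE[tag] = object()
        return len(CACHE)

    if __name__ == "__main__":
        # Threads all touch the same dict: one copy, visible to all of them.
        threads = [threading.Thread(target=fill, args=(f"t{i}",)) for i in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("after threads, parent sees", len(CACHE), "entries")        # 4

        # Worker processes each fill their own copy; the parent's dict never
        # grows, so every process carries its own tables in memory.
        with mp.Pool(4) as pool:
            pool.map(fill, ["a", "b", "c", "d"])
        print("after processes, parent still sees", len(CACHE), "entries")  # still 4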


> Every process consumes additional kernel resources for things like memory mappings, data structures for parent/child relationships, etc.

True, but we should be able to assume that overhead is negligible.

> There is also likely to be per-process overhead for libraries that allocate and manage global data structures. For example, the HTML parser likely has some global data structures that can be reused between threads but don't get shared between processes.

That's what somebody else said, too. I'm surprised that's enough to matter, but I guess it must be.


Windows internals are very different from Linux, and 32-bit DLL files are not .so files. 32-bit DLLs are typically relocatable (as opposed to .so files, which are position independent), which means they are linked with a preferred base address they expect to be loaded at. If that address is not available at load time, the code must be moved to a different address and the absolute address references in it must be fixed up to compensate. Because of that, in practice, loaded libraries often cannot be shared between processes in memory. The reason for this design is that PIC code requires an indirect jump through an offset table, which adds extra processing overhead. 64-bit Windows is closer to Linux-style .so files thanks to the addition of RIP-relative (instruction-pointer-relative) addressing on x86-64.


Thanks for the info! Today I learned.


I don't see why we should be able to assume that the kernel's per-process data structures are negligible. Maybe they are, but you'd have to measure it; you can't just assume it.


I've done enough kernel work to feel that it's a safe assumption in general, but that there may be some exceptions.

For example, I wouldn't be completely shocked if somebody said, "We really need to support a particular version of an old OS that had unusually high per-process overhead in some particular corner case."

If anybody knows how much kernel memory a basic process needs in, say, modern-day Linux, please chime in. I tried looking it up, but didn't find it. Probably it's just sizeof(task_struct), which I can't be bothered to check right now, plus a few KB for stack stuff.
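If anyone wants to eyeball it themselves, here's a rough sketch that works on Linux: spawn a pile of sleeping children and diff the KernelStack and Slab counters in /proc/meminfo. It only captures what those counters see (kernel stacks plus slab growth), not every per-process cost, so treat the numbers as ballpark.

    # Rough per-process kernel memory estimate on Linux (ballpark only):
    # spawn N sleeping children and diff /proc/meminfo counters.
    import subprocess
    import time

    FIELDS = ("KernelStack", "Slab")
    N = 1000  # number of child processes to spawn

    def meminfo_kib():
        vals = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                if key in FIELDS:
                    vals[key] = int(rest.split()[0])  # values are in kB
        return vals

    before = meminfo_kib()
    children = [subprocess.Popen(["sleep", "60"]) for _ in range(N)]
    time.sleep(1)  # let allocations settle
    after = meminfo_kib()

    for key in FIELDS:
        print(f"{key}: ~{(after[key] - before[key]) / N:.1f} KiB per process")

    for c in children:
        c.terminate()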


If you assume something like that is trivial, you're going to get a big shock when someone accidentally spawns 400,000 or more Python processes in a runaway spawning loop (guess why I couldn't count them?).


It's a rendering engine and a JS engine per process, I guess. Those tend to consume memory on their own, as overhead separate from what the page itself actually needs.


I'm surprised that's enough to make a meaningful difference.


Keep in mind that IE8 goes back to the Vista days. That was when there were still computers with just 512 MiB of memory (which might be fine under Windows 8, but Vista would chomp through it hungrily). That, at least, could be one reason.



