This philosophy is embraced by the developers working on WebKit: in fact, the code responsible for rendering a typical web page averages just 2.1 effective C++ statements per function, compared to 6.3 for Firefox and an estimated 7.1 for Internet Explorer.
What is an "effective C++ statement"? That's a really odd measure and I can't get my head around it.
It's marketing bullshit. Maybe he wrote a statement in C++ and measured the time it takes to execute; I have no idea (it's also impossible to know what sort of statement). The numbers suggest around 2% accuracy (7.1), which is pretty impressive given that we're measuring "code responsible for rendering a typical web page" per "function" in "effective C++ statements". All of those well defined in his textbook, I bet.
It's more likely he just averaged a number of results and rounded to the first decimal place to display it better, not to suggest some level of accuracy.
Last I heard IE was closed source... I'd love to know how he got his hands on the IE code base to make that measurement... All his posturing about design decisions being a business need of Google makes me wonder whether this was an M$-sponsored article...
They say 'estimated'. One way to do that is to look at the disassembly of IE code, assuming a roughly constant average number of assembly instructions per C++ statement. I don't think that is a bad assumption, but it does ignore potential differences between compilers.
Also, IE source is likely available through Microsoft's Shared Source Initiative (http://www.microsoft.com/en-us/sharedsource/default.aspx). After all, Microsoft has vehemently argued that IE is inseparable from the OS. I think it would require some lenient interpretation of that license to use it for this purpose, though.
Well you could instrument the source code and measure the number of calls, and compare that to the rough number of non-call ops per function. Not a perfect metric but it's something. His claims do seem far fetched though.
They mean that each function does relatively little work, so to render a given web page, the number of function calls (and therefore vtable lookups) is higher than in other browsers. This translates to more overheads and slower execution.
MSVC with PGO will do it, by inserting a guard on the vtable pointer and inlining the most common implementation. But you still take the possible cache miss of reading the vtable pointer, of course. And the other commonly used compilers (clang and gcc) don't do anything like this, even with PGO.
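For illustration, here is roughly what that guard-and-inline transformation looks like if you write it out by hand. The real compiler guard compares the vtable pointer itself; portable C++ can only approximate that with a `typeid` check, and the `Shape`/`Square` names are made up for this sketch, not anything from the article:

```cpp
#include <cassert>
#include <typeinfo>

struct Shape {
    virtual ~Shape() = default;
    virtual int area() const = 0;
};

struct Square : Shape {
    int side;
    explicit Square(int s) : side(s) {}
    int area() const override { return side * side; }
};

struct Circle : Shape {
    int r;
    explicit Circle(int r) : r(r) {}
    int area() const override { return 3 * r * r; }  // rough, for illustration
};

// Hand-written analogue of PGO speculative devirtualization:
// guard on the dynamic type, then call the hot implementation directly.
int area_devirt(const Shape& s) {
    if (typeid(s) == typeid(Square))                          // guard
        return static_cast<const Square&>(s).Square::area();  // direct, inlinable call
    return s.area();                                          // fallback: normal vtable dispatch
}
```

As noted above, even with the guard you still load the vtable pointer to perform the type check, so the potential cache miss remains.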
This is based on general programming knowledge, as I have never used C++, but I think it is referring to a highly modular programming style that uses many general functions.
I.e., instead of writing a few quick lines of code to perform a bisection search in the middle of the logic flow, you write a general bisection search function and call that.
This drives the "statements per function" count way down, and makes code cleaner and easier to update, but can substantially increase call overhead.
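To make the factoring-out concrete, a minimal sketch of the kind of general-purpose function being described (the name `bisection_search` is just for this example):

```cpp
#include <cassert>
#include <vector>

// A general bisection (binary) search: returns the index of target in a
// sorted vector, or -1 if absent. Factored out into one reusable function
// instead of being rewritten inline wherever it's needed.
template <typename T>
long bisection_search(const std::vector<T>& sorted, const T& target) {
    long lo = 0, hi = static_cast<long>(sorted.size()) - 1;
    while (lo <= hi) {
        long mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target)  lo = mid + 1;
        else                       hi = mid - 1;
    }
    return -1;
}
```

Each call site now contributes one statement (the call) instead of the whole loop, which is exactly how "statements per function" gets driven down.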
No, C++ is particularly well suited to that since it's possible to implement those with zero (and in practice less than zero) overhead. The problem is overridable functions; i.e. not generic implementations that work on various structures by compiler specialization, but methods on classes whose implementation varies at runtime by dynamic dispatch. E.g. a getLayoutWidth method with a different implementation for blocks and inline runs, and where it's possible to call the implementation without knowing at compile time which it is.
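A sketch of the contrast being drawn, using the comment's own hypothetical `getLayoutWidth` example (these are not real WebKit classes): the virtual version must dispatch through the vtable at runtime, while the template version is specialized at compile time and costs nothing.

```cpp
#include <cassert>

struct RenderObject {
    virtual ~RenderObject() = default;
    virtual int getLayoutWidth() const = 0;  // implementation chosen at runtime
};

struct Block : RenderObject {
    int getLayoutWidth() const override { return 800; }
};

struct InlineRun : RenderObject {
    int getLayoutWidth() const override { return 120; }
};

// Caller can't know the concrete type at compile time,
// so this compiles to an indirect call through the vtable.
int widthOf(const RenderObject& o) { return o.getLayoutWidth(); }

// The zero-overhead alternative: the compiler specializes this per type,
// so the call is direct and inlinable.
template <typename T>
int widthOfStatic(const T& o) { return o.getLayoutWidth(); }
```

The overhead the article is counting comes from code shaped like `widthOf`, not from generic code shaped like `widthOfStatic`.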