I'm claiming that people keep repeating this argument and that it no longer holds true, because software is already losing too much performance for the value we get back.
Your claim assumes that a small loss in performance in networking will lead to a loss in performance of the overall web browser, which is only true if networking is the bottleneck while browsing. And it usually isn't.
Ah, so you think the only thing I do with a computer is use the browser? That's odd; I was just giving an example of something that is already so slow it's literally unworkable today.
Slowing down networking affects the entire machine, especially insofar as a computer is increasingly just a dumb terminal to something else.
Look, if you make network requests potentially 20% slower, then browser performance will be impacted too; it's so obvious that I'm not sure how to explain it more simply.
By how much? I am not sure, but you can't say it won't be slower at all unless we're talking about magic.
Pretending the performance drop is trivial without evidence is the wrong approach. Show me how you can get similar performance with a 20% increase in latency and I will change my stance here.
As it stands, there are two things I know to be true:
Browsers rely on networking (as do many things, by the way), and software these days keeps getting slower while providing similar value.
The point is that most users and use cases of networking don't have bandwidth or latency requirements that warrant a network stack designed for high performance. Let the ones who want to live on the edge do so if they want, but don't force your high-performance, one-bug-away-from-total-disaster network stack design, based on your own (probably overblown) requirements, on everyone else.
Grandma doesn't care if her tablet can't saturate a WiFi 6 link. Grandma doesn't care if her bank's web page takes an extra 75 µs to traverse the user-land network stack. But she will care a whole lot if her savings are emptied while managing her bank account through her tablet. Even worse if her only fault was having her tablet powered on when a neighbor's smart toaster compromised it through a remotely exploitable vulnerability in her tablet's WiFi stack.
Or are you suggesting that grandma should've known better than to let her tablet outside of a Faraday cage?
> Pretending that it's trivial amounts of performance drop without evidence is the wrong approach.
Amdahl's law begs to differ. If it takes 5 s for the web site to arrive from the bank's server, spending 5 µs or 500 µs in the network stack is completely irrelevant to grandma. Upgrading her cable internet to fiber to cut those 5 s down to 500 ms will have a much more positive impact on her user experience than optimizing the crap out of her tablet's network stack to go from 5 µs down to 1 µs.
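To put numbers on it, here's a back-of-the-envelope sketch in C using the same illustrative figures as above (5 s page load, 5 µs vs. 500 µs in the stack, 500 ms after a link upgrade); treat it as arithmetic, not a benchmark:

    /* Amdahl-style arithmetic for the illustrative numbers above. */
    #include <stdio.h>

    int main(void) {
        double page_load_s  = 5.0;     /* end-to-end page load               */
        double stack_fast_s = 5e-6;    /* 5 us spent in the network stack    */
        double stack_slow_s = 500e-6;  /* 500 us spent in the network stack  */
        double upgraded_s   = 0.5;     /* page load after a link upgrade     */

        double stack_penalty = stack_slow_s - stack_fast_s;
        printf("slower stack costs %.0f us = %.4f%% of the page load\n",
               stack_penalty * 1e6, 100.0 * stack_penalty / page_load_s);
        printf("faster link saves  %.1f s  = %.0f%% of the page load\n",
               page_load_s - upgraded_s,
               100.0 * (page_load_s - upgraded_s) / page_load_s);
        return 0;
    }

Which comes out to roughly 0.01% of the page load for the slower stack versus 90% for the faster link.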
What an incredibly weak argument, I'm disappointed to read it.
We're not talking microseconds; we're talking about a fundamental problem in computer science that has been around for 30 years and is no closer to being solved.
We're talking about a class of bugs that can be solved rather easily by other means, means that don't impose an unknown performance penalty on one of the slowest-to-improve components of modern computers.
Grandma isn't losing anything due to this. Heartbleed this ain't; Spectre this ain't. And crucially, we have the tools to ensure this never happens again without throwing our hands up in the air and saying "WELL COMPUTER NO GO".
If you're actually scared, I invite you to run OpenBSD as I did. You will learn very quickly that performance is a virtue you can't live without: a few extra instructions here, no caching on gettimeofday() there, and suddenly the real lag of using the machine is extremely frustrating.
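If you want to see what I mean by the gettimeofday() point, here's a rough sketch (POSIX only; whether each call is a cheap vDSO/commpage read or a full syscall depends on the OS, and that difference is exactly what you end up feeling):

    /* Time a burst of gettimeofday() calls: cheap where the OS services it
       in user space, noticeably slower where every call is a syscall. */
    #include <stdio.h>
    #include <sys/time.h>

    int main(void) {
        enum { N = 1000000 };
        struct timeval start, end, tv;

        gettimeofday(&start, NULL);
        for (int i = 0; i < N; i++)
            gettimeofday(&tv, NULL);          /* the call under test */
        gettimeofday(&end, NULL);

        double elapsed_us = (end.tv_sec - start.tv_sec) * 1e6
                          + (end.tv_usec - start.tv_usec);
        printf("%d calls in %.0f us (~%.1f ns per call)\n",
               N, elapsed_us, 1000.0 * elapsed_us / N);
        return 0;
    }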
And again, for the final time I will say this: we can fix this and ensure it never happens again without any loss in performance.
That you keep advocating a loss in performance tells me you've spent a career making everyone's life worse for the sake of your own experience, and I am not a fan of that mentality.
Or maybe I've worked in AAA game dev too long, where we don't get the luxury of throwing away performance on a whim.
Extrapolating your position makes me think your ideal operating system wouldn't be an offshoot of the Linux kernel. It would be a general-purpose, fully asynchronous, MMU-less, zero-copy, single address space operating system secured through static program analysis, where the web browser and the NIC driver are but a couple of function calls away. Kinda like Microsoft Research's Singularity, but probably without the garbage collection.
Maybe one day every phone, tablet and laptop will run such an operating system, but I doubt that we'll have this as a viable alternative anytime soon. In the meantime, I think there's a reason why Google with Fuchsia OS and other companies are hedging their bets mainly through micro-kernel-style approaches for their next-gen general-purpose operating systems.
You are assuming copying buffers around won't consume any CPU? Maybe it's perfectly fine, maybe it's not. But it needs some experimentation before we can hand-wave it away.
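The experiment doesn't have to be elaborate to get a first-order answer. A minimal sketch (the 1500-byte packet size and iteration count are arbitrary illustrative choices; a real measurement would also look at cache effects, batching, and the rest of the stack):

    /* Copy a packet-sized buffer many times and report rough copy bandwidth. */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    int main(void) {
        enum { PKT = 1500, ITER = 1000000 };
        static unsigned char src[PKT], dst[PKT];
        unsigned long sink = 0;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITER; i++) {
            src[0] = (unsigned char)i;   /* vary the source slightly           */
            memcpy(dst, src, PKT);
            sink += dst[0];              /* keep the copies from being elided  */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%d copies of %d bytes in %.3f s (~%.1f Gbit/s, sink=%lu)\n",
               ITER, PKT, sec, (double)PKT * ITER * 8.0 / sec / 1e9, sink);
        return 0;
    }

Whatever number it prints on your machine is the baseline the "does an extra copy matter?" argument should start from.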
This publication (http://www.minix3.org/docs/jorrit-herder/asci06.pdf) claims that MINIX3 could saturate a 1 Gb/s Ethernet link with a user-space network stack, with separate processes for the stack and the driver, on a rusty 32-bit micro-kernel that can't do SMP. In 2006.
I'm claiming that people keep repeating this argument and that it no longer holds true, because software is already losing too much performance for the value we get back.
That's my whole thesis.