Hacker News

Most applications are I/O bound, not CPU bound. So wasted cycles aren't always that big of a deal.


Most applications are not intrinsically I/O bound; that is only an artifact of the scale of wastefulness and inefficiency in how they use I/O. Even databases are no longer I/O bound except at the extremes -- memory bandwidth is likely a bigger bottleneck if the I/O is handled competently.

Software being "I/O bound" stopped being a legitimate excuse for the vast majority of applications many years ago. Hardware and software have moved on.


> Even databases are no longer I/O bound except at the extremes -- memory bandwidth is likely a bigger bottleneck if the I/O is handled competently.

> Software being "I/O bound" stopped being a legitimate excuse for the vast majority of applications many years ago. Hardware and software have moved on.

You know, I'm not sure about that.

You can write really performant code, but in our world, increasingly infected by SaaS solutions, you'll often be bottlenecked by network calls, be it to an external service or a DB instance.

Sure, your DB driver might be really performant and the actual progress made in the last decades on the DB engines has been amazing, but you can't beat the speed of light.
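To put a rough number on that physical limit, here's a back-of-envelope sketch; the distance and the fiber refractive-index factor are assumptions picked for illustration, not measurements:

```python
# Lower bound on round-trip time imposed by the speed of light alone.
# Assumed numbers: 4,000 km one-way distance (a long cross-country hop)
# and light travelling at roughly 2/3 of c in optical fiber.
C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3      # typical speed in fiber relative to c (assumption)
DISTANCE_KM = 4_000       # assumed one-way distance to the remote DB/service

one_way_s = DISTANCE_KM / (C_VACUUM_KM_S * FIBER_FACTOR)
round_trip_ms = 2 * one_way_s * 1000
print(f"theoretical minimum round trip: {round_trip_ms:.0f} ms")
```

Forty-odd milliseconds per round trip, before any queuing, routing, or server-side work, is why a chatty protocol against a remote service dominates runtime no matter how fast the local code is.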

And, since many still choose external DBs when something like a local SQLite DB would suffice (sometimes a form of cargo culting, sometimes choosing a particular risk management profile/failure mode with multiple nodes), that cannot exactly be ignored anyway.


As you say, it could be done simpler. What we have is a manifestation of wastefulness.


It's not a big deal if you're only running one thing on the computer. But the wasted cycles could have been used by a different process to do something else. Multiply this mindset by many services and applications, and suddenly the computer is slow when it doesn't have to be.


Most applications do far more I/O than is necessary for the task.

Anyway, with a modern SSD I don't think you are right. My disk can do something like 3 GB/s and 100k random accesses per second. At that rate, most desktop programs should have their binary and all dependencies loaded into memory in approximately 0.1 s. That seemingly doesn't stop a lot of them from taking several seconds to start. Likewise games: despite usually consuming less than 10 GB of video and main memory combined, they can take much longer than 3.3 s to load.
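The arithmetic behind those figures is just size over bandwidth; the footprint numbers here are the assumptions used in the comment above:

```python
# Back-of-envelope load-time estimates for the SSD quoted above.
SEQ_BW_GB_S = 3.0     # sequential read bandwidth, GB/s
RAND_IOPS = 100_000   # random accesses per second (bounds small-file reads)

binary_gb = 0.3       # assumed size of a large desktop app plus dependencies
game_gb = 10.0        # combined video + main memory footprint quoted above

print(f"desktop app: {binary_gb / SEQ_BW_GB_S:.1f} s")   # ~0.1 s
print(f"game assets: {game_gb / SEQ_BW_GB_S:.1f} s")     # ~3.3 s
```

Even if random access rather than sequential bandwidth dominates, 100k IOPS at 4 KB each is still ~400 MB/s, so the estimates change by well under an order of magnitude.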


Though that possibly includes decompression, parsing, and, more importantly, loading data to the GPU and even compiling shaders.


Decompression is fast if you do it right (using anything from the Oodle pack is a good first step towards doing it right).

Hopefully a game doesn't need to parse a whole lot of things; however the data needs to appear in memory, it can be stored in that same layout on disk, possibly compressed with the aforementioned fast compression libraries.

There is nothing special about loading data onto the graphics card: if you know what you are doing, it is just copying data, and no point on the path is slower than the disk.

I know there are games that somehow rely on thousands of shaders. Like most games, you can just not do that, or you can store the compiled shaders so that compilation only needs to happen on the first run.
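The cache-on-first-run idea is simple to sketch. Here `compile_shader` is a stand-in for whatever the engine or driver actually does, and keying the cache on a hash of the shader source is just one possible scheme:

```python
import hashlib
from pathlib import Path

CACHE_DIR = Path("shader_cache")  # assumed on-disk cache location

def compile_shader(source: str) -> bytes:
    # Placeholder: a real engine would invoke the driver/toolchain here.
    return source.encode()

def get_shader(source: str) -> bytes:
    # Key the cache on a hash of the source, so edited shaders recompile.
    key = hashlib.sha256(source.encode()).hexdigest()
    cached = CACHE_DIR / key
    if cached.exists():
        return cached.read_bytes()      # hit: skip compilation entirely
    binary = compile_shader(source)     # miss: compile once...
    CACHE_DIR.mkdir(exist_ok=True)
    cached.write_bytes(binary)          # ...and persist it for the next run
    return binary
```

A real engine would also fold the driver and GPU identifiers into the cache key, since compiled shader binaries generally aren't portable across them.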


While this is most certainly true, I'm willing to bet the ratio between them has dropped substantially in recent years.


What do you mean by "most applications", and what I/O are you talking about? Do you have data to back up your claim?



