
Wasteful of computing resources, yes, but for a long time we've been prioritizing developer time. That happens because you can get faster hardware cheaper than you can get more developer time (and not all developers' time is equal; say, Carmack can do in a few hours things I couldn't do in months).

I do agree that we'd get fantastic performance out of our systems if we had the important layers optimized like this (or more), but it seems few (if any) have been pushing in that direction.



But we've got far more developers now, and by and large, we aren't producing more capable software than we did 20 years ago. It's gotten very complex, but that is by and large just a failure to keep things simple.

Spotify is slower to launch on my Ryzen 3900X than my operating system, and lacks many of the basic features that WinAmp had in 1998. Now you're thinking "Aha! But WinAmp just played mp3s, it didn't stream over the internet!" Yes, it could. It was also by and large developed by one guy, over the course of a couple of months.

I don't know where this superior developer productivity is going, but it sure doesn't seem to be producing particularly spectacular results.


Back when I was younger we had to develop a few simulations in university, and we spent half the semester coding the building blocks in C. I was slightly good at it, and having seen my brother stumble a couple of years before, I knew I had to be careful with the data structures and general code layout to keep things simple and working.

As this was a group project, there were other hands on deck. One weekend I caught a nasty cold and couldn't attend the weekly meeting to work on the project. Monday comes and we have to show our progress. The code was butchered. It took me a day to fix what had been broken (and, keeping egos at bay, it would've been easier to just throw everything away and start again from my last stable version).

Now I can fire up Python, import numpy and scipy, and make far more complex simulations in a couple of minutes and a few lines of code. Sure, back then Python and numpy did exist, I just didn't know about them. But you know what didn't exist 10-15 years ago? PyTorch, TensorFlow, JAX and the rest of the deep learning ecosystem (scikit-learn probably did exist, it's been around forever!). Thanks to those, I can program/train deep learning algorithms for which I probably wouldn't be able to code the lower-level abstractions myself. JAX comes with a numpy that runs on accelerators "for free" (and there was PyCUDA before that, if you had a compatible GPU).
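To make that concrete: the kind of simulation that used to take half a semester of C plumbing can now be a handful of lines. A minimal sketch (the damped oscillator here is my own hypothetical example, not the actual coursework):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped harmonic oscillator: x'' + 2*zeta*w0*x' + w0^2 * x = 0,
# rewritten as the first-order system y = [x, x'].
def rhs(t, y, w0=2.0 * np.pi, zeta=0.1):
    x, v = y
    return [v, -2.0 * zeta * w0 * v - w0**2 * x]

# Integrate 10 seconds from x=1, x'=0; dense_output lets us
# evaluate the solution on any time grid afterwards.
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)
t = np.linspace(0.0, 10.0, 500)
x = sol.sol(t)[0]  # position over time
```

The integrator, the adaptive step size, the interpolation: all building blocks someone else already wrote and debugged.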

That's the programmer productivity you're not seeing. Sure, these are very specific examples, but we have many more building blocks available to make interesting things without having to worry about the lower layers.

You can also complain about Electron being a bloated layer, and that's OK. There's your comparison of why Spotify is slow and Winamp is/was fast.


That's kinda like saying Bob built a go-kart in his garage over a couple of months and it moves you from A to B, so I don't see why a Toyota Corolla needs a massive factory. Spotify's product isn't just a streaming media player; it's also all the infrastructure to produce the streams, at scale, for millions of users.

How often are you actually launching Spotify? I start it once when I boot and that's it until my next reboot, weeks or months later.

Now you might of course ask, "why isn't the velocity 6554x that of WinAmp, even when correcting for non-eng staff, management, overhead, etc.?" Well, they probably simply aren't allocating that much to client development.

Also, oftentimes one dev who knows exactly what he is doing can be way more effective than a team: no bikeshedding, communication overhead, PRs, etc. But what happens when they get hit by a bus?


But you can't get faster hardware cheaper anymore, or at least not naively faster hardware. You are getting more and more optimization opportunities, though: vectorize your code, offload work to the GPU or one of the countless other accelerators present on modern SoCs, change your I/O stack so you can use SSDs efficiently, and so on. I think it's only a matter of time until someone puts an FPGA onto a mainstream SoC, and the gap between efficient and mainstream software will only widen from that point.
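"Vectorize your code" in practice often just means pushing a loop down into native code. A toy sketch (my own example, not anything from this thread) of the same dot product written both ways in NumPy:

```python
import numpy as np

def dot_loop(a, b):
    # Naive version: one Python-level iteration per element.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

rng = np.random.default_rng(0)
a = rng.standard_normal(1_000_000)
b = rng.standard_normal(1_000_000)

# Vectorized version: same math, but the loop runs in optimized
# (often SIMD / BLAS-backed) native code instead of the interpreter.
vec = float(a @ b)
```

Timing the two makes the point: the vectorized call is typically orders of magnitude faster for arrays this size, and it's the same mental model you'd need for offloading to a GPU later.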


You are precisely telling me the ways in which I can get faster hardware: GPU, accelerators, the I/O stack and SSDs, etc.

I agree that the software layer has become slow, crufty, bloated, etc. But it's still cheaper to get faster hardware (or wait a bit for it; see M1, Alder Lake, Zen 3, to name a few, all of which are getting successors later this year) than to get a good programmer to optimize your code.

And I know that we'd get much better performance out of current (and probably future) hardware if we had more optimized software, but it's rare to see companies and projects take on such optimization efforts.


But you can't get all these things in the browser. You don't just increase CPU frequency and get free performance anymore. You need conscious effort to use GPU computing, and conscious effort to ditch the current I/O stack for io_uring. Modern hardware gives performance to those who are willing to fight for it. The disparity between the naive approach and the optimized approach grows every year.


I'm not arguing against your point at all. I'm just pointing out that there are many who are happy to wait a year, buy a 10-20% faster CPU and call it a day, or buy more/larger instances in the cloud, or do anything but optimize the software they use. Some couldn't even if they wanted to, because they buy software rather than develop it in-house, and aren't in a position to demand better performance from their vendors.





