I wish more developers thought like that. As someone who usually uses hardware that is several years old, I'm always annoyed that my system seems to get worse and worse with every software "update" I do.
The opposite effect always feels like magic. I have fond memories of watching one CGI package make my old Pentium II 350 do real-time physics simulation over NURBS surfaces, while the other couldn't even produce correct ones in batch mode (at 3 times the cost).
The first Mac OS X upgrades were similar: they made your old iMac do things faster.
I don't know how to make devs create non-bloated programs; I often think we should give them really old systems to write on. Constraint drives creativity, etc. etc.
Please don't use this as an excuse to cripple development environments. I have to run other people's bloated software, too.
Just give devs antiquated test/production rigs. That's entirely reasonable. A particularly frugal employer of mine refused to pay for a web/application server with more than 512MB of RAM. That was a fun job, actually.
It's really simple: you need to convince the people holding the purse strings that it's worth their money to spend an inordinate amount of time on a problem affecting a small portion of their desired demographic. Getting devs to work on antiquated machines will just make their lives miserable.
IMO it's a dynamic equilibrium: with too many resources you'll accept suboptimal situations. Think Google C++ build times. I agree that living below acceptable conditions is unnecessary drag, but it's good to swing back and forth. An exercise in awareness.
Not for me. I don't get extra time to fiddle around shaving milliseconds; my environment is so slow that it's all I can do to get working software out the door within the timelines I'm given. I have solutions to improve the speed of my software and environment, but they take time to implement, time I could spend working on features that make tangible differences to the bottom line. Guess what gets priority?
If it were up to me, I'd spend this entire year refactoring, but it's not.
One pattern I've seen often is neglecting the cost of crossing layers. You'll have some tool or service, and then some client code will end up looping through hundreds or thousands of uses of it. Meanwhile, the otherwise negligible setup/teardown costs that come with crossing that layer add up, and you end up with horrendous performance because of it.
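A minimal sketch of that shape in JavaScript (fetchPrice/fetchPrices are made-up stand-ins for whatever sits on the other side of the layer, not a real API):

    // Pays the layer-crossing cost once per item: N round trips.
    async function totalSlow(ids) {
      let total = 0;
      for (const id of ids) {
        total += await fetchPrice(id);   // setup/teardown on every call
      }
      return total;
    }

    // Pays the layer-crossing cost once for the whole batch.
    async function totalFast(ids) {
      const prices = await fetchPrices(ids);  // hypothetical batch endpoint
      return prices.reduce((sum, p) => sum + p, 0);
    }

Each individual call looks cheap; it's the loop wrapped around the layer that kills you.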
> A coworker, learning about this blog concept, remarked that “accidentally quadratic” is less exciting than, say, accidentally factorial. But I’ve never seen one of those in production software, either.
Not production code, but when we were learning about recursion at uni, someone implemented Fibonacci like
fib (a) { return fib(a-1) + fib(a-2) }
(with a base case, obviously). So fib(6) would generate calls to fib(5) and fib(4), which would spawn calls to fib(4), fib(3) and fib(3), fib(2). Those 4 would again spawn 2 new calls each, many of them overlapping.
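A runnable JavaScript sketch of that blow-up, with the base case filled in and a counter to make the call growth visible:

    let calls = 0;

    function fib(n) {
      calls++;
      if (n < 2) return n;              // base case
      return fib(n - 1) + fib(n - 2);   // re-solves the same subproblems
    }

    fib(30);
    console.log(calls);  // roughly 2.7 million calls for n = 30

Each extra step of n roughly multiplies the number of calls by the golden ratio, so it's exponential rather than factorial, but the effect is the same: it falls over for surprisingly small inputs.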
I'm a front-end developer and when I run into software that is slow, I assume it is because it includes a 10-pound bag of libraries while only using a minute percentage of their functionality. (And that functionality could be implemented with 10 lines of vanilla JavaScript.)
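For instance (an illustrative sketch, not a case from the comment above): debounce is the sort of helper that often drags in a whole utility library, and it's only a few lines of vanilla JS:

    // A plain-JS debounce, instead of importing a utility library for it.
    function debounce(fn, ms) {
      let timer;
      return function (...args) {
        clearTimeout(timer);
        timer = setTimeout(() => fn.apply(this, args), ms);
      };
    }

    // Usage (input and handleInput assumed to exist elsewhere):
    // only fires after typing pauses for 200ms.
    input.addEventListener('input', debounce(handleInput, 200));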
>> I assume it is because it includes a 10-pound bag of libraries...
Yes. The biggest source of this problem is that someone calling a function doesn't know how that function actually works and what its time complexity is. Sometimes it's that a function is asymptotically slower than it needs to be, but often it's that writing a composite of the calling and called functions could result in combining an inner and outer loop.
When you use large libraries that do more complex things, your opportunity to optimize across levels goes away because you can't know all the code and where this issue will come up.
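A small JavaScript sketch of how that composite goes quadratic (the names are illustrative, not from the thread):

    // Callee: a tidy helper hiding a linear scan.
    function isKnownUser(knownUsers, id) {
      return knownUsers.includes(id);            // O(n) per call
    }

    // Caller: loops over m ids without seeing the inner loop -> O(n * m).
    function filterKnown(knownUsers, ids) {
      return ids.filter(id => isKnownUser(knownUsers, id));
    }

    // Optimizing across the boundary: build the index once -> O(n + m).
    function filterKnownFast(knownUsers, ids) {
      const known = new Set(knownUsers);
      return ids.filter(id => known.has(id));
    }

Neither function is wrong on its own; the quadratic behaviour only exists in the combination, which is exactly what a big opaque library keeps you from seeing.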
In general, code that doesn't get used doesn't increase your run time (though in the case of JS you will obviously have to download it once, and parse the file on load).
In a sort of masochistic way, I'm looking forward to the end of Moore's Law, because it might mean that we get better at programming to get more power out of limited hardware, rather than the sort of "rising tide lifts all boats" approach that we have now.
Ironically enough, trying to open this blog's main page crashes my Safari (on an older iPad) every single time. I guess that cute little 200ms header animation is too accidentally quadratic for my 'outdated' hardware...
I'd never heard the term 'accidentally factorial' before, but I have actually seen it. It was at a well-known SV company; apparently it was rarely noticeable in prod because the arguments were usually fairly small.