On Linux, the "powertop" utility will show you processes that are waking up regularly (such processes can cost you battery life on a laptop or phone by preventing the CPU entering and maintaining low power idle states).
Back around 2004, when I was using QNX for our DARPA Grand Challenge vehicle, I discovered that the machines on the vehicle were about 20-30% loaded when idle. Although they had no keyboards or displays, the embedded CPU boards we were using did have a minimal VGA controller. We were using the default QNX install, which has the full desktop environment, and it was bringing up a screen saver. The minimal VGA controller ran the screen saver very slowly, because the screen saver was reading from the display memory to scroll, a very slow process.
All this ran at low priority, so it didn't affect the real-time tasks at all; it wasn't hurting anything. The QNX people were quite embarrassed when we reported it, though.
I would speculate that even if hardware progress were to stop today, we would still see advancement in the speed of computer systems for quite some time. All just from optimizing what we are already doing, or (as others have pointed out) not doing unnecessary things.
There are so many things I'd like to fix if I had the time...
You're clearly an optimist. I find that the tendency is for software to get slower -- and generally worse -- over time.
Five years ago I was using Chrome and it was fast, lightweight, and stable. Now it's slow, uses 500 MB of RAM, and the renderer crashes on a regular basis.
A couple of things might influence this observation:
1) Your definition of what counts as fast changes. Downloading an MP3 in 5 minutes felt really fast in 1998, but feels really slow today.
2) What we expect Chrome to do keeps increasing. Because Chrome was so fast, people started creating web pages that do more (because they could!). It's a form of induced demand (https://en.wikipedia.org/wiki/Induced_demand).
I think there may be a counterintuitive situation where incremental improvement in hardware is worse than no improvement.
Consider how much extra performance game developers are able to wring out of a console after a few years' experience. If the hardware is absolutely stable, and there's no expectation that it can get faster, then people do make a big effort on software.
The biggest example of this is the demoscene, where some absolutely astounding things have been done with old hardware; e.g. a Commodore 64, 1MHz 8-bit CPU with 64KB of RAM.
Related to this, user interfaces seem to be designed to stay under some target response time. For games maybe it's 100ms; for a game console's dashboard, maybe 500ms; for an ATM, it's more like 4000ms.
I suspect the target response time, not the speed of the hardware, is what determines how slow things are. As people have pointed out, hardware has gotten faster while many user interfaces have gotten slower, so it seems more likely that either the target response times have been rising (people tolerate higher response times in exchange for more features or whatever), or people with older, slower devices observe much higher response times than the original target (an iPhone 3GS vs an iPhone 5 running the same app, for instance).
Stable hardware solves the second issue at least, though I'm not sure about the first. I think over time people will notice that everything else responds instantly to touch and computers should too, so they will start rejecting apps with high target response times.
As an example of that, I chose my last monitor by looking at http://www.displaylag.com/ and picking the one with the lowest lag. If other people start caring more about the latency of the things they use, hardware and software developers will prioritize it.
It used to be that a browser just went to the destination and showed you what was there. This was fast. Now every time you visit a site, a giant war goes on behind the scenes between what the site wants to force you to see and what you actually want to see. This may have something to do with it.
If hardware speeds didn't change anymore, there would be more effort spent on speed. Right now it's too easy to add stuff and not feel much of an impact if you follow all the hardware upgrades (ruining it for everybody else who doesn't); if people had to squeeze more performance out of their code, they would. (See the iterations of games for old platforms, where people did the craziest things over the years to get results that were thought impossible.)
Well, yeah, it is certainly possible for software projects to go the other way. It depends on what the focus is.
If people knew that their only route to speed improvements was optimization (and to a certain extent, that is already true), they might focus more on that instead of adding features.
I'm still hoping for the whole stack to flatten significantly with PL guys managing to allow for multistage/multilayer JIT optimization over evaluation towers.
On very fast local systems and networks, 'rough idling' can go on for a very long time, until it comes time to scale.
One time I tracked down why a program that ran at regular intervals was taking a few minutes to complete. It would run approximately 42,000 SQL select calls every time it dropped into a new directory to parse some files, yet it only took 10 minutes to process the entire directory structure. So the program was actually incredibly fast, considering all the work it was doing.
The catch? It only needed to run SQL select 42 times per directory; the other 41,958 calls were not necessary. As far as I know it was never fixed, because it only took 10 minutes to run.
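Roughly, that smells like the classic mistake of redoing per-directory work for every file. A minimal sketch under that assumption (the names and the fake query function are made up, not from the actual program), if the same 42 selects were being re-run for each of ~1,000 files:

    /* Minimal sketch of the bug pattern (all names made up; run_select()
     * stands in for the real SQL call and just counts invocations). */
    #include <stdio.h>

    #define FILES_PER_DIR   1000
    #define QUERIES_PER_DIR 42

    static long query_count = 0;
    static void run_select(int query_id) { (void)query_id; query_count++; }

    /* What the program did: re-run every query for every file. */
    static void process_dir_slow(void) {
        for (int f = 0; f < FILES_PER_DIR; f++)
            for (int q = 0; q < QUERIES_PER_DIR; q++)
                run_select(q);
    }

    /* What it needed to do: run each query once per directory, then
     * parse the files against the cached results. */
    static void process_dir_fast(void) {
        for (int q = 0; q < QUERIES_PER_DIR; q++)
            run_select(q);
    }

    int main(void) {
        process_dir_slow();
        printf("slow: %ld selects per directory\n", query_count);  /* 42000 */
        query_count = 0;
        process_dir_fast();
        printf("fast: %ld selects per directory\n", query_count);  /* 42 */
        return 0;
    }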
The first time I encountered a certain JS library's $.noop function, it bent my brain: why would there be a function that exists to do nothing?
Later, wisdom visited me and I realised the cost of scattering `typeof isThisAFunction` checks everywhere, and the savings of an empty function call that can be optimised out in the first place.
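The same idea works outside JS. Here's a minimal C sketch (all names made up) of defaulting a callback to an empty function so the calling code never needs an is-it-set check:

    #include <stdio.h>

    typedef void (*callback_t)(int value);

    /* The "noop": an empty function used as the default callback. */
    static void noop(int value) { (void)value; }

    static callback_t on_event = noop;

    static void set_on_event(callback_t cb) {
        on_event = cb ? cb : noop;   /* check once, at registration time */
    }

    static void print_event(int value) { printf("got event %d\n", value); }

    int main(void) {
        on_event(1);                 /* no handler registered: calls noop, does nothing */
        set_on_event(print_event);
        on_event(2);                 /* prints "got event 2" */
        return 0;
    }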
Does anyone know under what circumstances changes to software can significantly impact energy consumption? I'm thinking of this from the perspective of the environment and of impacting climate change. How often is that considered, outside of mobile platforms? What are the largest opportunities, if there are any big ones?
For example, TCP is so widely used and affects so many machines with each transaction (i.e., the endpoints and everything in between) that I'd guess that a tiny increase could make a large impact globally. Or there might be other software where there's an opportunity for a large improvement.
There is a post on Random ASCII about programs raising the Windows(tm) timer interrupt rate from the default 64/sec to 1000/sec for no good reason, wasting power and wrecking battery life.
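If memory serves, the usual culprit is timeBeginPeriod(). A sketch of the pattern that post complains about (the Sleep is just a stand-in for the program's normal work):

    /* The global timer-resolution bump described in the post.
     * Link against winmm.lib. */
    #include <windows.h>
    #include <mmsystem.h>

    int main(void) {
        /* Raises the system-wide timer interrupt rate to 1000/sec (1 ms).
         * The cost is global: the whole machine pays it while this is in effect. */
        timeBeginPeriod(1);

        Sleep(10 * 1000);    /* ...the program going about its business... */

        /* Programs that really need 1 ms timers should pair the call with
         * timeEndPeriod(1); programs that don't should skip both entirely. */
        timeEndPeriod(1);
        return 0;
    }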
I work on stuff that needs to run for years off of small batteries. One issue has been that, just like Windows(tm), RTOSes have traditionally depended on a heartbeat interrupt, which means the RTOS wakes up every 1-10ms to check whether a timer has expired, see which task should be running, etc. The truth is, 99% of the time there is absolutely nothing to do. Or in my case, 99.9% of the time.
I've had to roll my own routines to keep track of how long it is until the next timer expiration and use that to set a hardware timer that wakes up the uP. And for timers that don't require low latency, I marshal the events and handle them all at once every so often.
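Roughly what that ends up looking like; a minimal sketch where hw_now_ms(), hw_timer_set_wakeup(), and cpu_deep_sleep() are hypothetical stand-ins for whatever the actual part provides:

    /* Tickless timers: instead of a 1-10ms heartbeat, sleep until the
     * soonest deadline and let a hardware timer wake the uP exactly then. */
    #include <stdint.h>

    struct soft_timer {
        uint32_t expires_ms;          /* absolute expiry time */
        void (*fire)(void);           /* handler to run on expiry */
        struct soft_timer *next;
    };

    static struct soft_timer *timers; /* kept sorted, soonest deadline first */

    extern uint32_t hw_now_ms(void);                  /* hypothetical HAL calls */
    extern void hw_timer_set_wakeup(uint32_t at_ms);
    extern void cpu_deep_sleep(void);

    void idle_loop(void)
    {
        for (;;) {
            uint32_t now = hw_now_ms();

            /* Fire everything due in this one wakeup; the signed compare
             * handles 32-bit tick wraparound. Non-urgent timers can be
             * coalesced by rounding their deadlines to a common slot so
             * several of them expire together. */
            while (timers && (int32_t)(timers->expires_ms - now) <= 0) {
                struct soft_timer *t = timers;
                timers = t->next;
                t->fire();
            }

            if (timers)
                hw_timer_set_wakeup(timers->expires_ms); /* next real deadline */

            cpu_deep_sleep();   /* nothing to do until the hardware wakes us */
        }
    }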
I was immediately skeptical, but then I looked around and found some articles claiming that as of 2013 computing in general accounted for fully ten percent of global electricity usage. So maybe we should be putting some thought into this issue after all.
It should be pointed out that we do in many places. Mobile has significant incentive to be efficient. Cloud has significant incentive to be efficient. Mobile efficiency creeps into a lot of other home tech because it's cheap to use that highly-scaled mobile technology now, because it's off-the-shelf. The worst is probably stuff headed for the home that is too big to use SOCs, like desktops and game consoles. But it's not as if the entire industry is oblivious to this issue... Performance-per-watt is an already-important metric, and it will only get more important.
A huge amount of electricity is wasted in non-cloud datacenter deployments which serve limited geographies (time zones). For example, the US stock market is open from 4 AM to 8 PM, leaving one third of the day with literally no trading activity. Do you think the servers involved in this market sleep? They mostly do not, and moreover they typically have all power management disabled, including dynamic underclocking (Intel SpeedStep). Figure 200-300W per server and maybe 100K machines (not including desktops, which also suffer from this but at least consume fewer watts). At 300W times 100,000 machines for the idle third of the day, that averages out to roughly 10 MW wasted, or about five wind turbines' worth, just for the US stock market.
Since OS X Mavericks, Apple's been cracking down on programs that wake up the CPU too much. Thus the infamous hall of shame in the battery status menu.