Rough idling (tedunangst.com)
174 points by ingve on Sept 21, 2015 | 50 comments



On Linux, the "powertop" utility will show you processes that are waking up regularly (such processes can cost you battery life on a laptop or phone by preventing the CPU from entering and staying in low power idle states).
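To make that concrete, here is a rough sketch (in C, with a hypothetical check_for_work standing in for the real work) of the kind of polling loop powertop flags as a frequent waker, next to an event-driven version that lets the CPU stay idle:

    #include <poll.h>
    #include <unistd.h>

    extern void check_for_work(int fd);   /* hypothetical work handler */

    /* Wakeup-heavy pattern: resumes 100 times a second whether or not there
       is work, so the CPU keeps getting pulled out of its low power states. */
    void busy_idle(int fd)
    {
        for (;;) {
            usleep(10000);           /* wake every 10 ms... */
            check_for_work(fd);      /* ...usually to find nothing to do */
        }
    }

    /* Event-driven pattern: block until the kernel has something for us, so
       powertop reports (nearly) zero wakeups per second for the process. */
    void quiet_idle(int fd)
    {
        struct pollfd p = { .fd = fd, .events = POLLIN };
        for (;;) {
            poll(&p, 1, -1);         /* sleep indefinitely until fd is readable */
            check_for_work(fd);
        }
    }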


Back around 2004, when I was using QNX for our DARPA Grand Challenge vehicle, I discovered that the machines on the vehicle were about 20-30% loaded when idle. Although they had no keyboards or displays, the embedded CPU boards we were using did have a minimal VGA controller. We were using the default QNX install, which includes the full desktop environment, and it was bringing up a screen saver. The minimal VGA controller ran the screen saver very slowly, because the screen saver scrolled by reading back from display memory, which is a very slow operation.

All this ran at low priority, so it didn't affect the real-time tasks at all; it wasn't actually hurting anything. The QNX people were quite embarrassed when we reported it, though.


I would speculate that even if hardware progress were to stop today, we would still see advancement in the speed of computer systems for quite some time. All just from optimizing what we are already doing, or (as others have pointed out) not doing unnecessary things.

There are so many things I'd like to fix if I had the time...


You're clearly an optimist. I find that the tendency is for software to get slower -- and generally worse -- over time.

Five years ago I was using Chrome and it was fast, lightweight, and stable. Now it's slow, uses 500 MB of RAM, and the renderer crashes on a regular basis.


A couple of things might influence this observation:

1) Your definition of what is fast changes. Downloading an MP3 in 5 minutes felt really fast in 1998, but feels really slow today.

2) What we expect Chrome to do keeps increasing. Since Chrome was so fast, people started creating webpages that had to do more (because they could!). It is a form of Induced Demand (https://en.wikipedia.org/wiki/Induced_demand).


#2 is the big one. If you could surf 2010's web with Chrome 45, I bet it would be a better time than 2010's web on the Chrome of the day.


I think there may be a counterintuitive situation where incremental improvement in hardware is worse than no improvement.

Consider how much extra performance game developers are able to wring out of a console after a few years' experience. If the hardware is absolutely stable, and there's no expectation that it can get faster, then people do make a big effort on software.


The biggest example of this is the demoscene, where some absolutely astounding things have been done with old hardware; e.g. the Commodore 64, a 1 MHz 8-bit CPU with 64 KB of RAM.


In relation to this, user interfaces seem to be designed to come in under some target response time. For games maybe it's 100 ms; for a game console's dashboard, maybe 500 ms; for an ATM, it's something like 4000 ms.

I suspect the target response time is what determines how slow things are, not the speed of the hardware. As people have pointed out, hardware has gotten faster and many user interfaces have gotten slower, so it seems more likely that either the target response times have been rising (people tolerate higher response times in exchange for more features, or whatever), or people with older and slower computers observe much higher response times than the original target (iPhone 3GS vs. iPhone 5 for a given app, for instance).

Stable hardware solves the second issue at least, but I'm not sure about the first. I think over time people will notice that everything else responds instantly to touch, and computers should too, so they will start rejecting apps that have a high target response time.

As an example of that, I chose my last monitor by looking at http://www.displaylag.com/ and picking the one with the lowest latency there. If other people start caring more about the latency of the things they use, hardware and software developers will prioritize it.


It used to be a browser just went to the destination and showed you what was there. This was fast. Now every time you visit a site, a giant war goes on behind the scenes between what the site wants to force you to see and what you actually want to see. This may have something to do with it.


If hardware speeds didn't change anymore, there would be more effort spent on speed. Right now it's too easy to add stuff and not feel much of an impact if you keep up with every hardware upgrade (ruining it for everybody else who doesn't). If people had to squeeze more performance out of their code, they would. (See the iterations of games on old platforms, where people did the craziest things over the years to get results that were thought impossible.)


Well, I have finally come to realize that this is indeed the reality.

"Innovations in hardware are to give programmers freedom to build shitty software and get away with it."


Well, yeah, it is certainly possible for software projects to go the other way. It depends on what the focus is.

If people knew that their only option for speed improvements was optimization (and to a certain extent, that is already true), then they might focus more on that instead of adding features.


I'm still hoping for the whole stack to flatten significantly, with the PL folks managing to allow multistage/multilayer JIT optimization over evaluation towers.



One of my favorite quotes (wish I knew where it originated): "I can't make it run any faster, but I can make it do less."


Brings to mind the optimisation of grep, covered previously on HN.




And here's some discussion on making OpenBSD's sleep more efficient (starting with a proposed patch from tedu): http://marc.info/?l=openbsd-tech&m=144280057027331&w=2


On very fast local systems and networks, 'rough idling' can go on for a very long time, until it comes time to scale.

One time I tracked down why a program that was run at regular intervals was taking a few minutes to complete. It turned out to be running approximately 42,000 SQL select calls every time it dropped into a new directory to parse some files, yet it still only took 10 minutes to process the entire directory structure. So the program was actually incredibly fast, considering all the work it was doing.

The catch? It only needed to run SQL select 42 times per directory; the other 41,958 calls were not necessary. As far as I know it was never fixed, because it only took 10 minutes to run.
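A hedged guess at the shape of that kind of bug, with hypothetical names (run_select, process_file, struct result) standing in for the real code: the per-directory lookups get repeated inside the per-file loop instead of being hoisted out of it.

    #define NUM_LOOKUPS 42

    struct result { int value; };                 /* stand-in for a query result */
    extern struct result run_select(const char *dir, int which);
    extern void process_file(const char *path, const struct result lookups[]);

    /* Redundant version: 42 identical SELECTs per file. With ~1000 files in a
       directory, that is ~42,000 queries where 42 would have done. */
    void process_dir_slow(const char *dir, const char **files, int nfiles)
    {
        for (int i = 0; i < nfiles; i++) {
            struct result lookups[NUM_LOOKUPS];
            for (int j = 0; j < NUM_LOOKUPS; j++)
                lookups[j] = run_select(dir, j);  /* same answers every iteration */
            process_file(files[i], lookups);
        }
    }

    /* Hoisted version: run the 42 directory-level SELECTs once and reuse them. */
    void process_dir_fast(const char *dir, const char **files, int nfiles)
    {
        struct result lookups[NUM_LOOKUPS];
        for (int j = 0; j < NUM_LOOKUPS; j++)
            lookups[j] = run_select(dir, j);
        for (int i = 0; i < nfiles; i++)
            process_file(files[i], lookups);
    }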



Reminds me of the best optimization advice I ever got, "If you want to go faster, do less."


No code is faster than no code.


Very nice self-referential quote. Who coined it?

ps: LMGTFM https://en.wikipedia.org/wiki/Kevlin_Henney


The first time I encountered a certain JS library's $.noop function, it bent my brain: "why would there be a function for doing nothing?"

Later, wisdom visited me and I realised the cost of a `typeof isThisAFunction` check on every call, versus the savings of letting an empty function call be optimised away in the first place.
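The same idea in C terms, as a rough sketch: install a no-op as the default callback so the hot path can call it unconditionally, instead of testing whether a handler exists on every invocation (which is what the per-call typeof check amounts to).

    typedef void (*callback_fn)(void *arg);

    static void noop(void *arg) { (void)arg; }    /* the $.noop equivalent */

    static callback_fn on_event = noop;           /* default: do nothing */

    void set_handler(callback_fn fn)
    {
        on_event = fn ? fn : noop;                /* check once, at registration */
    }

    void hot_path(void *arg)
    {
        /* No "is this a function?" test here; an empty call is cheap and
           easy for the compiler to optimise away. */
        on_event(arg);
    }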


It's more complicated these days, since you can do more overall, and in less time, if you do it on more cores.


Or if you manage to fit it all into cache.


One time a guy in Key West told me "If you want to get there faster, run."


Related, a famous example of how "doing nothing" can be a surprisingly difficult problem: https://en.wikipedia.org/wiki/IEFBR14


I like the story of how grep is fast because it tries to do as little work as possible.

https://lists.freebsd.org/pipermail/freebsd-current/2010-Aug...


Your example doesn't obviously support this, but it reminded me of Mark Twain's

"I didn't have time to write a short letter, so I wrote a long one instead."


Actually Blaise Pascal's, from Provincial Letters XVI.


Does anyone know under what circumstances changes to software can significantly impact energy consumption? I'm thinking of this from the perspective of the environment and of impacting climate change. How often is that considered, outside of mobile platforms? What are the largest opportunities, if there are any big ones?

For example, TCP is so widely used and touches so many machines with each transaction (i.e., the endpoints and everything in between) that I'd guess even a tiny efficiency improvement could have a large impact globally. Or there might be other software where there's an opportunity for a large improvement.


Well, there's Bitcoin. Its very existence encourages a lot of energy to be spent on a fairly small transaction volume.


There is a post on Random ASCII about programs raising the Windows(tm) timer frequency from 64/sec to 1000/sec for no good reason, wasting power and wrecking battery life.

I work on stuff that needs to run for years off of small batteries. One issue has been that, just like Windows(tm), RTOSes have traditionally depended on a heartbeat interrupt, which means the RTOS wakes up every 1-10 ms to check whether a timer has expired, see which task should be running, etc. The truth is, 99% of the time there is absolutely nothing to do. Or in my case, 99.9% of the time.

I've had to roll my own routines to keep track of how long until the next timer expiration and use that to set a hardware timer that wakes up the uP. And for timers that don't require low latency, I marshal the events and handle them all at once every so often.
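A minimal sketch of that tickless approach, assuming hypothetical platform hooks (hw_timer_set_oneshot, enter_deep_sleep) in place of the real ones:

    #include <stdint.h>

    #define MAX_TIMERS 16
    #define NO_TIMER   UINT32_MAX

    extern void hw_timer_set_oneshot(uint32_t ticks);  /* hypothetical: program one wakeup */
    extern void enter_deep_sleep(void);                /* hypothetical: sleep until interrupt */

    /* Absolute expiration times, in ticks; unused slots hold NO_TIMER. */
    static uint32_t timer_deadline[MAX_TIMERS];

    /* Earliest pending deadline, found on demand instead of polling every tick. */
    static uint32_t next_deadline(void)
    {
        uint32_t earliest = NO_TIMER;
        for (int i = 0; i < MAX_TIMERS; i++) {
            if (timer_deadline[i] < earliest)
                earliest = timer_deadline[i];
        }
        return earliest;
    }

    /* Instead of a 1-10 ms heartbeat, sleep until the next timer actually fires. */
    void idle(uint32_t now)
    {
        uint32_t deadline = next_deadline();
        if (deadline != NO_TIMER)
            hw_timer_set_oneshot(deadline > now ? deadline - now : 1);
        enter_deep_sleep();  /* woken by the one-shot timer or an external event */
    }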


I was immediately skeptical, but then I looked around and found some articles claiming that as of 2013 computing in general accounted for fully ten percent of global electricity usage. So maybe we should be putting some thought into this issue after all.


It should be pointed out that we do in many places. Mobile has a significant incentive to be efficient. Cloud has a significant incentive to be efficient. Mobile efficiency creeps into a lot of other home tech because that highly scaled mobile technology is now cheap and off-the-shelf. The worst is probably stuff headed for the home that is too big to use SOCs, like desktops and game consoles. But it's not as if the entire industry is oblivious to this issue... Performance-per-watt is an already-important metric, and it will only get more important.

https://en.wikipedia.org/wiki/Performance_per_watt


Consoles do use SoCs, mainly because they're cheaper than discrete processors. At least the current ones do. They're just not mobile SoCs.


How often is that considered, outside of mobile platforms?

Never.

Flash Player should be a crime against humanity based on its energy usage alone.


I frequently joke that Gentoo (really any source-based Linux distribution) promotes climate change.


What about source-based distributions makes them so power hungry?


Compiling is CPU-intensive, and therefore power-intensive. The joke being that Gentoo users spend all their time compiling packages.


A huge amount of electricity is wasted in non-cloud datacenter deployments that serve limited geographies (time zones). For example, the US stock market is open from 4 AM to 8 PM, leaving one third of the day with literally no trading activity. Do you think the servers involved in this market sleep? They mostly do not, and moreover they typically have all power management disabled, including dynamic underclocking (Intel SpeedStep). Figure 200-300 W per server and maybe 100K machines (not including desktops, which also suffer from most of this but at least consume fewer watts). I guess that's at least 10 MW wasted, or about five wind turbines' worth, just for the US stock market.


10 MW (running constantly) is closer to 20 wind turbines, given that the average capacity factor in NY is ~24%.

https://www.wind-watch.org/documents/u-s-wind-capacity-facto...

(The average wind turbine capacity factor in the Texas Panhandle is closer to 35% because it's windier there)
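Roughly, assuming ~2 MW nameplate per turbine (the figure the parent's "five turbines" for 10 MW implies):

    10 MW / (2 MW x 0.24) ≈ 20 turbines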


Be sure to check out the powertop tool; it can be very useful for measuring how often something is waking up the processor when it should be idle.


What's the language at the bottom of the post, the one that is able to allocate custom `structs` and has some OOP stuff in it?


That's Lua, with LuaJIT FFI and the ljsyscall library.

* http://luajit.org/extensions.html

* http://myriabit.com/ljsyscall/


Very neat. Thanks for the info!


Since OS X Mavericks, Apple's been cracking down on programs that wake up the CPU too much. Thus the infamous hall of shame in the battery status menu.



