3x faster linux boot with e4rat (sourceforge.net)
190 points by jewel on May 6, 2012 | 100 comments



It's an interesting project. I wish we had it 5+ years ago when it mattered.

The only place I'd use a spinning disk as my primary drive is on a server and that's something you rarely boot. If you're not using an SSD skip a month of Starbucks and buy one. Disk IO is far and away the biggest bottleneck for most users. An SSD will increase your productivity profoundly.


> I wish we had it 5+ years ago when it mattered.

You know... Some people still use spinning metal. Also, my notebook is my main work machine and it only fits one 2.5" device inside (though I'm tempted to experiment with running the OS off an SD card, if not for speed, which is unlikely, then for battery life). I don't care if Emacs comes up in 2 seconds instead of four.

Even with SSDs, keeping files in contiguous blocks may get you some extra mileage out of your storage (though I would assume this program may purposely not do that, since it would make sense to interleave blocks of files that are opened simultaneously during boot in the order the blocks, not the files, are called). You'd also get a simpler structure for each file: instead of a list of blocks (or extents) you'd have the whole thing in a single extent.
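If you're curious how fragmented a given file is, filefrag from e2fsprogs reports how many extents back it; a quick check (the path is just an example):

  # one extent reported means the file is fully contiguous on disk
  sudo filefrag /usr/lib/libgtk-3.so.0
  # -v additionally prints the physical block range of each extent
  sudo filefrag -v /usr/lib/libgtk-3.so.0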


Switching my laptop disk to an SSD was probably the best investment I made (laptop-related, anyway). 64GB or even 96GB SSDs are quite cheap. You can buy an external, USB-powered casing for the (presumably) big platter disk and use it like that.


Many recent laptops have mini-PCIe slots. You can put a credit card sized SSD in that slot as your boot disk, and then use the 2.5" drive bay for spinning media with lots of space. Here is an example one from Intel with pictures: http://www.newegg.com/Product/Product.aspx?Item=N82E16820167...


You cannot put an mSATA drive in a PCIe slot without a PCIe-to-SATA controller. You can get these, e.g. http://www.sunmantechnology.com/system-ip/usb-i2c-fml.html but the fact that they share the same slot does not mean they are compatible.


There are indeed important details. This link shows the compatibility and potential issues if you use a Lenovo laptop: http://forum.notebookreview.com/lenovo-ibm/574993-msata-faq-...


If your laptop has an optical drive which you don't use, you may be able to get the best of both worlds.

Put your HDD in the optical drive bay, and the SSD in the main bay. Adapter frames exist, at least for Apple laptops.


Good idea, though general consensus seems to be to put the SSD in the optical bay, and keep the HDD in the main bay, which (at least in Apple laptops) is the only one that supports the sudden-acceleration drop sensor.


My old MBP had a PATA optical drive bay connector, so I didn't have a choice if I wanted the best performance.


Nope. No optical drive in there. It's a Vostro v131.

It would be lovely to fit both an SSD and an extra battery in all that space.


I bought a 64 GB Crucial M4 SSD for a boot drive for my Linux desktop about a year ago. About a month ago, it started giving read/write errors eventually leading to kernel panics. (Didn't lose any data.) I switched back to the hard drive.

I don't see a huge difference in productivity for normal desktop use. An SSD is definitely quieter, and somewhat faster, but caching means most of us are already not hitting the disk that hard. (Obviously if you're doing intensive work with big data that doesn't cache well, that's a different story.)


For most people, I would say not to get a flash drive until the memory has been maxed out. At that point, a flash drive will probably help more than the price difference on a faster CPU.


That's excellent advice. It's pointless to spend money on an expensive disk before you max out your RAM. Unless the OS is really brain-dead, it will cache reads fairly well, and that is still much faster than any SSD can be.

Most of the time, that is. There are situations where you'll have write-heavy workloads or a dataset larger than your RAM. But then I'd also assume you are out of the personal computer league anyway.
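You can see how much of your RAM is already working as a disk cache with free; a rough check (column names vary a little between versions):

  free -m
  # the "cached" column is file data the kernel serves straight from RAM;
  # it is dropped automatically whenever applications need the memory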


I would disagree that most people won't benefit from an SSD. If I were building a computer, I think I might set aside money for one before even considering the other components. Thankfully, SSDs are now relatively cheap. It would cost me $200 to get 32GB of RAM. A good 60GB SSD can be had for around $75.

Fast writes do make a difference for normal desktop workloads, and fast reads are noticeable despite OS caching. If nothing else, you always have to read something from disk at least once, and you always have to write your dirty pages to disk eventually. The difference becomes more noticeable as contention for disk access increases. No matter what I'm doing on my computer with an SSD, I never feel like I have to wait. On my other computers, even browsing the web while performing an `apt-get upgrade` can feel unbearably slow.


60 GB is not that much storage space if you consider the amount of music and photos an average user can generate. My photos alone are over 20 GB right now.

Then there is the frequency with which you reboot your computer. It's true every item must be read at least once, but, with enough RAM, it's read only once per boot - if you keep your machine on for a month at a time, reads to /bin will hit the disk only once every month or so. Disk writes can be trickier, since waiting for the physical write may halt the thread for a while, but, unless you specify writes to be synchronous, there is no reason not to trust the OS with the data and let it flush the cache when it's more convenient. And subsequent reads of the data you wrote won't hit the disk again until the memory is needed for something else. Reads from RAM are still orders of magnitude faster than reads from disk.

I agree that once you have more RAM than your usual workload plus the size of your most frequently used files, adding more memory will have little effect; when you get there (say 8 GB or so) you are better off spending your money on a good SSD. Given the failure modes I keep reading about, I suggest getting a smaller SSD, large enough to fit your software, and a larger hard disk you rsync it to from time to time.


I did buy one, then it broke.

Once you factor in the re-installs the latency of spinning rust looks a bit better.


All drives can fail. Not including servers I've managed I estimate I've had more than 10 hard drives fail, all of which were well below the manufacturer's stated MTBF.

My first SSD (low to mid range) failed within the first 3-6 months but the replacement has been going for almost 2 years now.


Sure, but there's no doubt that SSDs on average wear out faster, which makes them that much more expensive a choice.

It may be worth it in the long run, but you have to keep that in mind when making the choice.


The replacement is always better... somehow.


Is this similar to how lost things are usually in the last place you look for them? It does make sense that people would continue replacing something until it worked!


Which SSD did you buy?

I'd only trust an Intel at this point, though even they're not much more reliable than a traditional hard drive on average (now that they've shipped a few buggy firmware revisions).


My SSD makes for a nice drink coaster.


Don't buy garbage SSDs, and make sure your system is utilizing them correctly. Putting swap on an SSD = horrible idea.


Actually, putting swap on an SSD == great idea. If you need swap, there aren't any better places to put it. I quote from [0] (for Windows), because it is a fairly large wall of text and the relevant point might be missed:

_Should the pagefile be placed on SSDs?_

Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well. In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that

> Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1,

> Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB, and 88% less than 16 KB.

> Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size.

In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD.

On the other hand, attempting to defragment an SSD is truly a bad idea, and pointless at that. Unless you are defragmenting free space, which does get important as the SSD fills with data.

[0] http://blogs.msdn.com/b/e7/archive/2009/05/05/support-and-q-...
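If you want to check whether your own workload actually touches swap much before deciding where to put it, vmstat is enough; a rough check:

  vmstat 1
  # watch the si/so columns (pages swapped in/out per second);
  # if they sit at 0, swap placement barely matters for your workload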


This deals with performance, but not durability. The system would pump data into swap at a fast rate during a thrashing situation, wearing out the SSD.


It's not a good idea for an MLC drive, but a decent SSD should fail into read only mode when enough sectors are lost.


It wasn't even the flash that failed; the controller stopped my BIOS from POSTing until the SATA polling timed out.

Yes, it was a cheap SSD, but the difference between the high end and the low end should be performance, not "breaks vs !break". When that is true I'll buy another SSD.

That is not to say that SSDs are "bad", they are just not for me, yet. I like my computers to be low maintenance. I stopped fiddling with hardware for performance a few years ago.


Almost all of my spinning disks in the last 20+ years have failed. Some right away, some in a few months, some in a few years, but they all have failed.


I wish [I] had it 5+ years ago when it mattered [to me].

Most people don't drop 75+ dollars on Starbucks a month, and rent, food and healthcare expenses are a tad more important than buying faster, more expensive hard drives. Free software optimizations would actually be very much appreciated by a great deal of people with scarce disposable income.

This reminds me of that article in a New York publication a while back written by a woman marveling at how much money she saved by not eating out every meal. Like the advice given wasn't an obvious necessity already for 95+% of people.


SSDs are still expensive for the masses, so fast boot on Linux is still very much relevant outside the few who can afford one. It's also great for embedded devices running Linux.


There actually were projects that did this years ago. One of them was called fcache (by Jens Axboe), which had the same goal but achieved it in a filesystem-independent fashion by using an extra partition and placing all data read during boot linearly in that partition.


Well, I only boot my home/work computer once a day. I only do hibernate/dehibernate during the day, which is quite fast.


Ten years ago, BeOS booted in ten seconds without any special tricks. That was on late 90's and early 00's hardware. Certainly it didn't access hundreds of megabytes of libraries and files in order to boot as a modern Linux does, but it was also specifically designed that way even by the contemporary standards.

And don't get me started about the 16-bit era... Machines were slower then but using them was generally faster.


30 years ago, my Apple II booted into a REPL in less than one second. 3 if you needed disk support.

And that was on a 1 MHz 8-bit processor.

There were people who could toggle a bootstrap loader into an Altair in less time than a modern x86 PC takes to boot.


That's why people love tablets and smartphones. You're much faster using them to check your mails or weather, etc than using a topnotch PC. [Now of course that doesn't include booting but when did you reboot your tablet or smartphone the last time?]


Laptop: open lid and resume. Server: RAID startup dwarfs OS startup.

Does super fast booting matter much these days? It's just about ceased to matter to me completely.


I try to avoid using sleep/resume when I'm away from home, because it partially defeats the purpose of having full-disk encryption on my laptop. A thief who steals it when it's powered off has no access to my files. On the other hand, a thief who steals it while it's asleep might be able to get around the login once it wakes up.

So yes, it sucks to wait 30-40 seconds for a reboot.


Wouldn't the ideal solution then be to modify the OS to purge the disk encryption keys from memory on sleep? If you're concerned about unencrypted file contents in memory, purge the page/buffer cache while you're at it.

Then ask the user to re-enter the key on resume and get back to business...am I missing some obvious problem here?

I guess depending on one's level of paranoia, there might be sensitive non-file data sitting in memory...you could then quit the applications you're concerned about, and have the kernel wipe any unallocated memory before sleeping (I think by default it doesn't wipe pages until they're reallocated to something else, on Linux at least).

Obviously with flushing caches and quitting applications and so forth you're trading off some of the benefit of keeping the system alive, but presumably it still beats a cold boot every time you come back to your laptop.
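On Linux with LUKS/dm-crypt, most of this already exists: cryptsetup can drop the volume key from kernel memory and freeze I/O on the mapping until the passphrase is re-entered. A rough sketch of what a pre-sleep hook could call (the mapping name is just an example, and wiring it into your distro's suspend scripts is left out):

  # before sleeping: freeze the encrypted volume and wipe its key from RAM
  cryptsetup luksSuspend sda2_crypt

  # after waking: prompt for the passphrase and restore the mapping
  cryptsetup luksResume sda2_crypt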


I've been reading guides on getting Lion to do just that; Snow Leopard supposedly supported it with FileVault.

Unfortunately, Lion/FileVault 2 no longer supports it, and if you try to force the options, the computer simply crashes on resume.


FileVault 2 does support purging keys on sleep:

  sudo pmset -a destroyfvkeyonstandby 1 hibernatemode 25
From the pmset man page:

  destroyfvkeyonstandby - Destroy File Vault Key when going to
  standby mode. By default File vault keys are retained even when
  system goes to standby. If the keys are destroyed, user will be
  prompted to enter the password while coming out of standby
  mode.(value: 1 - Destroy, 0 - Retain)
and

  hibernatemode = 25 (binary 0001 1001) is only settable via pmset. The
  system will store a copy of memory to persistent storage (the disk), and
  will remove power to memory. The system will restore from disk image. If
  you want "hibernation" - slower sleeps, slower wakes, and better battery
  life, you should use this setting.
So, under Lion, turn on FileVault, run that command and always sleep your Mac (close the clamshell, Apple Menu > Sleep, or Option-Command-Eject) when you want to be secure.

If your computer crashes under resume after having done so, something's amiss. Remember that you'll need to auth twice on wake-from-sleep if you are logged in – once to unlock the volume, and again to unlock your user's session.


Which operating system? On OS X Lion, you can make the system hibernate when the lid is closed, writing encrypted memory to disk. Slower wakeup times than suspend, but quicker than a full startup.

https://news.ycombinator.com/item?id=3785762


If hibernate is quicker than a clean boot on OS X 10.7, you either A) need an SSD, or B) need more RAM.

Hibernate takes a full 48 seconds on my laptop. A clean boot takes 8-9 seconds.


How long does re-opening all of your apps and files take? People who dislike rebooting usually tend to have plenty of things open.


I wish my Ubuntu desktop booted in 40 seconds. Thanks to btrfs, a reboot is a 30-minute affair.


Is it doing a full fsck every time? Or is this just... the beta tax?


Since my SSD-equipped, btrfs using laptop (Thinkpad X60s running Debian testing, kernel 3.2.15) boots from power on to graphical login in 27 seconds (13.5s of which is the time taken to get through the BIOS boot sequence) I suspect it's something specific to the parent poster's system.


I have btrfs (with lzo compression) on a rotating disk, and the boot feels a little slower (one or two minutes total?) for reasons I haven't really examined. I'll have to check if something messed with ureadahead.


It's a 2TB drive doing a full fsck each time, though I don't know why.


While I like a fast boot, you've got to get these things a little into perspective; a minute to the desktop isn't that bad.


But 20 seconds is even easier. When I switched to an SSD in my laptop, I saw boot times drop from almost one minute to about 15 seconds. I no longer dread having to reboot after system updates.


A minute to the desktop is exactly why the tablets are so useful.

If I'm sitting on the couch the instant on nature of my tablet is the main reason I'll reach for that to look at something rather than my laptop.


It's not bad, but spread over X million workers on desktops 5 days a week means a lot of wasted time, and maybe energy.


If 5 million people save 1 minute every day for 200 days a year, then each person has saved 1 minute a day, not 5,000,000 * 1 * 200 minutes in total per year.

That kind of math just doesn't work.


Yeah but you don't have to just sit there and twiddle your thumbs while your machine boots. You can just do something else. I'm sure we waste far more time during the day doing other things. We don't necessarily obsess over those types of time inefficiencies. If you wanted to save time you could brush your teeth in the shower etc.


Linux still has lots of problems with suspend/hibernate/resume on laptops (Oneiric broke hibernate on my laptop, for example), so booting is still kind of important for that reason.

Resume from hibernate is another area where I'd like to see improvements. I hibernate instead of suspend because I never know if I'll make it back to an outlet in time.


Sounds like a problem with Ubuntu, really, not Linux.


It's more of a problem with hardware more interested in coding to "works on Windows" than "complies with spec."

And honestly, startup is as fast as resume these days - the problem is applications that don't remember their last known state and window managers that won't remember the rest. Why don't we fix that instead of chasing down all the suspend bugs?


Suspend-to-RAM, as opposed to suspend-to-disk, is way faster than booting.

Why chase down suspend bugs? Because they are bugs.

I'm with you on fixing window managers, generally speaking.


Restoring app last state? Like terminal emulator recreating tmux session wrapping screen session with package manager working under sudo? SSH unHUP-ing processes on remote machine? Python VM resuming all scripts in a precise place and state they were stopped?

The problem is that there is no such thing as isolated "application" on Linux or any other real OS.


Restoring is hard. Let's go shopping!


Well suspend/hibernate/resume are kernel functions so it is a Linux problem.


I'm aware that they're kernel functions, but it's not necessarily that simple.


I've never had it work reliably with Fedora. Reliable sleep is about the only reason I use Windows now.


I've had some problems with Intel graphics on my Dell in Fedora 14. Sometimes video, sometimes full-screen Flash, and sometimes sleep caused it to crash. On my older IBM with ATI, everything worked just fine. Anyway, in Fedora 16 the graphics and sleep work just fine for me.


I definitely could be wrong in my conjecture. I've used sleep reliably under Arch Linux in the past. (Not using it now because my laptop doesn't work anymore without being plugged in.)


On the same hardware that Ubuntu (and friends) didn't work on? I've been having problems with resuming, and wondered if going to Arch would help, but couldn't think of a real reason why it would.


Newer drivers, for one. Also, it's a lot easier to mess with Arch's internals than Ubuntu's.


It's not a fair test; haven't tried a recent Ubuntu, but: suspend to ram and suspend to disk were broken on Ubuntu 10.04 and 10.10 with my Eeepc 900. Arch + TuxOnIce has worked great for suspend to disk for about a year now.


Note that Ubuntu disables hibernate by default in the current release 12.04 ("Precise Pangolin").

https://wiki.ubuntu.com/PrecisePangolin/ReleaseNotes/UbuntuD...


As has been mentioned full disk encryption loses a lot of its efficacy if you just put your laptop in standby all the time. I'd add that as long as a faster boot doesn't compromise your system in other ways, why wouldn't you want it?


It doesn't - if done right.

When hibernating, my laptop writes the memory contents into swap, which is also encrypted. Yes, de-hibernating is slower if contents need to be read from disk; OTOH it's still faster than booting from disk.
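For reference, the usual way to get this on Linux is a LUKS-encrypted swap partition unlocked with a passphrase (not a random per-boot key, or resume has nothing to restore from) plus a resume= kernel parameter. A rough sketch; the device and mapping names are just examples and the exact initramfs wiring varies by distro:

  # /etc/crypttab - unlock the swap partition at boot
  cryptswap  /dev/sda3  none  luks

  # /etc/fstab - use the mapped device as swap
  /dev/mapper/cryptswap  none  swap  sw  0  0

  # kernel command line - where the hibernation image lives
  resume=/dev/mapper/cryptswap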


I do believe standby and hibernate are different: standby keeps memory contents hot, whilst hibernate actually shuts the computer completely down. I was talking about the former in my post above.


I have this bizarre problem where whenever my fridge turns on, it wakes my laptop from sleep, even if the laptop's not plugged in.

The joys of student life!


Just did this on an old Inspiron 1545 with Ubuntu 12.04 on it and I'm already noticing a major improvement in both boot time and the apps I open frequently (also with the help of preload and zram).

I followed this guide: http://www.howtogeek.com/69753/how-to-cut-your-linux-pcs-boo...

Although note, at the part where it says push CTRL+ALT+F1 to get to a new terminal login, that didn't work for me. I had to go to the default one (CTRL+ALT+F7) and type "logout" and then go to CTRL+ALT+F6.

I have a web page I keep track of common things I do to my Ubuntu installs (shameless plug: http://ubuntu.mindseeder.com) and I'm definitely going to add this so I don't forget!
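For anyone who doesn't want to click through, the core of that guide is roughly the following (binary names are from the e4rat package; exact paths and the log location may differ on your install):

  # 1. boot once with e4rat in collection mode (append to the kernel line in GRUB)
  init=/sbin/e4rat-collect

  # 2. log in, start your usual apps, and once collection finishes run the
  #    reallocation from a text console so the recorded files become contiguous
  sudo e4rat-realloc /var/lib/e4rat/startup.log

  # 3. make the preloader permanent on the kernel line
  init=/sbin/e4rat-preload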


I wonder how much improvement that brings over standard ureadahead (see yason in this thread for how they differ). For zram, no need for a ppa, a zram-config package has been added in precise.


The easiest way I've found to increase boot speed and application load times even under heavy system load is to use the pf kernel patchset. With a properly configured kernel using BFS, BFQ, and LZMA compression, my system is amazingly fast. Even when compiling and both cores of my laptop are at 100%, my system is fully usable. If you have a MacBook Pro, my kernel configs are here: https://github.com/meinhimmel/kernel-configs


Note that if you're using Ubuntu you're already using ureadahead, which does something similar to minimize seeks during boot.

https://wiki.archlinux.org/index.php/Ureadahead


Then why are there deb files for Ubuntu? I don't think they're doing exactly the same thing, since e4rat is using specific features of the Ext4 filesystem.

Edit: Actually, the directions say to remove Ureadahead when installing e4rat, so maybe they are quite similar.


AFAIK, ureadahead just keeps track of which files/blocks are touched during bootup/startx and pre-reads them into a cache at the very beginning of the boot sequence so you will effectively boot from cache.

This still requires seeking to several semi-random areas of the disk while prereading, which e4rat fixes by physically moving the needed blocks adjacent to each other.
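You can check whether ureadahead has actually profiled your boot; as far as I remember, on Ubuntu the recorded block lists live under /var/lib/ureadahead (treat the path as an assumption):

  ls -lh /var/lib/ureadahead/
  # a "pack" file here means boot I/O was recorded and is being prefetched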


Since ureadahead looks at blocks and not files, it has a potential speed advantage if the boot process is opening some large files but not reading them in full.

Edit: except the ureadahead packfile only points to the blocks and files, it does not provide a way to inline them. So e4rat is almost certainly faster. It's a shame it is ext4 specific.

Apparently Scott James Remnant, the ureadahead developer, considered feeding the collected info to a defragmenter: http://ubuntuforums.org/showthread.php?t=1434502 ; this would be nice, as it means a single package is responsible for the feature, and filesystems perform to the best of their ability whether or not they have ext4-like fine-grained control of defragmentation.


So it sounds like the filesystem-reorganizing features of e4rat (but not the profiling features) would be complementary to ureadahead.


For some reason, I had the idea that ureadahead's pack files actually contained the contents of the blocks that needed to be read during boot, turning readahead into a sequential operation. After reading the manpage today, I see that I was mistaken.


It would be nice if someone installed e4rat on Ubuntu (with ureadahead) and reported the performance.



So when will we see this in our smartphones? I don't understand and, frankly, find it ridiculous that I have to wait about a minute or up to 90 seconds for my smartphone to boot, a device that is completely flash-memory based and that has no variability in hardware. I still remember the article about that industrial Linux PC that booted in less than one second, including initializing video4linux and two cameras etc., simply by optimizing kernel parameters, boot order, and driver timeouts. That must have been 2005 or so. Now it's 2012 and my phone takes longer to boot than my laptop (late 2010 MBP w/ SSD).


Would it make a difference with SSD drives?


Sequential reads are faster than random reads even on SSDs, so it could help, but I'm not sure how much.


Oh? Why is that? SSDs still need to address every memory block, don't they? Or do they have a faster DMA-like mode where you can send the instruction "give me the next 96KB starting at address X"?


SSDs exhibit the same sort of cache locality RAM does. When a region is accessed, a larger chunk is fetched and cached just in case it's needed later.


Yes, and the OS readahead also helps.

You can see the random vs sequential difference in any ssd review, eg http://thessdreview.com/our-reviews/ocz-vertex-3-240gb-max-i... shows 18 MB/s in random reads vs 500 MB/s sequential reads.
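If you want to measure this on your own drive rather than trust a review, fio can run both patterns against the same device; a rough, read-only sketch (the device name is just an example, so point it at the right disk):

  # sequential reads, 1 MB blocks
  sudo fio --name=seq --filename=/dev/sda --readonly --direct=1 \
           --rw=read --bs=1M --runtime=30 --time_based
  # random reads, 4 KB blocks - this is where spinning disks fall off a cliff
  sudo fio --name=rand --filename=/dev/sda --readonly --direct=1 \
           --rw=randread --bs=4k --runtime=30 --time_based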


An SSD doesn't have any moving parts, so I don't think it will benefit from e4rat.


No. It moves the files used during startup to nearby sectors, which won't speed up your boot time if you're using an SSD.


By placing the files together in flash memory you may get some extra performance, but it depends on how your SSD is internally organized - you may see some locality effects if the drive prefetches more than the block you asked for into a cache faster than the flash memory.


With an SSD, the bottleneck for booting is usually hardware detection and initialization, not reading data off the disk. My system takes about 4-5 seconds from the time the bootloader hands off control to the kernel to the time the kernel starts executing the initrd, and another 6-7 seconds to mount the SSD and hard drive, establish the network connection, start system services, and present a login prompt (though starting X and changing resolutions takes another second or two on top of that). I probably can't make that more than 20% faster without getting a faster DHCP server or tweaking various delays and timeouts that exist for good reasons.
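If you want to see where your own boot time goes, the kernel's printk timestamps are a cheap way to check (assuming your kernel is built with timestamps enabled, as most distro kernels are):

  dmesg | less
  # the leading [seconds.microseconds] column is time since the kernel started;
  # large jumps between adjacent lines show where initialization stalls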


I saw a demo of this at Intel Labs on a Windows machine about 15 years ago and it was very impressive. I don't think they optimized startup, but application launch was incredibly fast with their disk layout optimization.

Isn't this essentially what Diskeeper does on Windows?


I don't know about Diskeeper, but I believe it's part of the "Lenovo Enhanced Experience" on ThinkPads. Amazingly, my factory install with crapware booted from the BIOS to the Windows desktop in just over 7 seconds. I haven't been able to get my clean install below 10 seconds, I think because I'm missing the filesystem tweaking.


Is this similar to OSX's Hot File Adaptive Clustering, then?


The OSX feature apparently notices files which are slowly appended to (downloads) and defragments them. It does not reallocate a set of files (such as the ones touched at boot) to be adjacent on disk, which is what e4rat does.


Does the debian package work in Ubuntu?


Yes, you might need to remove ureadahead though.



