6 months later, the Intel SSDs are still massively better (anandtech.com)
136 points by blasdel on March 19, 2009 | 56 comments



Wow. AnandTech has always been great, but this was especially well-written and informative. If you think 31 pages is too long, well, it's worth it. The article gives really good technical explanations of how SSDs work and how they compare to disk drives, recounts an interesting back-and-forth with OCZ (a California-based SSD manufacturer), and gives you the info you need to make an SSD purchasing decision. It's a shining example of tech journalism at its best (which we rarely see).


Agreed. I'm getting ready to buy some SSDs for my production servers, and after reading this I know exactly what I want to buy, why, and what to avoid. This was a great article.

I wish I'd read this before buying the SSD for my workstation, an impulse buy while at Fry's with my boss. I've largely avoided the write stuttering that plagues these low-end MLC SSDs by keeping all my data on my old HDD and using the SSD only for the OS and programs, so writes to the SSD are fairly rare; it still kicks the shit out of the old HDD.

For a database server, however, buying a non-Intel MLC SSD would have been a huge mistake, one that I'm now well informed enough to avoid.


There was a blog post recently about the performance improvements from using the very same SSD for GemStone/S: http://gemstonesoup.wordpress.com/2009/02/28/approaching-the....


Yeah, that's the post that made me get the SSD to begin with.


Yeah, but I didn't see him mention the ioDrive at all. Or did I miss it in the haystack?


The ioDrive is targeted at people who already have a million+ dollars invested in their database servers.

Nobody's going to use it in their workstation, much less one running Windows, and there's no way to stuff it in a laptop.


We're seriously considering putting them in developer desktops, but we're waiting to test the ioXtreme (a slightly downmarket product from the same company). Two grand to dramatically improve developer productivity isn't that big a deal (the other top contender is one or two X25-Es, so the upcharge is at or under $2K).


The ioDrive isn't that expensive, though; you don't need a million dollars. Their 80 GB model is around $2,500, so anyone building one of those "ultra-fast" gaming rigs probably wouldn't mind dropping that to speed up what is usually the slowest part of a computer.


But what is the performance delta between a $2,400 ioDrive and a $400 X25-M for an "ultra-fast" gaming rig? I actually own an ioDrive (not for personal use) and it's hard to find a real workload that can saturate it. I think some people here are on the wrong end of Amdahl's Law.


How big is the build directory? You can get an ANS-9010 and 8 GB of DDR RAM to put in it for something like a third of that price. Plus, it will never slow down on you.


You also have to consider the tools used to build and test, and any DLLs involved. Theoretically, something used frequently should stay in memory, but I have no idea how well that works in practice during a large build.


The ANS-9010 is a RAM drive that presents DDR RAM as a SATA drive. You can configure it with up to 64 GB, but at that point the price is prohibitive. You can buy its little brother, the ANS-9010B, which has only six memory slots and tops out at 24 GB.

Many shops will be able to fit the most frequently used 20% of their toolset onto a small drive like this. If that 20% accounts for 80% of their development wait time, it's still a big win.


It will be in my next workstation too. Computers are cheap; even the ioDrive is cheap compared to what fast components used to cost.


While I was cooking dinner after posting this, I came up with an awesome solution to OCZ's dilemma. They're driven to boost streaming large-block bandwidth so that they have big numbers to advertise, but they're doing that at the expense of crippling small random writes. So they should do this:

  * Offer two different lines with the same hardware but differing firmwares optimized for each.
  * Sell the IOPS-targeted one as 'Enterprise Grade' for 50% more.
  * Release regular firmware updates for both, and ensure that only a trivial (but *WARRANTY VOIDING*)
    modification is needed to flash your consumer drive with the 'enterprise' firmware.
  * Bask in the glow of heavily-dugg tutorials about how to 'mod' your consumer drive for low latencies.
  * Have vendors purchase your cheaper-than-Intel 'Enterprise' drive for CYA reasons.


That's an acceptable "solution" from OCZ's point of view, but it's a much worse solution from my POV.

To get around the simplistic MB/s marketing, that leaves me either paying 50% more or voiding my warranty.

I'd rather they fought simplistic marketing with honesty than with profiteering.


All they need to do is not enforce voiding the warranty, or make it possible to undetectably undo the change.


The main point of the 'warranty voiding' would be to get a ton of hype from "hack your SSD" blogspam, which would instantly distinguish them in a crowded market.

Being able to extract more money from PHBs is just gravy.


Watch out. My 80 GB Intel X25-M just crapped out on me. All data gone. I lost about a week's worth of data (I now back up more often).

However, I bought a 160 GB Intel X25-M in its place. Going back to a spinning disk was unacceptable (almost 10x slower). The drive f*$&ing rocks, even if it is unreliable (hopefully that was a fluke).


That's worrying. I thought one of the advantages of SSDs was that they're supposed to fail gradually...


Thanks for the data point. In theory, Intel SSDs should fail about as often as a motherboard or a stick of RAM, excluding the graceful failure mode you're talking about.

Graceful failure is when there are no spare sectors left for wear levelling or bad-block remapping. When a drive fails this way, you should still be able to read your data off it.

EDIT: I've just received an anecdotal comment from a friend who works at Intel that high temperatures rapidly degrade MLC flash life. I can't find any data to back this up.


Thanks. I didn't go through all of it, but I really enjoyed the backstory on OCZ. It's great that they would fix their firmware and work with this guy to build a better product. It's nice to see business done that way outside of small local shops.


Massively better? No.

If you create an imaginary baseline SSD with the X25-M's 80 GB and the Vertex's 9836 PCMarks, and then compare both drives against that baseline, you get:

Intel: 20% faster

OCZ: 50% larger

Both cost $350. Given that the slow one is faster than the fastest desktop HD but the big one is only 1/3 the size of a standard laptop HD, I think I prefer the extra space.


Ignore the 'PCMark' bullshit benchmarks; the point of this article is random write latency, which is ridiculously awful on a lot of common SSDs, several orders of magnitude worse than common spinning rust!

The only non-Intel drive that's not awful at random writes is the OCZ Vertex, and it's a few times better than an HD. The X25s are still an order of magnitude quicker than the Vertex once the drives have seen some use.


On random writes the Vertex is 48% faster than the VelociRaptor, a 10k rpm hard drive. Right now I have a 5.4k rpm hard drive in my laptop. That kind of performance would make me ecstatically happy.

Even if the X25 were overall 100% faster than the Vertex (and according to Anand it isn't), I would still trade that for 120 GB rather than 80 GB.

This is Anand's conclusion:

"with the Vertex I do believe we have a true value alternative to the X25-M. The Intel drive is still the best, but it comes at a high cost. The Vertex can give you a similar experience, definitely one superior to even the fastest hard drives, but at a lower price."

Similar experience. I can't find the part about massively better.


The X25 comes in a 160 GB variety too, but I understand the trade-off at comparable prices. With a technology that changes this quickly, I set myself a couple of requirements before I'd buy an SSD:

- Performance had to be equivalent to or better than a WD VelociRaptor across the board (the OCZ Vertex achieves that with firmware updates; the Intel is slightly slower on max sequential writes but makes up for it in every other benchmark).

- It had to be at least 120 GB for my personal laptop.

- It had to be around $300. I assume any SSD purchase made now will be replaced in 1-2 years, so buying anything more than necessary is hard to justify.


OT, sort of, but thanks for linking to the printer-friendly version.


AnandTech doesn't even have prev/next links; you have to use the <select> every time!


Quote "SSDs make Vista usable."

I guess the best thing to do right now is to have your OS (or multiple OS's) in an 32GB SSD for instance, and a big HDD as second drive for multimedia and general storage. Right?

Edit: Like other people say, thanks for linking to print version :-)


I have an NEC with a solid-state drive. It was dying under Vista - it couldn't even load an image file in less than 10 seconds. Think about it: a minuscule 40 KB JPEG would take 10 seconds to load.

I switched to Ubuntu and most of my problems went away.


A tiny JPEG takes 10 seconds to load? That's weird. Does Vista have issues with your SSD's drivers or something?

Nice to hear that it worked as it should with Ubuntu.


That's exactly what I'm doing: a 64 GB SSD for the OS and programs, and a 750 GB SATA HDD for data, temp files, and cache. It works great. I love the speed of launching programs from the SSD: no delay, click an icon and bam, the app is launched.


There's a very important general lesson here: optimise for real-world usage patterns.


Thanks for the article.

Just as a completely useless data point: I suffered two different disk crashes in rapid succession a couple of months back, and out of frustration got one of the Transcend SSDs that were cheap and available at the time (an ATA model for an old PowerBook G4).

It works great, I love it, and I no longer have to fear data loss when stupid software drives me to banging my fists on the desk.


Something I don't understand:

Keep some free space on the drive. When you need to write, write to the pre-erased free space. Then merge in the live pages from a different block, and erase that block.

Wouldn't this ensure that you always write to a pre-erased area, and avoid the slowdown completely?


Yes, that's how better FTLs work, but the details are difficult. You have to have a logical-to-physical mapping table and keep it consistent.
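
To make that concrete, here's a toy sketch of a page-mapped FTL in Python. Everything here (the names, block size, and dict-based mapping) is illustrative, not any real controller's design; a real FTL also has to persist the mapping table across power loss and trigger GC before it runs out of pre-erased blocks.

  PAGES_PER_BLOCK = 128

  class ToyFTL:
      # Toy page-mapped FTL: writes always land in a pre-erased block;
      # the old copy of a page simply becomes stale garbage.
      def __init__(self, num_blocks):
          self.l2p = {}                               # logical page -> (block, page)
          self.free_blocks = list(range(num_blocks))  # pre-erased blocks
          self.open_block = self.free_blocks.pop()
          self.next_page = 0

      def write(self, logical_page):
          # Assumes a spare pre-erased block is always available.
          if self.next_page == PAGES_PER_BLOCK:
              self.open_block = self.free_blocks.pop()
              self.next_page = 0
          self.l2p[logical_page] = (self.open_block, self.next_page)
          self.next_page += 1

      def gc_block(self, block):
          # The "merge" step: rewrite the still-live pages of `block`
          # elsewhere, after which the whole block can be erased and
          # reused. Assumes `block` is not the currently open block.
          live = [lp for lp, (b, _) in self.l2p.items() if b == block]
          for lp in live:
              self.write(lp)
          self.free_blocks.append(block)              # erased, ready for reuse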


I understand the problem described in the article, but the solution doesn't necessarily seem to be better hardware (though that would help).

From what's described, it sounds like the problem is the poor block-provisioning algorithms the drive controller uses. Where is the operating system in this? Why can't frequently updated files be put on blocks of their own? Why can't block erases happen in the background? Why must modifying a file rewrite the block(s) where it currently resides and wait on the erase, instead of writing to empty space and deferring the erase until the controller is idle?

Anyone?


Has anyone worked out how long 10,000 erase cycles lasts in a "typical" server or workstation environment? I expect developers would reach it sooner than non-developers...?


I was wondering exactly this last night and did some napkin calculations.

Assuming the "10,000 writes" commonly claimed for SSD's is accurate, 5 year lifespan = 4.37 hours between writes on one page.


Well, I've seen numbers stating 100 GB of writes per day for 5 years, guaranteed. That's a shitload, really; more than enough that I wouldn't be concerned about it.
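
A quick sanity check on that guarantee, using the 80 GB capacity and 10,000-cycle rating quoted elsewhere in this thread (and ignoring write amplification, which eats into the headroom):

  total_written = 100 * 365 * 5   # 182,500 GB written over 5 years
  write_budget = 80 * 10_000      # 800,000 GB of raw endurance (capacity x cycles)
  print(write_budget / total_written)  # ~4.4x headroom, assuming perfect wear levelling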


Please, HN people, stop linking to the print-only versions. It takes work to navigate back to the version that shows the publisher's navigation context and proper reading width, and that makes the publisher happy you linked to them. If I want an unformatted, often less-tested version with missing images, I'll click on 'printer version' myself.


Not when the navigation is so absurdly awful, and all the informative images are present in the print version.

There's no fucking way I'm going to link to something that requires 62 clicks to page through the article.


I voted his comment back up: while I personally like to read printable versions, it's more socially responsible to link to the regular version so the publisher at least gets a chance at making some money from their ads (which sustain them and bring us kick-ass articles like this in the future).


For proper reading width, check out http://lab.arc90.com/experiments/readability/


In the article he mentions Super Talent drives. Did he try the old Super Talent SSDs or the ones they just came out with today (the 32, 64, and 128 GB UltraDrive LE and the 32, 64, 128, and 256 GB UltraDrive ME)?

Super Talent put a bunch of benchmarks in their whitepaper that compare very well against the Intel X25-M.


They use the same Indilinx controller that the OCZ Vertex uses. It would be interesting to see a comparison between the OCZ Vertex and the Super Talent drives; it would essentially be a comparison of firmware.


According to the article, the same Indilinx firmware is available to all vendors, so there's probably no difference.


I'm a little confused about why SSDs aren't a massive performance boost. Regular HDDs are limited by mechanical parts. SSDs have no such problem, so why can't they read/write everything in parallel rather than sequentially?


They are parallel, but an individual flash chip is so slow that even eight of them in parallel is less than ludicrous speed. Also, Intel claims that they are limited by SATA in some cases.
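
Rough numbers to see why, with the caveat that the per-die figure is a ballpark guess for 2009-era NAND rather than a spec:

  per_die_mb_s = 40       # assumed sustained throughput of a single flash die
  channels = 8
  # Eight dies in parallel already approach the ~300 MB/s usable on 3 Gb/s SATA.
  print(per_die_mb_s * channels)  # 320 MB/s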


I learned a lot about SSDs from Storage Search:

http://www.storagesearch.com


I've found Storage Search to be the best information hub for enterprise-class SSDs; I first saw the ioDrive there, and it's the only site I've seen mention SAS drives in development.


One bit that's kind of funny: with flash-based memory you want data for any given application spread over as many blocks as possible, so you can take advantage of parallel access. In the future we may see a "fragment drive" maintenance option instead of "defragment". :)


SSDs perform striping internally, so storing data on "contiguous" sectors will still give you parallelism.
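
A toy illustration: consecutive logical pages land on different flash channels, so even a "contiguous" file is spread across all of them. The channel count here is just for illustration, though the X25-M does use a 10-channel controller:

  NUM_CHANNELS = 10

  def channel_for(logical_page):
      # Simple round-robin striping across channels.
      return logical_page % NUM_CHANNELS

  print([channel_for(p) for p in range(12)])  # [0, 1, 2, ..., 9, 0, 1]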


Also: http://www.engadget.com/2009/02/19/intel-x25-m-ssds-slowing-...

Not sure if it's FUD or not, as I haven't experienced any problems, but it's a potential red flag.


They talk about that and explain why it happens in the article...



Well, that took my entire morning "reading time", but it was WELL worth it!



