
> Is that actually true for SSDs?

Not precisely. The logical view of a page living at a fixed address in flash is not the reality. Pages get moved around the physical device as writes happen. The drive itself maintains a map of which addresses are used for what purpose, their health, and so on. It’s a sparse storage scheme.
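As a toy sketch (not any real firmware), the flash translation layer boils down to a sparse map from the logical page addresses the host sees to wherever the data physically lives right now:

```python
# Toy FTL sketch: the host's "address" is just a key into a sparse
# logical -> physical map; the physical location is an internal detail.
flash = [None] * 8            # physical pages on the device
ftl_map = {}                  # logical page -> physical page (sparse)
free_pages = set(range(8))    # physical pages with nothing live in them

def read(logical):
    phys = ftl_map.get(logical)   # sparse: may simply be unmapped
    return None if phys is None else flash[phys]
```

An unmapped logical address just reads back as empty; nothing forces logical page 5 to live at physical page 5.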

There are even maintenance ops and garbage collection that happen occasionally or on command (like after a TRIM).
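Continuing the toy sketch (the names here are made up for illustration), garbage collection is basically: stale pages pile up as data gets remapped, and a background sweep or a TRIM-triggered pass erases them back into the free pool:

```python
# Toy GC sketch: pages invalidated by remapping accumulate in `stale`
# until a sweep erases them and returns them to the free pool.
stale = set()        # physical pages holding dead (remapped-away) data
free_pages = set()   # physical pages ready to accept a write

def trim_gc():
    # erase every stale page and make it writable again
    free_pages.update(stale)
    stale.clear()
```

Real drives work block-wise (erase units are much bigger than pages) and have to copy out any still-live pages first, which this sketch skips.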

In reality a “write” to a non-full drive is:

1. Figure out which page the data maps to.

2. Figure out if there’s live data there or not; read/modify/write if needed.

3. Figure out where to physically write the data.

4. Write the data.

It might not go back where it started. In fact it probably won’t, because of wear leveling.
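The steps above can be sketched like this (a deliberately simplified model, building on hypothetical `flash`/`ftl_map` structures, not real controller code):

```python
flash = [None] * 8            # physical pages
ftl_map = {}                  # logical page -> physical page
free_pages = set(range(8))
erase_count = [0] * 8         # per-page wear counter

def write(logical, data):
    # 1. figure out which page the data maps to
    old_phys = ftl_map.get(logical)
    # 2. if live data is already there, this is an overwrite:
    #    the old physical page becomes garbage (a real drive may
    #    need a read/modify/write for sub-page updates)
    if old_phys is not None:
        flash[old_phys] = None
        free_pages.add(old_phys)
        erase_count[old_phys] += 1   # erasing costs a wear cycle
    # 3. wear leveling: pick the least-erased free page, which is
    #    probably NOT where the data lived before
    new_phys = min(free_pages, key=lambda p: erase_count[p])
    free_pages.remove(new_phys)
    # 4. write the data and update the map
    flash[new_phys] = data
    ftl_map[logical] = new_phys
    return new_phys
```

Overwriting the same logical address twice lands the data on a different physical page the second time, because the just-erased page now has the highest wear count among the free pages.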

You’re right that the controller does a far more complex set of steps for performance. That’s why an empty/new drive performs better for a while (page cache aside) and then literally slows down to the level of a “full,” old drive with no spare pages.

Source: I was chief engineer for a cache-coherent, memory-mapped flash accelerator. We let users map the drive very efficiently from user space on Linux, but eventually caved to the “easier” programming model of just being another hard drive.



