
Flash-based storage is better than disk because there is no seek latency (the time for the head to find the location on the rotating platter where the data of interest are stored), which is currently the major bottleneck for databases. The transfer rate is also higher than that of disks, and prices are falling (not as fast as the prices of disk-based storage, but faster than those of RAM).

Today a gigabyte of NAND costs less than 1/3rd as much as a gigabyte of DRAM and the gap between the two is growing. ... By the end of 2012, when a gigabyte of NAND costs 1/19th as much as a gigabyte of DRAM, the optimum balance of flash/RAM will be very different.

http://www.storagesearch.com/ssd-ram-flash%20pricing.html




Disclaimer: I'm the author of Redis.

The question is: why not jump directly to RAM instead of taking this intermediate step?


When you turn off the computer, you lose what's in your memory. Solid state disks don't lose data when the power is turned off.


You could keep the data in RAM and dump it to disk occasionally (which is much faster than committing every transaction individually), and replicate among a few machines to deal with failure scenarios.
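Something like this toy sketch, in Python, with a plain dict standing in for the dataset and a JSON file standing in for a real dump format (the class name, paths, and interval are made up for illustration, not taken from any particular database):

    import json
    import threading
    import time

    class InMemoryStore:
        """Toy key/value store kept entirely in RAM, persisted by periodic snapshots."""

        def __init__(self, snapshot_path, interval_seconds=60):
            self.data = {}
            self.snapshot_path = snapshot_path
            self.interval = interval_seconds

        def set(self, key, value):
            # Writes touch only RAM; nothing hits the disk here.
            self.data[key] = value

        def get(self, key):
            return self.data.get(key)

        def snapshot(self):
            # Dump the whole dataset in one sequential write, which is far
            # cheaper than an fsync per transaction.
            with open(self.snapshot_path, "w") as f:
                json.dump(self.data, f)

        def run_snapshotter(self):
            # Background thread: persist the dataset every `interval` seconds.
            def loop():
                while True:
                    time.sleep(self.interval)
                    self.snapshot()
            threading.Thread(target=loop, daemon=True).start()

You trade a bounded window of recent writes (anything since the last snapshot) for much higher write throughput, and cover the loss window by replicating to other machines.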

But RAM is still more expensive than Flash.


ok thanks now I got it.


I don't. Why exactly must this be done in the DB, as opposed to, say, in kernel or filesystem space?


My "I understand now" was just for fun. It's like LOLWUT... given that I'm the author of an in-memory snapshotting DB I belive I know at least the difference between RAM and SSD. So I stopped the thread this way.

That said, seriously, I think what applies to SSDs applies to RAM: it's going to get cheaper and cheaper, bigger, and super fast, and unlike SSDs its read and write latencies are comparable. So even if today there's a psychological barrier to holding your data in RAM, I think it is going to become much more common in high-load applications in the future.

Actually most people are doing it already, with memcached. Sometimes the total memcached memory in use would be enough to store the whole dataset, well organized, given that when you use it as a K/V cache a lot of space is wasted compared to using that same memory to hold the data itself.
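Roughly the point, as a made-up illustration (the keys, values, and sizes below are invented, not measured from any real deployment):

    import json

    # One canonical record, stored once.
    user = {"id": 42, "name": "alice", "email": "alice@example.com"}

    # Typical cache usage: the same data ends up duplicated under many keys,
    # once per rendered view or query result that touched it.
    cache = {
        "user:42": json.dumps(user),
        "page:/profile/42": "<html>... rendered profile for alice ...</html>",
        "query:recent_users": json.dumps([user]),  # duplicated again here
    }

    canonical_bytes = len(json.dumps(user))
    cache_bytes = sum(len(v) for v in cache.values())
    print(canonical_bytes, cache_bytes)  # the cached copies add up to several times the record size

The same RAM that holds many overlapping cached views could instead hold each record exactly once.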



