



Nope, they can demo it, but nobody is hanging $25,000 worth of SSD off the tiny bandwidth you get from a single drive. So they are not selling it right now.


What can they fit in a 5.25" enclosure?

I always wished a 5.25" mass-storage HDD would come back on the market. How much would those hold relative to this?

Hm, I'm guessing about 2.5x more data per platter and about 50% more platters... 45TB?
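
A quick sanity check of that estimate, taking the 12TB drive being discussed as the baseline (the scaling factors are only the guesses above, not measured numbers):

  # Back-of-the-envelope sketch; 2.5x and 1.5x are rough guesses, not measurements.
  base_tb = 12           # the 12TB 3.5" drive under discussion
  per_platter = 2.5      # guessed gain in data per platter at 5.25"
  platters = 1.5         # guessed gain in platter count
  print(base_tb * per_platter * platters)   # -> 45.0 (TB)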


Just put more drives into the box. You don't want bigger platters, given how fast they spin.

The only 'obvious' improvement you can make is having more arms or figuring out some way of micro-aligning multiple heads at the same time on a single arm. There's not much benefit from larger single units.


SSDs spin?


The post I replied to was about hard drives.

You can already fit way too much SSD into a 3.5 inch drive, and nobody will ever buy it because they want more ports and performance per dollar. There's no benefit for either technology to go up to 5.25 inches.


I'm guessing there are practical reasons 5.25" drives aren't built anymore. Vibration becomes a bigger issue, power requirements are higher, and seek times would suffer as well.


This dock [1] plus 8 of these 4TB SSDs [2] gives you 32TB in a 5.25" drive bay.

If that's too pricey, you can still fit 16TB in a 5.25" drive bay with the same dock and 8 of these 2TB rotating drives [3], or 20TB with this dock [4] and the 5TB version of the same drive [5]. Note that all the 4TB and 5TB rotating drives I can find are 15mm thick, so the 8x drive bay is a no-go for those.

1: http://www.icydock.com/goods.php?id=192

2: https://www.newegg.com/Product/Product.aspx?Item=N82E1682014...

3: https://www.newegg.com/Product/Product.aspx?Item=N82E1682217...

4: http://www.icydock.com/goods.php?id=184

5: https://www.newegg.com/Product/Product.aspx?item=N82E1682217...
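
For reference, the per-bay totals work out as follows (drive counts inferred from the capacities quoted above):

  # Totals for the three 5.25" bay configurations above.
  configs = {
      "dock [1] + 8x 4TB SSD [2]": 8 * 4,
      "dock [1] + 8x 2TB HDD [3]": 8 * 2,
      "dock [4] + 4x 5TB HDD [5]": 4 * 5,
  }
  for name, tb in configs.items():
      print(f"{name}: {tb}TB per bay")   # -> 32TB, 16TB, 20TB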


Same here. I suspect we won't see spinning rust in that form factor because the tooling costs would eat it, but flash can scale with volume almost perfectly since it's just chips on a PCB. It'd need some kind of high-bandwidth interface, though; I'm not sure what speed SAS is at these days, or whether it's competitive with PCIe NVMe stuff. I haven't had to look at that in a while.


Bandwidth is already starting to be an issue with drives this size; at 2.5x the data, simply reading or writing everything would take over a month.


You can read or write these entire drives in under a day. It wouldn't take a month.


The month was exaggerated for effect.

Best case is 12TB / 254MB/s ≈ 13 hours; times 2.5, that's over a day.
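
Rough arithmetic for that, assuming the 254MB/s sustained rate holds across the whole disk:

  # Sequential transfer time at a sustained 254MB/s (decimal units).
  capacity_mb = 12e6                 # 12TB in MB
  hours = capacity_mb / 254 / 3600
  print(hours)                       # -> ~13.1 hours
  print(hours * 2.5)                 # -> ~32.8 hours at 2.5x the capacity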

However, random reads are a lot slower.


Random reads don't care how long it takes to read an entire disk.

Very few things do, really. It's pretty much just rebuilding a RAID or reorganizing your data storage hardware that cares about full-disk transfer speed, and in those cases two days isn't a big deal.


There are great ways to mitigate the problem, but disks still get fragmented. So, at best we can say it's probably not an issue the majority of the time.


> I always wished a 5.25" mass-storage HDD would come back on the market. How much would those hold relative to this?

Google presented a paper at FAST '16 about the possibility of fundamentally redesigning hard drives, specifically targeting drives that are operated exclusively as part of a very large collection of disks (where individual errors are not as big a deal as in other applications), in order to further reduce $/GB and increase IOPS: https://static.googleusercontent.com/media/research.google.c...

Possible changes mentioned in the paper do include new (non-backwards-compatible) physical form factor[s], in order to freely change the dimensions of the heads and platters. The only market for spinning rust in a decade or so will be data centres (or anywhere else that needs to store a shitton of data); everything else will be flash.

Other changes mentioned in the paper include:

* adding another actuator arm / voice coil with its own set of heads

* accepting higher error rates and “flexible” (this is a euphemism for “degrades over time”) capacity in exchange for higher areal density, lower cost, and better latencies

* exposing more lower-level details of the spinning rust to the host, such as host-managed retries and exposing APIs that let the host control when the drive schedules its internal management tasks

* better profiling data (time spent seeking, time spent waiting for the disk to spin, time spent reading and processing data) for reads/writes

* caching improvements, such as the ability to mark data as not to be cached (for streaming reads), or using PCIe to borrow the host's memory for more cache

* read-ahead or read-behind once the head is settled costs nothing (there's no seek involved!). If the host could annotate its read commands with its optional desires for nearby blocks, the drive could do some free read-ahead whenever that's possible without delaying other queued commands (see the sketch after this list)

* better management of queuing – there’s a lot more detail on page 15 of that PDF about queuing/prioritisation/reordering, including the need for the drive’s command scheduler to be hard real-time and be aware of the current positioning of the heads and of the media. Fun stuff! I sorta wish I could be involved in making this sort of thing happen.
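
To make that read-ahead annotation idea a bit more concrete, here's a purely hypothetical sketch of what such a hinted read command could look like; the field names and structure are my own illustration, not taken from the paper:

  from dataclasses import dataclass

  # Hypothetical host->drive read command carrying an optional read-ahead
  # hint: the required range must be returned, while the hinted range may be
  # returned for free if the head is already settled over it and no other
  # queued command would be delayed.
  @dataclass
  class ReadCommand:
      required_lba: int          # first block the host actually needs
      required_count: int        # number of blocks it needs
      hint_lba: int = 0          # nearby blocks it would also like, if cheap
      hint_count: int = 0        # 0 means no hint
      max_extra_ms: float = 0.0  # drop the hint rather than add this much latency

  cmd = ReadCommand(required_lba=1_000_000, required_count=256,
                    hint_lba=1_000_256, hint_count=1024, max_extra_ms=0.5)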

tl;dr there is a lot of room for improvement if you’re willing to throw tradition to the wind and focus on the single application (very large scale bulk storage) where spinning rust won’t get killed off by flash in a decade.



