Hacker News
Coughlin: SSDs will not kill disk drives (blocksandfiles.com)
20 points by HieronymusBosch on Aug 15, 2023 | 76 comments


Man who runs "Entertainment Storage Alliance" writes white paper claiming that people need lots of storage, so HDDs aren't going anywhere, makes charts to illustrate his feelings, presents no analysis.

Whitepaper store link (I have to assume the description isn't updated for each new issue...?) https://tomcoughlin.com/product/digital-storage-technology-n...

Entertainment Storage Alliance: http://www.entertainmentstorage.org/

I agree that HDDs aren't going anywhere, but this is a nothing article on an apparently nothing whitepaper.


The Pure storage perspective -- that HDD sales will be zero by 2028 -- appears to be similarly motivated. Pure Storage went all-in on flash for Enterprise storage and doesn't have an answer to the market segments better served by HDD. So they would very much prefer for HDDs to be on the cusp of obsolescence.


Maybe not yet, but eventually?

HDD manufacturers do seem to see trouble brewing. E.g., one interesting thing to me is that hard disks are getting more complex. We now have:

* Helium

* HAMR

* SMR

* Dual actuators

Helium and HAMR improve density, but add cost. SMR improves density but results in drives that need special handling. Dual actuators improve performance, but increase points of failure.

Things like HAMR and SMR seem to be a sign that there's not that much more density to squeeze out without taking special measures.

One problem is that as drives get larger and larger, RAID sync times keep growing. Even assuming faster-RPM drives and multiple actuators, it seems we're getting quite close to the point where this becomes a serious problem.
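
To put rough numbers on it (back-of-the-envelope; the capacities and throughputs below are illustrative assumptions, not any vendor's specs):

  # Back-of-the-envelope RAID rebuild time: the whole drive has to be
  # rewritten at, best case, its sequential throughput.

  def rebuild_hours(capacity_tb: float, throughput_mb_s: float) -> float:
      """Lower bound: sequential rebuild with no competing I/O."""
      return capacity_tb * 1_000_000 / throughput_mb_s / 3600

  for cap, tput in [(4, 180), (12, 250), (24, 280)]:
      print(f"{cap:>2} TB @ {tput} MB/s -> ~{rebuild_hours(cap, tput):.0f} h minimum rebuild")

  # Real rebuilds under production load are often several times slower,
  # which is why ever-larger drives stretch the window in which a second
  # failure can take out the array.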


I don't get your argument.

Hard drive technology has pretty much always gotten more complex as time goes on. Do you think bits shrink themselves?

20+ years ago, we had "Get Perpendicular" explaining why perpendicular bits were a good idea. https://archive.org/details/get-perpendicular

All future technologies are more complex than past ones. I mean, should we say that future SSDs are doomed because they're going for 4 bits per cell and 300+ layers of silicon rather than the simple single-layer, single-bit designs from 10 years ago?


I mean, they're not straightforward improvements; they all have some sort of tradeoff.

SSDs are getting more complex too, but at least from my point of view, unnoticeably so. Yeah, it has 4 bits per cell and 300 layers, but as a user I don't have to think about it. I went from 240 GB to 2 TB and it was an improvement across the board. Speed got faster, size got bigger. No downsides whatsoever.

SMR on the other hand is absolutely infuriating when you bump into its limitations. It's not automatically better, it's extra capacity with a tradeoff.


TLC and QLC have far worse durability than SLC or MLC.

There are tradeoffs everywhere. IMO, I stick with the "last generation", because the new stuff almost always has stability and/or data-loss issues. Even top-end Samsung drives on the newest tech seem to lose data.

Or the SanDisk loss-of-data problem on their SSD drives.

---------

https://www.theverge.com/22291828/sandisk-extreme-pro-portab...

The only reason TLC became usable was that new load-balancing algorithms would keep track of which cells were being erased (each erasure uses up your drive's limited durability).

It would then have a mini-computer analyze which cells were least used (aka: load-balancing) and move your data over to the less used sections of the drive. Furthermore, drives would compress the data to minimize writes (and therefore erasures).

It's not exactly a simple technology, and a lot of things have gone wrong in practice.

--------

When your software says "Write to SSD-location #10,000", it _DOESN'T_ write there. The request enters the load-balancing algorithm, which finds a new location that has been used the least. It pretends that new location is #10,000, marks the old location #10,000 as stale (the TRIM state), and then the drive's garbage collector sweeps over those stale blocks at some future point to keep this whole scheme working.
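
A toy sketch of that indirection, in Python (hypothetical and hugely simplified - real FTLs remap flash pages inside erase blocks, keep the map in controller RAM, and run their own garbage collection):

  # Toy flash translation layer: a logical address never maps to the same
  # physical cell twice in a row; writes are steered to the least-worn
  # free cell and the old cell is marked stale (the TRIM-like state).

  class ToyFTL:
      def __init__(self, physical_cells: int):
          self.erase_counts = [0] * physical_cells   # wear per physical cell
          self.logical_to_physical = {}              # the indirection map
          self.free = set(range(physical_cells))

      def write(self, logical_addr: int, data: bytes):
          old = self.logical_to_physical.get(logical_addr)
          if old is not None:
              self.erase_counts[old] += 1            # old copy becomes stale
              self.free.add(old)                     # reclaimed by GC later
          # Pick the least-worn free cell instead of writing "in place".
          target = min(self.free, key=lambda c: self.erase_counts[c])
          self.free.remove(target)
          self.logical_to_physical[logical_addr] = target
          # (the actual data write to `target` is omitted)

  ftl = ToyFTL(physical_cells=8)
  for _ in range(20):
      ftl.write(10_000, b"same logical address, many physical homes")
  print(ftl.erase_counts)   # wear is spread across cells, not piled on one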


> It would then have a mini-computer analyze which cells were least used (aka: load-balancing)

I'm imagining a PDP-11 dedicated to controlling and wear-leveling an array of flash chips, and smiling.

The idea is not quite as silly as you might think; there are real, actual PDP-11s still running RTOSes and "embedded" applications like nuclear power plant monitoring/control... some of which will be supported until the 2050s at least.


Wear-levelling isn't new, though. It has been a thing for as long as flash has. Translation layers for wear levelling were not new when I last worked on drivers for a flash device 25 years ago.


Specific implementations, errors, and data-loss associated with those wear-leveling algorithms change every new firmware version of drives.

Each generation adds new features to the SSD firmware that I don't necessarily need, that will change wear-leveling (and other such details).

Then 6 months later, we learn that Sandisk Extreme was the drive with the messed up firmware. Or the "Slow Reads" from the Samsung 840.


Yes, but the point is that this is "business as usual" with flash, nothing new. Wear levelling methods were patented at least as early as 1991, and FTL drivers were available for Linux at least by the late '90s to handle "dumb" flash that didn't have its own wear levelling built in.

I absolutely agree with your other point about sticking to older, well-tested drives.


Surely they reduce cost. Otherwise they'd not be used. A donkey and cart's cheaper than a haulage truck, but donkeys aren't used much in the haulage business anymore.


> A donkey and cart's cheaper than a haulage truck

Not when you measure the cost per unit of cargo delivered.


That was their point.


Look at the lifetime cost and the capabilities you're buying.


People who do data storage/archiving for a living: do SSDs have predictable durability? I know that with magnetic media you can plan to copy them every so often to prevent loss, but is that information well known for TLC/QLC flash? I also know that SanDisk made an archival WORM SD card that was supposed to be durable for 100 years, but I haven't heard of rates like that for SSDs.


I work(ed) on testing storage products. So, here's a probably more precise answer to your question: nobody knows.

There are at least these two reasons: proper endurance testing of a device is not possible; you can only test it in a pretend kind of way. We are talking about years of service that have to somehow be emulated in a matter of weeks... and, of course, you cannot really account for the physical behavior of the materials the disk is made of simply by running multiple I/O workloads. Now, add to this that the larger the storage capacity, the harder it is to check durability, because throughput becomes the bottleneck. I.e., if you emulate device wear by running more I/O workloads, then, in proportion to the size of the device, you will be able to run fewer of them per unit of storage, because you are bounded by the throughput.
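
To illustrate the throughput bound (the figures are assumptions purely for the arithmetic):

  # To emulate N full drive writes you are limited by the device's own
  # sustained write throughput, so bigger drives get proportionally less
  # of their rated life exercised in any fixed test window.

  def days_to_emulate(capacity_tb: float, write_mb_s: float, full_writes: int) -> float:
      seconds = capacity_tb * 1_000_000 / write_mb_s * full_writes
      return seconds / 86_400

  # Emulating 1000 program/erase cycles end to end at 2 GB/s sustained:
  for cap in (1, 8, 30):
      print(f"{cap:>2} TB: ~{days_to_emulate(cap, 2000, 1000):.0f} days")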

Any real (good) tests so far rely on the previous generation of devices, and don't necessarily reflect the current situation. I.e., if your SSDs survived for only five years, who's to say whether the next generation you buy will last longer or shorter? They are very likely not the same kind at all...

Also, it's really silly to measure disk durability in units of time... I mean, most tests that intend to measure durability model it by running I/O workloads, so they'd typically measure durability in something like "how many times can a unit of storage be written over". The usage patterns vary dramatically across different kinds of workloads. So if, e.g., you are running a build server, you will wear your storage a lot faster than if you are running a (well-configured) database server, and still much faster than if you were running a video streaming service, and still much faster than if it's a (well-configured) Web server... and the difference could be an order of magnitude between these.


> SSDs have predictable durability?

If it is not plugged in, i.e. unpowered, don’t count on anything more than three months. Thread with sources: <https://news.ycombinator.com/item?id=27573332>


I don't understand. The only source I find in what you link is an AnandTech article which literally contains this:

"Remember that the figures presented here are for a drive that has already passed its endurance rating, so for new drives the data retention is considerably higher, typically over ten years for MLC NAND based SSDs."

It is kinda obvious that if you pass the numbers considered safe then bad things shall happen.

But an SSD sitting on a shelf starting to lose bits after three months seems incredibly low.

People don't know about that, and if it were really happening they'd be screaming by now.

I don't think it's that bad.


> The only source I find in what you link is an AnandTech article

In that thread is also this post:

<https://news.ycombinator.com/item?id=27573332#27573720>

Which contains this link:

<https://web.archive.org/web/20210502042514/http://www.dell.c...>

Which in turn contains this text and table:

I have unplugged my SSD drive and put it into storage. How long can I expect the drive to retain my data without needing to plug the drive back in?

It depends on the how much the flash has been used (P/E cycle used), type of flash, and storage temperature. In MLC and SLC, this can be as low as 3 months and best case can be more than 10 years. The retention is highly dependent on temperature and workload.

  ┌────────────────┬─────────────────────────────────┐
  │NAND Technology │ Data Retention @ rated P/E cycle│
  ├────────────────┼─────────────────────────────────┤
  │SLC             │ 6 Months                        │
  ├────────────────┼─────────────────────────────────┤
  │eMLC            │ 3 months                        │
  ├────────────────┼─────────────────────────────────┤
  │MLC             │ 3 Months                        │
  └────────────────┴─────────────────────────────────┘


If it's eMLC, kept at elevated temperature (40C), and unpowered. Consumer drives use higher floating gate voltages so they'll keep data for a year while unplugged.


A year is still much shorter than most people assume. In my experience, most people think they can buy SSDs, put data on them and then shelve them for 20 years without issue.


One year is the worst-case scenario: a drive with its total-bytes-written rating used up, a very hot environment, and unplugged.

While SSDs are inferior for long-term data persistence, if it's irreplaceable data you should bring the file system online on a schedule and let it run checksums. If something errors, restore from another copy. The magic of digital storage is perfect copies at low cost. The loss of any one copy is fine as long as it has been copied forward elsewhere.
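
A minimal sketch of that verify-on-a-schedule routine (the paths and manifest format are hypothetical; on ZFS or btrfs you would just run a scrub instead):

  # Hash every file on the archive drive and compare against a manifest
  # written when the copy was made; anything that mismatches gets
  # restored from another copy.
  import hashlib, json, pathlib

  ARCHIVE = pathlib.Path("/mnt/archive")      # assumed mount point
  MANIFEST = ARCHIVE / "manifest.json"        # {relative_path: sha256_hex}

  def sha256(path: pathlib.Path) -> str:
      h = hashlib.sha256()
      with path.open("rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):
              h.update(chunk)
      return h.hexdigest()

  def verify() -> list[str]:
      expected = json.loads(MANIFEST.read_text())
      return [rel for rel, digest in expected.items()
              if not (ARCHIVE / rel).exists() or sha256(ARCHIVE / rel) != digest]

  if __name__ == "__main__":
      damaged = verify()
      print(f"{len(damaged)} file(s) need restoring from another copy")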


I recently recovered data from a 2012 Macbook Air SSD that was powered off for 7 years with no data loss.


The source does say “best case can be more than 10 years.” I would still not use off-line SSDs as long-term archives.


What makes me nervous about SSDs is related to this -- I've heard many tales of SSDs simply dying, making data recovery impossible. I've rarely had HDDs die, but the couple of times that I did, they gave plenty of warning that it was coming.


HDDs have limits, too - look at http://www.wdc.com/wdproducts/library/other/2579-772003.pdf

Modern hard drives have warranty limits on how many bits you can read/write during their service lifetime, since they lower the head from about 10nm to 1-2nm during reads and writes, and head lifetime is correlated with the number of hours it spends at that 1-2nm height.

With workload specifications in the range of 500TB/year (IIRC - I took a quick look and couldn't find any recent specs) that works out to less than QLC levels of endurance. It's not the same, though - if you read/write every byte of a hard drive 300(?) times the failure rate rises and the vendor gets nervous; if you overwrite QLC 3000 times it's on its last legs, and 1 or 2K more writes will almost certainly kill it.
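
Rough arithmetic behind that comparison (capacity, rating, and cycle count are assumed for illustration, not pulled from a datasheet):

  # Express both limits as "full-device passes over the service life".
  hdd_capacity_tb, hdd_rating_tb_per_yr, hdd_years = 20, 550, 5
  qlc_pe_cycles = 1000   # assumed rated program/erase cycles

  hdd_passes = hdd_rating_tb_per_yr * hdd_years / hdd_capacity_tb
  print(f"HDD: ~{hdd_passes:.0f} full-device passes within its workload rating")
  print(f"QLC: ~{qlc_pe_cycles} full-device passes before the cells wear out")
  # Exceed the HDD figure and failure *rates* creep up; exceed the QLC
  # figure and the flash itself is at end of life.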

SSDs have quite predictable durability, mostly because the unit that fails is less than 1GB, so the law of large numbers kicks in.

Note also that "durability" is a soft target - the normal failure mode is that blocks retain their data for shorter and shorter times before hitting the ECC error limit, so if your storage system moves data around every few months you can push the flash harder than in e.g. a laptop, where you don't want to risk losing all your data if it sits powered down on a shelf for half a year.


>SSDs have quite predictable durability, mostly because the unit that fails is less than 1GB, so the law of large numbers kicks in.

You're forgetting the controller, which has no qualms about dying 100% unexpectedly. I don't know why consumer SSDs have such poor-quality controllers that they can randomly die, something HDDs seemingly haven't struggled with in decades.


I've had half a dozen or more SSDs die in my lab.

Without access to vendor tools (like a JTAG debugger and source for the controller) I'm not sure it's possible to tell whether the controller itself failed, or it just decided not to wake up because the flash was dead.

But yeah, it sucks.

Finally, I'd note that a lot of earlier SSD vendors came out of the USB device market, where they were used to making things that had the reliability requirements of your average Happy Meal toy. There are only 2.5 hard drive vendors or so at present, in large part because the ones who weren't fairly good at reliability are dead now.


You're still at risk of EMP-related data loss, and the sun flipping bits is definitely not unheard of. There's a reason ECC memory is popular. One of the most available options you have is archiving all your data onto Blu-ray discs. EMP-safe, and going to be around well past your lifetime.


I own a few SSDs, but for me, HDDs still make the most sense for the majority of my storage. It's not even close. High-capacity HDDs are half the price of high-capacity SSDs.


My HDDs tend to last longer than my SSDs, too. For bulk data storage that isn't being transported frequently, it's a no-brainer. For anything else, though, I think SSDs still have the advantage despite their shorter lifespans because of their better tolerance for physical impact and higher read/write speeds.


They can also be left unpowered "forever" without data loss, while SSDs lose bits after a few weeks.


Really? Do you have some evidence for that?


I feel like this would be easy to test, at some cost. Buy N SSDs from X manufacturers. Fill the entire disk with random bits (/dev/urandom), read it back and compute a hash. Wait A, B, C, and D durations. After each duration, read the drives again and compare the hash.
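
Something like this per drive (a sketch; the device path is a placeholder and the fill step wipes the disk, so sacrificial drives only):

  # Fill the raw device with random bytes, hash the whole device, shelve
  # it unpowered, then re-hash after each interval and compare.
  import hashlib, subprocess

  DEVICE = "/dev/sdX"     # placeholder for the SSD under test
  CHUNK = 1 << 20         # 1 MiB reads

  def fill_with_random(dev: str) -> None:
      # dd exits nonzero when it hits the end of the device, hence check=False
      subprocess.run(["dd", "if=/dev/urandom", f"of={dev}", "bs=1M", "oflag=direct"],
                     check=False)

  def hash_device(dev: str) -> str:
      h = hashlib.sha256()
      with open(dev, "rb", buffering=0) as f:
          for block in iter(lambda: f.read(CHUNK), b""):
              h.update(block)
      return h.hexdigest()

  # Usage: fill once, record hash_device(DEVICE), power the drive down,
  # then re-run hash_device(DEVICE) after 3/6/12 months. A mismatch (or a
  # read error) means at least one sector decayed past what the drive's
  # ECC could correct.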


Bit rot is definitely a problem with HDDs, especially as their bit densities increased. I've heard that bit rot on SSDs also became a thing as well after they started moving from SLC to MLC.

I'd be interested to see a study on how prevalent either of them are.


I recently powered up a 2012 Macbook Air with SSD after 7 years and had no data loss.


In addition to the cost advantage, HDD firmware is simpler and more reliable.


Even stone tablets have to be periodically refreshed. https://www.biblegateway.com/passage/?search=exodus%2034&ver...


Both make good arguments. If the total cost of an SSD is indeed lower than spinning rust over five years, then sure, I can see HDDs getting phased out for enterprise.

But what about consumers? They don't consider TCO the same way. They don't look at energy or space constraints. They only look at cost per megabyte. As long as SSDs cost more, they will keep buying spinning rust for backups.

The fact that Costco always has backup disks on the shelves tells me that it's a pretty popular item amongst consumers.


I am a consumer, and I don't look only at cost per megabyte. I also need to sleep at night while my laptop gets backed up and while my torrents get downloaded and seeded. Therefore, due to their noise, HDDs in the same room are not an option, no matter the price. For local backups and torrents (i.e. something that I can lose) I nowadays rely on an 8TB QLC SATA SSD connected to the router via USB adapter (in 2016, there was a 1.92 TB SSD), and will buy another one if it gets full.


I guess it's down to personal preference. The noise of data being written to a HDD is soothing like a lullaby to me. I can rest easy knowing that the hard drive is working away through the night, filling sector after sector with new surprises for me to find in the morning and carefully copying my old data keeping it safe and warm so that it's always there when I need it.


> The noise of data being written to a HDD is soothing like a lullaby to me

Username checks out.

But yeah, there are a bazillion ways to store the HDD somewhere it wouldn't be a noise nuisance, and then not care about the wear level.


No one deletes data: HDDs are useful for tiered storage and bulk bits.

A number of file systems also support transparent tape usage: a stub is left in the directory structure and if anyone tries reading it the bits are fetched from a robot.


It would be nice if more communications protocols achieved feature parity with Ethernet. Or even USB. Give me a hard drive with a connector I can plug into a hub or switch and just keep daisy chaining the oldest hardware deeper and deeper, instead of needing a robot to physically move them around. It’s not like adding a couple microseconds to response time will even be noticed on a crappy old hard drive.

Add a deep sleep mode and bam, you got a stew goin’.


As I'm sure you're aware, there was such a thing. But market segmentation was used to turn it into a higher super-enterprise tier. Unless you care a lot about latency variance, there is no reason why those two sets of protocols (networking and storage) shouldn't have converged a long time ago. Admittedly, 20 years ago there were some bandwidth issues due to run lengths, but those are long gone.

The market is simultaneously the hand that giveth and taketh away


Are you talking about PCIe fabrics? There was a moment there where I thought a shiny future was ahead of us and then… nothing. Still bitter about that.

Or was this tried another way? I’m hoping that open firmwares create an environment where we can start treating the computer as a network. It’s a slim hope but not entirely insane.


I was talking about Fibre Channel... the same drive with a different controller would cost 2x, and the switches and management software were just obscenely priced.

There was also iSCSI. I tried to use it in a couple of jobs and it performed really poorly, and wasn't very reliable. I _suspect_ that's just because it was never paid attention to properly? Maybe because it was using TCP instead of a reliable but unordered protocol.

Thinking about it, I'm off on the bandwidth numbers. SSDs move quite a bit of data... so maybe you wouldn't replace your primary OS drive. But for secondary and tertiary storage, Ethernet drives would really be nice.

PCIe fabrics are still kicking around. I agree that's another enabling technology that just hasn't found a home. You would think with this USB-C stuff that it would suddenly be of great interest.


Just to round it out, there was another short-term effort around 2000, I think, to extend InfiniBand to support remote and aggregate devices. TopSpin was a big name before they got acquired by Cisco. I think that would have been a great direction... especially since IB got pretty cheap.


> Give me a hard drive with a connector I can plug into a hub or switch […]

So basically a storage area network (SAN):

* https://en.wikipedia.org/wiki/Storage_area_network


Nope. That’s consolidated storage usually mediated by a device that manages things. I’m talking about a more ad hoc system. Synology is great until I need one more drive than I have bays.


What file systems support transparent tape loading?

That’s a neat feature, but I’d expect a lot of software to behave badly if local file system operations suddenly started exhibiting tape-level latency.


IBM z/OS supports it. You'd try to open a file and it would pop up a "retrieving" message, and you'd know you could go and get a cup of coffee while you waited for your file.

(That's obviously at the OS layer rather than the filesystem layer. But the application didn't know where the file was until it asked for it.)


Windows Storage Spaces. IIRC, a Tiered Storage Space means that evicted-to-tape files that get OpenFile'd, get copied in full back to the hot disk before the OpenFile completes, such that any IO then proceeds from the hot disk, not from tape.


> What file systems support transparent tape loading?

In the enterprise/HPC space, GPFS and Lustre.


NTFS, if I remember correctly.


This article seems like it is an AI summary of another document. It doesn’t offer anything new, just restating someone else’s analysis. And that analysis comes down to… storage demand will outpace SSDs price point enough that the demand will be met by both SSDs and traditional spinning platters.

Which isn’t surprising as even today there is demand for tape. The question is how long the demand will remain vaguely mainstream vs when it will become more niche. The 2028 estimate of storage being dominated by SSDs seems vaguely reasonable to me.

Pure’s bravado of saying zero hard drives will be sold after 2028 seems silly but in line with what a flash storage company would say. But from a directional standpoint it probably is right that many use cases will get further eroded by SSDs. One big challenge with hard drives is access speed (throughput and latency) compared to NVMe, and hard drives being used more as cold and nearline storage is definitely going to continue. Write once, access never, in many cases.


> Pure’s bravado of saying zero hard drives will be sold after 2028 seems silly but in line with what a flash storage company would say

Or what HDD companies would say themselves. I was recently forced to buy WD JUCT drives, which are SATA-2 (!), 5400 rpm, 16 MB cache. There were literally no other non-SMR 2.5" drives at that moment, not even WD Reds.


Flash is down to "only" 2x the price of regular hard disks for several TB. Sure, new HDDs are large, but they're expensive, and it seems unreasonable to me that the new HAMR tech will be any different.

Simple extrapolation puts the crossover a few years into the future last I checked, though there are other aspects as well.

Will HDDs die next decade? Surely not. But given that flash can go 3D, surely it's just a question of time until HDDs are where tapes are now.
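
The kind of extrapolation I mean (starting prices and decline rates are assumptions, and the answer is very sensitive to them):

  # Naive $/TB crossover: compound each technology's assumed annual
  # price decline until flash undercuts disk.
  ssd_per_tb, hdd_per_tb = 50.0, 25.0     # assumed street prices, $/TB
  ssd_decline, hdd_decline = 0.20, 0.10   # assumed annual declines

  year = 2023
  while ssd_per_tb > hdd_per_tb and year < 2040:
      ssd_per_tb *= 1 - ssd_decline
      hdd_per_tb *= 1 - hdd_decline
      year += 1

  print(f"Crossover around {year}: SSD ${ssd_per_tb:.0f}/TB vs HDD ${hdd_per_tb:.0f}/TB")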


SSDs had vast improvements in cost over the past decade due to (a) the switch from 2-bit MLC planar NAND to 3-bit (TLC) 3D NAND, and probably (b) the growth of the SSD market - 10 years ago most flash went into iPods, phones, etc., so pretty much all flash chips produced were optimized for that use case, and it cost more to make SSDs out of them.

Hard drives spent a decade in the wasteland, with little growth in per-platter capacity (google "superparamagnetic limit" for more info) and gains mostly due to packing more platters by filling the enclosure with helium to reduce aerodynamic drag. (fun fact - cast aluminum is porous to helium, which caused no end of trouble)

With HAMR and other recent energy-based workarounds to those density limits, hard drives have a number of years of major TB/$ gains ahead of them, while flash is in the optimizing stage - going from e.g. 96 to 144 3D layers, and diminishing returns (at the expense of big performance costs) going from 3 bits per cell to 4 bits ("QLC") for a 33% boost in capacity. (QLC to PLC only gains an additional 25%, at an even higher cost)

If you're managing an exabyte of data or more, HDD is probably an important part of your storage mix for a good number of years going forward - devices are currently maybe 5x cheaper than big QLC drives, the ratio is probably even better when you price out the systems that house them[*], and HDD $/TB is probably going to improve faster than SSD $/TB for a number of years.

I started my career working on Asynchronous Transfer Mode networking; ever since then I'm highly skeptical of anyone saying "our technology will eliminate <entrenched technology X> in five years". Unlike ATM vs Ethernet I think it's quite possible that HDDs will fade to irrelevance in 10-15, but 5 years is ridiculous. If you narrow the statement to "no one will sell 3rd party HDD-based storage systems", it might be true - HDDs will go into AWS, Google, big Ceph deployments, etc.

* It's actually a bit difficult to spec a system that doesn't add close to 100% to the cost of the drives themselves, for either HDD or SSD, and it also depends on how CPU- and memory-efficient your storage system needs to be. (e.g. Ceph can be a pig, and note DRAM prices didn't fall in the 2011-2021 decade) There are a number of competitive capacity-HDD systems on the market - typically 4U 60-drive machines - while the E1.L SSD format has just come on the market and you'll probably pay a big premium for systems optimized to house them.


I know spinning disks are fragile, but I've had more problems with cold storage of SSDs. They just seem to forget information or flip bits when they're not kept powered up. Don't ask me why.

Since I need cold storage, I hope that hard disks continue to be around.


I don't see disk drives going away. They serve a different purpose. As much as I love SSDs, I feel more comfortable storing data on a disk drive for the long term. I don't need performance on network storage either.


It's kind of ridiculous that you can get 2TB of SSD storage now for $60 (even if it's not the fastest)

https://www.newegg.com/intel-2tb-670p-series/p/N82E168201674...

For the majority of consumer use cases even 1/4 of that is pretty generous, and Apple just bumped up the MacBook Air from 128GB to 256GB.

Even our fancy pants enterprise storage arrays have a 30:1 or 40:1 ratio of HDD to SSD (cache). But of course, those prices aren't coming down anytime soon!


The introduction of SSDs leading to SSD only personal computers accidentally set computing back about a decade. It's crazy that top tier computer manufacturers are actually selling computers with un-upgradable storage of only 256GB (and only 8GB of ram) in 2023. Then these users end up having to juggle a handful of external drives which are far more unreliable and likely to cause weird problems. The vast majority of my tech support is helping people with tiny SSDs deal with the problems of external drives and cloud storage.


It's a feature, not a flaw. You get to sell people on cheap NAND, and your cloud service. Windows will always nag you to use OneDrive, your OEM may have their own bloat/nagware, etc.


It's a trade off. For many years PCs would make sure to include Intel i5 or i7 processors but boot off a HDD because they could list a big 1000GB storage bullet point. Big numbers on boxes sell! This meant slow PCs. Data hoarders should look for a PC with a hard drive or upgrade aftermarket.


There are still a lot of workloads that perform mostly sequential writes and reads that aren't a huge win for SSD. Even if you look at random write workloads many of those start by journaling to a WAL now, so it is possible for the WAL to be on a separate HDD.

I assume a lot of this comes down to "object storage": S3 and similar services. As I understand it, these would do actual data storage in HDD (or other cheaper medium for glacier and slower access storage). Metadata caching could use SSD.


When I recently upgraded my setup, I had to choose whether to convert my local backup processes to SSDs or stick with HDDs, albeit newer and faster HDDs than their snail-like predecessors. I did the usual research into which option made more sense when strictly for backup purposes rather than frequent “live” data access, and decided HDDs were my better long-term choice. YMMV.


As a terabyte SSD is enough for everything I need random access to, and hard drives are not particularly reliable for backup storage, I wish we just had cheap, reliable tape storage to put backups on. But small-scale tape storage appears to be more expensive per terabyte than HDDs are.


The price of a single HDD is (very roughly) constant, regardless of capacity; the price of an SSD is mostly determined by the amount of flash. In other words, you can cut an SSD in half, but you can't do that to a hard drive.

Flash replaced mini-drives in iPods when the price of "enough" flash (2GB or so) dropped to that of a mini-drive.

SSDs replaced HDDs in laptops when the price of a "big enough" SSD (256 GB?) became competitive with an HDD.

Every reduction in flash cost after each transition was a win for the vendor, since they could keep storage constant and reduce the price. (unlike HDDs, where the price stays constant and the amount of storage goes up)
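
A sketch of that model (the HDD price floor, flash prices, and the "enough" threshold are all illustrative assumptions):

  # An HDD costs roughly a fixed amount regardless of capacity, while an
  # SSD's cost scales with the flash inside; the switch happens when
  # "enough" capacity worth of flash drops below the HDD floor price.
  HDD_FLOOR_USD = 40.0   # assumed minimum viable price of any hard drive

  def flash_wins(enough_gb: float, flash_usd_per_gb: float) -> bool:
      return enough_gb * flash_usd_per_gb <= HDD_FLOOR_USD   # ignore controller overhead

  print(flash_wins(256, 0.15))   # True  -> vendor goes SSD-only at this tier
  print(flash_wins(256, 0.30))   # False -> the cheap models keep an HDD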

In each case some users (over-represented here) wanted more storage, and vendors didn't care, or were happy to sell more flash for a price premium.

Enterprise systems aren't single drives, but I believe they still have a concept of "enough" - if the savings from HDD are marginal, or the performance loss from concentrating data on fewer and fewer HDDs becomes too much, they'll switch to pure flash. Anecdotally that's already happening. (also, to be honest, another driver is probably because people like to spend their employer's money on high-tech shiny things, which often pays off better career-wise than saving money)

For the Googles and Amazons of the world there will probably never be a value of "enough", and HDD won't fade until the IOPS/TB ratio becomes ridiculously bad. Maybe not even then, as HDD may still be the best way to store data for a few decades.


> In each case some users (over-represented here) wanted more storage, and vendors didn't care, or were happy to sell more flash for a price premium.

Oh yeah, it really boggles my mind that Apple still offers new MacBooks with 256 GB drives (unupgradably soldered on-board!). I can't imagine how insane you'd have to be to pay what they charge for it. Also RAM - a 64 GB SODIMM kit can be bought almost dirt-cheap, and they put in 8 GB, again built into the CPU package and practically impossible (although some crazy hackers have managed it) to upgrade.


> hard drives are not particularly reliable for backups storage

This is true when you compare them to media designed for long-term storage. But HDDs aren't all that terrible at it. I recently needed to dig out a 30 year old hard drive to recover some data from it, and it worked flawlessly.


I did the same last weekend. One of 3 old 120 GB HDDs (one of the very best models available ~17 years ago), which I last accessed about 4 years ago, just didn't come up (it sounds like it tries repeatedly but fails).


Magnetic tape is still being used for niche applications.

https://en.wikipedia.org/wiki/Magnetic-tape_data_storage#Via...


We've got a 100PB tape robot. It's not dead, although it's close enough that almost no one noticed when a patent dispute meant that you couldn't buy the highest density tapes for a year or so.


They absolutely will if they ever reach similar $/GB price points. That's the only reason I can think of to buy spinning disk storage in 2023.


Well, that and as other commenters have pointed out, HDDs have benefits in terms of longevity.



