
> i actually put HDDs on some workstations as SSDs just die after 2-3 years of active builds on them

That sounds very low for modern SSDs, even consumer-grade. Have you tried different vendors?




If they're spending hours a day at 100% utilization, SSDs will rarely last 5 years.


If your SSD is at 100% utilization, it's going to take a lot of HDDs to reach that kind of bandwidth. To the point where, for high-bandwidth loads, SSDs actually cost less even if you have to replace them regularly.

100% utilization and 30x the bandwidth = 30x as many HDDs. Alternatively, if HDDs are an option, you're a long way from 100% utilization.


SSDs have a hard time sustaining 200-400 MB/s writes, whereas 4 HDDs do it easily. Our case isn't that much about IOPS.

Anyway, reasonably available SSDs have a total write limit of up to [1000 x SSD size], so doing a couple of 400 GB builds/day would use up a 1 TB drive in about 3 years. At the worst times we had to develop & maintain 5 releases in parallel instead of the regular 2-3.
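
As a rough back-of-the-envelope sketch of that lifetime estimate (the 1000x-capacity endurance and two-builds-a-day figures are just the numbers above, not any particular drive's spec):

    # drive-life estimate from write endurance (assumed numbers from the comment above)
    capacity_tb = 1.0                       # 1 TB drive
    endurance_tbw = 1000 * capacity_tb      # assumed ~1000 full-drive writes (1000 TBW)
    daily_writes_tb = 2 * 0.4               # two 400 GB builds per day
    years = endurance_tbw / daily_writes_tb / 365
    print(f"~{years:.1f} years")            # ~3.4 years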


4 HDDs can do 200-400 MB/s _sequential_ IO. 1 modern SSD can do 150-200 MB/s _random_ IO and 400 MB/s sequential IO, while 4 HDDs would have a hard time doing, IIRC, more than 8 MB/s of random IO.
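
For intuition, a rough estimate of that 4-HDD random-IO ceiling, assuming typical 7200 RPM drives at ~100-200 random IOPS each and small 4 KiB requests (assumed figures, not measurements):

    # rough random-IO throughput of a 4-HDD array (assumed typical figures)
    iops_per_hdd = 200              # optimistic for a 7200 RPM disk
    block_kib = 4                   # small random reads/writes
    drives = 4
    mb_per_s = iops_per_hdd * block_kib * drives / 1024
    print(f"~{mb_per_s:.1f} MB/s")  # ~3.1 MB/s; larger request sizes get closer to the ~8 MB/s figure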


I don't argue that. It's just that random IO isn't that important for our use case, especially compared to capacity and sequential IO, which are important for us and which HDDs are OK for.


Urm what? A modern NVMe drive will sustain ~2 GB/sec write.

(See e.g. https://cdn.mos.cms.futurecdn.net/Ko5Grx7WzFZAXk6do4SSf8-128..., from Tom's Hardware)


Few SSDs can sustain such a speed for a long time. After they exhaust their temporary SLC cache they drop to much lower speeds. SSDs that have accumulated a large amount of writes might also make large pauses at random times during writing, for garbage collection. Good SSDs remain faster than HDDs even in these circumstances, but they are nevertheless much slower than when benchmarked for short times while they are new.
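
If you want to see that cliff yourself, a minimal sketch is to write a large file in chunks and print per-chunk throughput; the path and sizes below are placeholders, and fsync-per-chunk only roughly approximates what a dedicated benchmark does:

    # sustained-write sketch: per-chunk throughput drops once the SLC cache fills
    import os, time

    path = "/mnt/ssd/testfile"              # placeholder path on the drive under test
    chunk = os.urandom(64 * 1024 * 1024)    # 64 MiB of random data (zeros can be compressed away)
    total_chunks = 1024                     # ~64 GiB total

    with open(path, "wb") as f:
        for i in range(total_chunks):
            t0 = time.time()
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())            # push the data past the page cache
            print(f"chunk {i}: {len(chunk) / (time.time() - t0) / 1e6:.0f} MB/s")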


Note how, in the graph, even the worst-performing SSD stays above 500 MB/sec sequentially for an indefinite amount of time, while the parent post claimed SSDs couldn't even do 200–400.


You can get SSDs without such issues. At least for the kind of long sequential writes followed by deletes where HDDs are even vaguely comparable, good SSDs can maintain very close to their theoretical maximum speeds indefinitely.

They only really slow down in usage patterns that trash traditional HDDs, which have much worse disk fragmentation issues.


And that's why you buy 2-bit MLC, which can sustain these writes forever. Like e.g. the 970 Pro.


Depends on the kind of SSD. If it's using SLC, the write endurance is much, much higher. If you're going with cheap SSDs (TLC or QLC), your write endurance will suck.

see: https://www.anandtech.com/show/6459/samsung-ssd-840-testing-...


SLC seems to be going away pretty quickly, if it hasn't already been phased out. It just can't produce the price / GB of the better tech. Also, that article you linked is almost 10 years old.

Your best bet for long-term reliability is to buy much more capacity than you need and try not to exceed 50% capacity for high-write-frequency situations. I keep an empty drive around to use as a temp directory for compiling, logging, temp files, etc.

Also, my understanding is that consumer-grade drives need a "cool down" period to allow them to perform wear leveling. So you don't want to be writing to these drives constantly.


I recently bought an external 32GB SLC SSD (in the form factor of a USB pendrive). Its random read/write speeds are quite insane (130+ MB/s for both), while consumer SSDs like the Samsung 850 Evo barely manage 30 MB/s read/write. It's also advertised as very durable.

I plan on using a couple of those as ZFS metadata and small-block caches for my home NAS. We'll see how it goes, but people pretty universally praise SLC SSDs for their durability.

> Your best bet for long-term reliability is to buy much more capacity than you need and try not to exceed 50% capacity for high-write-frequency situations. I keep an empty drive around to use as a temp directory for compiling, logging, temp files, etc.

That's likely true. I am pondering buying an external NVMe SSD for exactly that purpose.


> I recently bought an external 32GB SLC SSD (in the form factor of a USB pendrive). Its random read/write speeds are quite insane (130+ MB/s for both)

Which brand/spec?


https://www.aliexpress.com/item/32846479966.html?spm=a2g0s.9...

Vtran / eVtran. They have favourable reviews, too.


It's actually not better tech; instead, it's a more complicated, more error-prone, and less durable way to use the same technology that produces more space for a lower price. MLC is pretty much okay, but TLC is a little too fragile and low-performance in my opinion. I prefer spinning HDDs over QLC since the spinning drives have predictable performance.


Some QLC drives perform quite well. And of course for any workload that is primarily reads, they're totally fine. I use one to hold my Steam collection.


"Better" can mean a lot of different things. From context, I was using it to mean higher storage capacity for a given price.


What kind of workload will do that?


a build server recompiling multiple branches over and over in response to changes.


And logging all of the unit tests associated with all of those builds (and rolling over those logs with TRACE-level debugging).

Every build gets fully tested at maximum TRACE logging, so that anyone who looks at the build / test later can search the logs for the bug.

8 TB of storage is a $200 hard drive. Fill it up, save everything. Buy 5 hard drives, copy data across them redundantly with ZFS and stuff.

1 TB of SSD storage is $100 (for 4 GB/s) to $200 (for 7 GB/s). Copy data from the hard drive array on the build server to the local workstation as you try to debug what went wrong in some unit test.
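
The cost-per-TB math behind that, using just the prices quoted above (illustrative, not current market data):

    # $/TB from the prices above
    hdd_per_tb = 200 / 8        # 8 TB HDD for $200  -> $25/TB
    ssd_per_tb_low = 100 / 1    # 1 TB SSD for $100  -> $100/TB
    ssd_per_tb_high = 200 / 1   # 1 TB SSD for $200  -> $200/TB
    print(hdd_per_tb, ssd_per_tb_low, ssd_per_tb_high)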


My mind was blown when I had to send a fully logged 5 minutes of system operation to a friend for diagnostics (macOS 11 on an M1 Mini). He wasn't joking when he said not to go over a few minutes, because the main 256 GB system drive almost ran out of space in that time. After getting it compressed down from 80 GB and sent over, I got my mind blown again when he explained he had to move to his workstation with 512+ GB of RAM just to open the damn file.



