
> to inflated advertised capacity? You decide

Because that is how things have always been measured. There is no big conspiracy to con us on drive size.

Storage measurements, like comms rate measurements and other scientific measurements, are in 1,000s.

Programmers and some types of system engineers found it more convenient to work in powers of 2 and 2^10 is conveniently close to 1000 so the redesignation of what K (and M, G, etc.) meant just sort of happened for us. We are the oddity, not the drive manufacturers.

People first made a big noise about it (that I heard) around the time 40MB hard drives were commonplace. They were stealing nearly 2 whole megabytes! I knew someone who swore blind he had smaller drives (in the 10 & 20MB range) that were officially counted in 1024*1024s in their documentation, but the proof he was going to show me never materialised.
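
Not from any drive documentation, just the arithmetic: a quick Python check of how far the decimal and binary prefixes drift apart, and of where that "nearly 2 whole megabytes" figure comes from.

    # Ratio of binary to decimal units for each prefix
    for name, power in [("K", 1), ("M", 2), ("G", 3), ("T", 4)]:
        ratio = 1024**power / 1000**power
        print(f"{name}: 2^{10*power} / 10^{3*power} = {ratio:.4f}")
    # K: 1.0240  M: 1.0486  G: 1.0737  T: 1.0995 (the gap grows with each prefix)

    # The "40MB" case: decimal 40,000,000 bytes vs a binary 40MiB
    shortfall = 40 * 1024**2 - 40 * 1000**2
    print(f"{shortfall:,} bytes")    # 1,943,040, i.e. the "nearly 2 whole megabytes"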




RAM/ROM is always measured in binary because of the nature of its addressing. Powers of 2 are widely used in computing.

Flash memory is a bit of an oddity. Early drives with SLC flash had true binary sizes (although the flash itself actually had extra capacity on each sector/page for error correction); later MLC/TLC/QLC flash required so much extra error-correction data that the drive sizes went down and coincidentally became close to decimal, despite the fact that a lot of solid-state media is still sold in binary sizes (or small multiples): 4GB, 8GB, 16GB, 80GB (5x16), etc. I have a 16MB USB drive that truly contains 16,777,216 bytes (unformatted).
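
If anyone wants to check a stick the same way, the raw size is easy enough to read off. A minimal sketch, assuming Linux and root access; /dev/sdX is a placeholder, not a specific device:

    import os

    def raw_size(path="/dev/sdX"):
        # Seeking to the end of a block device returns its size in bytes.
        with open(path, "rb") as dev:
            size = dev.seek(0, os.SEEK_END)
        # A round number on the MiB side means the device is binary-sized.
        print(f"{size:,} bytes = {size / 1000**2:,.2f} MB = {size / 1024**2:,.2f} MiB")
        return size

    # A drive like the one above would report:
    # 16,777,216 bytes = 16.78 MB = 16.00 MiB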

Apparently SD cards have dropped below even the decimal sizes(!): a "32GB" one may contain less than the 32,000,000,000 bytes that an HDD with the same label would have:

https://www.eevblog.com/forum/chat/reduced-capacity-on-sandi...

The proof of early drive sizes being in binary is here: https://news.ycombinator.com/item?id=19589230


RAM/ROM was commonly not measured in binary in the early days, though it soon became ubiquitous that it was.

I know it wasn't unheard of for hard drives and other large storage (tapes etc.) to be measured/sold in binary units, but by the time the public at large cared at all what a hard drive was, I'd say it was most common for them to be measured in 10^3s. There was also the concept of formatted and unformatted capacities, which is not presented to users these days (the drive in my old IBM XT was 13MB unformatted, an odd figure in both binary and decimal units).

Floppy and disc-cartridge formats were a bit of an odd mish-mash, often even mixing 10^3 and 2^10 units (the "1.44MB" floppy, for instance, holds 1,474,560 bytes: 1.44 x 1,000 x 1,024).

Flash was usually measured in 2^10s early on, partly because it was thought of more as non-volatile RAM than anything else and partly because it was manufactured that way. And as you mention, modern wear levelling and bad-sector masking mean that there will be a difference between visible capacity and actual chip capacity, at which point it becomes difficult to predict what any particular controller can be or will be configured to do.

Getting less than the decimal amount might miff me a bit, but when making images it has always been best practice to drop a couple of % in volume size to account for one drive not having quite the same exact size as another. The same recommendation holds for RAID volumes: always make the volume a little smaller than the maximum (either by using a partition as the block device or, if supported, by telling your RAID controller/software to make the volume artificially smaller) in case the replacement drive, when one fails in X months/years time, isn't quite the same capacity.
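
That headroom rule is simple enough to encode. A minimal sketch; the 2% margin and the MiB alignment are my own assumptions, not any controller's defaults:

    MIB = 1024**2

    def safe_volume_bytes(nominal_bytes, margin=0.02):
        # Target a size a couple of % under nominal, rounded down to a whole MiB.
        target = int(nominal_bytes * (1 - margin))
        return (target // MIB) * MIB

    # e.g. a drive sold as "2TB" (2 * 10^12 bytes):
    print(f"{safe_volume_bytes(2 * 1000**4):,}")    # 1,959,999,307,776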



