Hacker News

If you read in any other article something like the following: "Taking Product X as having a baseline compression ratio of 1, Product Y had a compression ratio of 0.5 and Product Z had a compression ratio of 0.3", I'm pretty sure 99.9999% of the HN population would interpret that as Products Y and Z having worse compression than X, not better. That's my point.



This academic-looking paper (first hit I tried from Wikipedia) gives the standard definition of "compression ratio" as compressed/uncompressed size (section 4.2), consistent with the linked article.

I'm pretty sure your impression of 99.9999% of the HN population is wrong.


Link?

OK, found this: http://en.wikipedia.org/wiki/Data_compression_ratio

Which includes this section on "Usage of the term": "There is some confusion about the term 'compression ratio', particularly outside academia and commerce. In particular, some authors use the term 'compression ratio' to mean 'space savings', even though the latter is not a ratio; and others use the term 'compression ratio' to mean its inverse, even though that equates higher compression ratio with lower compression."

So, my bad. However, in my practical workplace experience the latter usage described above (the inverse convention) has been the norm, hence the confusion.
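To make the competing conventions concrete, here is a small Python sketch. The byte sizes are made up purely for illustration; it just computes the three quantities the Wikipedia quote distinguishes:

```python
# Hypothetical sizes, for illustration only.
uncompressed = 1000  # bytes before compression
compressed = 250     # bytes after compression

# Definition given in the paper linked above (section 4.2):
# compressed size over uncompressed size -- smaller means better compression.
ratio = compressed / uncompressed          # 0.25

# The inverse convention some authors use -- larger means better compression.
inverse_ratio = uncompressed / compressed  # 4.0

# "Space savings", which is not a ratio at all.
savings = 1 - compressed / uncompressed    # 0.75, i.e. 75% of space saved
```

The same compressor can thus be described as "0.25", "4:1", or "75%" depending on which convention the author had in mind, which is exactly the source of the confusion.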


Simple rule: If it's under 1.0 or expressed as a percentage under 100%, it's a compression ratio. If it's over 1.0, it's a compression factor.

Otherwise, it's not compressed. :-)
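One literal reading of that rule of thumb, as a Python sketch (the function name and the treatment of exactly 1.0 are my own choices, not anything from the comment above):

```python
def classify(value):
    """Classify a reported compression number per the rule of thumb above:
    under 1.0 it's a compression ratio (compressed/uncompressed),
    over 1.0 it's a compression factor (uncompressed/compressed),
    and exactly 1.0 means nothing was compressed."""
    if value < 1.0:
        return "compression ratio"
    if value > 1.0:
        return "compression factor"
    return "not compressed"
```

For example, `classify(0.3)` yields "compression ratio" while `classify(4.0)` yields "compression factor"; a percentage would need converting to a fraction first.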




