
I remember seeing an example of using zip to classify languages. You take a set of documents of equal size whose languages you know, concatenate each one individually with the unknown text, and zip the result. The concatenation that yields the smallest compressed output is probably in the target language.

I can't find the original blog, but there's a note about it here - https://stackoverflow.com/questions/39142778/how-to-determin...
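
The idea works out to roughly the following (a minimal sketch with toy, made-up samples; not the code from the original post):

  import gzip

  def classify(unknown_text, samples):
      # Truncate all samples to the same length so the raw compressed sizes
      # stay comparable across languages (the "equal size" requirement).
      n = min(len(s) for s in samples.values())
      sizes = {}
      for lang, sample in samples.items():
          blob = (sample[:n] + unknown_text).encode("utf-8")
          sizes[lang] = len(gzip.compress(blob))
      # Whichever known-language sample lets gzip squeeze the unknown text
      # the most gives the smallest output.
      return min(sizes, key=sizes.get)

  # Toy samples; in practice you'd use real documents per language.
  samples = {
      "en": "the quick brown fox jumps over the lazy dog " * 50,
      "de": "der schnelle braune fuchs springt ueber den faulen hund " * 50,
      "fr": "le renard brun rapide saute par dessus le chien paresseux " * 50,
  }
  print(classify("the dog did not jump over anything today", samples))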




Ideally, you'd take all the documents in each language and compress them in turn with the unclassified text, to see which language's data compresses it best. But this won't work very well with gzip, since it compresses based on a 32KB sliding window: you might as well truncate the training data for each class to its last 32KB (more or less). So to get any performance at all out of a gzip-based classifier, you need to combine a ton of individually quite bad predictors with some sort of ensemble method. (The linked code demonstrates a way of aggregating them that does not work at all.)
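
For what it's worth, a single truncated gzip predictor per class might look something like this (the training dict and scoring are my own illustration, and it leaves out the ensemble step entirely):

  import gzip

  WINDOW = 32 * 1024  # gzip's DEFLATE sliding window

  def gain(context, query):
      # Extra bytes needed to encode the query given the context; anything
      # further back than 32KB can't be referenced, so drop it up front.
      ctx = context[-WINDOW:]
      return len(gzip.compress(ctx + query)) - len(gzip.compress(ctx))

  def classify(query, training):
      # training maps class -> list of documents (bytes); each class is
      # reduced to the gzip-visible tail of its concatenated documents.
      gains = {cls: gain(b"".join(docs), query) for cls, docs in training.items()}
      return min(gains, key=gains.get)

  # Hypothetical training data; a real run would load many documents per class.
  training = {
      "en": [b"the cat sat on the mat " * 200],
      "fr": [b"le chat est assis sur le tapis " * 200],
  }
  print(classify(b"the mat was where the cat sat", training))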


How much better would that get if you appended all but one of the equal-size documents? (Or other combinations, like appending the top 2 results from a first single-document pass.)


Better, if the compressor can use all that extra context. Gzip, like most traditional general-purpose compressors, can't.

It's hard to use distant context effectively. Even general-purpose compression methods that theoretically can will often deliberately reset part of their context, since assuming a big file follows the same distribution throughout as at its beginning often hurts compression more than just starting over periodically.
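
You can see gzip's 32KB window directly with a toy experiment like this (not from the thread, just an illustration):

  import gzip, os

  block = os.urandom(16 * 1024)  # 16KB of incompressible data

  # Second copy of the block starts 24KB after the first: inside the window.
  near = block + b"\x00" * (8 * 1024) + block
  # Second copy starts 80KB after the first: far outside the window.
  far = block + b"\x00" * (64 * 1024) + block

  print(len(gzip.compress(near)))  # roughly one block: the repeat becomes back-references
  print(len(gzip.compress(far)))   # roughly two blocks: the repeat is invisible to gzip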


Now that you mention it, I vaguely recall writing a language classifier based on character histograms as a youth. Good times.
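
That kind of classifier can be as simple as this (toy profiles built from made-up samples; real ones would come from large corpora):

  import math
  from collections import Counter

  def histogram(text):
      # Normalized character-frequency vector.
      counts = Counter(text.lower())
      total = sum(counts.values())
      return {ch: c / total for ch, c in counts.items()}

  def cosine(h1, h2):
      dot = sum(v * h2.get(ch, 0.0) for ch, v in h1.items())
      norm1 = math.sqrt(sum(v * v for v in h1.values()))
      norm2 = math.sqrt(sum(v * v for v in h2.values()))
      return dot / (norm1 * norm2)

  def classify(text, profiles):
      # Pick the language whose character histogram is closest to the text's.
      h = histogram(text)
      return max(profiles, key=lambda lang: cosine(h, profiles[lang]))

  profiles = {
      "en": histogram("the quick brown fox jumps over the lazy dog"),
      "fi": histogram("nopea ruskea kettu hyppaa laiskan koiran yli"),
  }
  print(classify("a lazy dog sleeps under the brown tree", profiles))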



