
Holy crap this would make a good image compression algorithm. Might be complex to encode, but super small and efficient to decode.

Can we have this, please? Someone?




I suspect there would be a lot of edge cases where the algorithm wouldn't yield very satisfactory results. Think of images without broad, similarly coloured areas, such as white noise. Maybe further research will alleviate this. I'm thinking that dividing the image into tiles, like JPEG does, could help.

Another advantage is that the compressed format would be vector rather than raster, so it would scale smoothly.


The compression aspect was what I looked into. Aiming for 0.125 bits per pixel on 256x256 blocks (1 KB per block), it does seem to be competitive. The lossiness at that level is quite high, but other formats fare much worse. It might be reasonable to use it to store higher-resolution images for a given data size.

Storing 4 times the X and Y resolution of a 2-bits-per-pixel JPEG would yield the same compressed data size (say, a 4096x4096 image compared to a 1024x1024 JPEG; both 256 KB). That could also be thought of as storing the image as scalable 64x64 blocks.
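
To sanity-check those figures, here's a rough back-of-the-envelope calculation (a hypothetical sketch in Python; the helper name is mine, not from the article):

    def compressed_bytes(width, height, bits_per_pixel):
        # size in bytes of an image stored at a given bit rate
        return width * height * bits_per_pixel / 8

    print(compressed_bytes(256, 256, 0.125))    # 1024.0   -> 1 KB per 256x256 block
    print(compressed_bytes(4096, 4096, 0.125))  # 262144.0 -> 256 KB
    print(compressed_bytes(1024, 1024, 2))      # 262144.0 -> 256 KB for the 2 bpp JPEG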


The algorithm requires that you compare the current iteration to the source image; how does that constitute good compression? Not to mention that the final image required 904314 generations to reach.


When compressing, you are always using the source image in some sense.

What you end up with could be a fairly minimal way to represent the image.

Though yes, clearly, the amount of processing required to reach that compression is absurd. But then, most ultra-efficient compression algorithms have this problem at least initially.
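
To make the asymmetry concrete, here's a rough hill-climbing sketch of the kind of approach being discussed (my own illustration, not the article's code; it assumes Pillow and NumPy, and all names and parameters are made up). The expensive comparison against the source only happens during encoding; decoding is just rasterising the polygon list, at whatever resolution you like:

    import random
    import numpy as np
    from PIL import Image, ImageDraw

    def render(polygons, size):
        # decode step: rasterise the polygon list at any resolution
        img = Image.new("RGB", size, (0, 0, 0))
        draw = ImageDraw.Draw(img, "RGBA")
        for points, color in polygons:
            draw.polygon(points, fill=color)
        return img

    def error(img, target):
        # fitness: sum of squared pixel differences against the source
        a = np.asarray(img, dtype=np.int64)
        b = np.asarray(target, dtype=np.int64)
        return int(((a - b) ** 2).sum())

    def random_polygon(size):
        w, h = size
        points = [(random.randrange(w), random.randrange(h)) for _ in range(3)]
        color = tuple(random.randrange(256) for _ in range(3)) + (random.randrange(30, 120),)
        return points, color

    def encode(target, generations=10_000, max_polygons=50):
        # encode step: needs the source image, unlike decoding
        polygons, best = [], error(render([], target.size), target)
        for _ in range(generations):
            candidate = list(polygons)
            if len(candidate) < max_polygons and random.random() < 0.3:
                candidate.append(random_polygon(target.size))
            elif candidate:
                candidate[random.randrange(len(candidate))] = random_polygon(target.size)
            score = error(render(candidate, target.size), target)
            if score < best:  # keep a mutation only if it gets closer to the source
                polygons, best = candidate, score
        return polygons

    # target = Image.open("source.png").convert("RGB")
    # polygons = encode(target)                  # slow, needs the source image
    # preview = render(polygons, target.size)    # fast, source not required

The polygon list itself is the compressed representation, which is why the final data can be so small even though reaching it took hundreds of thousands of generations.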



