How do ML-based lossy codecs compare to state-of-the-art lossy compression? Intuitively it sounds like something AI will do much better. But this is rather cool.
Agreed, although this bit is unclear: the compressed representations produced by ML-based methods take up much less space than those of traditional methods, but yes, the decompression pipeline itself is memory-intensive because of the intermediate feature maps it has to materialize.
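To make that memory point concrete, here's a back-of-the-envelope sketch with made-up decoder shapes (not any particular codec): the compressed latent is tiny, but each upsampling stage of a neural decoder has to hold a much larger intermediate feature map in memory.

```python
import numpy as np

# Hypothetical shapes (channels, height, width) -- illustrative only,
# not taken from any real learned codec.
latent = (192, 16, 16)            # the compact compressed representation
stages = [(128, 32, 32),          # intermediate feature maps produced
          (96, 64, 64),           # while upsampling back to the image
          (64, 128, 128),
          (3, 256, 256)]          # final RGB output

bytes_per_float = 4               # float32 activations

latent_mb = np.prod(latent) * bytes_per_float / 2**20
peak_mb = max(np.prod(s) for s in stages) * bytes_per_float / 2**20

# The latent stored on disk is small, but the largest intermediate
# activation dominates the decoder's working memory.
print(f"latent: {latent_mb:.2f} MiB, largest intermediate: {peak_mb:.2f} MiB")
```

With these numbers the stored latent is under 0.2 MiB while the decoder's largest intermediate activation is about 4 MiB, i.e. the memory cost shows up at decode time, not in the compressed file.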