A lossless compression contest to encourage research in AI. It's lossless, I think just to standardize scoring, but I always thought a lossy version would be better for AI -- our memories are definitely lossy!
> A lossless compression contest to encourage research in AI. It's lossless, I think just to standardize scoring, but I always thought a lossy version would be better for AI -- our memories are definitely lossy!
Gwern posts about this when people say something like that on here, but I'll do it instead. Lossless encoding is just lossy encoding + error correction of some sort.
Hah. "error correction of some sort" is doing a lot of heavy lifting there. Bit level correction is what I generally consider under the "error correction" umbrella. Which we have no problem with -- curious the bit error rate that would make text unreadable. In the context of compressing a large part of the English language wiki -- I think lossy also includes loss so significant that you wouldn't be able to reproduce the exact text. So, well beyond what we would generally consider "error correcting". But intuitively understandable as equivalent by humans. Impossible to quantify that objectively, hence lossless only for the competition.
The way we learn exact sentences is usually by getting an intuitive sense and applying corrections.
For example, to memorize "Doggo woofs at kity", we first get the concept of "dog barks at cat". It compresses well because, intuitively, we know that dogs bark and cats are common targets. That's our lossy compression, and we could stop there, but it is only part of the story. It is not a "dog" but a "doggo", and since that fits the familiar tone, a good compression algorithm will take only a few bits for it. Then there is the typo "kity" vs "kitty"; it will take a bit of extra space, but again, a good algorithm will recognize common typos and compress even that. So the entire path to lossless matters; lossy is just stopping halfway.
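Roughly, in Python (a toy sketch; the "intuition" layer here is just a made-up substitution table, and a real coder would entropy-code only the edits against the shared gist rather than diffing strings):

    import difflib

    def lossy_canonicalize(text):
        # Toy "intuition" layer: map informal spellings to the expected words.
        # (A real system would use a learned language model here.)
        fixes = {"doggo": "dog", "kity": "kitty", "woofs": "barks"}
        return " ".join(fixes.get(w, w) for w in text.lower().split())

    original = "Doggo woofs at kity"
    base = lossy_canonicalize(original)         # the lossy "gist"
    diff = list(difflib.ndiff(base, original))  # corrections layered on top of the gist

    # Lossless = lossy base + corrections: the exact text comes back out.
    restored = "".join(difflib.restore(diff, 2))
    assert restored == original
    print(f"gist: {base!r}, plus {sum(d[0] != ' ' for d in diff)} character-level corrections")

The closer the gist is to the truth, the fewer corrections are left to store.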
And if there is pure random noise remaining, there is nothing you can do, but all algorithms are on an equal footing there. The key is to make what the algorithm considers incompressible noise as small as possible.
> AI is just compression, and compression is indistinguishable from AI
Almost. Compression and AI both revolve around information processing, but their core objectives diverge. Compression is focused on efficient representation, while AI is built for flexibility and the ability to navigate the unpredictable aspects of real-world data.
Compression learns a representation from the same data it encodes, like "testing on the training set". AI models have different training and test data. There are no surprises in compression.
Let's say AI is a not-so-smart JPEG with more parts missing, so there is more guesswork when producing the restoration.
Compression is most of the time about finding the minimal grammar that unfolds to the same original material.
Interestingly, Fabrice Bellard found a way to use transformers for lossless compression, and it beats xz by a significant margin: https://bellard.org/nncp/nncp_v2.1.pdf. It uses the "deterministic mode of PyTorch" to make sure both directions work alike, which I guess means it saves the random toss made during compression for the decompression to use. Note: this paper is still on my to-read list.
A lot of current compression techniques use prediction followed by some set of correction data to fix mis-predictions. If the prediction is more accurate, you can have a smaller correction set.
But you're right that the predictor does need to be reproducible - the output must be exactly the same to keep encoder and decoder behavior matched. While I don't think this is a big focus for many right now, I don't think there's a fundamental reason why it couldn't be, though probably at the cost of some performance.
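A minimal sketch of that encoder/decoder symmetry (the Predictor class is invented for illustration; a real system swaps in a neural model and entropy-codes the corrections instead of storing literals):

    from collections import Counter

    class Predictor:
        """Deterministic adaptive model: always predicts the character seen
        most often so far. Encoder and decoder must update it identically,
        otherwise their streams drift apart."""
        def __init__(self):
            self.counts = Counter()
        def guess(self):
            return self.counts.most_common(1)[0][0] if self.counts else "?"
        def update(self, ch):
            self.counts[ch] += 1

    def encode(text):
        p, out = Predictor(), []
        for ch in text:
            # Emit nothing when the prediction is right, otherwise emit the
            # literal character as a correction.
            out.append(None if p.guess() == ch else ch)
            p.update(ch)
        return out

    def decode(stream):
        p, chars = Predictor(), []
        for tok in stream:
            ch = p.guess() if tok is None else tok
            chars.append(ch)
            p.update(ch)
        return "".join(chars)

    msg = "aaaaabaaaa"
    enc = encode(msg)
    assert decode(enc) == msg
    print(sum(t is not None for t in enc), "corrections for", len(msg), "symbols")

The better the predictor, the fewer corrections; and if the two copies of the predictor ever disagree by a single bit, decoding goes off the rails, which is exactly why NNCP needs a deterministic mode.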
How does that make sense? Compression is deterministic (for the same input, the same output is algorithmically guaranteed). AI is only deterministic in corner cases.
AI is always deterministic. We add noise to the models to get "non-deterministic" results, but if the noise and the input are the same, the output is also the same.
It's a bit more nuanced than that. Floating point arithmetic is not associative: "(A+B)+C" is not always equal to "A+(B+C)". Because of that, certain mathematical operations used in neural networks, such as parallel reductions, will yield slightly different results if you run them multiple times with the same arguments.
There are some people working hard to provide the means to perform deterministic AI computations like these, but that will come with some performance losses, so I would guess that most AIs will continue to be (slightly) non-deterministic.
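For example, in plain Python (the same effect is what makes parallel GPU reductions vary from run to run):

    # Floating point addition is not associative, so the order in which a
    # reduction combines partial sums can change the result.
    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)   # 1.0
    print(a + (b + c))   # 0.0

    # At larger scale: summing the same numbers in a different order
    # usually gives a (slightly) different total.
    import random
    xs = [random.random() for _ in range(100_000)]
    print(sum(xs) - sum(sorted(xs)))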
The compression competitions include the decompression program size in the size of the output. You'd have to be compressing a large set of movies for that to win, then.
If one model can "compress"/"decompress" all movies and series, its fraction of the size becomes negligible, but yes I agree with you since it still has to be distributed.
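Back-of-the-envelope (numbers entirely made up):

    model_size_gb  = 100      # hypothetical shared generative decoder
    catalog_movies = 20_000   # rough size of a large streaming catalog
    per_movie_mb = model_size_gb * 1024 / catalog_movies
    print(f"{per_movie_mb:.0f} MB of model per movie")  # ~5 MB, vs gigabytes for the movie itself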
If we get anywhere close to that, coming up with a new economics model is going to be the prompt we'll be giving the AGI when it's ready. We'll need it.
In what sense? How about reproducibility? Is stored memory, the connection between the prompt and the exact output, really compression, or simply retrieval of a compressed file stored as factual knowledge ingrained in its neural network?
I like your sentiment, it is technically inspiring.
Given the same prompt and the same seed (and algorithm) the resulting movie/output will always be the same. This is the case for AI image generation now.
Meh, AI doesn't break information theory. The relationship between the prompt size and the "similarity" of the result will be such that it doesn't beat traditional compression techniques.
At best we might consider it a new type of lossy (or... replacey?) compression. Of course if storage / RAM / bandwidth keeps increasing, this is quite likely the least energy efficient technique available.
If the compression can take into account the sub-manifold of potential outputs that people would actually be interested in watching a movie about, it can achieve enormously higher compression than if it doesn't know about this.
Or as a VR game. "Star Wars, but with the Empire and Rebel Alliance teaming up to defeat the latest threat to the galaxy: me as Jar-Jar Binks, Jedi Jester. My abilities include Force Juggle, Failed Comedic Relief, and Turn Into Merchandise. Oh, and Darth Vader is Morgan Freeman, and everyone else is Natalie Portman."
Ha, I've commented almost exactly this twice now on HN. We'll see how long before it's a reality -- probably better measured in months rather than years.
In the [Sloot Digital Coding System], it is claimed that no movies are stored, only basic building blocks of movies, such as colours and sounds. So, when a number is presented to the SDCS, it uses the number to fetch colours and sounds, and constructs a movie out of them. Any movie. No two different movies can have the same number, otherwise they would be the same movie. Every possible movie gets its own unique number. Therefore, I should be able to generate any possible movie by loading some unique number in the SDCS.
Guy named Borges already patented that, I'm afraid.
It sounds almost like someone explained content-addressed storage to him and he misunderstood it (where you can uniquely identify a movie by a number, down to some hopefully negligible collision likelihood, but you're merely indexing known data).
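Something like this toy sketch (not how Sloot described it, just the content-addressed idea):

    import hashlib

    store = {}

    def put(data: bytes) -> str:
        # The "number" for a blob is just its hash; you can only get back
        # data that someone already put in.
        key = hashlib.sha256(data).hexdigest()
        store[key] = data
        return key

    def get(key: str) -> bytes:
        return store[key]

    key = put(b"the entire movie, byte for byte")
    assert get(key) == b"the entire movie, byte for byte"
    # An arbitrary "unique number" retrieves nothing: the number indexes
    # known data, it doesn't generate it.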
Graphs (especially PSNR) aren't a good way to judge video compression. It's better to just watch the video.
Many older/commercial video codecs optimized for PSNR, which results in the output being blurry and textureless because that's the best way to minimize rate for the same PSNR.
Even with that, showing H.265 having lower PSNR than H.264 is odd --- it's the former which has often looked blurrier to me.
At equal bitrate, H.265 is typically considered twice as efficient as H.264. The graphs look all wrong to me - they show "ours" at a lower PSNR compared to both H.264 and H.265.
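For context, PSNR is just log-scaled mean squared error over the frames, which is also why optimizing for it rewards blur; a minimal NumPy sketch (the frame size and toy distortion are made up):

    import numpy as np

    def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
        # PSNR in dB for 8-bit frames: 10 * log10(peak^2 / MSE).
        mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

    ref = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
    quantized = ref // 2 * 2   # crude "compression": drop the lowest bit
    print(f"{psnr(ref, quantized):.1f} dB")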
It checks a reference video against an encoded video and returns a score representing how close the encoded video appears to the original from a human perspective.
The authors of the metric finding some cases where it works better is not the same thing as it being widely considered to be better. When it comes to typical video compression and scaling artifacts, VMAF does really well. To prove something is better than VMAF on video compression, it should be compared on datasets like MCL-V, BVI-HD, CC-HD, CC-HDDO, SHVC, IVP, VQEGHD3 and so on (and of course Netflix Public).
TID2013 for example is an image dataset with many artifacts completely unrelated to compression and scaling.
- Additive Gaussian noise
- Additive noise in color components is more intensive than additive noise in the luminance component
- Spatially correlated noise
- Masked noise
- High frequency noise
- Impulse noise
- Quantization noise
- Gaussian blur
- Image denoising
- JPEG compression
- JPEG2000 compression
- JPEG transmission errors
- JPEG2000 transmission errors
- Non eccentricity pattern noise
- Local block-wise distortions of different intensity
- Mean shift (intensity shift)
- Contrast change
- Change of color saturation
- Multiplicative Gaussian noise
- Comfort noise
- Lossy compression of noisy images
- Image color quantization with dither
- Chromatic aberrations
- Sparse sampling and reconstruction
Doing better on TID2013 is not really an indication of doing better on a video compression and scaling dataset (or being more useful for making decisions for video compression and streaming).
Back in 2005 there was a colleague at my first job writing video format conversion software. He was considered a genius and the stereotype of an introvert software developer. He claimed that one day an entire movie could be compressed onto a single floppy disk. Everybody laughed and thought he was weird. He might be right after all.
Well, as a reality check, even the soundtrack of a 1hr movie would be 50x floppy size (~50MB vs 1MB) if MP3 compressed.
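The arithmetic, with an assumed bitrate:

    seconds = 60 * 60
    bitrate = 112_000                 # bits per second, a typical MP3 rate (assumed)
    size_mb = seconds * bitrate / 8 / 1e6
    print(f"{size_mb:.0f} MB")        # ~50 MB, vs ~1.4 MB on a floppy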
I guess where this sort of generative video "compression" is headed is that the video would be the prompt, and you'd need a 100GB decoder (model) to render it.
No doubt one could fit a prompt to generate a movie similar to something specific in a floppy size ("dude gets stuck on mars, grows potatoes in his own shit"). However, 1MB is only enough to hold the words of a book, and one could imagine 100's of movie adaptations (i.e. visualizing the "prompt") of any given book that would all be radically different, so it seems a prompt of this size would only be enough to generate one of these "prompt movie adaptations".
I used to work with a guy like that in 1997, during the bubble, Higgins was his name. He'd claim you could fit every movie ever onto a CD-ROM, at least one day in the future it would be possible. Higgins was weird. I can still recall old Higgins getting out every morning and nailing a fresh load of tadpoles to that old board of his. Then he'd spin it round and round, like a wheel of fortune, and no matter where it stopped he'd yell out, "Tadpoles! Tadpoles is a winner!" We all thought he was crazy but then we had some growing up to do.
As a casual non-scholar, non-AI person trying to parse this though, it's infuriatingly convoluted. I was expecting a table of "given source file X, we got file size Y with quality loss Z", but while quality (SSIM/LPIPS) is compared to standard codecs like H.264, for the life of me I can't find any measure of how efficient the compression is here.
Applying AI to image compression has been tried before though, with distinctly mediocre results: some may recall the Xerox debacle from about 10 years ago, when it turned out copiers were helpfully "optimizing" images by replacing digits with others in invoices, architectural drawings, etc.
> [S]ome may recall the Xerox debacle from about 10 years ago, when it turned out copiers were helpfully "optimizing" images by replacing digits with others in invoices, architectural drawings, etc.
This is not even AI. JBIG2 allows reuse of once-decoded image patches because that's quite reasonable for bi-level images like fax documents. It is true that similar glyphs may be incorrectly grouped into the same patch, but such an error is not specific to patch-based compression methods (quantization can often lead to the same result). The actual culprit was Xerox's bad implementation of JBIG2, which incorrectly merged too many glyphs into the same patch.
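A toy illustration of that failure mode (nothing to do with Xerox's actual code; the glyphs and threshold are made up): with too loose a similarity threshold, patch matching silently substitutes one digit for another.

    import numpy as np

    # Toy 3x3 "glyphs"; real JBIG2 symbols are larger bitmaps.
    GLYPHS = {
        "8": np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]),
        "6": np.array([[1, 1, 1], [1, 0, 0], [1, 1, 1]]),  # differs in one region
    }

    def encode_symbols(symbols, threshold):
        dictionary, indices = [], []
        for bitmap in symbols:
            # Reuse an existing patch if it is "similar enough".
            for i, patch in enumerate(dictionary):
                if np.mean(patch == bitmap) >= threshold:
                    indices.append(i)
                    break
            else:
                dictionary.append(bitmap)
                indices.append(len(dictionary) - 1)
        return dictionary, indices

    page = [GLYPHS["8"], GLYPHS["6"]]
    strict, _ = encode_symbols(page, threshold=1.0)  # 2 patches: both digits kept
    lax, idx  = encode_symbols(page, threshold=0.7)  # 1 patch: the "6" becomes an "8"
    print(len(strict), len(lax), idx)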
I believe they're using "bpp" (bits per pixel) to indicate compression efficiency, and in the section about quality they're holding it constant at 0.06 bpp. The charts a bit further down give quality metrics as a function of compression level (however, they seem to indicate that h.264 is outperforming h.265 in their tests which would be surprising to me).
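In case it helps, bpp is just compressed bits divided by total pixels, e.g. (numbers made up):

    compressed_bytes = 1_500_000
    width, height, frames = 1920, 1080, 120
    bpp = compressed_bytes * 8 / (width * height * frames)
    print(f"{bpp:.3f} bpp")   # ~0.048 here; the paper's quality comparison is at 0.06 bpp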
It turns out that compression, especially for media platforms, is a trade-off between file size, quality, and compute. (And typically we care more about compute for decoding.) This is hard to represent in a two-dimensional chart.
Furthermore, it's pretty common in compression research to focus on the size/quality trade-off, and leave optimization of compute for real-world implementations.
It's important to remember that any compression gains must include the size of the decompressor, which, I assume, will include an enormous diffusion model.
Yes, absolutely, it's just important to keep in mind when thinking of these decompressors as "magic". If every laptop shipped with a copy of Wikipedia, then you could compress Wikipedia, and any text that looks similar to Wikipedia, really well.
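That's exactly what preset dictionaries do in ordinary compressors; a small sketch with zlib's zdict, where the shared string stands in for the laptop's copy of Wikipedia:

    import zlib

    shared = b"The quick brown fox jumps over the lazy dog. " * 100  # the prior both sides already have
    text   = b"The quick brown fox jumps over the lazy cat."

    plain = zlib.compress(text, 9)

    co = zlib.compressobj(9, zlib.DEFLATED, 15, 9, zlib.Z_DEFAULT_STRATEGY, shared[-32768:])
    with_dict = co.compress(text) + co.flush()

    do = zlib.decompressobj(15, shared[-32768:])
    assert do.decompress(with_dict) == text

    print(len(plain), "bytes without the shared dictionary,", len(with_dict), "with it")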