The thing I'm most excited about for AV1 is film grain synthesis. I can't articulate why, but I really love the look of film grain in video. Unfortunately, on most codecs, film grain is a huge hassle which takes up a lot of bandwidth. But AV1 can automatically remove the grain, encode, and add it back in.
Curious about this. Does the grain it algorithmically adds back look the same as the grain it takes away? In other words, is it just a better way to compress existing film grain? Different film sources have extremely different grain profiles, and people who encode movies (both production companies and pirates) will have no interest in a codec feature that removes the real grain of a film and replaces it with a generic alternative. I suppose they might adopt AV1 with that feature turned off, but otherwise AV1 will mostly be adopted for web video, which currently doesn't have grain due to low-bitrate encoding.
Depends how good you are with the encoder. It's not compression in the usual sense: the grain does change, but since grain is essentially random anyway, the human eye can't tell the difference as long as it's the same type of grain.
Encoders generally have a few options to tune the grain that gets added back in to get it as close as possible to the original source.
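To make the pipeline concrete, here is a rough sketch of the idea in Python. This is not AV1's actual algorithm (AV1 fits a larger 2-D autoregressive model plus a luma-dependent scaling function, signalled in the bitstream), and every name here is illustrative: denoise, fit a small statistical model to the grain residual, transmit only the model parameters, and re-synthesize statistically similar grain at the decoder.

```python
import numpy as np

def extract_grain(frame, denoised):
    """Grain is the residual between the source frame and a denoised copy."""
    return frame.astype(np.float64) - denoised.astype(np.float64)

def fit_ar1(grain):
    """Fit a toy one-tap horizontal autoregressive model to the grain.
    (AV1's real model uses a 2-D AR neighborhood and per-luma scaling;
    this is only illustrative.)"""
    prev, cur = grain[:, :-1].ravel(), grain[:, 1:].ravel()
    coeff = (prev @ cur) / (prev @ prev)
    sigma = (cur - coeff * prev).std()
    return coeff, sigma  # only these two numbers need transmitting

def synthesize_grain(shape, coeff, sigma, seed=0):
    """Decoder side: regenerate statistically similar (not identical) grain."""
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, sigma, shape)
    for col in range(1, shape[1]):  # run the AR filter left to right
        grain[:, col] += coeff * grain[:, col - 1]
    return grain
```

The point of the sketch is the size asymmetry: the grain field itself is megabytes per frame of effectively incompressible noise, while the model that regenerates a perceptually similar field is a handful of coefficients.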
I would be interested in seeing a post on these options and their results if you know of one. Even though grain is "random", it's not random noise and is very distinctive. No two film stocks (and probably no two movies) will have the exact same grain. It would require incredibly precise tuning to manually add back grain that matches the original in the film.
When I do that, I get (1) a Wikipedia page literally titled "film grain modeling", but it's only a redirect to H.264 and the only mention of "grain" on that page is in the redirect notification; (2) a patent; (3) some scientific paper supported by a few images that only show noise (is that what we're talking about? I expected grains in the sense of particle generators); (4-5-6) paywalled scientific papers; (7) a Facebook page...
A link would be nicer than an apparently very specific search term.
That's interesting and matches my a priori expectations. That (effectively) makes it a way of compressing grain, rather than just masking lower bitrates. You have to store information about the analysis for the decoder to work, and so there are the usual tradeoffs that come with compression.
I don't think it really counts as compression. It's like taking the statistics of a text and generating a new one with a Markov chain.
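The analogy is easy to demonstrate with a toy character-level Markov chain (illustrative code, nothing to do with AV1 itself): the output shares the source's local statistics but reproduces none of its actual content, which is exactly the relationship synthesized grain has to the original grain.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Record which character follows each length-`order` context."""
    chain = defaultdict(list)
    for i in range(len(text) - order):
        chain[text[i:i + order]].append(text[i + order])
    return chain

def generate(chain, length, seed=0):
    """Emit new text sharing the source's statistics, not its content."""
    rng = random.Random(seed)
    order = len(next(iter(chain)))
    out = rng.choice(sorted(chain))
    while len(out) < length:
        followers = chain.get(out[-order:])
        if not followers:
            break  # dead end: this context only appears at the end of the text
        out += rng.choice(followers)
    return out
```

Whether you call that "compression" or "fabrication" is precisely the disagreement in this thread.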
In other words, it is pure texture synthesis, complete nonsense, like printing a digital image on a fake "canvas" for a "textured, 3D look"... or maybe one of those flickering flame light bulbs.
Can anybody here imagine an audio codec that added simulated hiss and crackles to make it sound like you were listening to a vinyl record? That's what this is.
Some artists will intentionally put scratching or crackling into their music for atmosphere. Imagine an audio codec that eliminated all scratching and crackling from music. Some people may not mind but many artists would hate it.
Film is the same. Directors choose the film type and grain patterns that match the look and feel they want. A more grainy film feels darker and more serious. Splotchiness contributes to an unrefined or maybe even psychotic feel.
You don't want your encode removing that. You want the atmosphere of the visuals to match the original, because the director likely had that atmosphere in mind and made other decisions around it.
Well, the problem is that the encoding process in AV1 does remove grain, only to add fake synthetic grain back in. My objection is not to the idea of encoding grain, it's with replacing it with fake synthetic grain.
It's kitschy, ersatz, whatever you want to call it. I used to spend a lot of time with artist filmmakers (many of whom are very nostalgic about celluloid) and there is no way in hell they would accept this solution. It wouldn't meet their standards of authenticity. It also "fakes" a medium-specific property in a way that is unprecedented in audio or visual coding. Their solution would be to either code at high enough bitrate to capture the grain, or insist on screening on physical celluloid.
That might sound extreme and unrealistic (it's what artists' film people are like), and even a bit snobby. After all, a lot of the options out there for adding film grain effects (or scratches, etc.) to video are not intended for use by professional filmmakers, depending on how you define professional. There is definitely an element of snobbery to the statement that film grain should never be faked, whether by compositing it onto your home video or in a hidden way inside AV1.
I hope you can see that I respect the director's prerogative in choosing grain; I just don't think this AV1 grain synthesis methodology is sound from an aesthetic/authenticity point of view. It's digital faking of a chemical/analog effect, which IMO makes it unavoidably kitsch for reasons to do with old modernist ideas of "medium specificity".
I couldn't easily pull up any screenshots, but I've been encoding video for some years and I can tell you that psy-rd (x264's psychovisual rate-distortion tuning, which biases the encoder toward retaining grain-like detail) does wonders in terms of grain fidelity.
The problem with preserving the grain exactly, with no randomization, is that it drives the bitrate crazy: a movie with 99% grain fidelity and psy-rd at 0 will run 30 Mbps or more.
With psy-rd, though, you can get 99% grain fidelity at closer to 20 Mbps. The two screenshots will be visually indistinguishable even when you rapidly switch between the source and the encode; even though you know the grain is being randomly generated, you can't tell.
If I get a chance later today I'll drop some comparison screenshots for a film encode that used a lot of psy-rd, I think the results will surprise you.
https://jmvalin.ca/papers/AV1_tools.pdf has a brief technical description, but no results. It cites a DCC paper from the authors of the tool, but I couldn't find a copy online.
Why not keep the bitrate higher? Or are you re-encoding already over-compressed video? In which case re-encoding and adding noise only reduces the amount of detail and bloats the encoded size with, well, noise.
However, for a variety of reasons you sometimes have to deal with overly compressed images. Adding noise on the client during playback disguises the compression, making it easier on the eye at no bandwidth cost.
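A minimal sketch of that playback-side trick (illustrative, not any real player's API): add low-amplitude noise to break up the quantization banding that heavy compression leaves in smooth gradients, with nothing extra stored or transmitted.

```python
import numpy as np

def mask_banding(frame, strength=2.0, seed=None):
    """Add low-amplitude Gaussian noise to an 8-bit frame at playback time
    to visually break up quantization banding. Purely client-side."""
    rng = np.random.default_rng(seed)
    noisy = frame.astype(np.float64) + rng.normal(0.0, strength, frame.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

On a smooth gradient the dither hides the stair-stepping between quantized levels; the trade-off is exactly the one described above, since the added noise is not detail from the source.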
I am probably kind of crazy, but I love grain and worry when movies lose it. Like the fine old "Predator" movie on Blu-ray: nice colors, but the grain is kind of gone. The DVD has more of it, and it looks better.
I feel a ramble coming on, but I believe movies should be available in a version close to how they were viewed on theatrical release.
If you think about it, the first 100 years of this artform is always going to be special.
Is this similar to DNR (digital noise reduction) schemes? Because that's a lossy process that reduces fine detail, and it is notorious for having been misused in many Blu-ray releases of analog film remasters.
And adding digital grain "back in" doesn't sound very authentic to the original analog source.
Film grain is so expensive to encode that in bit-rate-constrained situations you would be forced to choose between obvious ugly compression artefacts, a reasonably good image without grain, or a reasonably good image overlaid with inauthentic grain.
Besides, "authentic" is such an overhyped pile of poo anyway. Everything is a reproduction; what matters should be that the artistic intent remains unaltered.