Standard         Bitrate vs previous
MPEG-1           (baseline)
MPEG-2           25% less
MPEG-4 Visual    25% less
MPEG-4 AVC       30% less
MPEG-H HEVC      60% less
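Since each figure in the table is relative to the previous generation, the reductions compound. A quick sketch of the arithmetic (assuming MPEG-1 as a 1.0 baseline and taking the table's per-generation numbers at face value):

```python
# Cumulative bitrate implied by the table, with MPEG-1 normalized to 1.0.
# Each generation's "X% less" is relative to its immediate predecessor.
reductions = {
    "MPEG-2": 0.25,
    "MPEG-4 Visual": 0.25,
    "MPEG-4 AVC": 0.30,
    "MPEG-H HEVC": 0.60,
}

bitrate = 1.0
for codec, reduction in reductions.items():
    bitrate *= 1.0 - reduction
    print(f"{codec}: {bitrate:.2%} of the MPEG-1 bitrate")

# HEVC ends up at 0.75 * 0.75 * 0.70 * 0.40 = 15.75% of the MPEG-1 bitrate.
```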
The table at the end is hiding one factor: the computational cost of each new codec. Have we really found better ways to compress video, or is it just Moore's law in action?
More advanced video codecs have higher visual quality at lower bitrates, but are more computationally expensive. All of these video codecs use the same underlying ideas of 2d transforms, intra- and inter-frame compression, motion compensation, entropy coding etc., but using more complex techniques to preserve video quality.
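To make the inter-frame idea concrete, here is a toy sketch of motion compensation on a 1-D "frame" with a single block (real codecs work on 2-D blocks and add a transform, quantization, and entropy coding on top; the data and the SAD search range here are made up for illustration):

```python
# Toy motion compensation: instead of coding a block's pixels directly,
# find where it best matches the previous frame (via sum of absolute
# differences) and code only the motion vector plus the small residual.

def best_motion_vector(reference, block, start, search_range=2):
    """Find the offset into `reference` that best predicts `block` (SAD)."""
    best_mv, best_sad = 0, float("inf")
    for mv in range(-search_range, search_range + 1):
        pos = start + mv
        if pos < 0 or pos + len(block) > len(reference):
            continue
        candidate = reference[pos:pos + len(block)]
        sad = sum(abs(r - b) for r, b in zip(candidate, block))
        if sad < best_sad:
            best_mv, best_sad = mv, sad
    return best_mv

# Previous (reference) frame; the current block is the same content
# shifted by one sample, as if an object moved between frames.
reference = [10, 12, 50, 52, 54, 12, 10]
current_block = [50, 52, 54]  # sits at index 3 in the current frame
mv = best_motion_vector(reference, current_block, start=3)
predicted = reference[3 + mv: 3 + mv + len(current_block)]
residual = [c - p for c, p in zip(current_block, predicted)]
print(mv, residual)  # → -1 [0, 0, 0]: a vector and a zero residual to code
```

The newer standards keep this same structure but search more block sizes, more reference frames, and at sub-pixel precision, which is a big part of where the extra computation goes.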
I wouldn't say it's as simple as "Moore's law in action".
Yes, the computation per pixel has increased roughly 2x per codec generation, so codecs are taking advantage of (a fraction of) the gains provided by Moore's law.
I remember that time when I tried to play a 4k video (x264 or x265, I am not certain) on my Mac Mini. I was pleasantly surprised that it actually ran smoothly, but the poor thing nearly melted.
Depends on what you mean by "just Moore's law." Certainly each standard was developed with a higher target decode complexity: H.264 and H.265 each targeted roughly 2x that of the previous standard, JVET is currently targeting 16x, and I think AV1 is aiming for less than 2x the hardware area.
So yeah, there were obvious gains left on the table for H.264 for complexity reasons that H.265 picked up; likewise for H.265/VP9 that AV1 picked up.
That said, there has been a lot of refinement that didn't necessarily increase complexity, and there have been various tools that saw reinvigorated research once complexity targets were raised.
I think it's because the major cost is bandwidth, so everyone is incentivized to optimize for size first, perhaps at the cost of being more CPU/memory intensive during encoding and decoding.
Of course! I was just wondering about the real technical improvement. Talking only about bitrate vs. video quality forgets about computational complexity, that's all.