Indeed, writing a JPEG decoder is not really difficult, yet still challenging enough to be interesting, and something I think everyone should try at some point. (I've written one before too.) The official standard (ITU-T T.81) is not hard to follow either, and contains a lot of useful flowcharts of the decoding and encoding processes.
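For anyone who wants a taste before opening the spec, here's a toy sketch (Python; the function name is my own, not from any library) of the dequantize-and-inverse-DCT step for a single 8x8 block. It's the naive O(n^4) double sum straight from T.81's formulas, not a fast IDCT:

    import math

    def idct_block(coeffs, qtable):
        # coeffs: 64 quantized DCT coefficients for one block, assumed
        # already de-zigzagged into row-major order.
        # qtable: the matching 64-entry quantization table from DQT.
        deq = [c * q for c, q in zip(coeffs, qtable)]
        pixels = []
        for y in range(8):
            for x in range(8):
                s = 0.0
                for v in range(8):
                    for u in range(8):
                        cu = 1 / math.sqrt(2) if u == 0 else 1.0
                        cv = 1 / math.sqrt(2) if v == 0 else 1.0
                        s += (cu * cv * deq[8 * v + u]
                              * math.cos((2 * x + 1) * u * math.pi / 16)
                              * math.cos((2 * y + 1) * v * math.pi / 16))
                # 1/4 scale factor, then undo the level shift to 0..255
                pixels.append(max(0, min(255, round(s / 4 + 128))))
        return pixels

    # A block with only a DC coefficient decodes to a flat 8x8 patch:
    assert idct_block([8] + [0] * 63, [16] + [1] * 63) == [144] * 64

The rest of a baseline decoder is mostly plumbing around this: parse the markers, Huffman-decode the coefficients, run blocks through this step, then upsample and convert YCbCr to RGB.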
Anyone think most JPEG implementations will ever move to arithmetic coding now that the patents are up, or is Huffman going to stick around in JPEG forever?
JPEG is an archival format at this point. There are a bunch of newer codecs moving forward that support arithmetic coding. FLIF (using MANIAC) and BPG (using HEVC) are the big ones.
That isn't a good justification for calling JPEG "archival". HEVC outperforms H.264 just as BPG outperforms JPEG, but like HEVC, support for BPG is still extremely limited, and adoption is ultimately what determines whether something can be written off as "archival".
Sure, there are JS polyfills, but those aren't going to work well on mobile devices, and until content-creation tools like Photoshop and desktop operating systems like Windows and macOS support exporting and reading BPG, it's unlikely to see much adoption.
Archival as in it has to support decades of old images. It's not a format that's being heavily modified or is going to see any hard transitions in coding/decoding.
FLIF and BPG are formats under development; they're where you look toward the future, not the past.
JPEG is the main format in which images are produced and distributed these days. Even cameras that produce "RAW" natively will usually have their output converted to JPEG for distribution.
No one disagrees with you. But for that very reason it's locked into supporting those old files, i.e. archival.
JPEG isn't going to morph into some brand new format that supports arithmetic coding. New formats will adopt those, specifically because they don't have to support decades and petabytes of files.
Not to mention, if you've ever actually written a JPEG decoder/encoder, you know the format is not conducive to entropy coding outside of its Huffman/quantization design. PNG can add those things since it's actually an extensible format, but even then... why? It's not like the libpng everyone has installed will magically support them.
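To make that concrete, here's a rough marker/segment walker (Python sketch, happy path only, no error handling) showing how the Huffman (DHT) and quantization (DQT) tables are wired into the stream structure itself. T.81 does reserve a DAC marker for its arithmetic-coding option, but you can see why anything else would need a new marker and a new spec:

    import struct, sys

    NAMES = {0xC0: "SOF0 (baseline DCT)", 0xC2: "SOF2 (progressive)",
             0xC4: "DHT (Huffman tables)", 0xCC: "DAC (arith. conditioning)",
             0xD8: "SOI", 0xD9: "EOI", 0xDA: "SOS (start of scan)",
             0xDB: "DQT (quant tables)", 0xE0: "APP0 (JFIF)"}

    def walk(path):
        data = open(path, "rb").read()
        i = 2                               # skip the SOI marker
        while i < len(data) - 1:
            while data[i + 1] == 0xFF:      # skip optional fill bytes
                i += 1
            marker = data[i + 1]
            print(hex(i), NAMES.get(marker, "0x%02X" % marker))
            if marker == 0xD9:              # EOI: no length field, done
                break
            (length,) = struct.unpack(">H", data[i + 2:i + 4])
            i += 2 + length
            if marker == 0xDA:
                # Entropy-coded data follows with no length prefix; skip
                # until an 0xFF that isn't byte stuffing (0x00) or a
                # restart marker (0xD0-0xD7).
                while not (data[i] == 0xFF and data[i + 1] != 0x00
                           and not 0xD0 <= data[i + 1] <= 0xD7):
                    i += 1

    walk(sys.argv[1])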
JPEG 2000 uses arithmetic coding. But you'd have to rewrite the standard and re-evaluate it to get JPEG to use arithmetic coding (e.g., does the way JPEG coefficients are read out still make sense with arithmetic coding?). There are also other algorithms for universal source coding -- there's nothing to say arithmetic coding is a good choice, especially given it's relatively complex compared to, say, Asymmetric Numeral Systems (Duda; arXiv:1311.2540 [cs.IT], though I don't remember where it was published) or (Recursive) Interleaved Entropy Coding (Klimesh and Kiely; NASA/JPL IPN Progress Report 42-146).
Arithmetic coding is just the one everyone knows since it's taught in information theory 101 (e.g. Cover and Thomas).
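The core idea really is small once you set aside the finite-precision renormalization, which is where all the engineering (and, historically, the patents) lived. A toy sketch with exact fractions and a fixed symbol model, nothing like JPEG's actual QM-coder, just the textbook interval-narrowing idea:

    from fractions import Fraction

    def cum_intervals(model):
        # Map each symbol to its slice [low, high) of the unit interval.
        lo, out = Fraction(0), {}
        for sym, p in model.items():
            out[sym] = (lo, lo + p)
            lo += p
        return out

    def encode(msg, model):
        iv = cum_intervals(model)
        lo, hi = Fraction(0), Fraction(1)
        for sym in msg:               # narrow the interval per symbol
            a, b = iv[sym]
            lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
        return (lo + hi) / 2          # any rational in [lo, hi) works

    def decode(x, n, model):
        iv = cum_intervals(model)
        out = []
        for _ in range(n):            # n symbols; real coders encode a length
            for sym, (a, b) in iv.items():
                if a <= x < b:
                    out.append(sym)
                    x = (x - a) / (b - a)   # rescale and repeat
                    break
        return "".join(out)

    model = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
    assert decode(encode("abcab", model), 5, model) == "abcab"

ANS gets you essentially the same compression with a much simpler, table-driven inner loop, which is a big part of Duda's pitch.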
My system libjpeg on Fedora has it, but most browsers ship without it. Given that from a compatibility point of view it's almost as bad as switching to a new format entirely, I doubt it will be embraced.
https://github.com/aguaviva/micro-jpeg-visualizer