So, there's actually no particular reason, and if somebody cares to write one then, yup, a TIFF codec in WUFFS would in fact be safer and faster than your, uh, approach.
Wait, you believe that somehow one of these approaches doesn't rely on competence from programmers? How do you figure?
Have you been imagining that sandboxes are some sort of fairy dust we just stumbled onto one day, supernatural in nature and not, in fact, just software written by people you're hoping are competent and haven't left any holes?
The point was... one is testing parser/OS integrity via a debugging interface against an expectation of an unchanging emulated environment state... there is nothing particularly special about the approach. Even Qubes OS and RancherVM are not perfect in this regard, friend.
Or, put another way: the attack surface of a bare-minimum fixed environment is much easier to auto-audit than the pile-of-daily-permuted-binaries-and-self-delusion approach. i.e. if the guest fails to behave in an expected way, or is modified in any way, the host audit process doesn't have to care why or how it broke; it just keeps the service queue running while the guest is culled.
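The cull-don't-diagnose policy described above can be sketched roughly as follows. This is a hypothetical illustration, not the poster's actual system: `vm-probe` and `vm-reset` are placeholder command names standing in for whatever health-check and snapshot-restore tooling a real host would use, and the image check is stubbed out.

```rust
use std::process::Command;
use std::{thread, time::Duration};

// Pure cull decision: any deviation at all means the guest goes.
// The host never asks *why* the guest broke.
fn should_cull(probe_succeeded: bool, image_unmodified: bool) -> bool {
    !(probe_succeeded && image_unmodified)
}

// Hypothetical host-side audit loop. `vm-probe` and `vm-reset` are
// placeholder commands, not real tools.
#[allow(dead_code)]
fn audit_loop(guest: &str) {
    loop {
        let probe_ok = Command::new("vm-probe")
            .arg(guest)
            .status()
            .map(|s| s.success())
            .unwrap_or(false);
        // A real setup would compare the guest image against a
        // known-good hash; hardcoded here as a placeholder.
        let image_ok = true;
        if should_cull(probe_ok, image_ok) {
            // Cull: roll back to the known-good snapshot. Pending work
            // stays in the host-side queue and gets re-dispatched.
            let _ = Command::new("vm-reset").arg(guest).status();
        }
        thread::sleep(Duration::from_secs(5));
    }
}

fn main() {
    // Only the decision logic is exercised here; the loop itself
    // needs real VM tooling behind the placeholder commands.
    assert!(should_cull(false, true)); // probe failed => cull
    assert!(should_cull(true, false)); // image modified => cull
    assert!(!should_cull(true, true)); // healthy => keep
    println!("cull policy ok");
}
```

The design point is that the decision function takes only observable facts (probe result, image integrity), so the host stays trivially auditable no matter how the guest fails.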
Perhaps I am wrong about exchanging 15% of raw performance for reliability, but things can get complicated with licenses and multiple OS-specific platforms.
You seem to be getting emotional about this subject, presenting secondary and tertiary straw-man arguments. So I'm going to go eat some Cheese Goldfish crackers... and just agree that your beliefs are interesting.
There's nothing special about it, but it doesn't work especially well. This is the strategy that's blown up on Apple twice in recent years and will keep burning them.
If you're Matt Godbolt, the benefits of sandboxing outweigh the cost, because Matt is interested in general-purpose software. But WUFFS isn't for that; as its name says, it's interested in doing one particular task well.
In this deliberately limited domain, WUFFS gets to sidestep Rice's theorem altogether and just prove the software meets the semantic requirements [technically you do the proving, WUFFS just checks your work].
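For flavor, here is a rough analogy in Rust (not actual WUFFS syntax). In Rust, an unguarded `palette[i]` compiles and panics at runtime if `i` is out of bounds; in WUFFS, the equivalent access is a compile-time error unless the surrounding code proves the index is in range. The guard below is the kind of fact WUFFS's checker demands you establish before it will accept the access:

```rust
// Hypothetical palette lookup illustrating WUFFS's bounds discipline.
// Rust checks bounds at runtime; WUFFS refuses to compile an access
// it cannot prove safe, so you supply the proof and it checks your work.
fn lookup(palette: &[u8; 256], i: usize) -> u8 {
    if i < 256 {
        palette[i] // provably in bounds: the guard establishes i < 256
    } else {
        0 // WUFFS forces you to handle (or rule out) this case explicitly
    }
}

fn main() {
    let palette = [7u8; 256];
    assert_eq!(lookup(&palette, 3), 7);
    assert_eq!(lookup(&palette, 999), 0);
    println!("bounds discipline ok");
}
```

Because decoding file formats doesn't need allocation, syscalls, or unbounded generality, every such proof obligation is actually dischargeable, which is how the restricted domain dodges Rice's theorem.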
I hope you enjoyed your goldfish crackers but I urge you to use the right tool for the job.
"the right tool for the job" is sometimes admitting the breadth of underlying dependencies and ambiguous format specifications are unfeasible to fix with your teams time budget.
The design in question currently processes only around 1.8M large image files a day, and does not require additional work or re-implementation to support the dozens of questionable user file formats. i.e. plain old ImageMagick does most of the heavy lifting at the end.
Would I trust such a solution for something like a native client-side web browser etc.? Absolutely not... but for core-bound instances, the resource cost was acceptable across almost a decade of uptime on those systems.
Use-cases are funny like that, as there is no perfect solution... but rather a tradeoff of which features get the system functional and reliable. Part of that is admitting that integrating third-party dependencies is a long-term liability, and that domain-specific languages almost always fade into obscurity.