It's a neural network, not a traditional compression algorithm. AFAIK it would be difficult to implement efficiently in an ASIC, but if there are any hardware designers who disagree, please chime in.
Traditional codecs also rely on a lot of “magic” tables of constants (see e.g. the AMR codecs used in GSM telephony).
I think this codec could be optimized to run relatively efficiently on the various AI accelerator chips modern phones have, which is “kind of” doing it in hardware.
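For illustration, a minimal sketch of what that could look like on Android with TensorFlow Lite's NNAPI delegate, which routes supported ops to the phone's NPU/DSP. This assumes the codec's decoder network has been converted to a TFLite model; the file name codec_decoder.tflite and the frame-at-a-time I/O shape are hypothetical, not anything the project actually ships:

    import java.io.File
    import java.nio.ByteBuffer
    import org.tensorflow.lite.Interpreter
    import org.tensorflow.lite.nnapi.NnApiDelegate

    fun main() {
        // Hypothetical TFLite export of the codec's decoder network.
        val model = File("codec_decoder.tflite")

        // NNAPI offloads supported ops to the phone's accelerator;
        // unsupported ops fall back to the CPU automatically.
        val nnapi = NnApiDelegate()
        val interpreter = Interpreter(model, Interpreter.Options().addDelegate(nnapi))

        // Decode one frame: quantized codes in, PCM samples out.
        val codes = ByteBuffer.allocateDirect(interpreter.getInputTensor(0).numBytes())
        val pcm = ByteBuffer.allocateDirect(interpreter.getOutputTensor(0).numBytes())
        interpreter.run(codes, pcm)

        interpreter.close()
        nnapi.close()
    }

How efficient this actually is depends on how well the network's ops map onto what NNAPI exposes; any custom or unsupported ops end up back on the CPU.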