It's not a huge surprise that gzip, as a general-purpose compression algorithm, didn't compress this down any further. I do wonder about a compressor trained specifically on these characters, though, and on the patterns that tend to emerge from the weird compiler. Maybe the chunks at a certain scale would be predictable and thus compressible.
Of course at that point you're probably more interested in a common binary format, and should start thinking about wasm instead.
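For what it's worth, the alphabet part of that idea is easy to put a rough number on: a handful of symbols caps the per-character entropy, and gzip's Huffman stage already gets close to that order-0 bound, so any extra wins would have to come from the larger-scale patterns. A quick sketch in TypeScript (the sample string is just a stand-in for real transpiler output, which you'd read from a file):

    // Order-0 entropy of the character distribution in a JSFuck-style string.
    function entropyBitsPerChar(text: string): number {
      const counts = new Map<string, number>();
      for (const ch of text) counts.set(ch, (counts.get(ch) ?? 0) + 1);
      let bits = 0;
      for (const n of counts.values()) {
        const p = n / text.length;
        bits -= p * Math.log2(p);
      }
      return bits;
    }

    const weird = "[+!+[]]+[!+[]+!+[]]"; // stand-in for real transpiler output
    const bits = entropyBitsPerChar(weird);
    console.log("alphabet size:", new Set(weird).size);
    console.log("entropy:", bits.toFixed(2), "bits/char");
    console.log("order-0 bound:", Math.ceil((weird.length * bits) / 8), "bytes");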
The information content of the weird program is the same as that of the original cleartext, so I wouldn't expect zipping the transpiled program to do any better than zipping the original.
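Easy enough to check with Node's built-in zlib; the file names below are placeholders for whatever original and transpiled sources you have on hand:

    import { readFileSync } from "node:fs";
    import { gzipSync } from "node:zlib";

    // Compare how well the original source and the transpiled output gzip.
    for (const path of ["original.js", "weird.js"]) {
      const raw = readFileSync(path);
      const zipped = gzipSync(raw, { level: 9 });
      console.log(`${path}: ${raw.length} -> ${zipped.length} bytes gzipped`);
    }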
Another comment pointed out that there is more information, or entropy, in the weird version if you don't happen to be a JS interpreter.
The compression algorithm knows that 165427-165427 is really "165427, then the same digits again", but it does not know that the expression evaluates to 0 and is therefore identical to everything else that resolves to 0.
There must be a lot of similar cases in this particular output that depend on knowing the rules of JS.
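That's the kind of knowledge a JS-aware preprocessing pass could supply before the generic compressor ever sees the text. A toy sketch that folds only the self-subtraction pattern (the sample input and the regex are illustrative; a real tool would parse and evaluate expressions):

    import { gzipSync } from "node:zlib";

    // Fold expressions like 165427-165427 down to 0 when both operands match.
    function foldSelfSubtraction(src: string): string {
      return src.replace(/\b(\d+)-\1\b/g, "0");
    }

    const sample = "x=(165427-165427)+(98765-98765);"; // made-up input
    const folded = foldSelfSubtraction(sample);         // "x=(0)+(0);"
    console.log("gzip before:", gzipSync(Buffer.from(sample)).length);
    console.log("gzip after: ", gzipSync(Buffer.from(folded)).length);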
I guess it's tempting to think next about adding AI to a compressor so that it could know the actual rules of JS and refactor the program, but I just think of the tragedy that is JBIG2, which does something like OCR as part of the compression, except it gets it wrong and the original data is lost forever without a trace. It's built right into some scanners, and the substitution happens before the compressed output even leaves the device. The user never sees anything else; there is no uncompressed reference copy anywhere unless the user knew about the problem and overrode the default settings.