So you're adding the overhead of compression, decompression, and parsing and serializing JSON at every step. All of it likely running in a language where computing the length of a string is O(n).
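A rough sketch of what that stack of steps looks like for a single binary payload, assuming Python and a gzip-compressed JSON body (the file name is made up):

```python
import base64, gzip, json

# Hypothetical binary payload, e.g. an audio clip read from disk.
audio = open("clip.mp3", "rb").read()

# Sender: raw bytes -> base64 text (~33% bigger) -> JSON string -> gzip.
body = gzip.compress(
    json.dumps({"audio": base64.b64encode(audio).decode("ascii")}).encode("utf-8")
)

# Receiver: gunzip -> parse JSON -> base64-decode back into the original bytes.
decoded = base64.b64decode(json.loads(gzip.decompress(body))["audio"])
assert decoded == audio
```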
And people are surprised software keeps being slow despite the increase in compute resources. This is insane.
There is an alternate universe where you comment on how insane it is that engineers waste time trying to understand the nuances of every single bespoke byte encoding/decoding technique used between services. JSON is just fine for most tasks; otherwise the world would be in flames right now
We have much more compact, widely available formats out there, with libraries for many languages. Want arbitrary JSON-like data but more compact? msgpack or bson. Want stuff more efficiently packed based on the message structure? Use protobuf.
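For a feel of the difference, a minimal size comparison (assumes the third-party msgpack package is installed; the record is made up):

```python
import json
import msgpack  # external dependency: pip install msgpack

record = {"id": 12345, "device": "sensor-7", "readings": [15, 225, 30], "ok": True}

as_json = json.dumps(record).encode("utf-8")
as_msgpack = msgpack.packb(record)  # same structure, binary-packed

print(len(as_json), len(as_msgpack))  # the msgpack payload comes out smaller
assert msgpack.unpackb(as_msgpack) == record
```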
Yes, there's a little more effort needed from the engineers there. But ya know, if the inputs and outputs of a thing are actually DOCUMENTED and the schemas are available, it's not some massive reverse-engineering feat :P
(Okay, maybe you're stuck doing something in a niche environment where handy protocol/format libraries aren't available to you; MATLAB for microcontrollers or something. But if you're there, you're probably having fun dealing with all the nuances of implementing an efficient and safe recursive Unicode text serialiser/deserialiser for a format like JSON anyway :P)
Also, if you're going the fairly standard route of "web API over HTTP", the protocols already give us plenty of readily available options for much more efficient streaming of binary data.
It's not "wasting time" to teach devs that there are better ways of doing stuff. base64 encoding mp3s into JSON strings strikes me as "junior dev given 2 weeks to quickly implement something without somebody there to review and suggest alternative ways of doing stuff".
> msgpack or bson. Want stuff more efficiently packed based on the message structure? Use protobuf
Everything you listed is an external dependency in JavaScript/Python; base64 is baked in, everyone gets it, everyone's done it.
If you want better interchange, you should be pushing for it to be included alongside base64 at the language level, not trying to get every dev to add extra dependencies on either side of the exchange.
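That dependency gap is easy to see in Python: the base64 + JSON path is entirely stdlib, while something like msgpack has to be installed by every producer and consumer. A small sketch of that trade-off (the payload bytes are made up):

```python
import base64, json  # standard library on both ends of the exchange

try:
    import msgpack   # third-party: every producer and consumer must install it
except ImportError:
    msgpack = None

payload = b"\x00\x01\x02\x03 pretend these are mp3 bytes"

if msgpack is not None:
    wire = msgpack.packb({"clip": payload})  # compact binary framing
else:
    # The zero-dependency fallback everyone reaches for: base64 inside JSON.
    wire = json.dumps({"clip": base64.b64encode(payload).decode("ascii")}).encode("utf-8")
```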
> If you want better interchange, you should be pushing for it to be included alongside base64 at the language level, not trying to get every dev to add extra dependencies on either side of the exchange.
Oh, absolutely. JavaScript Object Notation became a de facto standard purely because JS could parse it natively. Then `json` was adopted as part of the standard Python libraries, within PHP, etc. Once upon a time, even stuff like base64 encoding/decoding required someone to write the code for it. Agreed, it takes pushing to get useful stuff into "batteries included" stdlibs.
We're using JavaScript Object Notation because.... Isn't the name quite telling? :-)
I mean, the world is kind of in flames: if planes were as unreliable as most software, we'd have a hundred falling out of the sky every day. But the only reason it isn't worse (I can't remember who the quote is from) is that the real Moore's law is that talented hardware engineers keep improving hardware more quickly than software gets slower.
There's an alternative world where everyone is performance-oriented and those Rabbit devices don't run for five hours but for five days on modern batteries. Let's be real, the thing is basically a Tamagotchi; it should cost 50 bucks and run on chips from 2010.