Eh. I much prefer to produce and consume line-delimited JSON (or just raw JSON). It's easy to parse, self-descriptive, and doesn't have any of CSV's ambiguity around delimiters and escape characters.
It's a little harder to load into a spreadsheet, but in my experience it's way easier to parse reliably in any programming language.
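To be concrete, producing it takes almost nothing beyond the standard library. A minimal Python sketch (the field names and file name are invented for the example):

    import json

    rows = [
        {"name": "Ada", "score": 92},
        {"name": "Bob", "score": 85},
    ]

    # One JSON document per line: serialize each row, newline-separate them.
    with open("scores.jsonl", "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")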
JSON (newline-delimited or full file) is significantly larger than CSV. With CSV each field name is mentioned once, in the header row; in JSON, every single line repeats the field names. That adds up fast, and it's a bigger difference than the one between CSV and Parquet.
That's only if the JSON uses objects. JSON arrays map much better to CSV: in that case each line only adds brackets at the front and end. The sketch below shows the difference.
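A quick way to see the gap between the two styles, with invented data:

    import json

    rows = [("Ada", 92), ("Bob", 85)]

    # Object rows repeat the keys on every line; array rows only add brackets.
    as_objects = "\n".join(json.dumps({"name": n, "score": s}) for n, s in rows)
    as_arrays = "\n".join(json.dumps([n, s]) for n, s in rows)

    print(len(as_objects), len(as_arrays))  # object form is noticeably larger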
JSON's ability to do both objects and arrays is useful here: for example, the first line can be an object (or an array of objects) describing the fields. Then there's less confusion between schema lines and data lines than there is with CSV.
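A sketch of reading that layout, assuming a simple (non-standardized) header-line convention where line one lists the field names:

    import json

    lines = [
        '["name", "score"]',  # first line plays the role of a CSV header
        '["Ada", 92]',
        '["Bob", 85]',
    ]

    fields = json.loads(lines[0])
    records = [dict(zip(fields, json.loads(line))) for line in lines[1:]]
    print(records)  # [{'name': 'Ada', 'score': 92}, {'name': 'Bob', 'score': 85}]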
Compression will make most of that size overhead disappear, since the repeated field names are identical on every line.
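Easy to check with the standard library (data invented for the example):

    import gzip
    import json

    # 10,000 rows whose keys repeat verbatim on every line.
    data = "\n".join(json.dumps({"name": "Ada", "score": i}) for i in range(10000))

    raw = data.encode()
    print(len(raw), len(gzip.compress(raw)))  # the key overhead mostly vanishes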
I do see your point - I know it's less efficient, and it's not the best format if you're handling it every day or using it for huge data sets. But for a quick-and-dirty handoff between programs it's lovely. It takes ~5 lines to parse in just about any programming language, and you can do it without pulling in any extra dependencies.
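The whole parse side in Python, for instance (file name assumed from the earlier example):

    import json

    # One record per line; skip blank lines.
    with open("scores.jsonl") as f:
        records = [json.loads(line) for line in f if line.strip()]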
Looking at the downvotes, I can see it's a controversial choice. But I stand by it.