Still, the best option is the Unix philosophy: "Write programs to handle text streams, because that is a universal interface." - Peter H. Salus in A Quarter-Century of Unix (1994)
It really isn't the universal interface. At some point, most of us need to transfer more than just "text", even if we end up representing the data as text. For example, if I need to describe the name, breed, and sex of a set of dogs, I can transfer that as JSON:
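A minimal sketch, with hypothetical names and breeds invented purely for illustration:

```json
[
  {"name": "Rex", "breed": "Border Collie", "sex": "M"},
  {"name": "Bella", "breed": "Beagle", "sex": "F"}
]
```

or as CSV:

```csv
name,breed,sex
Rex,Border Collie,M
Bella,Beagle,F
```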
The list goes on. The concept of "text" fails to capture both the higher-level format (JSON, CSV) and the lower-level details: what the keys are in JSON, which values are valid for enumerations such as "sex", what units numeric types use, how null is represented (especially in CSV), and so on.
For JSON HTTP APIs, Swagger makes a not-that-bad (IMO) attempt at describing the structure of the payload.
The Unix philosophy, while it works well in specific cases, does not lend itself well to robust solutions. Parsing text streams with tools like sed/grep means using the wrong tool for the job: they cannot understand the corner cases of JSON/CSV/etc., which leads to brittle solutions. (For example, a sed/awk script over the above CSV might work until a later iteration produces a field containing an embedded newline, and the awk script falls down because it isn't a real CSV parser.)
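The embedded-newline failure mode is easy to demonstrate. A short sketch (hypothetical data) contrasting naive line splitting, which is effectively what an awk/sed pipeline does, with Python's standard-library csv parser:

```python
import csv
import io

# Hypothetical CSV record whose quoted "breed" field contains an
# embedded newline -- legal CSV, but fatal to line-oriented tools.
data = 'name,breed,sex\n"Rex","Border\nCollie","M"\n'

# Naive line-based parsing, as an awk/sed pipeline would do:
naive_rows = [line.split(",") for line in data.strip().split("\n")]
print(len(naive_rows))  # 3 "rows" -- the one record was split across two lines

# A real CSV parser respects the quoting and recovers the record intact:
rows = list(csv.reader(io.StringIO(data)))
print(len(rows))   # 2 rows: the header plus one record
print(rows[1])     # ['Rex', 'Border\nCollie', 'M']
```

The naive version silently produces garbage rows; the real parser returns the record exactly as written, newline and all.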
That quote is actually by Doug McIlroy, who also gave this longer phrasing of it: "Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input."