None of this sounds like a problem in a statically-typed system with proper data modelling. For the complete refactor of the external source we would do something very similar to what you'd do in Clojure, i.e. update our decoders for the external data. In fact I'd argue it would be even easier, because the compiler would help check that all the data is accounted for.
I agree with you in the sense that these things, done properly, would be robust. It was meant as a pointer to one specific strength of Clojure in comparison.
In my experience, marshalling/deserialization and bubbling errors up as introspectable, useful messages are more involved in statically typed languages than in Clojure.
Typically, statically typed languages are very good at internal consistency, but they require more ceremony and maintenance when dealing with outside sources. That's a viable tradeoff in some cases but not in others.
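To make the "introspectable errors" point concrete, here's roughly what the Clojure side looks like (a sketch; `decode-age` is a made-up function name for illustration). `ex-info` lets you attach arbitrary data to an exception, so a decoding failure can carry the offending input with it all the way up the stack:

```clojure
;; Sketch: attach the bad input to the exception itself, so whoever
;; catches it can inspect exactly what failed and why.
(defn decode-age [m]
  (let [age (:age m)]
    (if (int? age)
      age
      (throw (ex-info "expected :age to be an integer"
                      {:input m :field :age :value age})))))

;; The caller recovers the structured context with ex-data:
(try
  (decode-age {:age "fifty-five"})
  (catch clojure.lang.ExceptionInfo e
    (ex-data e)))
;; => {:input {:age "fifty-five"}, :field :age, :value "fifty-five"}
```

No custom exception hierarchy or error type needs to be declared up front; the error payload is just a map.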
It all comes down to proper data modelling, whether in a statically-typed or a dynamically-typed codebase. If you were consuming a JSON API that gave you this data:
{"id": 1, "name": "Bob", "age": 55}
But your Clojure app only needed the 'id' and 'name', why would you write a parser function that also grabbed 'age'? E.g. (I don't know specific Clojure libs, so this is for illustration only):
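Something like this, perhaps (a sketch assuming the cheshire library, a common Clojure JSON parser; `parse-user` is a made-up name):

```clojure
(require '[cheshire.core :as json])

(defn parse-user
  "Parse a user JSON string, keeping only the fields this app needs."
  [s]
  (-> (json/parse-string s true)   ; true => keywordize keys
      (select-keys [:id :name])))

(parse-user "{\"id\": 1, \"name\": \"Bob\", \"age\": 55}")
;; => {:id 1, :name "Bob"}
```

The unused `age` field is simply dropped at the boundary; nothing downstream ever sees it.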
Right, this is a good point and worth studying more closely. We do have to distinguish these concepts. But in my experience this isn't done in practice, and by "in practice" I mean libraries, frameworks, idiomatic use, culture and so on, which the post at least partially acknowledges (or hints at).
This is also a reminder that type systems are far from all equal. Their range of expression matters, especially in these kinds of discussions.