While I agree with the author's sentiment, I think it is a dangerous practice to assert that "Nothing is any of 100,000 characters and anything between 0 and 2 billion in length." By all means, let's impose more rigorous structure on data items that require it, like HTTP headers, but why impose artificial restrictions on things that don't need it?
No single /thing/ might be any of 100,000 characters and anywhere between 0 and 2 billion in length, but a group of /thing/s similar enough to be handled identically may well span most of those 100,000 characters and have no intrinsic limit on length. If I have learned anything in my quarter century of developing software, it's that the moment I impose an artificial restriction on my data, I will find an item that violates it and now requires special-case handling to do its job.
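A minimal sketch of what I mean, with invented names and limits (nothing here is from a real system): an arbitrary cap on a display-name field, and the special case the first nonconforming real value forces into the code.

    # Hypothetical validator: the 255-character cap and ASCII-only rule are
    # arbitrary choices, not requirements of the data itself.
    MAX_LEN = 255

    def validate_display_name(name: str) -> str:
        if len(name) > MAX_LEN:
            raise ValueError(f"name longer than {MAX_LEN} characters")
        if not name.isascii():
            raise ValueError("name contains non-ASCII characters")
        return name

    # The first real-world item that breaks the rule forces a special case
    # (the value below is made up purely for illustration):
    LEGACY_EXCEPTIONS = {"Łukasz Müller-Świątkowska"}

    def validate_with_exceptions(name: str) -> str:
        if name in LEGACY_EXCEPTIONS:   # special-case handling creeps in
            return name
        return validate_display_name(name)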
There's also a really obvious example of a thing that might be any of 100,000 characters and between zero and 2 billion characters in length: the contents of a plain text file.
Right, but is that a genuine use case? If you're writing an editor, you probably want it to have a stronger notion of the data representations people might want to edit. If you're just considering "user-supplied free text", that's probably constrained away from certain characters anyway.