I probably understand these concepts without knowing their official names. But man, sometimes I feel really dumb when I see phrases like "Queue Invariant" or "Amortised Complexity". It's nice that this article explains them simply.
At least in our field we didn't have to know these names to do well, and we don't need them to do our job. With my commerce degree, on the other hand, if I got the concept but didn't get the name, it was worth nothing to the examiner.
I used to scoff at the use of all these official names (to borrow your phrase), but over time I've found how useful they can be for precise communication.
Before I just thought they were pedantic or snobbish.
> I probably understand these concepts without knowing their official names.
It's been my observation that reading articles/posts written by Haskell programmers is a great way to spend a fair amount of time being confused over things that you already understand, usually for this reason. In that sense, at the least, they're usually educational.
It's my suspicion that the most common path to becoming a Haskell programmer is to read too many of these sorts of things, to start using the terminology in your own writing and speech, then to find that no-one but Haskell programmers understand you anymore, leaving you no choice.
My fairly standard computer science education covered amortized complexity. You can't analyze the time complexity of a sensible vector type (i.e. "one step above raw array") without it.
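To make that concrete, here's a minimal sketch of the doubling strategy behind such a vector type (a toy class, not any real library's implementation): a single append can cost O(n) when the backing array is full, but across n appends the total copying work stays under 2n, so each append is amortized O(1).

```python
# Toy dynamic array: capacity doubles when full, so n appends perform
# fewer than 2n element copies in total, i.e. amortized O(1) per append.
class DynArray:
    def __init__(self):
        self._cap = 1
        self._size = 0
        self._data = [None] * self._cap
        self.copies = 0  # count element copies to observe the amortized bound

    def append(self, x):
        if self._size == self._cap:
            self._cap *= 2
            new_data = [None] * self._cap
            for i in range(self._size):  # O(n) worst case, but increasingly rare
                new_data[i] = self._data[i]
                self.copies += 1
            self._data = new_data
        self._data[self._size] = x
        self._size += 1

    def __len__(self):
        return self._size


a = DynArray()
for i in range(1000):
    a.append(i)
print(a.copies / len(a))  # average copies per append stays below 2
```

The resizes at sizes 1, 2, 4, ..., 512 copy 1023 elements in total for 1000 appends, so the per-append average is about 1.02, well under the amortized bound of 2.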
Invariants are admittedly undersold in the curricula I've seen, but the idea comes up in many places. Most people ought to have at least encountered the idea in databases, where foreign key references provide basic invariants about referencing existing rows no matter what SQL you fling at them, for instance.
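As a sketch of that database case (table and column names made up for illustration), a foreign key is exactly an invariant the engine enforces for you, rejecting any statement that would break it:

```python
import sqlite3

# A foreign key as a database-enforced invariant: every posts.author_id
# must reference an existing authors.id, no matter what SQL is run.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in
conn.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, "
    "author_id INTEGER REFERENCES authors(id))"
)
conn.execute("INSERT INTO authors (id) VALUES (1)")
conn.execute("INSERT INTO posts (id, author_id) VALUES (10, 1)")  # fine

try:
    # Violates the invariant: no author with id 99 exists.
    conn.execute("INSERT INTO posts (id, author_id) VALUES (11, 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```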
As jerf said, amortized complexity is discussed in a CS2-style course (in the second semester) when implementing array lists, but invariants are also discussed in the same course when implementing binary heaps or balanced trees (typically AVL or red-black).
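The binary heap is a nice concrete instance: the heap invariant (each parent no greater than its children, in a min-heap) is exactly what every operation must preserve. A quick check of that invariant against Python's array-based `heapq` layout:

```python
import heapq

def is_min_heap(a):
    # Heap invariant for the implicit array layout: a[i] <= its children
    # at indices 2*i+1 and 2*i+2, wherever those children exist.
    return all(a[i] <= a[c]
               for i in range(len(a))
               for c in (2 * i + 1, 2 * i + 2)
               if c < len(a))

h = []
for x in [5, 3, 8, 1, 9, 2]:
    heapq.heappush(h, x)
    assert is_min_heap(h)  # push preserves the invariant

while h:
    heapq.heappop(h)
    assert is_min_heap(h)  # pop preserves it too
print("invariant held across all operations")
```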
It's also plausible that loop or class invariants are discussed in a CS1 course, to ensure that your objects don't end up in invalid states after a loop or method runs, though the exact word "invariant" may not be used there.
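A minimal sketch of what a loop invariant buys you (the function and its assertions are made up for illustration): a property that holds before every iteration and, combined with the loop's exit condition, yields the postcondition.

```python
# Loop invariant sketch: before each iteration, total == sum(xs[:i]).
def running_sum(xs):
    total = 0
    i = 0
    while i < len(xs):
        assert total == sum(xs[:i])  # the invariant, checked rather than just claimed
        total += xs[i]
        i += 1
    # Invariant plus exit condition (i == len(xs)) gives the postcondition.
    assert total == sum(xs)
    return total

print(running_sum([3, 1, 4, 1, 5]))  # 14
```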
Haskellers borrow terminology from abstract math and CS because the concepts are just plain useful (for code reuse, performance, and safety) and it’s kinda pointless to rename them.
Of course we’re going to talk about these ideas, because Haskell makes it easy to apply them in the real world. And if you have the choice, why would you go back to a language where you don’t have that advantage?
Perhaps a better way to think about it is that often when reading these sorts of articles, you realize that there's a name, and likely a corresponding rigorous definition and theory, for some concept which you had already understood intuitively. Once you've made that realization, you start to see the pattern yourself in other areas that you hadn't before, and to understand it more deeply.