I'm the author of the post, and just last year I wrote a long post about the difficulty -- both theoretical and practical -- of software correctness[1]. People who understand formal methods know this, and treat the task with humility and pragmatism. Nobody's claiming that we can "just" write "correct" software whenever we like with a few tricks.
Maybe a simple metaphor for what TLA+ can be used for would be to think of a "game" consisting of a "board" and "moves", and then exploring what game states can be reached?
So the question in a specification is deciding what the board looks like, what the legal moves are, and what properties you're interested in. Then model-checking is exploring the game tree to some depth to find out if there are move sequences that will reach interesting states.
You wouldn't write a chess or Go-playing program in TLA+, but most discrete systems have smaller trees than that.
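The metaphor maps directly onto explicit-state model checking: start from the initial "board" positions, apply every legal "move", and check a property in each state you reach. Here's a minimal sketch of that idea in Python (all names are made up for illustration; TLC, the TLA+ model checker, is far more sophisticated, but the core loop is this breadth-first exploration):

```python
from collections import deque

def check(init_states, next_states, invariant, max_depth=50):
    """Breadth-first exploration of the reachable state space.

    init_states: iterable of initial (hashable) states -- the starting "board".
    next_states: function state -> iterable of successor states (the "moves").
    invariant:   predicate that should hold in every reachable state.
    Returns a counterexample trace (list of states) if the invariant can be
    violated within max_depth moves, else None.
    """
    seen = set(init_states)
    frontier = deque((s, [s]) for s in seen)
    while frontier:
        state, trace = frontier.popleft()
        if not invariant(state):
            return trace  # a move sequence reaching a "bad" state
        if len(trace) > max_depth:
            continue
        for succ in next_states(state):
            if succ not in seen:
                seen.add(succ)
                frontier.append((succ, trace + [succ]))
    return None  # invariant holds in every state explored

# Toy "game": two counters; a move bumps either one.
# Claimed invariant: their sum stays below 4 (it doesn't).
init = [(0, 0)]
moves = lambda s: [(s[0] + 1, s[1]), (s[0], s[1] + 1)]
sum_below_4 = lambda s: s[0] + s[1] < 4
trace = check(init, moves, sum_below_4)
# trace == [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
```

A real checker would also handle fairness and temporal properties, but even this toy version shows why the approach works well for discrete systems with modest branching and fails for chess-sized trees: the `seen` set has to hold every distinct reachable state.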
It seems like it would be reasonable to compare this to QuickCheck-style testing or fuzzing, except that it's done entirely in a separate language.
You are talking about model checkers. TLA+ is not a model checker. It is a specification language, like Coq or Isabelle: a mathematical language for describing programs. What you do with those descriptions may include using mechanical tools. The tools for TLA+ include a mechanical proof assistant (as Coq and Isabelle have) and a model checker. But you can do more with TLA+ than run the model checker (although the model checker is often the most cost-effective use).
> Maybe a simple metaphor for what TLA+ can be used for would be to think of a "game" consisting of a "board" and "moves", and then exploring what game states can be reached?
Hmm... A "board" with "moves" sounds an awful lot like the first part of a Turing Machine...