
> (From the POV of static types, dynamically typed languages are just untyped languages, that is, languages that have exactly one type)

I believe the mistake you are making is assuming that TLA+ has some kind of runtime where things are "executed" (static vs. dynamic in the context of programming languages refers to, roughly, compile-time vs. execution-time). Model checking in TLA+ is actually analogous to "running a compiler", not running a REPL or whatever. It's definitely NOT a runtime.

TLA+ has no "dynamic" aspect at all, it's all "static" from the POV of language theory. And since it DOES provide a way (read: syntax) to create and check types (as the GP shows) and those types are checked statically (again, remember: there is no execution happening here), TLA+ is formally (and practically) equivalent to a language that has a different syntactic way to specify types, i.e. one that programmers are more familiar with.
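To make this concrete, here's a minimal sketch of my own (not the GP's example): the conventional TLA+ idiom is a `TypeOK` invariant, an ordinary predicate that TLC verifies over every reachable state.

```
---- MODULE Counter ----
EXTENDS Naturals
VARIABLE x

\* The conventional "type invariant": an ordinary predicate
\* that TLC checks holds in every reachable state.
TypeOK == x \in Nat

Init == x = 0
Next == x' = x + 1
====
```

TLC rejecting a specification that violates `TypeOK` plays the same role as a compiler rejecting an ill-typed program: both happen before anything is "run".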

What TLA+ is missing is any notion of executing code. That's what makes it a specification language and not an implementation language. Most programming languages are implementation languages, with a bit of "specification"—usually types—sprinkled on top. TLA+ is specification-only, no implementation stuff is written down.

So to recap, from a language-theoretic POV, TLA+ absolutely supports "static typing" in the way that it is usually understood and used by programmers and language theoreticians, but with a syntax that is unfamiliar (because it is a specification language).

For completeness, you can specify modern type inference algorithms like MLstruct/MLsub [0] in TLA+ and the model checker will happily apply them to your TLA+ specification (again, statically—at compile-time, which is really "model checking" time).

[0] https://lptk.github.io/files/[v8.0]%20mlstruct.pdf




I think you and Ron are confusing the relevant phase distinction the rest of us have in mind when we talk about type checking in this context.

What we mean is that quint, the language, is expressed on top of a many-sorted logic which allows our tooling to find and diagnose typing errors prior to *running* any type checker. The TLA Toolbox doesn't support this, and TLA+, like TLA itself, is untyped. TLC can (and will) raise errors while in the process of checking your model due to the kinds of problems we usually call "typing errors". Quint and apalache shouldn't do this.
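For illustration, here's a rough sketch of the kind of thing meant (module and names are mine, and the exact syntax is an approximation of Quint's): an error that `quint typecheck` can report before any model checking happens.

```
module counter {
  var n: int

  action init = n' = 0

  // Flagged by `quint typecheck` before any model checking:
  // an int variable cannot be assigned an int-plus-string sum.
  action step = n' = n + "1"
}
```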

I hope this helps clarify a bit.

We understand that, viewed purely theoretically, TLA+ has no dynamic aspect at all, etc. However, our team is very focused on the fusion of theory and practice, and we think it's important to ground our understanding and our discourse in the way the specification languages are actually used by real people given the existing tools. While it may be true in theory that

> What TLA+ is missing is any notion of executing code.

The file `TLC.tla` in the `StandardModules` includes these lines:

```
Print(out, val) == val
PrintT(out) == TRUE

(* This expression equals TRUE, but evaluating it causes TLC to print *)
(* the value of out.                                                  *)
```
https://github.com/tlaplus/tlaplus/blob/master/tlatools/org....

In any case, our whole team thinks TLA is great, and we're happy people like you and Ron find it so useful. We also think it is a very insightful tool for guiding understanding :)


> which allows our tooling to find and diagnose typing errors prior to running any model checker.

Oh, I understand what you meant (and I think Quint is great!), but I'm trying to communicate to others how ineffective programming terminology is when it comes to things that aren't programming. (I remember that many years ago, when I first learnt TLA+, I asked on the mailing list whether TLA+ evaluates expressions eagerly or lazily, because I was confused by the meaning of some expressions, and Leslie Lamport answered that the question is meaningless because there is no program running and so no evaluation at all.)

Quint's tooling can find typing errors by running a tool called a type checker, while in TLA+ the tooling can find "typing" errors by running a tool called a model checker. Neither of these tools actually runs the program expressed in the text (although a model checker usually does evaluate some expressions), and so they're both "static" checks. This is important to point out because some people try to think of a model checker as something that runs a program many times, and that's why I gave an example showing how a model checker can check something in milliseconds without possibly having run "the program" even once, let alone all of its possible executions.

But yes, the runtime efficiency of a type checker, i.e. a checker that performs its reasoning with simple deductive rules, is usually better than that of a model checker, which reasons in the semantic model. So even though in neither case do you run "the program" to find type errors, the Quint tooling will report typing errors much more quickly (in wall-clock time) than the TLA+ tooling.


I'd like to add something about this:

> it's important to ground our understanding and our discourse in the way the specification languages are actually used by real people given the existing tools

I think it's also important to distinguish between "the use of model checkers" (or other automated verification tools) and "the use of a specification language." Most specification languages used by real people don't have a model checker or any other fully automated verification tool; rather, they offer proof assistants. A fully automated verification tool like a model checker is very attractive; indeed, it is probably the thing most people find most attractive about TLA+ when they first see it, because its value is large and easy to understand even if you have no interest in advanced formal specification, or don't know what that is. Consequently, many of those using TLA+ use it not to specify well but as an input language to a model checker, because that's what they want. They may then ask: given that I'm not really interested in powerful specification but only in the model checker, is there a way to use one with a language that's more familiar to us, even if we have to give up on specification power (which is not really what we want anyway)? And the answer to that is, happily, yes!

So that is definitely one way that real people who really want model checking use a specification language like TLA+ in the real world, but not only is that not the only way (again, most specification languages don't offer a model checker at all) but we're clearly talking about people using a specification language not for powerful specification but just as a means to get to the model checker.

Other people in the real world use a specification language for its power of specification and also enjoy the available tools, but the tools are not the goal. There are probably fewer people in this group than in the former because learning how to specify well is a whole other skill, and the benefits are not something you see immediately because it's not an answer at the push of a button. Nevertheless, these people specify, in the real world and with existing tools, to achieve very concrete goals, such as the better design that is helped by a better specification.

What do those few who actually want to use a specification language for its primary purpose want? Same thing as anyone who wants to specify anything in the real world. For example, if you're a company in search of a conference table, you may want to specify its size, its colour, and its stability and nothing else. If you're a designer for a furniture company, you may specify a table's size, precise shape, and strength requirement, but not its colour or specific materials.

The point is that powerful specification means this: the ability to describe any property that is important to you and not describe any property that is not important to you. That is what TLA+ is. If you want, you can describe an algorithm in a way that is ready to be executed. Or you can describe what the result should be and what the computational complexity should be and nothing more. Or you can describe the resource constraints and not the timing constraints. You can describe anything and you can leave anything out. This is what people in the real world who actually want to use a specification language to specify things -- not just to get at the model checker -- want.
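As a small illustration (my own sketch, not from the thread): in TLA+ you can state only the property you care about -- say, that a result is ordered -- while leaving the algorithm, the representation, and the running time entirely unspecified.

```
---- MODULE SortSpec ----
EXTENDS Naturals, Sequences

\* "The result is ordered" -- and nothing else. No algorithm,
\* no complexity bound, no data layout is pinned down here.
IsSorted(s) == \A i \in 1..(Len(s) - 1) : s[i] <= s[i + 1]
====
```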

I agree everyone will be served if both groups got what they want, and I also agree that the first group is larger, probably much larger, than the second. It's perfectly fair to claim that because the first group is larger then helping them may have a greater practical value than helping the second group. But I think it's not fair to say that because there are people who want a model checker but not so much to specify, then being practical means addressing only those who use specification languages accidentally. A few people really want to specify, and they want to do it out of concerns that are no less practical than those who just want to run a model checker.


I mostly agree with this, with one caveat: There is a group of people that want to specify executable things, mainly for the sake of specifying it. They might use a programming language for that, but doing so with a specification language instead can bring some benefits, including model checking in the case of Quint and TLA+.

I don't think Quint users are only focused on the model checker, as you imply. It is very useful to specify things. However, in the world of software, it turns out that people want to specify things that actually execute, and don't need a way to express things that don't.


> but doing so with a specification language instead can bring some benefits, including model checking in the case of Quint and TLA+.

I was referring to benefits that only a rich specification language like TLA+ (or Coq or Isabelle) could bring, although TLA+ is more naturally expressive than the others. Model checking is nice and very useful, but just the rich formal specification is at least as useful in itself. Like most people, I was drawn to TLA+ for the model checking, but after spending a lot of time with it, I realised that model checking was its least interesting benefit even though it's its best marketing. It's really cute and useful that there are model checkers for TLA+, but I'd used SPIN and Pathfinder before, and they were cool, too. What set TLA+ apart for me was the way it let me specify and improve my designs, not the fact that it happened to have a model checker (although without the model checker I probably would have never given it a chance).

> turns out that people want to specify things that actually execute, and won't need a way to express things that don't.

Absolutely. And others want to specify things that cannot possibly execute, because if they could they would make for a lousy specification from a very practical perspective (it's of the utmost practical importance to be able to say: this distributed protocol requires O(lg n) messages, and yet it's not something you can easily model-check), and you can't have both. A model-checked specification is often not a good enough specification, and a good specification is not something you can model-check.

So in the practical world of software it turns out there are people who want to model check, and there are people who want to write good specifications; both groups are positively tiny, but the former is larger. Neither is fully served by something made for the other. Something that's great at being fully model-checkable cannot be really good for specification, and vice versa. I'm all for more languages that are more fully checkable, but it's important to acknowledge that such languages are less practical for rich specification. Both specification and model-checking are practical concerns, but there's always a tradeoff.


> TLA+ is missing is any notion of executing code

Which is why you had to talk about TLC, not TLA+. And yeah, while the presence of printf-style debugging for specifications being model checked by TLC is technically executable-ish, I think in this case, it kind of proves the point that TLA+ itself isn't.

Regardless, I like the direction Quint is taking though I would quibble with this language in your reply:

> our tooling to find and diagnose typing errors [runs] prior to [...] any type checker

If you are diagnosing typing errors, then definitionally, you are running a type checker. It's great that it's fast and interactive, runs prior to the model checker, is easy for programmers to read/write, et cetera. But…it's still a type checker. :-)

Anyway, keep up the great work! I've been on your newsletter for a long time and love the work you guys are doing.


I think they meant "model checker" instead of "type checker" in that sentence. Otherwise, of course, we need to run the type checker to get the type diagnosis.


That's what I meant! Thanks for the correction :)


CORRECTION:

> find and diagnose typing errors prior to running any type checker

Should read

> find and diagnose typing errors prior to running any model checker



