Quint: A specification language based on the temporal logic of actions (TLA) (github.com/informalsystems)
110 points by abathologist on Dec 20, 2023 | 35 comments
I am on the dev team for this project and would be happy to answer any questions and/or take note of any critical feedback :)

Here's a bit more detail:

Quint is a modern, executable specification language that is a particularly good fit for distributed systems, such as blockchain protocols, distributed databases, and p2p protocols. Quint combines the robust theoretical basis of the Temporal Logic of Actions (TLA) with state-of-the-art static analysis and development tooling.




> We have covered all the aspects of our "Hello, world!" example. Actually, we could have written a much shorter example, but it would not demonstrate the distinctive features of Quint.

Ouch! When I am trying to wrap my head around a new tool, I want a series of examples starting with the absolute simplest possible example.

If I can see only one new concept demonstrated at a time, each in the simplest context possible, I can quickly develop a clear understanding. By quickly, I mean a solid understanding in minutes.

Anything else just generates questions (and am I even thinking the right questions?) at a faster rate than lightbulbs.


This is great feedback, thanks! Probably the first step tutorial should just be on something very straightforward, like an hour clock or a counter. I'll file an issue to introduce something more focused.

Thanks again :)

edit: https://github.com/informalsystems/quint/issues/1319
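
In the meantime, here's a rough sketch of what a bare-bones counter could look like in Quint (illustrative only; the real tutorial example may differ slightly in syntax):

    module counter {
      // a single state variable
      var n: int

      // the initial state: the counter starts at zero
      action init = all { n' = 0 }

      // one step of the system: increment the counter
      action step = all { n' = n + 1 }

      // a state invariant the simulator or model checker can check
      val nonNegative = n >= 0
    }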


I'm trying to understand what is the difference between this and using TLA.

For example, what is the difference between "robust theoretical basis of the Temporal Logic of Actions (TLA)" and "state-of-the-art static analysis"?

Sorry, I'm not an expert in TLA, but I thought that static analysis was basically what it was used for.


Hey! We just changed the description (yesterday) to avoid this confusion - sorry! By static analysis there we actually mean things like type and effect checking.

With either a TLA+ spec or a Quint spec, you can run a model checker to verify properties or get counterexamples. That's the main similarity. As Quint is based on TLA+, we can actually use the same model checkers (that were originally implemented for TLA+).

The main differences between TLA+ and Quint are the surface syntax and the tooling (beyond the model checker). While TLA+ has an indentation-based hard-to-parse mathematical syntax (that looks quite pretty in LaTeX), Quint has a typed programming language styled syntax and a very simple parser, making it easier to develop tools around it.

As for tooling, first of all, Quint has type checking, which TLA+ doesn't. Our IDE support is also more similar to that of modern programming languages - with features like "Go to definition". With this, we hope (and have seen many reports of) programmers/engineers having an easier and better time writing Quint specs than they used to have with TLA+ tooling.

Quint also supports execution of specs through random simulation, and it comes with a testing framework and a REPL.
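
For instance (illustrative only, assuming a tiny counter spec with an `init` and a `step` action like the one sketched earlier in the thread), a test run the simulator can execute might look roughly like:

    // initialize, take two steps, then check the resulting state
    run incrementTest = init.then(step).then(step).then(assert(n == 2))

And `quint repl` lets you evaluate actions and expressions against the current state interactively (see the docs for the exact invocations).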

In contrast, TLA+ is a much more permissive language, and you can express more mathematical things that, for instance, could never be executed or are not even supported by TLA+'s existing model checkers (TLC and Apalache). TLA+ has a proof system (TLAPS), which Quint does not.

Quint imposes many restrictions with the goal of preventing people from writing things they don't really understand - which are possible in TLA+. Those restrictions are useful, just as type and effect systems are useful. But mathematicians who really know what they are doing and need more expressive power will likely prefer TLA+ over Quint. Quint is aimed at programmers and engineers.

They are complementary, not direct competition.


> As for tooling, first of all, Quint has type checking, which TLA+ doesn't.

I wouldn't put it quite like that. It's not that TLA+ doesn't "do type checking" because TLA+ is just a language for writing mathematical descriptions of things. It doesn't "do" anything (do the formulas of Newtonian mechanics do type-checking?). It's more precise to say that TLA+ is an untyped language. But the model checker does check something like "typing" in the sense of set membership. I.e. the invariant □(x ∈ Int), i.e. "x is always an integer", can be checked with a TLA+ model checker just like many other invariants.

Furthermore, a model checker is "static" in the sense that, just like a type-checker, it doesn't "run" a "program". It's easy to see that a model checker doesn't run anything if you consider that a model checker can prove the following "type" on the right hand side of the implication:

    x ∈ BOOLEAN ∧ □[x' = TRUE ∨ x' = FALSE]_x ⇒ □(x ∈ BOOLEAN)
It proves it nearly instantaneously, even though every "execution" of the "program" on the left of the implication is infinite in its duration and there is an uncountably infinite number of such "executions", so clearly nothing is "executed" and the check isn't dynamic. So a TLA+ model checker does do something that's analogous-ish to type-checking.

However, it is true that by type-checking we normally mean an automated deductive process, i.e. one that applies inference rules, rather than an automated exhaustive analysis of the semantic domain, which is how a model-checker works. And yes, there are implications to the different ways "type checking" is done, especially when it comes to the algorithmic complexity of the checker.

This is just another example of how comparisons to programming are unhelpful when describing mathematics: it doesn't "run"; it just is.

It may be best to say that a language like Quint is inspired by the TLA logic (the TLA part of TLA+) and is similar to a programming language, whereas TLA+ is something else altogether (that requires learning something that is very much not programming) and leave it at that.

I admit that explaining the difference between programming (or something that's programming-like) and specifying a system with mathematics to people who are less familiar with the latter is difficult. It's a little like trying to explain the notion of a physics formula to a catapult builder in ancient Greece. There's clearly a relationship between the two (and some physics formulas may certainly be helpful when designing a catapult and make the catapult builder a better catapult builder), but they're also completely different things operating in two different domains (a formula isn't something that can fire a projectile).


> It's more precise to say that TLA+ is an untyped language.

That's the same thing as not having static types. (From the POV of static types, dynamically typed languages are just untyped languages, that is, languages that have exactly one type)

> I.e. the invariant □(x ∈ Int), i.e. "x is always an integer", can be checked with a TLA+ model checker just like many other invariants.

That's just checking a dynamic typing property (that is, something that could in principle vary during the execution of the program, but you just proved that it doesn't). That is, this is checking that certain dynamic types stay the same.

(which is totally okay, and it's true that if you restrict dynamic types enough you can show there's a statically typed program that is equivalent)

> However, it is true that by type-checking we normally mean an automated deductive process, i.e. one that applies inference rules, rather than an automated exhaustive analysis of the semantic domain, which is how a model-checker works. And yes, there are implications to the different ways "type checking" is done, especially when it comes to the algorithmic complexity of the checker.

Yep! Static types are static _by construction_; you don't need any dynamic information to type check them.


> That's the same thing as not having static types.

I didn't say it had static types. I was referring specifically to the notion of type checking.

> dynamically typed languages are just untyped languages

There are no dynamic types in TLA+ just as there aren't in the formula F = ma. There is nothing here that's running, and all the automated analysis, including that of the sets that correspond to types, doesn't run anything.

> That's just checking a dynamic typing property

There is no "dynamism" here. The model checker checks that the "type" (really set membership) holds in any possible execution, each of which potentially infinite in duration, out of a countless infinity of executions. It's also a "static" check, it's just that it isn't syntactic.

> Static types are static _by construction_, you don't need any dynamic information to type check them

And neither do you need any "dynamic" information in TLA+. Dynamic types are a programming notion. TLA+ is not a programming language.


> (From the POV of static types, dynamically typed languages are just untyped languages, that is, languages that have exactly one type)

I believe the mistake you are making is assuming that TLA+ has some kind of runtime where things are "executed" (static vs. dynamic in the context of programming languages refers to, roughly, compile-time vs. execution-time). Model checking in TLA+ is actually analogous to "running a compiler", not running a REPL or whatever. It's definitely NOT a runtime.

TLA+ has no "dynamic" aspect at all, it's all "static" from the POV of language theory. And since it DOES provide a way (read: syntax) to create and check types (as the GP shows) and those types are checked statically (again, remember: there is no execution happening here), TLA+ is formally (and practically) equivalent to a language that has a different syntactic way to specify types, i.e. one that programmers are more familiar with.

What TLA+ is missing is any notion of executing code. That's what makes it a specification language and not an implementation language. Most programming languages are implementation languages, with a bit of "specification"—usually types—sprinkled on top. TLA+ is specification-only, no implementation stuff is written down.

So to recap, from a language-theoretic POV, TLA+ absolutely supports "static typing" in the way that it is usually understood and used by programmers and language theoreticians, but with a syntax that is unfamiliar (because it is a specification language).

For completeness, you can specify modern type inference algorithms like MLstruct/MLsub [0] in TLA+ and the model checker will happily apply them to your TLA+ specification (again, statically—at compile-time, which is really "model checking" time).

[0] https://lptk.github.io/files/[v8.0]%20mlstruct.pdf


I think you and Ron are confusing the relevant phase distinction the rest of us have in mind when we talk about type checking in this context.

What we mean is that quint, the language, is expressed on top of a many-sorted logic which allows our tooling to find and diagnose typing errors prior to *running* any type checker. The TLA Toolbox doesn't support this, and TLA+, like TLA itself, is untyped. TLC can (and will) raise errors while in the process of checking your model due to the kinds of problems we usually call "typing errors". Quint and apalache shouldn't do this.

I hope this helps clarify a bit.

We understand that, viewed purely theoretically, TLA+ has no dynamic aspect at all, etc. However, our team is very focused on the fusion of theory and practice, and we think it's important to ground our understanding and our discourse in the way the specification languages are actually used by real people given the existing tools. While it may be true in theory that

> What TLA+ is missing is any notion of executing code.

The file `TLC.tla` in the `StandardModules` includes these lines:

    Print(out, val) == val
    PrintT(out) == TRUE

    (* This expression equals TRUE, but evaluating it causes TLC to print *)
    (* the value of out.                                                  *)
https://github.com/tlaplus/tlaplus/blob/master/tlatools/org....

In any case, our whole team thinks TLA is great, and we're happy people like you and Ron find it so useful. We also think it is a very insightful tool for guiding understanding :)


> which allows our tooling to find and diagnose typing errors prior to running any model checker.

Oh, I understand what you meant (and I think Quint is great!), but I'm trying to communicate to others how ineffective programming terminology is when it comes to things that aren't programming (I remember that many years ago, when I first learnt TLA+, I asked on the mailing list whether TLA+ evaluates expressions eagerly or lazily because I was confused by the meaning of some expressions, and Leslie Lamport answered that the question is meaningless because there is no program running and so no evaluation at all).

Quint's tooling can find typing errors by running a tool called a type checker, while in TLA+ the tooling can find "typing" errors by running a tool called a model checker. Neither of these tools actually runs the program expressed in the text (although a model checker usually does evaluate some expressions), and so they're both "static" checks. This is important to point out because some people try to think of a model checker as something that runs a program many times, and that's why I gave an example showing how a model checker can check something in milliseconds without possibly having run "the program" even once, let alone all of its possible executions.
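
To make the contrast concrete (a rough sketch, setting aside exact tool invocations): in Quint, a spec like

    module typeErrorExample {
      var x: int

      // the type checker flags this statically: a string assigned to an int-typed variable
      action bad = all { x' = "oops" }
    }

is rejected by the type checker before anything is checked or simulated, whereas in TLA+ you would typically state an invariant like x \in Int and let the model checker find the violation.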

But yes, the runtime efficiency of a type checker, i.e. a checker that performs its reasoning based on simple deductive rules, is usually better than the efficiency of a model checker, which reasons in the semantic model. So even though in both cases you don't run "the program" to find type errors, the Quint tooling will report typing errors much more quickly (in wall-clock time) than the TLA+ tooling.


I'd like to add something about this:

> it's important to ground our understanding and our discourse in the way the specification languages are actually used by real people given the existing tools

I think it's also important to distinguish between "the use of model checkers" (or other automated verification tools) and "the use of specification languages". Most specification languages used by real people don't have a model checker at all, nor any other fully automated verification tool; rather, they offer proof assistants. A fully automated verification tool, like a model checker, is very attractive; indeed it is probably the thing most people find most attractive about TLA+ when they first see it, because its value is large and easy to understand even if you have no interest in advanced formal specification or don't even know what it is. Consequently, many of those using TLA+ use it not to specify well but as an input language to a model checker, because that's what they want. They may then ask: given that we're not really interested in powerful specification but only in the model checker, is there a way to use one with a language that's more familiar to us, even if we have to give up on specification power (which is not really what we want anyway)? And the answer to that is, happily, yes!

So that is definitely one way that real people who really want model checking use a specification language like TLA+ in the real world, but not only is that not the only way (again, most specification languages don't offer a model checker at all) but we're clearly talking about people using a specification language not for powerful specification but just as a means to get to the model checker.

Other people in the real world use a specification language for its power of specification and also enjoy the available tools, but the tools are not the goal. There are probably fewer people in this group than in the former because learning how to specify well is a whole other skill, and the benefits are not something you see immediately because it's not an answer at the push of a button. Nevertheless, these people specify, in the real world and with existing tools, to achieve very concrete goals, such as the better design that is helped by a better specification.

What do those few who actually want to use a specification language for its primary purpose want? Same thing as anyone who wants to specify anything in the real world. For example, if you're a company in search of a conference table, you may want to specify its size, its colour, and its stability and nothing else. If you're a designer for a furniture company, you may specify a table's size, precise shape, and strength requirement, but not its colour or specific materials.

The point is that powerful specification means this: the ability to describe any property that is important to you and not describe any property that is not important to you. That is what TLA+ is. If you want, you can describe an algorithm in a way that is ready to be executed. Or you can describe what the result should be and what the computational complexity should be and nothing more. Or you can describe the resource constraints and not the timing constraints. You can describe anything and you can leave anything out. This is what people in the real world who actually want to use a specification language to specify things -- not just to get at the model checker -- want.

I agree everyone would be served if both groups got what they want, and I also agree that the first group is larger, probably much larger, than the second. It's perfectly fair to claim that because the first group is larger, helping them may have a greater practical value than helping the second group. But I think it's not fair to say that, because there are people who want a model checker but not so much to specify, being practical means addressing only those who use specification languages incidentally. A few people really want to specify, and they want to do it out of concerns that are no less practical than those of people who just want to run a model checker.


I mostly agree with this, with one caveat: there is a group of people that want to specify executable things, mainly for the sake of specifying them. They might use a programming language for that, but doing so with a specification language instead can bring some benefits, including model checking in the case of Quint and TLA+.

I don't think Quint users are focused only on the model checker, as you imply. It is very useful to specify things. However, in the world of software, it turns out that people want to specify things that actually execute, and won't need a way to express things that don't.


> but doing so with a specification language instead can bring some benefits, including model checking in the case of Quint and TLA+.

I was referring to benefits that only a rich specification language like TLA+ (or Coq or Isabelle) could bring, although TLA+ is more naturally expressive than the others. Model checking is nice and very useful, but just the rich formal specification is at least as useful in itself. Like most people, I was drawn to TLA+ for the model checking, but after spending a lot of time with it, I realised that model checking was its least interesting benefit even though it's its best marketing. It's really cute and useful that there are model checkers for TLA+, but I'd used SPIN and Pathfinder before, and they were cool, too. What set TLA+ apart for me was the way it let me specify and improve my designs, not the fact that it happened to have a model checker (although without the model checker I probably would have never given it a chance).

> turns out that people want to specify things that actually execute, and won't need a way to express things that don't.

Absolutely. And others want to specify things that cannot possibly execute, because if they could they would make for a lousy specification from a very practical perspective (it's of the utmost practical importance to be able to say: this distributed protocol requires O(lg n) messages, and yet it's not something you can easily model-check), and you can't have both. A model-checked specification is often not a good enough specification, and a good specification is not something you can model-check.

So in the practical world of software it turns out there are people who want to model check, and there are people who want to write good specifications; both groups are positively tiny, but the former is larger. Neither is fully served by something made for the other. Something that's great at being fully model-checkable cannot be really good for specification and vice versa. I'm all for more languages that are more fully checkable, but it's important to acknowledge that such languages are less practical for rich specification. Both specification and model-checking are practical concerns, but there's always a tradeoff.


> TLA+ is missing is any notion of executing code

Which is why you had to talk about TLC, not TLA+. And yeah, while the presence of printf-style debugging for specifications being model checked by TLC is technically executable-ish, I think in this case, it kind of proves the point that TLA+ itself isn't.

Regardless, I like the direction Quint is taking though I would quibble with this language in your reply:

> our tooling to find and diagnose typing errors [runs] prior to [...] any type checker

If you are diagnosing typing errors, then definitionally, you are running a type checker. It's great that it's fast and interactive, runs prior to the model checker, is easy for programmers to read/write, et cetera. But…it's still a type checker. :-)

Anyway, keep up the great work! I've been on your newsletter for a long time and love the work you guys are doing.


I think they meant "model checker" instead of "type checker" in that sentence. Otherwise, of course, we need to run the type checker to get the type diagnosis.


That's what I meant! Thanks for the correction :)


CORRECTION:

> find and diagnose typing errors prior to running any type checker

Should read

> find and diagnose typing errors prior to running any model checker


Those are great questions! Let me add one addendum to bugarela's excellent explanation, to help disambiguate these three formalisms:

1. TLA[0] is a logical calculus invented by Leslie Lamport.

2. TLA+[1] is a formal specification language, also invented by Leslie Lamport, with semantics based on and reducible to TLA.

3. quint is also a formal specification language with semantics based on and reducible to TLA.

TLA+ can be understood as TLA extended with syntax sugar to help describe systems. On top of the logical calculus it adds niceties like `LET` bindings, syntax to represent common data structures, `CASE` expressions, a module system, etc. But TLA+ also exposes the underlying logic directly, making the language extremely expressive. The flip side is that nonsensical formulas which may not be checkable are pretty easy to write.

quint is a (programmer friendly) syntax that, while also desugaring to TLA, gives up expressivity to add the guardrails that we've found most helpful when specifying the systems we work on.

Please let us know if you have any followup questions :)

---

[0]: https://en.wikipedia.org/wiki/Temporal_logic_of_actions

[1]: https://en.wikipedia.org/wiki/TLA%2B


At the very least, it looks like it can do what TLA can do while being dramatically less painful to learn and work with. That is very much enough to be interesting.


One thing that was a demotivating factor (for me) about TLA+ is the syntax (I later realized that there's a more high-level language that looks more like a programming language and less like markup: PlusCal, but it was too late :). This looks a lot nicer.


I thought I read one time that Leslie Lamport didn't want TLA+ to look like a programming language, because it might confuse developers into thinking it IS a programming language.

I understand his logic, but at the same time, TLA+'s weird surface syntax put off a lot of people who otherwise could have used it. And PlusCal seemed like a halfway measure: more like a programming language, but still kinda... weird.

So, I welcome Quint. And it's great that they seem interested in making the tooling fit with the normal developer ecosystem, such as LSP support.


I can sympathize! We are aiming to maintain most of the expressive power of TLA+ -- ideally everything you need for a concise, high-level specification that can be simulated and/or verified -- but with surface syntax that is more approachable coming from a programming background. If you're interested in seeing how it maps to TLA+, you can find much of that in this document: https://github.com/informalsystems/quint/blob/main/doc/lang....
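
For a quick flavor of the correspondence (a loose sketch, not a quote from that document):

    // Quint:
    action step = all {
      x' = x + 1,
      y' = y
    }

    // roughly corresponds to this TLA+ action (shown here as a comment):
    //   Step == /\ x' = x + 1
    //           /\ y' = y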

We still have lots of ways to improve, and -- we think -- lots of opportunities to improve our interop and complementary qualities in relation to TLA+ and TLC. But we have found the tools in their current state useful enough to be worth sharing :)


The syntax is needed so that simple mathematical substitutions and deductions can be applied. PlusCal is not "higher level", it's just similar to a programming language. In fact, TLA+ makes it easy to write higher-level specifications, i.e. those that go into less detail. One of the nice things you can do is write a smaller, high-level spec in TLA+ (e.g. you can specify "a program that sorts integer arrays in O(n^2)") and then a lower-level spec, closer to code, in PlusCal, and tie the two together, showing that the latter is an implementation of the former.

The simple mathematical syntax (and semantics [1]) of TLA+ has the advantage of making mathematical reasoning easier, and it's more expressive; if it were like a programming language there would be too many things that TLA+ users would want to do but couldn't, certainly not as easily. Programming-language-like syntax and semantics have the advantage of being more familiar to programmers (many of whom just want to run a model checker and aren't interested in a more mathematical analysis of the specification, which actually makes specification easier, but only after gaining some experience). The two kinds of languages make different kinds of things either easier or harder.

Overall, being programming-like helps programmers who just want model checking and then maybe to start dipping their toes into specification.

----

[1]: What's more challenging for programmers is less the syntax and more the semantics. For example, the following function definition in TLA+ (using ASCII syntax) happens to look a lot like programming in this particular case:

    f[x \in Int] == -f[x]
it looks like a programming subroutine that recursively calls itself ad infinitum, but obviously, as in mathematics, it simply defines a function of the integers that is zero everywhere, as that is the only definition that satisfies the equation and allows simple mathematical substitution.

Another example could be (again, using the ASCII syntax):

    Inc(x) == x + 1
which seems like an increment operator, but

    3 = Inc(x)'
means something close to: "the system assigns 2 to x (or maybe something that's not a number)", since Inc(x)' is (x + 1)', i.e. x' + 1, so the formula says x' + 1 = 3. In TLA+ things don't work like in programming -- they work like in maths -- and they must not work like in programming if you want to do things like high-level specifications, refinements and more. The syntax tries to help make that clear by trying to resemble maths rather than programming, so that you wouldn't confuse the different meanings similar expressions have in these different domains. My advice to those who do want to learn TLA+ (which is a language that's significantly simpler than Python but is not a programming language) is to forget any relationship it may have to programming and remember that even things that look similar to programming work in a very different way. I would say that the relationship between TLA+ and programming is similar to that between the formula h = 0.5gt^2 and someone dropping a golf ball off a cliff in California.

Those who aren't interested may still benefit from languages that are programming-like and offer model-checking. That's great, too!


Nah, I just honestly very much dislike the choices about syntax. Like why on earth would you make rules like "some number of repeated punctuation on a separate line means something" (I hate this about Markdown too).

There is no need for such ambiguity. It makes parsing harder. It makes it hard to recognize that it's actually an element with some semantics attached and not just a weird decoration.

Also, a lot of capitalization.

TLA+ programs, from a typographic perspective, look really, really awful.

There are languages which just look bad due to weird typographical choices of their authors, usually because they use symbols in contexts the font author never expected them to be used in. For example, any language that uses camel-case or pascal-case is an affront to typography, because font authors don't anticipate capital letters in the middle of a word surrounded by minuscule letters. Similarly, nobody expects sequences of "=" signs, or ":" written adjacent to the following word rather than the previous one, etc.

Some look really awful due to extreme abuse of punctuation (C++ would be a good example with all the &&::<<<*_>>,<**>> stuff), others look very repetitive to the point that they make you lose focus (e.g. Pascal-style languages, which have "if-then-else-end if" going on, where sometimes for many lines you just keep reading the "cargo train" of "end, end, end, end if, end"). Lisps are also in this category because they usually look too uniform.

TLA+ looks like a Fortran program from the 60's formatted on a physical typewriter with all sorts of weird uses of the small number of available glyphs to substitute for the actual typographical thing that wasn't available to the author.


> Like why on earth would you make rules like "some number of repeated punctuation on a separate line means something" (I hate this about Markdown too).

That's not how it works, and Lamport has done a lot of exploration and explains it in detail in How to Write a Long Formula (1994): https://www.microsoft.com/en-us/research/publication/write-l... Writing it long hand, i.e. without TLA+'s special contribution to mathematical syntax, would mean a lot of parentheses that would be much harder to parse. From the conclusion: "The notations introduced here will be unfamiliar to most readers, and unfamiliar notation usually seems unnatural. I have used the notations for several years, and I now find them indispensable. I urge the reader to rewrite formula I of Figure 1 in conventional notation and compare it with the original. Having to keep track of six or seven levels of parentheses reveals the advantage of using indentation to eliminate parentheses."

But, if you prefer, you can use parentheses and dispense with indented lists altogether; the language supports that just fine. In other words, Lamport claims that indentation is clearer than brackets, but he doesn't force that on you: TLA+ allows you to use indentation in lieu of brackets if, like Lamport, you think it makes complex formulas easier to read, but you can keep using brackets if you find them clearer. However, you should know that a lot of thought has gone into that syntax (by a Turing Award laureate), probably more thought than has gone into most other formal languages' syntax, and at the very least you may want to reserve judgment for a while.

TLA+ syntax is pretty much the standard mathematical and logical syntax, as it has evolved primarily since Peano over about a hundred years, with just a few modifications by Lamport to make it more regular and less ambiguous and more suited to writing the larger formulas needed to describe engineered systems as opposed to natural systems.

> TLA+ programs, from typographic perspective, look really, really awful.

There are no programs you write in TLA+, only mathematical formulas, and they look pretty much exactly like all mathematics does on the page. You can see lots of examples in this chapter of my blog post series on TLA+: https://pron.github.io/posts/tlaplus_part2. TLA+ looks pretty close to anything you'd find in a formal logic book.

Here are a couple more examples:

- https://pron.github.io/files/Maths1.pdf

- https://pron.github.io/files/SelfRefPuzzle.pdf

- https://pron.github.io/files/TicTacToe.pdf (the entire post is written in TLA+)

Maybe it's just me, but I find that syntax quite pretty, and very familiar.

> TLA+ looks like a Fortran program from the 60's

Oh, I think you're not talking about the actual TLA+ syntax, but its typesetting source (which works like LaTeX source). Actual TLA+ syntax is read in pretty-printed form.

Given that LaTeX is how we normally write mathematics on a computer, TLA+ is a marked improvement (I had to write that post I linked in LaTeX and it was much less pleasant than the TLA+ typesetting source). Honestly, I can't think of any other language for writing mathematics that has a cleaner, prettier syntax than TLA+ (Lean is not terrible, but it's further removed from the standard mathematical notation), but if you know of one, let me know.


This is something I really wanted to build when I learned about TLA+ over a year ago: taking the core idea into a modern language and tooling. Congratulations that you actually did it!


This looks really nice. It's the first thing I have seen that appears to be approachable from a developer's perspective.


That is our aim! If you give it a shot and find things that aren't so approachable, we'd love to hear how we can improve.



I haven't taken a very deep look yet, but this might be similar to https://github.com/pfeodrippe/recife


How do you compare this with Alloy?


I haven't really used Alloy, so I can't give you a nice comparison, but some people have talked about differences and similarities between Alloy and TLA+ (e.g. in https://alloytools.discourse.group/t/alloy-6-vs-tla/329/13), and, in general, this should apply to Alloy vs Quint, since Quint is heavily based on TLA+. Evidently, the points regarding tooling and surface syntax won't really apply, as those are things Quint does not take from TLA+.


Recommended introduction talk for context:

https://www.youtube.com/watch?v=OZIX8rs-kOA&ab_channel=Gatew...

TL;DR: a compilable modeling language to model the high-level protocol of your blockchain (or any distributed system). It's a "digitalization" step from plain written-English protocol specifications to code.


This sounds like something you'd plug into an LLM as an AGI


Interesting


[deleted]



