But this exact problem is why I am more interested in approaches that integrate specification and implementation into a single (dialectical) process (e.g., via type-theoretic methods).
> This repository is a package repository for the opam package manager. It contains OCaml with Jane Street language extensions, including all Jane Street packages and necessary patches to external packages for building our libraries.
Key features of note include:
- Locality via modal types
- Layout polymorphism for polymorphic unboxed types
I am not much concerned with hyper-optimization, but I was curious how OCaml would fare with the initial, simple steps, before things get crazy. But I opted for a more complex program:
(* t.ml *)
let () = print_endline "Hello, World!"
Then just doing a standard compilation and a strip:
$ ocamlopt -o t t.ml && ls -l -h t | cut -d " " -f5
1.5M
$ strip t && ls -l -h t | cut -d " " -f5
356K
$ ./t
Hello, World!
I may be overlooking something (and would be interested to learn what, if so), but I was surprised we got a result smaller than the Rust binary in the first instance.
It is interesting that before stripping the size of the Rust version is bigger, but after only stripping the size of the OCaml version is bigger. It'd be nice to try and see what the "extra" info that Rust ships by default is.
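One hedged way to poke at that "extra" info is below: a sketch assuming a Linux box with binutils installed. `/bin/sh` is just a stand-in for any compiled binary (so we don't destroy the original), and the section names are the usual ELF ones, not anything Rust-specific.

```shell
# Copy a binary so we can strip it non-destructively.
cp /bin/sh demo
ls -l demo | awk '{print "before:", $5}'
# Symbol tables (.symtab) and DWARF debug sections (.debug_*) usually
# account for most of the pre-strip size difference:
readelf -S demo | grep -E '\.symtab|\.debug' || echo "no symbol/debug sections found"
strip demo
ls -l demo | awk '{print "after:", $5}'
```

My understanding (worth verifying) is that Rust release binaries ship symbols and debug info for the precompiled standard library by default, which is exactly the kind of thing `strip` removes.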
> I don’t recall people striking up conversations in grocery lines before smart phones, nor would I have been interested in it.
I do. You may not quite be old enough to have seen/participated in this. Or maybe it wasn't happening where you live for some reason? (E.g., this was probably more common in the urbs than the sub-urbs, due to other alienating forces in the latter).
> people also downplay how much smartphones just replaced rampant TV consumption for people who like to be entertained constantly
It was (mostly) not possible to have a TV with you at all moments, on the bus, in line, in the toilet. This meant lots of times where people were with their thoughts or one another. Also, phones have not replaced TV but overlay it. Many (most) now watch their TV (even if streaming) while also layering distraction from their phone.
> Personally, I handle smartphone usage just fine as do most of the adults around me.
Good for you! I'm curious, what's your average phone usage in hrs/day?
IMO, the degrading effect of smart phones (and the distraction machines of the attention economy more generally) is something that should be discussed and problematized. If some are good at self-moderating, their contributions to the discussion may be especially helpful.
> I reject the idea that all techies need to reject technology and smartphones
Just to clarify, rejecting smart phones as they are currently designed and implemented as bad technology is not the same as rejecting all technology per se.
> projecting their own solution on to everyone else is about as appealing as the person who had a problem with drinking too much trying to insist that nobody else should drink any alcohol at all either
This is an interesting analogy b/c ["No level of alcohol consumption is safe for our health"](https://www.who.int/europe/news/item/04-01-2023-no-level-of-...). I doubt that smart phones are as unequivocally toxic as alcohol, but I do suspect that the current way they are used and developed may be, esp. thanks to gamification, nudge design, and surveillance. Research is coming out on this, but I think there is good reason to not assume it is systemically innocuous as you think it is.
However, b/c living in an inattentive society may actually be worse for everyone, it's possible that a better analogy may be smoking in public or drunk driving: if you smoke in public places or get behind the wheel drunk, everyone around you is at greater risk.
> I also use my smartphone for important things like maps, taking photos, keeping lists, and even paying for groceries if I forget my wallet.
Imagine a utility that enabled these important functions but did not also constantly disrupt your attention with push notifications, harvest your data, or present the temptation of hours wasted in toxic digital fun-houses! :D It's possible!
This is an interesting reflection, and I'm glad to have read it.
A few things came to mind:
The view of programming as "summarizing what's on StackOverflow" is really alien
to me. I suspect this is indicative of a particular approach to programming, and
perhaps to working in general, which I don't share. The author's view seems to
be that knowledge exists "out there" and the role of the "knowledge worker" is
to accumulate, internalize, and reshape it into products derived by
summarization. Compare this with another view on "knowledge work" taken from
https://en.wikipedia.org/wiki/Knowledge_worker
> Nonaka described knowledge as the fuel for innovation, but was concerned that
> many managers failed to understand how knowledge could be leveraged. Companies
> are more like living organisms than machines, he argued, and most viewed
> knowledge as a static input to the corporate machine. Nonaka advocated a view
> of knowledge as renewable and changing, and that knowledge workers were the
> agents for that change. Knowledge-creating companies, he believed, should be
> focused primarily on the task of innovation.
High value knowledge work involves creating and transforming knowledge, not just
compressing or reconfiguring it.
To expand on this in abstract terms: knowledge work is fundamentally cognitive
and it gets its higher order purpose and potential from the application of
reason; i.e., it is concerned with rational cognition. Rational cognition is
mainly about synthesizing new, higher order concepts that direct and evolve
given concepts into more general and potent structures. This work involves
re-cognition as a necessary component, but if it were only recognitive -- as
it would be if it were only concerned with recollection and summarization -- it
would not have the creative dynamic which it does.
To expand in more specific terms: programming work involves problem solving, but
it is not mainly about reassembling existing solutions to solve known problems.
The most valuable aspects of this work come from problem discovery, root
cause analysis, and solution invention. (Programming work that consists of
StackOverflow copy pasta is probably best viewed as the production of tech debt
:) This is not to say resources like StackOverflow aren't useful; they definitely
are!)
I suspect it says more about the author's career aspirations and the reigning
interests of the political-economic system that they envision a future where
everyone is a manager. First, only if you have "manager brain" can you look at
what's happening in tech and see a future where everyone is a manager as a
positive development, when compared with a future where everyone is a
researcher, artist, artisan, inventor, etc. Second, the managerialization of
work is actually describing an idealized view of the present situation, and if
it is looming in the future it is only as an intensification of the current
dynamics. The rise of [the Professional Managerial Class was heralded in the
70s][1], and most tech workers are in the PMC:
> Who are these Americans working in the upper echelons of the knowledge
> economy, exactly? ... the Professional Managerial Class. The PMC, as they are
> now often called, came into existence in the late nineteenth and early
> twentieth centuries. They were not the old petty bourgeoisie of small-business
> proprietors and independent farmers, but a new class whose expertise was
> required to make an industrial economy function: engineers, scientists,
> teachers, doctors, social workers, functionaries, bureaucrats, and other
> professionals and managers who had the know-how to create and control the
> levers of the modern capitalist world [0].
The managerialization of everything does seem very likely, because that is
basically what our current economic regime has been trying to achieve since the
advent of the "digital revolution" (and maybe since Hobbes: a pyramid scheme of
nested managers, where every managed worker is actually a manager automating its
own autonomous subordinates).
Against the view that computers are a tool for rendering every "maker" into a
"manager", I propose meditating on the view propounded by Conal Elliott that
computers should be "telescopes for meaning".
There is a more reasonable interpretation of "summarizing what's on StackOverflow" than the pure face-value one.
There are many "solved problems" which you're disincentivized to solve personally (say, auth or encryption), since implementing a widely accepted solution tends to carry additional credibility and you're unlikely to have any additional value to add.
That's what I think of when I see that phrase. It's similar to my current work: I'm building what is basically a glorified CRUD app for a small business, to try to reduce their manual paperwork. None of the programming is particularly technical, and whatever platonic ideal I had of making the system smart has been positively removed by the people who will use it, since they prefer it to be completely manual because they're the ones supposed to make the decisions.
The business logic isn't particularly complicated, and anyone competent could have done the same by taking a random accepted SO answer and adapting the code.
> Physical property is a consequence of the laws of physics. If I have a gold coin in my hand, then you don't have that gold coin in your hand.
So do you believe people should only be able to own real estate that they are currently physically occupying? By your reasoning, no one could ever own a piece of land larger than what they were currently standing or lying on. So no one could own land. So abolition of all property rights, essentially.
I'm hella down for this, but then we should also be able to walk into the OpenAI offices and inspect their source code, cause that shit won't fit in any one hand afaik.
This is an incredibly naive misunderstanding of how property rights work: they are 100% a social and conceptual construct. IANAL, but I believe you are confusing property with possession.
But, yeah. I'm down: no one can own anything that isn't currently in their hand. Let's go liberate a lot of fake property that is "owned" in violation of the laws of physics!
I think you and Ron are confusing the relevant phase distinction the rest of us have in mind when we talk about type checking in this context.
What we mean is that quint, the language, is expressed on top of a many-sorted logic which allows our tooling to find and diagnose typing errors prior to *running* any type checker. The TLA Toolbox doesn't support this, and TLA+, like TLA itself, is untyped. TLC can (and will) raise errors while in the process of checking your model due to the kinds of problems we usually call "typing errors". Quint and apalache shouldn't do this.
I hope this helps clarify a bit.
We understand that, viewed purely theoretically, TLA+ has no dynamic aspect at all, etc. However, our team is very focused on the fusion of theory and practice, and we think it's important to ground our understanding and our discourse in the way the specification languages are actually used by real people given the existing tools. While it may be true in theory that
> What TLA+ is missing is any notion of executing code.
the file `TLC.tla` in the `StandardModules` nonetheless includes these lines:
```
Print(out, val) == val
PrintT(out) == TRUE
(* This expression equals TRUE, but evaluating it causes TLC to print *)
(* the value of out. *)
```
In any case, our whole team thinks TLA is great, and we're happy people like you and Ron find it so useful and insightful. We also think it is a very insightful tool for guiding understanding :)
> which allows our tooling to find and diagnose typing errors prior to running any model checker.
Oh, I understand what you meant (and I think Quint is great!), but I'm trying to communicate to others how ineffective programming terminology is when it comes to things that aren't programming. (I remember that many years ago, when I first learnt TLA+, I asked on the mailing list whether TLA+ evaluates expressions eagerly or lazily because I was confused by the meaning of some expressions, and Leslie Lamport answered that the question is meaningless because there is no program running and so no evaluation at all.)
Quint's tooling can find typing errors by running a tool called a type checker, while in TLA+ the tooling can find "typing" errors by running a tool called a model checker. Neither of these tools actually runs the program expressed in the text (although a model checker usually does evaluate some expressions), and so they're both "static" checks. This is important to point out because some people try to think of a model checker as something that runs a program many times, and that's why I gave an example showing how a model checker can check something in milliseconds without possibly having run "the program" even once, let alone all of its possible executions.
But yes, the runtime efficiency of a type checker, i.e. a checker that performs its reasoning based on simple deductive rules, is usually better than the efficiency of a model checker, which reasons in the semantic model. So even though in both cases you don't run "the program" to find type errors, the Quint tooling will report typing errors much more quickly (in wall-clock time) than the TLA+ tooling.
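To make the "static check without running the program" point concrete, here is a minimal sketch in OCaml (a toy transition system; all names are invented for illustration): an explicit-state checker that enumerates every reachable state and checks an invariant on each, without ever executing the system from start to finish in the ordinary sense.

```ocaml
(* A toy bounded counter: from state s we may increment (up to a bound)
   or reset to zero. *)
type state = int

let init : state list = [ 0 ]

let next (s : state) : state list =
  if s < 5 then [ s + 1; 0 ] else [ 0 ]

let invariant (s : state) : bool = s <= 5

(* Worklist exploration of the reachable state space: visit each state
   once, check the invariant, and enqueue its successors. *)
let check () : bool =
  let seen = Hashtbl.create 16 in
  let rec explore = function
    | [] -> true
    | s :: rest ->
        if Hashtbl.mem seen s then explore rest
        else if not (invariant s) then false
        else (
          Hashtbl.add seen s ();
          explore (next s @ rest))
  in
  explore init

let () =
  if check () then print_endline "invariant holds on all reachable states"
  else print_endline "invariant violated"
```

Real model checkers are vastly more sophisticated, but the shape is the same: the "check" is a search over a semantic model, not an execution of the program.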
> it's important to ground our understanding and our discourse in the way the specification languages are actually used by real people given the existing tools
I think it's also important to distinguish between "the use of model checkers" (or other automated verification tools) and "the use of specification languages." Most specification languages used by real people don't have a model checker at all, nor any other fully automated verification tool; rather, they offer proof assistants. A fully automated verification tool, like a model checker, is very attractive; indeed, it is probably the thing most people find most attractive about TLA+ when they first see it, because its value is large and easy to understand even if you have no interest in advanced formal specification or don't even know what it is. Consequently, many of those using TLA+ use it not to specify well but as an input language to a model checker, because that's what they want. They may then ask: given that I'm not really interested in powerful specification but only in the model checker, is there a way to use one with a language that's more familiar to us, even if we have to give up on specification power (which is not really what we want anyway)? And the answer to that is, happily, yes!
So that is definitely one way that real people who really want model checking use a specification language like TLA+ in the real world, but not only is that not the only way (again, most specification languages don't offer a model checker at all) but we're clearly talking about people using a specification language not for powerful specification but just as a means to get to the model checker.
Other people in the real world use a specification language for its power of specification and also enjoy the available tools, but the tools are not the goal. There are probably fewer people in this group than in the former because learning how to specify well is a whole other skill, and the benefits are not something you see immediately because it's not an answer at the push of a button. Nevertheless, these people specify, in the real world and with existing tools, to achieve very concrete goals, such as the better design that is helped by a better specification.
What do those few who actually want to use a specification language for its primary purpose want? Same thing as anyone who wants to specify anything in the real world. For example, if you're a company in search of a conference table, you may want to specify its size, its colour, and its stability and nothing else. If you're a designer for a furniture company, you may specify a table's size, precise shape, and strength requirement, but not its colour or specific materials.
The point is that powerful specification means this: the ability to describe any property that is important to you and not describe any property that is not important to you. That is what TLA+ is. If you want, you can describe an algorithm in a way that is ready to be executed. Or you can describe what the result should be and what the computational complexity should be and nothing more. Or you can describe the resource constraints and not the timing constraints. You can describe anything and you can leave anything out. This is what people in the real world who actually want to use a specification language to specify things -- not just to get at the model checker -- want.
I agree everyone would be served if both groups got what they want, and I also agree that the first group is larger, probably much larger, than the second. It's perfectly fair to claim that because the first group is larger, helping them may have greater practical value than helping the second group. But I think it's not fair to say that because there are people who want a model checker but don't care much to specify, being practical means addressing only those who use specification languages accidentally. A few people really want to specify, and they want to do it out of concerns that are no less practical than those of people who just want to run a model checker.
I mostly agree with this, with one caveat: There is a group of people that want to specify executable things, mainly for the sake of specifying it. They might use a programming language for that, but doing so with a specification language instead can bring some benefits, including model checking in the case of Quint and TLA+.
I don't think Quint users are only focused on the model checker, as you imply. It is very useful to specify things. However, in the world of software, it turns out that people want to specify things that actually execute, and don't need a way to express things that don't.
> but doing so with a specification language instead can bring some benefits, including model checking in the case of Quint and TLA+.
I was referring to benefits that only a rich specification language like TLA+ (or Coq or Isabelle) could bring, although TLA+ is more naturally expressive than the others. Model checking is nice and very useful, but just the rich formal specification is at least as useful in itself. Like most people, I was drawn to TLA+ for the model checking, but after spending a lot of time with it, I realised that model checking was its least interesting benefit even though it's its best marketing. It's really cute and useful that there are model checkers for TLA+, but I'd used SPIN and Pathfinder before, and they were cool, too. What set TLA+ apart for me was the way it let me specify and improve my designs, not the fact that it happened to have a model checker (although without the model checker I probably would have never given it a chance).
> turns out that people want to specify things that actually execute, and won't need a way to express things that don't.
Absolutely. And others want to specify things that cannot possibly execute, because if they could they would make for a lousy specification from a very practical perspective (it's of the utmost practical importance to be able to say: this distributed protocol requires O(lg n) messages, and yet it's not something you can easily model-check), and you can't have both. A model-checked specification is often not a good enough specification, and a good specification is not something you can model-check.
So in the practical world of software, it turns out there are people who want to model check, and there are people who want to write good specifications; both groups are positively tiny, but the former is larger. Neither is fully served by something made for the other. Something that's great at being fully model-checkable cannot be really good for specification, and vice versa. I'm all for more languages that are more fully checkable, but it's important to acknowledge that such languages are less practical for rich specification. Both specification and model-checking are practical concerns, but there's always a tradeoff.
Which is why you had to talk about TLC, not TLA+. And yeah, while the presence of printf-style debugging for specifications being model checked by TLC is technically executable-ish, I think in this case, it kind of proves the point that TLA+ itself isn't.
Regardless, I like the direction Quint is taking though I would quibble with this language in your reply:
> our tooling to find and diagnose typing errors [runs] prior to [...] any type checker
If you are diagnosing typing errors, then definitionally, you are running a type checker. It's great that it's fast and interactive, runs prior to the model checker, is easy for programmers to read/write, et cetera. But…it's still a type checker. :-)
Anyway, keep up the great work! I've been on your newsletter for a long time and love the work you guys are doing.
I think they meant "model checker" instead of "type checker" in that sentence. Otherwise, of course, we need to run the type checker to get the type diagnosis.