Hacker News new | past | comments | ask | show | jobs | submit login

> Yes, but I don't think anyone considers a SQL join to be a description of a linear-time algorithm.

This is the first time either of us has mentioned "linear-time" outside of a discussion about specifications. There aren't many programming languages that only let you write linear-time algorithms, and I think that's very far from most people's definition of a programming language. That said, it's not like you can't write linear-time algorithms in Coq; its reduction machine is pretty efficient w.r.t. algorithmic complexity (and it's likely to gain native machine integers and arrays soon to bring the constant factor in line with this). Proving linear-time execution for a native Coq function is another question entirely, of course, but you can always model a specific reduction machine in Coq and analyze that instead.
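(To make that concrete, here's a minimal sketch in Lean 4, whose type theory and reduction machine are close cousins of Coq's; the function name is mine. The reduction machine evaluates this structurally recursive definition in time linear in the length of the list:

```lean
-- A structurally recursive sum; one reduction step per list cell, so
-- evaluation is linear in the list's length.
def sumList : List Nat → Nat
  | []      => 0
  | x :: xs => x + sumList xs

#eval sumList [1, 2, 3, 4]  -- 10
```

Nothing about the language restricts you to linear time, of course; the point is only that writing and running such algorithms is perfectly ordinary.)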

> Hence, specifications cannot be generally compiled, therefore they are not programming languages

I really don't agree with you on this. You have a fixed notion in your head of what it means to compile a type (attempt to find an inhabitant), but that is not how Coq interprets types at all; from its perspective, types are terms of independent interest, and are moreover themselves inhabitants of some other type. We don't expect a program to try to "find the inhabitant" of a term of type boolean when it's compiled; we want to keep reducing the term until it's in normal form (and, hopefully, our theory is consistent so it will be either true or false). In Coq, the same sort of reasoning (more or less) applies to types; it just keeps reducing a type until it reaches normal form. There's no special notion of "finding an inhabitant" as distinct from normal program execution, which is exactly why I claim that Coq does not make the "specification-program" distinction. I'm not sure I can explain it any more simply. I am not saying this is not a meaningful distinction in TLA+, or that it's not a real concept, but it's not the only way of thinking about things.

> I'm not sure what happens if you know the type is inhabited. It might be decidable in that case (just as finding a proof of a proposition that's known to be a theorem), but the search is still intractable, and there is no way the resulting computation would be efficient.

Depends on the problem. In Coq, for decidable problems you can write a decision procedure that is tailored to the individual type, having it produce a proof of the proposition you desire as a byproduct (in fact, in general, producing such a procedure is how you prove that the problem is decidable!). Sometimes executing it is tractable and sometimes it isn't, but I don't agree that writing an algorithm to solve such a problem automatically transforms your domain from "efficient, everyday programming" to "inefficient, specification-driven programming." Moreover, a lot of the time, there will be an easy inefficient decision procedure and a much harder efficient one, but given that they're both written in the same language and solve the same problem, it seems hard for me to imagine that one would be considered "ordinary programming" and the other "program extraction" or whatever.
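(A tiny illustration in Lean 4, which works the same way as Coq here; the name decEvenRem is mine. A term of type Decidable P just *is* a proof that P is decidable, and running it produces a proof of P or of ¬P as a byproduct:

```lean
-- A tailored decision procedure for "n is even", delegating to
-- decidable equality on Nat. The returned value carries a proof of
-- `n % 2 = 0` or of its negation.
def decEvenRem (n : Nat) : Decidable (n % 2 = 0) :=
  Nat.decEq (n % 2) 0

#eval decide (10 % 2 = 0)  -- true
```

It's an ordinary program you can run with #eval, not a separate "specification artifact.")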

(In response to your [1], tactic languages like Coq's Ltac, much maligned as they are, are basically an attempt at interactive program synthesis; people rarely write complicated proof terms by hand. So I'd say these ideas have been a success, though hardly an unqualified one. Of course, one can argue that it's not "really" synthesis because you're not synthesizing the whole program from nothing but the specification, but even in ordinary program synthesis most successful solutions are based on sketching, where a partial program is provided and the synthesis engine fills in the gaps).




> This is the first mention either of us have had of "linear-time" outside of a discussion about specifications.

I mentioned it before, and my point isn't about linear time in particular. If you write a specification of an algorithm of time complexity T(n), the expectation is that interpreting the algorithm would take p(T(n)), where p is some polynomial (i.e. no exponential slowdown or worse). This is what I mean by "efficient".

> but that is not how Coq interprets types at all; from its perspective, types are terms of independent interest, and are moreover themselves inhabitants of some other type.

That doesn't matter. I can specify a function by giving it a type -- say it's a function from the even naturals to their Goldbach pairs. Turning this into a program requires finding a lambda term that computes such a function.
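(In a Lean/Coq-style theory, that specification-by-type might look like the following sketch; the names IsPrime and GoldbachFn are hypothetical, and IsPrime is a hand-rolled stand-in for a library definition. Writing down the type is easy; producing an inhabitant would amount to a constructive proof of Goldbach's conjecture:

```lean
-- A hand-rolled primality predicate (stand-in for a library one).
def IsPrime (p : Nat) : Prop :=
  2 ≤ p ∧ ∀ d, d ∣ p → d = 1 ∨ d = p

-- The type of functions taking an even n > 2 to a pair of primes
-- summing to n. The type is the specification; a lambda term
-- inhabiting it would be the program.
def GoldbachFn : Type :=
  (n : Nat) → 2 < n → n % 2 = 0 →
    { pq : Nat × Nat // IsPrime pq.1 ∧ IsPrime pq.2 ∧ pq.1 + pq.2 = n }
```

)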

> There's no special notion of "finding an inhabitant" as distinct from normal program execution

What I meant was that a type in Coq is not a computation (although it could be the result of a computation), and a computation is not a type (although it could produce a type -- Pi types).
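(The "computation producing a type" case can be seen in this standard Lean 4 sketch, where a recursive function's *result* is a type; the name Vec is mine:

```lean
-- A computation whose result is a type: vectors as iterated products.
-- `Vec Nat 2` reduces to `Nat × (Nat × Unit)`.
def Vec (α : Type) : Nat → Type
  | 0     => Unit
  | n + 1 => α × Vec α n

example : Vec Nat 2 := (1, 2, ())
```

)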

> I am not saying this is not a meaningful distinction in TLA+

In TLA+ there is no such distinction because you cannot talk about a program that computes that function, only about the set of all programs that do (you can, of course, add more and more detail to restrict the set, i.e., create subtypes, but there is no "the computation"). That collection is either empty or infinite; in fact, if it's non-empty, it's too big to even be a set (it's a proper class).

> Sometimes executing it is tractable and sometimes it isn't, but I don't agree that writing an algorithm to solve such a problem automatically transforms your domain from "efficient, everyday programming" to "inefficient, specification-driven programming."

I am not talking about writing an algorithm to solve such a problem automatically, but about a programming language that always does so.

> So I'd say these ideas have been a success, though hardly an unqualified one.

They have been an abysmal failure when it comes to synthesizing anything that could be considered a replacement for programming. But we're talking about two radically different measures of success. I'm talking about a commercial process of turning lead into gold, and you're talking about turning a couple of lead atoms into gold atoms in a particle accelerator.


> No, I mentioned it before, and my point isn't about linear time in particular. If you write a specification of an algorithm of time complexity T(n), the expectation is that interpreting the algorithm would take O(T(n)). This is what I mean by "efficient".

What algorithm? There are many algorithms that inhabit the same type. A specification that's detailed enough to be able to analyze its complexity is usually a program... and it's easy to write a program that runs in linear time in Coq.

> That doesn't matter. I can specify a function by giving it a type -- say it's a function from the naturals to their Goldbach pairs. Turning this into a program requires finding a lambda term that computes such a function.

In Coq, you cannot specify a function by giving its type. First, CiC is not an extensional type theory in general; it cares about how things are computed, not just what the answer is. Secondly, for many interesting function types in Coq, there are infinitely, or even uncountably, many inhabitants (for example, Prop -> bool). Moreover, in Coq's core type theory, even if it could automatically find all lambda input-output pairs that inhabited some type, that would not be justification to replace the quantified type (you need to assume the function extensionality axiom if you want this).
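(Here's the extensionality point in miniature, in Lean 4; the names f and g are mine. Two syntactically different closed terms inhabit Nat → Nat and agree on every input, yet they are not definitionally equal, so equating them needs funext, which is a theorem in Lean via quotients but an axiom one must add in Coq:

```lean
-- Extensionally equal, definitionally distinct: `0 + n` recurses on
-- the second argument and does not reduce to `n` for a variable `n`.
def f (n : Nat) : Nat := 0 + n
def g (n : Nat) : Nat := n

-- `rfl` fails here; function extensionality is needed.
example : f = g := funext fun n => Nat.zero_add n
```

)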

> I didn't say there was. What I said was that a type in Coq is not a computation, and a computation is not a type. Just as a proof is not a proposition and a proposition is not a proof.

A type in Coq is no more or less a computation than a term is. I don't understand what this has to do with "specification vs programming." As for "a proposition is not a proof", I think that is the main point of significant disagreement between us. Why do you think True doesn't prove Prop, for instance? It may not be the most interesting proof in the world, but neither is the proof I of True (and in many cases, proofs of Prop are very interesting in themselves, particularly in the context of dependent elimination).
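(The uniformity is visible in two one-liners, here in Lean 4, where True.intro plays the role of Coq's I:

```lean
-- The same typing judgment relates proof to proposition and
-- proposition to universe:
example : True := True.intro   -- True.intro (Coq's I) proves True
example : Prop := True         -- True itself inhabits (proves) Prop
```

)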

> In TLA+ there is no such distinction. You cannot talk about a program that computes that function, but always the set of all programs that do (i.e., just the type).

Okay, then neither of them has such a distinction. What you seem to be missing is that in Coq, the mechanism for manipulating types is not different from the mechanism for ordinary program evaluation; it looks like TLA+ uses an axiomatic formulation rather than computation rules, so if you were to add evaluation to TLA+ you'd probably need a new set of semantics, but that's not the case for Coq.

> I am not talking about writing an algorithm to solve such a problem automatically, but about a programming language that always does so.

As you noted, such a programming language is impossible, so I don't understand why you are talking about it. The impossibility of that hypothetical programming language doesn't mean you can't write a single unified programming language (without two different "parts") that also functions as a specification language, and I genuinely don't understand why you think it does.

> They have been an abysmal failure when it comes to synthesizing anything that could be considered a replacement for programming.

The state of this has been steadily improving! For example, https://www.cs.utexas.edu/~isil/sypet-popl17.pdf looks quite promising for everyday programming tasks.


> A specification that's detailed enough to be able to analyze its complexity is usually a program...

Not necessarily, and I would claim that it usually isn't, but to see that you have to pay close attention to how algorithms and systems are specified. I give the example of the QuickSort algorithm at the beginning of this post: https://pron.github.io/posts/tlaplus_part3

> In Coq, you cannot specify a function by giving its type.

Of course you can; that's the meaning of the type -- it's just not enough to get a computation. That's precisely the point I was getting at. A specification is a description of something at some arbitrary level of detail, but languages that allow compilation -- i.e., programming languages -- normally require a minimum level of detail for that to be possible. In a language like Coq, this requirement is a strict syntactic distinction between types and lambda terms.

> First, CiC is not an extensional type theory in general; it cares about how things are computed, not just what the answer is.

Precisely. This is absolutely necessary for a programming language, and undesired in a specification language. (BTW, in TLA, the line between what and how is arbitrary, because the temporal logic lifts everything that is operational in Coq into the denotation.)

> A type in Coq is no more or less a computation than a term is.

A type is not a computation. To "run" something, or to speak of the operation of something, that something must be a lambda term.

> the mechanism for manipulating types is not different from the mechanism for ordinary program evaluation

I know that (I don't know Coq, but I know a bit of Lean, which is similar, I think), but that doesn't matter. You can compute with types, but there is a strict distinction between the common form of nondeterministic specification -- types -- and actual computation (not types).

> As you noted, such a programming language is impossible, so I don't understand why you are talking about it.

Because we're trying to work out the difference between a specification language and a programming language. A programming language is one that, when it describes a computation, can always be interpreted efficiently in the sense I meant. A specification language should allow specifying computations at arbitrary levels of detail. I claim that a specification language cannot be a programming language and vice versa -- except if we had this magical language I described. What you can do, and Coq does, is combine two languages in one, but even then the result requires a delineation between a computation and a specification (arbitrary nondet).

> For example, https://www.cs.utexas.edu/~isil/sypet-popl17.pdf looks quite promising for everyday programming tasks.

I honestly do like the research, but this is going from converting two atoms of lead into gold in a particle accelerator to five, or like saying that Olympic high jumpers are steadily improving in jumping all the way to the moon... This is another discussion -- I like both the research and the practice -- but I don't buy the claims that such improvements in research put changes to practice on the horizon. The gap is still huge, and I also think some of the research (especially in programming language theory) is done based on assumptions that completely fail in practice (I wrote about that here: https://pron.github.io/posts/people-dont-write-programs)



