Learn You an Agda (williamdemeo.github.io)
127 points by bkirwi on Dec 2, 2014 | 70 comments



Languages that use types for elaborate proofs always seem to me like they only prove the most uninteresting properties of programs, namely the parts that deal with data transformations. Not that data transformations aren't important, but the properties they need to preserve are usually easy to convince yourself of without any assistance from the compiler.

At the end of the day, programs are written for their side effects (unless your program is a compiler, which is the classical example of an "interesting", non-trivial data transformation, and possibly the only example). What I'm most interested in is proofs that, say, in a concurrent environment, a function that closes a socket will never be called as long as there are pending tasks to write to the socket. Or, because I write concurrent data structures, I'm interested to prove that a certain function will eventually release all locks it acquires. Are there any such languages?

I've heard of effect systems, but I couldn't find enough information on them. Are effect systems capable of the kind of proofs I'm interested in? Are there practical programming languages with powerful effect systems?

EDIT: Some googling yielded this: http://lambda-the-ultimate.org/node/4768 (tl;dr: not yet).

EDIT: At least the lock proof seems to be implemented as one of Java 8's pluggable type systems: http://types.cs.washington.edu/checker-framework/current/che...

It lets you add a type (Java 8's pluggable types are intersection types) to a function called @EnsuresLockHeld(locks...) that proves some locks will be held when a function returns and even @EnsuresLockHeldIf(locks..., result) that proves a function grabs certain locks if its boolean result is equal to the one specified in the type.
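For flavour, here is a rough Haskell analogue of that guarantee; this is not the Checker Framework mechanism, just a sketch of the same idea using an unexported proof token (`Held`, `withLock` and `criticalWrite` are all invented names):

    import Control.Concurrent.MVar

    -- A proof token. If the constructor is kept unexported, the only way
    -- to obtain a Held is via withLock, so any function that demands one
    -- is statically guaranteed to run while the lock is held (roughly
    -- what @EnsuresLockHeld asserts via annotations).
    newtype Held = Held ()

    withLock :: MVar () -> (Held -> IO a) -> IO a
    withLock lock body = do
      takeMVar lock        -- acquire
      r <- body (Held ())  -- run with proof of possession
      putMVar lock ()      -- release (a real version would use bracket)
      return r

    -- Callable only while the lock is held, by construction:
    criticalWrite :: Held -> String -> IO ()
    criticalWrite _ s = putStrLn ("under lock: " ++ s)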


Given formal semantics of your programming language and some time, I think you can use Agda to prove interesting properties about correct programs you write in that language. Semantics, the behaviour of objects such as sockets, can be encoded in types the theorem prover can understand. It can look like this for a basic imperative language: https://github.com/Twey/imp-agda/blob/master/IMP/Semantics/O.... You can then define the property you are interested in and get your proof by combining these types together.
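For a taste of what "encoding semantics" means, here is a minimal Haskell sketch (not the linked Agda development; `Expr`, `Stmt`, `eval` and `exec` are invented names). In Agda, the same inductive definitions become the objects your proofs quantify over:

    -- Syntax of a toy imperative language.
    data Expr = Lit Int | Var String | Add Expr Expr

    data Stmt = Skip | Assign String Expr | Seq Stmt Stmt

    type State = [(String, Int)]

    -- Expressions evaluate to Ints...
    eval :: State -> Expr -> Int
    eval _ (Lit n)   = n
    eval s (Var x)   = maybe 0 id (lookup x s)  -- unbound vars default to 0
    eval s (Add a b) = eval s a + eval s b

    -- ...and statements transform states.
    exec :: State -> Stmt -> State
    exec s Skip         = s
    exec s (Assign x e) = (x, eval s e) : s
    exec s (Seq a b)    = exec (exec s a) b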

This is often tedious but can be automated to some extent, and it has already led to significant achievements. See http://sel4.systems, a formally verified kernel, done in Isabelle, a tool similar to Agda.

Maybe you are looking for a language with a type system that will force you to write your program in a safe way regarding sockets and the like. This can also be achieved with static analysis, another way of automatically proving properties about programs. I do not know of any for distributed programs but it is probably possible. I would say this is still a research topic as such programs are still so hard to get right.


Sounds like you are interested in applications of temporal logic for formal verification in software development, which is an area of active research. Prof. David Harel works in that field:

http://www.wisdom.weizmann.ac.il/~harel/papers.html

Do an in-page search for terms like temporal, behavioral, scenario, LSC, live sequence. By the way, I'm no expert but am interested in similar applications.


I'm actually about to integrate some aspects of behavioral programming in a concurrency library I'm maintaining. I also vaguely recall taking a class with Prof. Harel many years ago...


It's not exactly what you're asking for but Ur/Web puts an expressive type system to some slightly unusual ends: http://www.impredicative.com/ur/.


Whoa, this is pretty horrifying: app code, queries and markup all mixed together, like type-checked PHP. I know this is just a demo but "separation of concerns" does not appear to have been a concern.

    fun list () =
            rows <- queryX (SELECT * FROM tab AS T)
                           (fn (fs : {T : $([Id = int] ++ map fst M.cols)}) => <xml>
                             <tr>
                               <td>{[fs.T.Id]}</td>
                               {@mapX2 [fst] [colMeta] [tr]
                                 (fn [nm :: Name] [t ::_] [rest ::_] [[nm] ~ rest] v col => <xml>
                                   <td>{col.Show v}</td>
                                 </xml>)
                                 M.fl (fs.T -- #Id) M.cols}
                               <td>
                                 <a link={upd fs.T.Id}>[Update]</a>
                                 <a link={confirm fs.T.Id}>[Delete]</a>
                               </td>
                             </tr>
                           </xml>);


Also keep in mind that it is ultra-fast (http://www.techempower.com/benchmarks/#section=data-r9&hw=pe...).


Oh, nice! Indeed I'm not sure that's what I want, but this seems like the first statically-typed FP language I like. I especially like how you can program the type level with constructs similar to the program level.


Here Idris uses its effect system to verify correct usage of a protocol between a client and a server: http://www.youtube.com/watch?v=vkIlW797JN8&t=53m35s


> Languages that use types for elaborate proofs always seem to me like they only prove the most uninteresting properties of programs, namely the parts that deal with data transformations.

You can prove almost anything in Agda, even the most complicated mathematical theorems.

>What I'm most interested in is proofs that, say, in a concurrent environment, a function that closes a socket will never be called as long as there are pending tasks to write to the socket. Or, because I write concurrent data structures, I'm interested to prove that a certain function will eventually release all locks it acquires. Are there any such languages?

You can do this in Agda; however, you need to specify the semantics of the language you use.


> You can prove almost anything in Agda, even the most complicated mathematical theorems.

And I think that's the thing that makes Agda interesting. If you limit Agda to proving things about computer programs, it doesn't sound that special. People are using Agda to prove things about, for example, topology.


> Languages that use types for elaborate proofs always seem to me like they only prove the most uninteresting properties of programs, namely the parts that deal with data transformations.

> At the end of the day, programs are written for their side effects

The reason for such a focus on data transformation is that it's very easy to do in these purely functional languages (Agda included). It's so easy, in fact, that "other" things, like side effects, event handling, etc. are represented as data transformations. This is most obvious in Haskell, since laziness decouples definition from computation; eg. making it trivial to represent event streams as infinite lists.
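As a tiny illustration of that last point, here is the events-as-infinite-lists trick in Haskell (a toy sketch; `Event` and `handle` are invented):

    data Event = Tick Int | KeyPress Char deriving Show

    -- An infinite stream of events; laziness means it is only ever
    -- produced as far as some consumer demands.
    events :: [Event]
    events = map Tick [0..]

    handle :: [Event] -> [String]
    handle = map describe
      where describe (Tick n)     = "tick " ++ show n
            describe (KeyPress c) = "key "  ++ [c]

    main :: IO ()
    main = mapM_ putStrLn (take 5 (handle events))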

> What I'm most interested in is proofs that, say, in a concurrent environment, a function that closes a socket will never be called as long as there are pending tasks to write to the socket.

The usual approach is to define a datatype of operations you might want to perform (eg. `Write String`, `Read Length`, etc.). Next you define a datatype which can combine these operations together in ways you may want (eg. something like a rose tree, if you want threads firing off threads). This forms an "embedded domain-specific language". We then write a bunch of helper functions for manipulating these datastructures (eg. a "combinator library"), then we use these to write down what we actually want as a combination of operations.

Next we write an interpreter function which opens a socket, performs all of the operations in the tree (possibly in multiple threads), waits for them to finish then closes the socket.
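A minimal Haskell sketch of that recipe (all names invented; a real library would use a free monad and actual socket calls rather than putStrLn stand-ins):

    data Op = Write String | Read Int

    -- A linear "tree" of operations; a concurrent version would branch.
    data Prog = Done | Then Op Prog

    -- The interpreter owns the socket: it is opened before and closed
    -- after all operations, so no Prog can close it mid-write.
    run :: Prog -> IO ()
    run prog = do
      putStrLn "open socket"
      go prog
      putStrLn "close socket"
      where
        go Done           = return ()
        go (Then op rest) = perform op >> go rest
        perform (Write s) = putStrLn ("write: " ++ s)
        perform (Read n)  = putStrLn ("read " ++ show n ++ " bytes")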

> Or, because I write concurrent data structures, I'm interested to prove that a certain function will eventually release all locks it acquires. Are there any such languages?

You can do the same thing as with the socket example, except you can strengthen the type of your program (operation-combining) datastructure to enforce that locks are released. As a simplified example, we can force a serial computation to release locks by only allowing ACQUIRE and RELEASE to be inserted together:

    data Op = Foo | Bar | Acquire ID | Release ID

    data SafeOp = SFoo | SBar

    data Program : Type where
      NoOp : Program  -- Empty Program
      WithLock : ID -> Program -> Program  -- Add lock ops to a Program
      PrefixOp : SafeOp -> Program -> Program  -- Add a non-lock Op to a Program
      Interleave : Program -> Program -> Program  -- Combine two Programs

    -- Part of the interpreter, not exposed to the world
    progToOps : Program -> [Op]
    progToOps NoOp = []
    progToOps (WithLock id p) = [Acquire id] ++ progToOps p ++ [Release id]
    progToOps (PrefixOp SFoo p) = [Foo] ++ progToOps p
    progToOps (PrefixOp SBar p) = [Bar] ++ progToOps p
    progToOps (Interleave p1 p2) = interleave (progToOps p1) (progToOps p2)

    interleave : [Op] -> [Op] -> [Op]
    interleave [] ys = ys
    interleave (x:xs) ys = [x] ++ interleave ys xs
Notice that progToOps can never output a list containing Acquire without also containing a Release with the same ID. Also notice that we can define arbitrary combinations of the other ops:

    [] is NoOp
    [Foo, Bar] is PrefixOp SFoo (PrefixOp SBar NoOp)
    [Foo, Acquire A, Bar, Acquire B, Release A, Foo, Release B] is Interleave (PrefixOp SFoo (PrefixOp SBar NoOp)) (Interleave (WithLock A NoOp) (WithLock B (PrefixOp SFoo NoOp)))
Of course these datastructures would be built up by helper functions instead of by hand, would be tree-like for concurrency, would probably provide guarantees per sub-tree, and would allow arbitrary functions (of some suitable type) in place of Foo, Bar, etc.


But in your example the program doesn't represent the code itself but a program written in some new language. I said that current type systems are good for writing compilers, but not so good at describing their own behavior other than data transformations.


It's not reasonable to embed every possible language construct, semantics and logic into a theorem prover. So the trick is to make the host generic enough to be able to embed your needs as a domain-specific language: your criticism becomes a library problem. This implies that you only need to trust the host logic/implementation, and many people can work, collaborate and profit from the same core. Also, we kind of know how a generic logic can be done, but domain-specific logics are more controversial (because everyone's needs are different).

The fact that your DSL is represented as an immutable datastructure by the host is really not important. Any code you write is going to be turned into a datastructure at some point. The immutability should be read as "the context in which this result is valid has been made fully explicit", where "context" includes "things that mutate over time". You don't lose expressivity; you just can't forget to handle special cases.


I think it is a limitation of type systems that every type describing side effects -- i.e. the core of every interesting program other than a compiler/interpreter -- can only be used in the context of a compiler/interpreter.

Checker (Java 8's pluggable intersection type-system framework) is able to do that (define side-effect types), but it isn't a single general type system, i.e. you need to write a different type system for each property (of course, they can all work together). Maybe that's the only known way to do it. Searching for more general solutions, I've come up empty-handed (the problem is well known, but the solutions are all in early stages of research).

The idea of such a type system needs to be something like: each function has a set of types associated with it, and a function's type interacts with the types of the functions it calls in some ways. The most primitive example is, of course, checked exceptions, where the type of a function is a simple union of the types of its callees. But the interactions can, and should, be more interesting, and take into account the order in which the callees are called, etc.


I'm sorry, I don't think I understand your message and my answer might be off as a result. Could you link to the current research on more general solutions than Checker, so I can get a clearer picture?

> a limitation of type systems that every type describing side effects [...] can only be used in the context of a compiler/interpreter.

A type system is just a logical description of a property that a program might have. It specifies how the language constructs interact with this property [*]. The fact that compilers use a type system is independent of your ability to write your custom type system, or to use it in a different context (IDE, toolchain, ...).

[*] Are you saying that a type system shouldn't be concerned with the AST of the language?

An external type checker for Haskell: http://goto.ucsd.edu/~rjhala/liquid/haskell/blog/blog/2013/0... And there are the various type checkers for languages that are not statically typed (Python, ...).

Also, I don't understand why you distinguish "typing side effects" as something different from the rest?

> you need to write a different type system for each property

But you do have to define your type system at some point? Are you thinking of something like Mezzo, where some userland properties can be defined without going down the type-system-definition road? https://protz.github.io/mezzo/

> But the interactions can, and should be more interesting, and take into account the order in which the callees are called etc.

That's why type systems are defined with respect to the language constructs, by rules on the AST (? I'm confused because of the earlier [*] here).

To go back to Agda's significance for a custom type system: it's not enough to define a type system. You generally want to prove that your definition is valid: if your "checked exceptions" system says that a program `P` can't fail due to an exception, you want to check that you didn't forget anything by verifying the type checker with respect to the actual semantics of the language.

You can also embed your custom property into Agda's type system (isn't that what you want?).


I am not a type theory expert (or even enthusiast) by any means, but I imagine something like this being very useful: the intersection types of callees interact in interesting ways with the type of the caller (taking into account ordering). For example, the type of the caller can be: "this function locks A, then B, then unlocks B, then unlocks A". This type can arbitrarily intersect other effect types, such as: "this function writes to socket X" and also "mutates variable V".

I will be able to define a type "locks A" and "locks B", and the kind "locks" will be ordered, so that the type of a function that locks A and locks B will be "locks A and then locks B". Also, I will be able to define interactions between types so that unlocking A will undo the effect, so that a function that simply locks A will have the type "locks A", but one that locks and later unlocks will have the type "holds A". Of course, these types intersect arbitrarily.


> I think it is a limitation of type systems that every type describing side effects -- i.e. the core of every interesting program other than a compiler/interpreter -- can only be used in the context of a compiler/interpreter.

Well, to describe a side effect, you need to, uh, describe the side effect. Which requires understanding the compiler/interpreter, and a description of the things the compiler/interpreter does, expressed as machine-readable data.

It sounds like what you're after is something like a standard library where all the effects are carefully enumerated? I.e. something like Haskell, but where, rather than the IO "sin bin", every effectful function would be very explicit about its effects? A bit like how Ada (IIRC) shipped with quite high-level concurrency operations as part of the language spec (which then led users to reimplement low-level concurrency primitives on top of them, but maybe we can hope that history wouldn't repeat)?
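In Haskell today you can approximate "explicit effects instead of one IO bin" with capability-style type classes; a sketch with invented class names (real libraries such as extensible-effects or mtl flesh this out properly):

    -- Each class names one capability; a function's constraints then
    -- enumerate exactly the effects it may perform.
    class Monad m => CanLog m   where logMsg    :: String -> m ()
    class Monad m => CanWrite m where writeSock :: String -> m ()

    -- The type says: logs and writes to the socket, and nothing else.
    sendGreeting :: (CanLog m, CanWrite m) => m ()
    sendGreeting = do
      logMsg "sending"
      writeSock "hello"

    -- One possible interpreter: plain IO. A test interpreter could
    -- instead collect the messages purely.
    instance CanLog IO   where logMsg      = putStrLn
    instance CanWrite IO where writeSock s = putStrLn ("-> " ++ s)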

Some libraries are taking us in that direction, but it's slow going. E.g. doobie seems to be gaining some momentum, as a principled way of expressing database access. Maybe in 10 years' time (if anyone is still using scala...) this will be "the" standard way of writing any program that uses a database, and the sort of thing that deserves to be in the standard library (whatever that means - IMO with modern dependency management tools a language standard library should only contain those primitives that need direct compiler support, anything else can be a regular library. But that's a whole 'nother argument). But right now it would be very premature to say that the algebra doobie uses is the correct one. Often the appropriate interpreter for a particular problem depends on that particular problem; just as it's good to construct your datatypes bottom-up, writing a program can feel like writing a succession of interpreters that get closer to your domain and capture more of the laws - basic Haskell gives you "thing that does any kind of I/O", then you have a library for "thing that sends complete messages to sockets", a higher-level library on top of that for "thing that does database operations", and so on.

And honestly standardization isn't that important, as long as the language makes it easy to let your code handle it generically. Doobie itself allows a couple of different implementations for your lower-level effect-capturing monad (scalaz Task, IO), or allows you to provide a simple adapter. So if two libraries use the equivalent of a different checker, a different "type system", that's not really any worse than e.g. two C++ libraries using two different String classes - that is to say, it's annoying, but not impossible to work with. User-level code can do the same thing - I've got a generic "report" process that can use a doobie database call, an async call to a web service, or something else. As and when the community reaches a consensus on the right way to express a particular kind of effect, I'd expect more higher-level libraries to standardize on that. But in the meantime, as long as types are first-class values and you can manipulate them as easily as any other values (which these kinds of languages get you), multiple "type systems" are no more of a problem to work with than multiple types.


I'm not talking about standardization but about a different kind of type system, where the intersection types of callees interact in interesting ways with the type of the caller[1] (taking into account ordering). From what I understand this kind of thing is being researched, but we don't have anything close to being practical just yet.

[1]: for example, the type of the caller can be: "this function locks A, then B, then unlocks B, then unlocks A". This type can arbitrarily intersect other effect types, such as: "this function writes to socket X" and also "mutates variable V"


> I'm not talking about standardization but about a different kind of type system, where the intersection types of callees interact in interesting ways with the type of the caller[1] (taking into account ordering). From what I understand this kind of thing is being researched, but we don't have anything close to being practical just yet.

Isn't chriswarbo's example that? Since your Ops are generic they can be other effect types. The interleaving is the tricky part, but e.g. Haskell's extensible-effects is practical enough that it's used in real programs.


> But in your example the program doesn't represent the code itself but a program written in some new language.

That's the key idea ;)

If I write some Java like `DB.connect(credentials).select("users").where("name", "Kevin")`, am I using "some new language"? As far as Alan Kay is concerned, yes; that was one of his main inspirations for OOP:

> My math background made me realize that each object could have several algebras associated with it, and there could be families of these, and that these would be very very useful.

From http://www.purl.org/stefan_ram/pub/doc_kay_oop_en

The only difference between that Java example and my earlier Haskell/Agda example is that the functional version operates in two phases: calculating which operations to perform (building a `Program` containing `Ops`) is separate to performing those operations (applying an interpreter function to a `Program`).

However, keep in mind that we're not doing imperative programming, so there is no inherent notion of time: we get the same result no matter which order we evaluate stuff in (that's the Church-Rosser Theorem). Hence these two "phases" are logically distinct, but not necessarily temporally distinct. In practice, we tend to define our `Program` lazily, so it gets constructed on-demand by the interpreter.

This is a bit like having a `main` loop containing a `switch`, for example (forgive mistakes; I've never used function pointers in C):

    void main(op* instruction, int* length, void* callback) {
        socket* s = set_up_socket();
        int close = 0;
        char* data;
        while (!close) {
            switch(*instruction) {
                case READ:
                    // Read logic
                    data = read_from_socket(s, *length);
                    *callback(instruction, data);  // Update instruction based on data
                    break;
                case WRITE:
                    // Write logic
                    data = *callback(instruction);  // Update instruction and return data
                    write_to_socket(*length, data);
                    break;
                case CLOSE:
                    // Close logic (ignores any other instructions)
                    close = 1;
                    break;
            }
        }
        close_socket(s);
    }
This main function is basically the same as our interpreter; it's keeping the dangerous socket-related stuff in one place, providing some safety that the socket will be closed after use, etc. The values of `data` and `instruction` are being computed by arbitrary C code living at `callback`, just like our `Program` can be computed by arbitrary Haskell/Agda/whatever code. In this case the interleaving of choosing and executing instructions is explicit, but a lazily-evaluated `Program` will behave similarly in practice.

Does this mean that those callbacks are in "some different language" to C? No; everything is regular C except that we just-so-happen to be labelling some ints as `READ`, `WRITE` and `CLOSE`. Likewise, in Haskell/Agda/etc. we have the full breadth of the whole language available to us; we just-so-happen to be returning values of type `Op` and `Program`. All of our logic, concurrency, etc. is done in the "host" language; we only define `Op`s for the things we want to restrict, like `Read` and `Write`; there's absolutely no point defining operations for boolean logic, arithmetic, string manipulation, arrays/lists, etc. because that's available already. Of course, once we've finished defining our socket library, that too will be available to everyone (if we release it); just because it might use some crazy `Op`, `Program` and `runSocket` things internally doesn't mean anyone has to care about that; it's just an implementation detail.

Notice that this is very similar to an OOP dynamic dispatch system, where `READ`, `WRITE` and `CLOSE` are method names and the callbacks just-so-happen to be kept together in a struct which we call an "object". Just like Alan Kay said.


Thanks for the very straightforward explanation of how strongly typed functional programs can have effects in the real world.

> (forgive mistakes; I've never used function pointers in C)

You mostly got it right; the function declaration would look like this (I moved the * for pointers onto the parameter name for stylistic reasons related to how C declares types):

    void main(op *instruction, int *length, void (*callback)(...))
Also, your function pointer calls should omit the * in front (it would be applied to the return value of the callback, which is void in this case, so you'd get an error during compilation).


You may be interested in Nim (http://nim-lang.org/), a practical programming language with an effect system.


You'd be surprised how many different things can be represented as "compilers" (i.e., as sequences of transforms of some languages). Starting from the relatively obvious things like computer algebra systems, CADs, CAE, and going into unexpected areas like even CRUD and computer games.

tl;dr: side effects are overrated.


Could you expand upon this?


Sure. Take a look at all the mathematical theories and models out there. They're all languages, and solving problems within a framework of any given mathematical theory is pretty similar to a process of compilation - a source language should be transformed into another language in which the solution can be "executed".

Let's take CAD as an example. We've got a source language: a constructive solid geometry, for simplicity. Pretty much everything a CAD does boils down to various transforms of this language - into an optimised octree for rendering purposes, into G-code slices for production, into, say, SVG for blueprints, etc.
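A toy sketch of that in Haskell (all names invented): the constructive-solid-geometry "source language" as a datatype, plus one of the many possible transforms out of it:

    -- The "source language": constructive solid geometry.
    data CSG
      = Sphere Double                          -- radius, centred at origin
      | Cube Double                            -- side length, centred at origin
      | Union CSG CSG
      | Intersect CSG CSG
      | Translate (Double, Double, Double) CSG

    -- One "compiler pass" among many (octree, G-code and SVG would be
    -- others): a crude upper bound on distance from the origin.
    boundRadius :: CSG -> Double
    boundRadius (Sphere r)            = r
    boundRadius (Cube s)              = s * sqrt 3 / 2
    boundRadius (Union a b)           = max (boundRadius a) (boundRadius b)
    boundRadius (Intersect a b)       = min (boundRadius a) (boundRadius b)
    boundRadius (Translate (x,y,z) c) = boundRadius c + sqrt (x*x + y*y + z*z)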

CRUD stuff boils down to transforms between different semantic domains - a relational structure at one end, and workflow graphs and document trees at the other.

It's really hard to come up with any practically important problem domain which would not benefit from being represented in terms of languages and transforms.


I really like these "learn you a" books.

I have no idea why, but I just like the freaking pictures. They keep me going. It's like a children's book, but it also teaches me a subject I like: programming languages.


On top of the pictures, the examples are a lot more fun than foo and bar. Here's (part of) how Learn You a Haskell introduces map:

    ghci> map (++ "!") ["BIFF", "BANG", "POW"]
    ["BIFF!","BANG!","POW!"]


The official Python tutorial is also a good example of this. It's all Monty Python references.


I hate foo/bar for some reason. I think it's because it is supposed to be "this is just whatever", and yet they force me to think about Kung Fu, the Foo Fighters and crowbars.


You know it's an old military acronym, right? F.U.B.A.R. stands for eFfed Up Beyond All Recognition. (And SNAFU is Situation Normal: All eFfed Up.) ;-)


I just think of a certain audio player...


I'd like to show you the immense beauty of the Agda standard library: http://agda.github.io/agda-stdlib/html/README.html (all clickable and…you'd better have proper unicode fonts).


Even something as mundane as Data.Bool is beautiful, defining False in terms of bottom.

I love to look at unicode, but is it a pain to type it in practice, even given emacs?


Note that `false` the value of type `Bool` is not defined in terms of bottom. It is a constant in its own right just like `nil` and `cons 1 nil` are.

What happens in Data.Bool is that we have the function from values to types `T` such that `T true` and `T false` are types equal to top and bottom respectively. This value-to-type encoding is called a "universe" and allows us to talk about propositions which are based on boolean function results like

    theorem1 : T (1 - 1 == 0)
which is somewhat interestingly different from

    theorem2 : 1 - 1 ≡ 0
in that the first will reflect upon the definitions of the (recursive) functions (-) and (==) while the second reflects only upon the definition of (-).


Agda-mode will translate from symbol names to unicode for you. For example, \to gets translated to → and \== goes to ≡. It is very easy to work with after a short learning curve. There is more explanation available on the Agda Wiki. [0]

[0] http://wiki.portal.chalmers.se/agda/pmwiki.php?n=Docs.Unicod...


I don’t use Agda, but for most mathy Unicode input, I use C-\ (toggle-input-method) and choose TeX mode. Then you can type “\forall\alpha. \alpha \to \alpha” and get back “∀α. α → α”, which is pretty good. There is also C-x 8 RET if you want to type a particular Unicode character by name or code point number.


First time I read about 'mixfix' operators; one less thing I'll have to invent, I guess. https://www.google.fr/search?q=mixfix+notation


It lost me at this point:

Type the following into an emacs buffer, and then type C-c C-l (use \_1 to type the subscript symbol ₁):

  proof₁ : suc (suc (suc (suc zero))) even
  proof₁ = ?
On my newly created emacs setup from the earlier instructions, it's not clear what "type into an emacs buffer" means. Do I create a new file with C-x C-f? Create a new buffer with C-x b? I tried both, and promptly lost agda-mode.

I then tried restarting emacs and opening a new .agda file so I get agda mode. Loaded the earlier module and then tried loading this, and it didn't seem to know about suc in the context of this buffer/file. So now I'm lost.


That should read, "Type the following into the Emacs buffer." You should have something like:

    module LearnYouAn where
    
      data ℕ : Set where
        zero : ℕ
        succ : ℕ → ℕ
    
      _+_ : ℕ → ℕ → ℕ
      zero + m = m
      (succ n) + m = succ (n + m)
    
      data _even : ℕ → Set where
        ZERO : zero even
        STEP : ∀ n → n even → succ (succ n) even
    
      proof₁ : succ (succ (succ (succ (zero)))) even
      proof₁ = ?


Thanks! Can't believe I didn't think of that :/

What's the distinction between C-c C-space and C-c C-r? It looks like the former computes proof obligations for ?s, but what does the latter do? And why couldn't we use the latter in the first instance, to deduce the outermost STEP ? ?

Edit: never mind, that did work.


> What's the distinction between C-c C-space and C-c C-r?

Beats the hell out of me. Here's the docstring for agda2-give (bound to C-c C-SPC):

    Give to the goal at point the expression in it
Here's the one for agda2-refine (C-c C-r):

    Refine the goal at point.
    If the goal contains an expression e, and some "suffix" of the
    type of e unifies with the goal type, then the goal is replaced
    by e applied to a suitable number of new goals.
    
    If the goal is empty, the goal type is a data type, and there is
    exactly one constructor which unifies with this type, then the
    goal is replaced by the constructor applied to a suitable number
    of new goals.


If I wrote a natural number calculator in Agda, would all my numbers be represented as lists of successions from zero, or can the compiler convert them to two's-complement integers as we know and love them?

In case the compiler can do that conversion: is it programmed to do that for some subset of numeric types, or can it infer an optimal binary representation of a value somehow?

On the other hand, if they really were represented as lists then I presume this language is intended as purely academic work and has no application in a production environment. Am I wrong?


> If I wrote a natural number calculator in Agda, would all my numbers be represented as lists of successions from zero, or can the compiler convert them to two's-complement integers as we know and love them?

I don't think so, but Natural numbers can't be represented as two's-complement "integers"; neither can Integers.

There are many ways to encode such numbers if you really want; the easiest are probably "Fin (2^32)" for a type with 2^32 members (easy to convert to/from unary Naturals), or "Vect 32 Bool" for a list of 32 Booleans (easy to do bitwise stuff). You'd need to decide how to truncate the Naturals though; do you take min(x, 2^32 - 1)? Do you take x % 2 ^ 32?
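For instance, the "Vect 32 Bool" route with x % 2^32 truncation looks roughly like this in plain Haskell, using ordinary lists in place of length-indexed vectors (`toBits32` and `fromBits32` are invented names):

    -- Little-endian bit decomposition, truncating to 32 bits (x mod 2^32).
    -- Assumes n >= 0, i.e. a Natural.
    toBits32 :: Integer -> [Bool]
    toBits32 n = [ odd (n `div` 2^i) | i <- [0..31] ]

    fromBits32 :: [Bool] -> Integer
    fromBits32 bs = sum [ 2^i | (i, True) <- zip [0..31] bs ]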

Also, note that "S (S (S (S Z)))" isn't really a list; to turn it into a list we'd have to associate some trivial data with each constructor, eg. [NULL, NULL, NULL, NULL].


I always thought the statement

    {-# BUILTIN NATURAL ℕ #-}
binds the inductive definition of ℕ to an efficient implementation. However, googling now I can find no confirmation of this. Does anyone know more?


It's a proof assistant. Not really something you use for number crunching, industry or not.


But a proof assistant for numerical programming would be extremely useful.


    If we can construct a value of a certain type, we have simultaneously constructed a proof that the theorem encoded by that type holds.

    data _even : ℕ → Set where
I do not understand how to interpret this passage. _even, given a number, returns a type? In this case, zero even is a type, but we have not created any value with that type. What am I missing?


That's exactly correct.

In a Dependently Typed (DT) language like Agda the language of values and the language of types are one and the same. Thus, a function from types to types is exactly the same sort of thing as a function from values to values. In this case we have a third sort of totally-natural-day-old-kind-of-thing: a function from values of ℕ to types (in Set). It's "just" a function from values to types.

Now, we annotate values with their types. Thus we might have a new top-level definition

    something : zero even
    something = _
where we've used our postfix `even` function to construct a type from the value `zero`. Is this type inhabited? That's a question of proof and programming. Regardless of its inhabitation, though, we can clearly see that the notion of `zero even` simply being a type is sound---that's the nature of function application when the target domain is that of Set!


An Agda program isn't like a C program; a C program is just a bunch of strings that tells the processor where to go and what to do. An Agda program is, instead, a bunch of strings that one-by-one teaches the typechecker what is true about the world.

In the beginning, there was Set, which is the set of all valid types. It's empty (more or less). Then comes along

    data ℕ : Set
This says that the type ℕ, which will be our natural numbers, belongs in Set. Now Set has one valid type, which is ℕ. But ℕ, which is also a type, is empty. And if there's one thing we know about natural numbers, it's that there are lots of them.

So we must tell it what its members are. We COULD do it like this:

    zero : ℕ
    one : ℕ
    two : ℕ
    three : ℕ
But how tedious would that be? Instead, we use induction:

    data ℕ : Set where
        zero : ℕ
        succ : ℕ → ℕ
So zero is a member of ℕ, like before. But the successor of any member in ℕ is also in ℕ. So succ zero, and succ succ zero, and succ succ succ zero.

People see the arrow and they think functions. That's not how I think about it. Functions are great, but when I hear function I think C functions, which are tiny little instructions for tiny little machines and don't make any sense in a dependently-typed world. So instead, when I see the arrow, I think of it as a way of telling the type checker this fact: wherever you see the string "succ n", assign it the type ℕ but ONLY IF "n" is also of type ℕ. Think of the arrow as the lambda abstractor, and the space as the lambda applicator. Maybe that helps a little; maybe it hurts a lot.

We plunge forward!

    _+_ : ℕ → ℕ → ℕ
    zero + n = n
    (succ n) + m = succ (n + m)
More substitution rules. Whenever you see zero + n, the typechecker is allowed to replace it with n. Whenever you see (succ n) + m, it's allowed to replace it with succ (n + m). Allowed, but not required: the typechecker should only use these rules if it needs help typechecking a string it doesn't recognize.

We would now like to describe even numbers in Agda. Could we just define

    data even : Set?
And thus give Set its second element? No! Because evenness is a proposition! It _depends_ on our other type ℕ, which is where we get our fancy dependently-typed programming from. We really want to say this:

    data even 0 : Set
    data even 2 : Set
    data even 4 : Set
But that would be tedious. So instead we say

    data even : ℕ → Set
If the arrow means substitution, like before, we can think about it like this: wherever we see the string "even n", assign it the type Set (a.k.a. let it be a valid type) but ONLY IF "n" has type ℕ. Is that true for any n? Nope. If it were, we would write

    data even : ℕ → Set where
        even_everything_is_goddamn_even : (n : ℕ) → even n
Instead, we first assume that "even n" is true for no n at all. That's right: a function with an empty domain. We fill in the domain gradually.

    data even : ℕ → Set where
        even_zero : even zero
        even_succ : (n : ℕ) → even n → even (succ (succ n))
First, even zero means that "even zero" is a member of Set. zero is a natural number, so this typechecks. Second, suppose you have n : ℕ. And suppose even n typechecks. Then even (succ (succ n)) also typechecks.

It's simply another form of induction, although slightly trickier because we're trying to fill out a proposition.

Anyway. Hope that helps more than it confuses. I think Learn You An Agda makes it more confusing than it really is by not explaining Set and by using that weird postfix notation. Set is just Haskell's asterisk kind "*" if you know Haskell; otherwise it's pretty hard to grok for a while.

You should also check out Benjamin Pierce's textbook on Coq programming, which expands on a lot of these details. (Coq and Agda and Idris are all pretty much built on the same ideas.)

http://www.cis.upenn.edu/~bcpierce/sf/current/index.html


In his example, `ZERO` is a value of type `zero even`. `ZERO` is the "proof" that zero is even. Of course it's not much of a proof; rather, it's true by definition. Similarly `STEP zero ZERO` is of type `suc (suc zero) even` and thus a "proof" that two is even.


Recently learned this language in a functional programming course. Very cool language. The Curry-Howard isomorphism shows that programs correspond to mathematical proofs, and you can really feel it here. I could see this language being very useful in math-y applications such as cryptography and verification.


Is there any reason to use Agda over Coq or vice versa?



How does Agda compare to the likes of Clojure, Haskell, ML, Scheme etc? What are the best reasons to learn it?


The article covers this here http://williamdemeo.github.io/2014/02/27/learn-you-an-agda/#...

Some excerpts:

> Agda is a programming language that uses dependent types ... you can actually include values inside a type. For example, the List type constructor can be parameterized by both the type of its contents and the length of the list in question ... . This allows the compiler to check for you to make sure there are no cases where you attempt to call head on a potentially empty list, for example.

> If I can come up with a function of type Foo -> Bar (and Agda says that it’s type correct) that means that I’ve written not only a program, but also a proof by construction that, assuming some premise Foo, the judgment Bar holds.

> Proofs work in concurrent scenarios. You can’t reliably unit test against race conditions, starvation or deadlock. All of these things can be eliminated via formal methods.

> Thanks to Curry-Howard, Agda can also be used as a proof language, as opposed to a programming language. You can construct a proof not just about your program, but anything you like.
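The length-indexed list mentioned in the first excerpt can be sketched in Haskell with GADTs (Agda's version is more direct; `Vec` and `safeHead` are invented names here):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- The length lives in the type, so emptiness is statically visible.
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- head only accepts provably non-empty vectors: calling it on a
    -- possibly-empty one is a compile-time error, not a runtime crash.
    safeHead :: Vec ('S n) a -> a
    safeHead (Cons x _) = x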


Is there a comparison to Idris? It sounds like they're very similar, and more of the people I've followed seem to be interested in Idris than Agda.


I always wondered about dependent types. Even though it was a little bit too technical for me, at least I have some idea about what it really is.

By the way, Z notation rocks.


Ok, that's neat. Is there a proof checker and an editor/IDE other than Emacs' Agda mode?

Because I'm probably going to stick with Coq for its tooling.


I posted this on lobste.rs and somebody stole my link....

Hacker News always gets so many more comments.


Not guilty! I picked this one up on Twitter this afternoon, but it's possible the lineage traces back to your lobste.rs post. (I have no lobste.rs account, though not for lack of interest -- while sparse, the comments there seem to be of very high quality.)


When are they going to open registration back up on there?


My thanks OVERFLOWS. For those who are 'older but not quite Learn U Agda vs 'wiser strategies' HELPFUL.

Think eric raymond and too bad that perl is just another dead language -- your wisdom though as recompense. When you are older, it becomes EASIER for crunchfit and yes, it sometimes becomes harder to exercise to point of vomit - 'ad nauseam' in Latin, but the pain is much easier. Don't you do three hundred (300) pushups a day?

So, the routine is simple. Get stuck running Haskell (another bump on the road to the true language of Coq) - go ahead and flame - do the pushups. Then laugh cause it relaxes and then refreshed back to AGDA.

PS. Engineering school was arduous and graduate school. The personal private books/research were just hobbies and no I will not mention my name. Even the so-called Yourdon 'death marches' are just a bump on the road.

It's worse than some females - over the period of a 'big city man.' - go ahead flame - Just take code from arxiv, run it in Haskell and Agda and there are only three explanations 1.)the programmer is stupid - yup that's me 2.)scientific fraud - always possible 3.)the 'compiler' or the strange AMD CPU is fickle in a feminine way? 4.)the language paradigm is 'slightly broken.' 5.) all of the above

Fair Warning and Serpents be Here. Haskell, Agda and even Coq are DEFINED as EXPERIMENTAL RESEARCH Languages. Of course, you can always go Python (which version?) and Javascript, which is the bubble gum and string that holds together the Internet... along with Bash Shellshock.

Is this why they call it a bit of 'hacking'? PPS. Learned to ride a bicycle in a big city and survived with most of my fingers intact.


Ah yes, dependent types. They tried to teach this at my university because some researchers there were working on it. Too bad they didn't bother to teach something more useful, like what malloc and free do.

But let's see. Perhaps the field has become less wonkish and more relevant to real programmers in the last ten years.

> Most language tutorials start with the typical “Hello, World” example, but this is not really appropriate for a first example in Agda. Unlike other languages, which rely on a whole lot of primitive operations and special cases for basic constructs, Agda is very minimal - most of the “language constructs” are actually defined in libraries.

That sounds reasonable ...

> Agda doesn’t even have numbers built in, so the first thing we’re going to do is define them

That's much less reasonable ...

> This is a very elegant way to define infinitely large sets. This way of defining natural numbers was developed by a mathematician named Giuseppe Peano, and so they’re called the Peano numbers .... Now we can express the number one as suc zero, and the number two as suc (suc zero), and the number three as suc (suc (suc zero)), and so on.

Oh dear. This is a new use of the word elegant I have not encountered before.

I was expecting the next stage of the tutorial to describe how to make numbers in Agda resemble number systems actually used by humans, or even machines, but no such luck. They stick with a Lisp-style Peano representation throughout the entire thing. Having defined an entirely unworkable method of representing integers because their standard library apparently doesn't do so (?!), they then go on to prove things like 1 + 1 == 2.

Yeah, I think I'll skip. Perhaps in another few decades someone will have bothered to make a dependently typed language that doesn't use untypeable characters as part of its syntax, and has an integer type built in, and we'll all get awesome statically checkable code for free. Until then I'm gonna stick with FindBugs.


Wow, you are just radiating ignorance; I don't even know why I'm wasting my breath. Nonetheless,

> Oh dear. This is a new use of the word elegant I have not encountered before.

The Peano numbers are elegant because they correspond directly with induction. Agda is first and foremost a proof assistant, so it's natural that we lean towards things that help us write our proofs. This is not "Lisp-style" at all; it's a mathematically consistent way of defining natural numbers. The idea extends further with ornamentation to model lists and other data structures.

> Having defined an entirely unworkable method of representing integers because their standard library apparently doesn't do so (?!) they then go on to prove things like 1 + 1 == 2.

I don't know what's "unworkable" about Peano arithmetic. Yes, it's not the most efficient representation, but you're working in a proof engine, so that is secondary to our concerns. Furthermore, Agda has {-# BUILTIN #-} pragmas that let us switch out this data type for something more optimal. We don't use "the number system actually used by humans" (whatever that is) or machines, at least traditionally, because they just make the proofs harder.

> Perhaps in another few decades someone will have bothered to make a dependently typed language that doesn't use untypeable characters as part of its syntax, and has an integer type built in

Yea, some of us did "bother" - you're looking for Idris. But I'd really rather you didn't join us.


I know Agda isn't really a programming language, but then why is this book trying to claim it is? It says at the start:

_Agda is a programming language_

That's the standard it sets right from the first chapter. It then goes on to say that this programming language doesn't have numbers.

If the book had said, "Agda is a proof assistant for CS and mathematics researchers to research inductive logic" then I'd have been much less harsh on it, but whilst this book is claiming it's a tool for programmers I will judge it by that standard. And by that standard, representing numbers and even forcing you to type them in as suc (suc (suc zero)) is not elegant.


As I see it, this is not so much a book at this stage as an introductory tutorial written by someone as they were learning Agda (about four years ago I think), as a contribution to the community to help other beginners. As such, one of the (unstated) assumptions is that the reader is already motivated to learn about programming with dependent types.

The early examples you see with inductive structures such as Nat are about learning the foundations, and you can't go on to do anything especially interesting without a solid understanding of the foundations. Its structure often also turns out to have a convenient correspondence with more realistic data structures, such as lists (their length) and trees (their depth).

I do sometimes wonder if we should start using something more obviously useful as an introductory example though. There's plenty of possibilities, and programmers aren't generally going to be using Nats in practice (I rarely do).

I don't use Agda myself (I use Idris, which is similar but is more directly aimed at general purpose programming) but I do know that it has primitive types...

Anyway, there's all kinds of interesting work going on at the minute which goes way beyond these introductory examples. For example, we're working on a way of specifying and verifying implementations of communication protocols (https://github.com/edwinb/Protocols), which we'd like to apply to cryptographic protocols when we've made some more progress. You can hear me waffling on about that, as well as about how we deal with stateful systems in general, here: https://www.youtube.com/watch?v=rXXn4UunOkE&feature=youtu.be

I post this mainly because I suspect I'm one of the wonks who was researching dependently typed programming at your university ten years ago. At the time, there was lots of foundational work to do, and there is still lots to do before it's industrial strength, but we've made a lot of progress, and I think the goal is worth striving for.


> I know Agda isn't really a programming language, but then why is this book trying to claim it is? It says at the start:

> _Agda is a programming language_

Well, that's because programming languages and formal systems (proof languages) are the same thing ( http://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspond... ).

Agda is well-suited to computing with values that carry around various formally-checked properties. That's useful to some people for some tasks. If it's not suited to whatever you're doing, then don't use it; I presume your projects don't all use Java, Bash, Prolog, Python, COBOL, Pure, Unlambda and MATLAB at the same time; well, just add Agda to that list.


Agda and most other dependently-typed languages (e.g. Coq) weren't really designed for "real programmers": they're research-oriented systems for real computer scientists to explore cutting edge ideas in logic and PLT.

If you're interested in a "practical" dependently-typed language, check out Idris:

http://www.idris-lang.org/


I'm not sure if they will meet your standards, but Idris and ATS strive to be practical dependently typed languages:

http://www.ats-lang.org/Examples.html http://www.idris-lang.org/



