I've been making my way through this book for the past few weeks; just started Chapter 20. I tried reading Harper's Practical Foundations for Programming Languages first, but it was too abstract for me, so I switched to TaPL.
What I like most about Pierce's book is that he introduces each concept with a formal, abstract definition, complete with proofs of correctness, but also follows that up with a concrete implementation in OCaml. The latter is very easy to follow if you've had some experience with the ML family of languages. I sometimes find myself skipping ahead to the OCaml version when I get lost in the math syntax, which for me is less familiar. I'm planning to come back to Harper's book later, but Pierce's book is the perfect fit for where I am now.
My only criticism is that some parts are very dated given it hasn't been updated in almost 20 years. In particular, the version of Java he discusses throughout the book (pre-generics, pre-type-inference) bears little resemblance to the modern one. And since 2002 we've seen affine types (e.g. Rust) start to have mainstream influence, among other things.
In case it's helpful, I'm compiling a list of resources as I learn type systems, logic, category theory, etc.:
There are somewhat more involved formal treatments like Welterweight Java [1], which adds some more features, but of course, nowhere near the full set of Java features.
Some of the citations in that paper are also interesting.
I don't think there is a core calculus that specifically incorporates generics as in Java, but I could be wrong. In any case, much of Java's generics flow from theories of parametric polymorphism [2], specifically I believe bounded parametric polymorphism [3].
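For what it's worth, a rough OCaml sketch (my own illustration, not from either cited paper) of the distinction: parametric polymorphism gives one definition that works at every type, while a Java-style bound like `<T extends Comparable<T>>` additionally demands the type support certain operations — something you'd pass explicitly in OCaml.

```ocaml
(* Parametric polymorphism: one definition works at every element type;
   OCaml infers the type 'a list -> int. *)
let rec length = function
  | [] -> 0
  | _ :: tl -> 1 + length tl

(* A rough analogue of Java's bounded polymorphism,
   <T extends Comparable<T>> T max(T a, T b):
   the "bound" (the ability to compare) is passed in as an argument. *)
let max_by cmp a b = if cmp a b >= 0 then a else b
```

So `length [1; 2; 3]` and `length ["a"]` share one implementation, and `max_by Stdlib.compare 2 5` supplies explicitly the comparison evidence that Java's bound supplies implicitly.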
re: "I don't think there is a core calculus that specifically incorporates generics as in Java": doesn't Featherweight Java [1] already include generics? (Though a simplified version.)
This is a tipping-point work making the case that type theory is like a musician reading sheet music. Sure, the Beatles couldn't, but... Anyone developing a new programming language should understand languages at the level of Lean, even if the common uses of their constructs aren't theorem proving. The analogies are mind-blowing. Monadic parsing is the same thing as meta-programming tactics? I'm still wrapping my head around that one.
I’ve heard a lot of recommendations for this book from programming language designers - is it (and the subjects it covers) useful for everyday programmers (i.e. language users), or just for language designers?
For the latter, I assume type theory would be essential if you’re trying to design a language like Haskell, but irrelevant if you’re trying to design a dynamically-typed language like JavaScript? Would it be useful for the designers of “in-between” languages, like Java or Go?
So it starts from the generic untyped lambda calculus and then shows that it's possible to reach erroneous states: so far, much like any dynamically typed language (JavaScript, Lua, Python). Bear in mind JavaScript is also quite weakly typed, but that doesn't matter much for the point I'm trying to elucidate.
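To make "erroneous states" concrete, here's a toy TaPL-style evaluator in OCaml (my own minimal version, with numbers added so there's something to get stuck on). Nothing stops you from building a term that applies a number as if it were a function; evaluation just gets stuck:

```ocaml
(* Untyped lambda calculus plus numbers: a minimal sketch. *)
type term =
  | Var of string
  | Abs of string * term        (* fun x -> body *)
  | App of term * term
  | Num of int
  | Succ of term

let is_value = function
  | Abs _ | Num _ -> true
  | _ -> false

(* Naive substitution; fine here because the examples use no shadowing. *)
let rec subst x s t =
  match t with
  | Var y -> if x = y then s else t
  | Abs (y, body) -> if x = y then t else Abs (y, subst x s body)
  | App (t1, t2) -> App (subst x s t1, subst x s t2)
  | Num _ -> t
  | Succ t1 -> Succ (subst x s t1)

exception Stuck of term

(* Big-step call-by-value evaluation. *)
let rec eval t =
  if is_value t then t
  else match t with
    | App (t1, t2) ->
      (match eval t1, eval t2 with
       | Abs (x, body), v -> eval (subst x v body)
       | v1, _ -> raise (Stuck (App (v1, t2))))   (* applied a non-function *)
    | Succ t1 ->
      (match eval t1 with
       | Num n -> Num (n + 1)
       | v -> raise (Stuck (Succ v)))             (* succ of a non-number *)
    | _ -> raise (Stuck t)                        (* free variable, etc. *)
```

`eval (App (Abs ("x", Succ (Var "x")), Num 1))` gives `Num 2`, but `eval (App (Num 1, Num 2))` raises `Stuck` — the evaluator has no idea what applying a number means.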
You introduce types to try to describe the program state in a way that lets a static analyser prove the program will never reach those erroneous states. You learn that types limit computing: the simply typed lambda calculus always halts. Note that a human reader is a type of static analyser.
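Sketched in OCaml (my own minimal version, in the spirit of the book's implementations): a simply-typed checker is exactly such a static analyser. It assigns a type to a term or rejects it, all before anything is evaluated:

```ocaml
(* Simply typed lambda calculus with natural numbers: a minimal sketch. *)
type ty = TNat | TArrow of ty * ty

type term =
  | Var of string
  | Abs of string * ty * term   (* the binder is annotated: fun (x : ty) -> body *)
  | App of term * term
  | Num of int
  | Succ of term

exception Type_error of string

(* env maps variable names to their types. *)
let rec type_of env t =
  match t with
  | Var x ->
      (try List.assoc x env
       with Not_found -> raise (Type_error ("unbound variable " ^ x)))
  | Abs (x, ty1, body) -> TArrow (ty1, type_of ((x, ty1) :: env) body)
  | App (t1, t2) ->
      (match type_of env t1 with
       | TArrow (a, b) when type_of env t2 = a -> b
       | _ -> raise (Type_error "applying a non-function, or argument mismatch"))
  | Num _ -> TNat
  | Succ t1 ->
      if type_of env t1 = TNat then TNat
      else raise (Type_error "succ expects a nat")
```

`type_of [] (App (Abs ("x", TNat, Succ (Var "x")), Num 1))` comes back `TNat`, while `type_of [] (App (Num 1, Num 2))` raises `Type_error`: the erroneous state is ruled out statically, with no evaluation at all.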
You start introducing more advanced types to try to get back that unbounded possibility: to be a real programming language, you need the halting problem back, because a real server doesn't halt.
Anyways, after some chapters you learn that you may need a lot of rather advanced type theory to truly describe some of the complex things that some crazy people did in previously dynamically typed languages. That's why python type hints and typescript sometimes (but not usually) need some really crazy stuff: people were able to and did do all kinds of crazy things.
Arguably, there are easier ways to solve a problem than the way that requires advanced type theory to describe. Dynamic languages don't make such code any easier to write, verify, or reason about; it's just that many static languages prevent it outright.
This type of question - "is this knowledge useful?" - is hard to answer. Is music theory useful for musicians? But Jimi Hendrix didn't know music theory! You can get by without reading this book; most language designers probably haven't read it either, and lots of programming languages were invented long before type theory was a real thing.
The formalisms introduced in the book are absolutely applicable to dynamically typed languages (such as JavaScript) and "less strict" statically typed languages like Java and Go. The book teaches you that there is a logic framework which you can use to analyze any programming language you can think of. It's not something you use in everyday life but it is very interesting!
This. You do not need music theory if you have a very special ability to know what sounds come from your playing. It becomes muscle memory. And you need a pretty good ear.
note: what follows is a reflection on my own limitations, not Pierce's or the book's.
I found working my way through it tough going. I'm not a natural mathematician, and found the "formalisation first" approach quite difficult to deal with. I would definitely have found it easier if each type judgement (formula) had a natural language description to go with it: intuition to complement the formality. The code helped in that regard.
That's a complaint I have about many mathematical / formal texts. I wish authors would write down, in natural language, what they 'hear' in their heads when they read an equation on the page. Some at least have a reference (e.g. "read an upside-down 'A' as 'for all'"). (I know it's more nuanced than that: someone experienced with type judgements won't interpret them literally. But even so, they could 'read out' what they see on the page.)
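To make the wish concrete, here's the kind of gloss I mean, for the book's standard application rule (T-App):

```latex
\frac{\Gamma \vdash t_1 : T_{11} \to T_{12} \qquad \Gamma \vdash t_2 : T_{11}}
     {\Gamma \vdash t_1 \; t_2 : T_{12}}
```

Read aloud: "if, under the assumptions in Γ, t1 is a function from T11 to T12, and t2 has type T11, then the application of t1 to t2 has type T12."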
Having said all that: I did find the book helpful and insightful. Just quite difficult to parse and assimilate the formal content. It certainly gave me a much better appreciation of type systems.
Unison is definitely very cool. I used it myself for a work thing and it was quite useful. I'm just surprised every time I go back to https://github.com/bcpierce00/unison/issues/375 and see the wire protocol still hasn't been fixed to work across different versions.
Is there any other book in computer science that takes a thorny academic subject and makes it so approachable that pretty much any engineer can grasp it without trouble?
The way to learn monads is to just avoid them while hacking Haskell (or another functional language) and find the boilerplate of handling things like state or errors annoying. I truly don't believe any other way is as effective.
Put another way, use each monad instance on its own, and ignore that it's a monad in the first place (i.e. use the IO monad but pretend the syntax is IO-specific, use the List monad but pretend the syntax is List-specific, etc.)
After you do enough of these, you'll start to see the common pattern, and wish there was a way to abstract over them. Surprise: there is!
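In OCaml terms (a sketch of mine, using a module type where Haskell would use a type class), the progression looks like:

```ocaml
(* Step 1: type-specific plumbing, written independently. *)
let opt_bind m f = match m with None -> None | Some x -> f x
let list_bind m f = List.concat_map f m

(* Step 2: notice both are "bind" for some type constructor,
   and name the common pattern. *)
module type MONAD = sig
  type 'a t
  val return : 'a -> 'a t
  val bind : 'a t -> ('a -> 'b t) -> 'b t
end

module Option_monad : MONAD with type 'a t = 'a option = struct
  type 'a t = 'a option
  let return x = Some x
  let bind = opt_bind
end

module List_monad : MONAD with type 'a t = 'a list = struct
  type 'a t = 'a list
  let return x = [x]
  let bind = list_bind
end

(* Step 3: write code once, against the abstraction. *)
module Pairs (M : MONAD) = struct
  let pairs xs ys =
    M.bind xs (fun x -> M.bind ys (fun y -> M.return (x, y)))
end
```

`Pairs (List_monad)` gives all combinations; `Pairs (Option_monad)` short-circuits on `None` — same code, different monad.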
This is basically how I assume monads were developed as well.
Nice to see my favorite book on HN! I highly recommend this book if you want to really learn programming language theory (especially operational semantics and type systems).
Great book! It was the textbook for my class on programming language theory in grad school. I had heard so much about the lambda calculus, but I could never understand it from Wikipedia alone. The book did a great job of introducing how things work in lambda calculus and motivating the desire for typed lambda calculus. You implement various common programming language abstractions in pure simply typed lambda calculus, and then replace those patterns with new syntax. Then you start to see a familiar looking programming language take shape.
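A concrete taste of that exercise (my own OCaml transliteration of the book's Church booleans): booleans and if-then-else as plain functions, with no built-in bool anywhere:

```ocaml
(* Church booleans: a boolean is a function that picks one of two things. *)
let tru = fun t f -> t             (* "true" selects its first argument *)
let fls = fun t f -> f             (* "false" selects its second *)

(* if-then-else is just application... *)
let test b thn els = b thn els

(* ...and "and" reads: if p then q else false. *)
let and_ p q = p q fls
```

So `test (and_ tru fls) "yes" "no"` evaluates to `"no"`. Once you've built enough of these encodings, dedicated `bool`/`if` syntax is revealed as sugar for a pattern you already had.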
This was good background for me to have to be able to understand the development of evolving programming languages built on academic formalism, like Scala.
I'd say it's the recommended book for type systems. For type theory, I'd recommend "Certified Programming with Dependent Types" by Adam Chlipala or "Programming Language Foundations in Agda" by Philip Wadler.
https://gist.github.com/dicej/d1117e5d65155d750c16234e6eff16...