I had a seminar on set theory and the foundations of mathematics. It was very fun! But, coming from a computer science background, I found the foundations via set theory unintuitive and a bit awkward to work with.
I know there are some type-system based approaches, how do they turn out in practice? How nice are they to work with "on a low level"?
Type theories, especially dependent type theories à la Martin-Löf, are quite natural to work with. They are easier to implement on a computer, and there are active communities around several such implementations (Agda, Coq, Lean, …).
Most computer verified proofs are implemented in systems based on type theory, not set theory. Most of the time you do not need sets of sets of sets… You want types whose elements are the mathematical objects you care about.
In fact, if you want to do set theory, you can do that inside type theory as well. You can define the type of iterative sets and define a membership relation on it. The resulting structure will satisfy the set-theoretic axioms. This was first done by Aczel in the 70s.
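For the curious, here is a minimal sketch of Aczel's encoding in Coq (the names V, eqV and memV are mine; membership has to be defined up to a recursively defined extensional equality, because Coq's built-in equality is finer than set-theoretic equality):

    (* Aczel's iterative sets: a set is a family of sets indexed
       by an arbitrary type. *)
    Inductive V : Type :=
      | sup : forall (A : Type), (A -> V) -> V.

    (* Extensional equality: every member of one set must equal
       some member of the other, and vice versa. *)
    Fixpoint eqV (x y : V) {struct x} : Prop :=
      match x, y with
      | sup A f, sup B g =>
          (forall a, exists b, eqV (f a) (g b)) /\
          (forall b, exists a, eqV (f a) (g b))
      end.

    (* Membership: x is a member of (sup A f) when it is
       extensionally equal to f a for some index a. *)
    Definition memV (x y : V) : Prop :=
      match y with
      | sup A f => exists a, eqV x (f a)
      end.

One can then check set-theoretic axioms such as extensionality and pairing for this structure, with eqV playing the role of equality between sets.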
>> I found the foundations via set theory unintuitive and a bit awkward to work with.
> Type theories, especially dependent type theories à la Martin-Löf, are quite natural to work with (and) are easier to implement on a computer...
Whether or not type theories are "natural" is probably a matter of taste.
But it's hard to argue that they are necessarily "easier to implement on a computer". Metamath is a general-purpose verifier, and its primary database (the Metamath Proof Explorer at http://us.metamath.org/mpeuni/mmset.html ) is based on classical logic and ZFC set theory. A Metamath verifier in Mathematica is only 74 lines, while another in Python is only 350 lines. The Metamath verifier written in Rust is longer (it's written for speed), but it manages to verify about 32,000 proofs in about 0.9 seconds. It's so easy to write one that there's a long list of verifiers (see http://us.metamath.org/other.html#verifiers ). Metamath can handle HOL, too. I don't know of any shorter verifier that can handle the full range of mathematics.

So by both code size and speed it's hard to argue that type theory is necessarily "easier" than traditional set theory on a computer. They both simply process symbols given a small set of rules.
There are many possible foundations of mathematics beyond ZFC and type theory, e.g., Quine's "New Foundations" (NF) (see http://us.metamath.org/nfeuni/mmnf.html for a Metamath encoding of that), HOL, etc. And that's not even counting logic systems: Classical logic is the most common in mathematics (by far), but intuitionistic logic is in use (and tools like Coq are built on it), and there are many other logic systems as well (paraconsistent, minimal, etc.).
That said, I believe ZFC (combined with classical logic) is by far the most commonly referenced foundation in mathematics today. Wikipedia claims: "Today, Zermelo–Fraenkel set theory, with the historically controversial axiom of choice (AC) included, is the standard form of axiomatic set theory and as such is the most common foundation of mathematics." https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_t...
That doesn't mean that other systems are "not as good" or "not mathematical" - but when people ask about mathematical foundations, ZFC is typically what people refer to first. It's probably good to at least know a little about ZFC when discussing foundations, even if you decide to do something different.
EDIT: It is a fair point that it is difficult to say what is easier to implement (the discussion below expands on this). By type theory being natural I was mostly thinking of the fact that it is basically a functional programming language with a very strict compiler.
Actually, Metamath is not tied to ZFC. As the name implies, it is a meta-theory in which you can implement many different theories. That said, most Metamath developments use ZFC. I haven’t seen anyone doing type theory in Metamath, but from what I understand it may be possible.
Also, this means that if you are counting lines, you need to include the formalisation of the logic and the axioms of ZFC inside Metamath. The kernel of Coq is similarly quite a short program.
Certainly there is a wealth of systems available. But the ones based on type theory, such as Coq, seem to be at least as popular as the set theory based ones (for instance, the first formalised proofs of the four colour theorem and of Gödel’s theorems were in Coq).
> Actually, Metamath is not tied to ZFC. As the name implies, it is a meta-theory in which you can implement many different theories. That said, most Metamath developments use ZFC. I haven’t seen anyone doing type theory in Metamath, but from what I understand it may be possible.
That's true. There's a Metamath database that encodes Higher-Order Logic (HOL) aka simple type theory: http://us.metamath.org/holuni/mmhol.html It's very bare-bones, but it shows it's possible.
> Certainly there is a wealth of systems available. But the ones based on type theory, such as Coq, seem to be at least as popular as the set theory based ones...
I suspect we have to ask "popular for whom?" I'd agree that type theory is at least as popular as set theory among computer tools that formalize math. But I think far more mathematicians point to ZFC, not type theory, as the foundations of mathematics. Type theory probably seems "natural" to people who are used to programming languages and compilers, and thus already have a daily familiarity with types (in the computer science sense).
"It's complicated" seems like an accurate summary of affairs :-).
> But I think far more mathematicians point to ZFC, not type theory, as the foundations of mathematics.
Most mathematicians spend very little time thinking about the foundations of mathematics though, so I'd take that with a grain of salt. ;)
Seriously though, I think the statement that "all of modern mathematics can be formalized in ZFC" is probably wrong. There have been attempts to do just that (e.g., Bourbaki), but they invariably failed and ended up with extensions of ZFC (e.g., for category theory) or just silently started using higher-order logic.
Type theory seems to be a lot more practical. Both because it is explicitly designed to be an open system (a closed system must fail eventually due to Gödel arguments) and because similar formalization efforts go a lot more smoothly. For example, compare the treatment of category theory in the HoTT book with any introductory text using set theory.
> But I think far more mathematicians point to ZFC, not type theory, as the foundations of mathematics.
That's true, but it's just as true that those same mathematicians only care about foundations as a proof of concept, not as a practical device that you might someday want to do actual math in. ZFC is plenty good enough for the former, but you need type theories - or at least structural set theories - to enable the latter (with the sorts of sanity checks that we've come to expect in any practical formal endeavor)!
"Set theory" itself is a bit of an overloaded term; most practical uses of naïve set theory in math do not make use of properties like extensionality or the global membership relation, that characterize axiomatic set theory over TT. Thus, one could also describe the sort of "more natural" systems you reference in your comment as 'structural set theory' https://ncatlab.org/nlab/show/structural+set+theory
It's true however that one can do material-set theory inside structural-set theory or type theory, by defining a type of material sets (not necessarily "iterative" however, that's a matter of whether you want to model the axiom of foundation!) and a membership relation on them.
Structural set theory, which as you say throws out the iterative structure, is more like type theory than set theory (this is pointed out in the nlab page you linked).
Naïve set theory as practised in, say, a course in discrete mathematics certainly uses the iterative structure, e.g. to construct a pair: you will typically find the definition 〈x,y〉 ≔ {{x},{x,y}} instead of a definition through a universal property.
All the axioms of material set theory take this iterative approach, not just foundation. In fact, foundation is not very critical (the axiom of anti-foundation is just as workable), as you can always just use the class of well-founded sets and work with those.
I agree that "set theory" itself is a bit of an overloaded term, as there are in fact many set theories.
However, as you're probably well aware, naïve set theory can quickly lead to disaster because it leads to well-known paradoxes (like Russell's paradox). So if you're going to have foundations for mathematics (and it needs foundations!), it needs to avoid the paradoxes. ZFC does so, and in a way that many people have found satisfying enough.
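To make the danger concrete, here is Russell's paradox rendered in Coq, taking a naive comprehension principle as a hypothesis (the section and the names are mine, for illustration):

    (* Assume a type of sets with a membership relation that
       satisfies unrestricted comprehension. *)
    Section Russell.
      Variable set : Type.
      Variable mem : set -> set -> Prop.
      Hypothesis comprehension :
        forall P : set -> Prop,
          exists s : set, forall x : set, mem x s <-> P x.

      (* Comprehension yields r = {x | ~ mem x x}, and then
         mem r r <-> ~ mem r r gives a contradiction. *)
      Theorem russell : False.
      Proof.
        destruct (comprehension (fun x => ~ mem x x)) as [r Hr].
        assert (Hn : ~ mem r r).
        { intro H. exact (proj1 (Hr r) H H). }
        exact (Hn (proj2 (Hr r) Hn)).
      Qed.
    End Russell.

ZFC avoids this by restricting comprehension to Separation (carving subsets out of an existing set), which is exactly the kind of fix you describe.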
Type theories like the one of Coq support inductive definitions:
    Inductive nat : Set :=
      | O : nat
      | S : nat -> nat.
However, this feels a bit like having a whole programming language for the foundations of mathematics. An alternative would be to basically do the same as is usual in set theory. That is, have an axiom that states that an infinite set exists. This is enough to construct the natural numbers. This axiom can state that there is a Set S and a function f: S -> S that is injective but not surjective.
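A sketch of that axiom in Coq (the names S0, f0 and isNat are made up for illustration):

    (* Postulate a Dedekind-infinite type: a self-map that is
       injective but not surjective. *)
    Axiom S0 : Set.
    Axiom f0 : S0 -> S0.
    Axiom f0_injective : forall x y : S0, f0 x = f0 y -> x = y.
    Axiom f0_not_surjective : exists z : S0, forall x : S0, f0 x <> z.

    (* Relative to a base point z outside the image of f0, the
       naturals are the elements lying in every predicate that
       contains z and is closed under f0, mirroring the usual
       'smallest inductive set' construction. *)
    Definition isNat (z x : S0) : Prop :=
      forall P : S0 -> Prop,
        P z -> (forall y : S0, P y -> P (f0 y)) -> P x.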
Seeing these comments, let me remark that the problem I have with inductive definitions is that one could expect the foundations of mathematics to be a rather small and simple set of rules from which the rest can be derived. When one introduces inductive definitions as is done in Coq, this principle goes out of the window.

The branches of an inductive definition must satisfy a 'positivity condition' that is quite complicated. Then one can also write mutually recursive inductive definitions. Then there are more rules for what kinds of match statements are allowed. And let us not even get started on CoInductive definitions. IIRC Freek Wiedijk described the foundations as they are in Coq as 'beyond baroque', and that is quite to the point.
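To give a flavour of the positivity condition, here is the classic kind of definition it rejects (Fail tells Coq the command is expected to be refused):

    (* Rejected by the strict positivity check: Bad occurs to the
       left of an arrow in its own constructor. Allowing it would
       let one type self-application and derive an inconsistency. *)
    Fail Inductive Bad : Type :=
      | mkBad : (Bad -> Bad) -> Bad.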
Actually, that `Inductive nat` is precisely a natural number object in terms of categorial semantics - it's worth mentioning that NNOs are unique up to isomorphism. That "Inductive" syntax in Coq is a key part of the system: it's involved in defining things as basic as equality and product or sum types. This is quite different from how things would be done in a "standard" set theory.
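For instance, modulo implicit-argument details, this is essentially how equality and products arise as plain inductive definitions in Coq:

    (* Propositional equality: the only canonical proof is
       reflexivity. *)
    Inductive eq (A : Type) (x : A) : A -> Prop :=
      | eq_refl : eq A x x.

    (* The product type, with a single pairing constructor. *)
    Inductive prod (A B : Type) : Type :=
      | pair : A -> B -> prod A B.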
> An alternative would be to basically do the same as is usual in set theory. That is, have an axiom that states that an infinite set exists. (...)
I was not aware of this construction, thanks! The only thing I knew is the construction of natural numbers from the empty set, by iteratively building singletons.
4. There exists a set X such that ∅ is in X and T ∪ {T} is in X whenever T is in X
These all follow from the axioms of ZFC and (4) is actually itself one of the axioms — the Axiom of Infinity. It says that an infinite set like X exists.
But you can see that the essence of it is just: "There exists a set X which includes the natural numbers." Note that this set X might contain many things beyond the set-theoretic copy of the natural numbers — the axiom just guarantees that a set containing the natural numbers exists ("some infinite set").
You can then use some of the other axioms to remove the unwanted elements, creating the unique, smallest set that satisfies this property.
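Concretely, the standard construction takes ω ≔ {n ∈ X : n ∈ Y for every Y ⊆ X with ∅ ∈ Y and T ∪ {T} ∈ Y whenever T ∈ Y}: Separation guarantees this is a set, and one can check that it does not depend on which infinite X the axiom happened to provide.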
In other words, the axioms of ZFC were designed to be Peano-friendly. It wasn't as if we arrived at ZFC and then said, "Oh, look at that, we can represent the natural numbers as sets. What a surprise!":D
Set theory as a toolkit is mostly geared towards questions of consistency/existence, size or "order" of objects, and raw mathematical explanatory power of theories or claims. It traditionally arose to explore ideas of infinity precisely, then grew to meet the needs of a mathematical community grappling with antinomies (e.g. Russell's paradox) and heated foundational divides (classicism vs intuitionism, etc). It still meets those needs well -- topics like large cardinals which chart the territory dangerously beyond the boundaries of (assumed) mathematical consistency are plied with set theory, and questions of consistency that arise in other fields tend to end up ported into set-theoretic terms.
I'm not a type theorist, but I imagine the split between set theory and type theory is analogous to the way in which different models of computation appeal to researchers on different sides of the "Theory A/Theory B" split in computer science. Researchers in the "A" group who study the outer limits of what problems are/aren't tractable, how different kinds of computational resource compare in power, etc, tend to use Turing machines and RAM models and circuits, which allow computational resources to be easily measured, bounded, simulated, reduced into things, etc. Researchers in the "B" group who explore programming languages, type systems, program analysis, etc, tend to prefer models like lambda calculus which allow programs to be easily composed and annotated with types and whatnot. I don't think this is just about cultural differences -- if you try to prove time-space trade-offs with the lambda calculus or invent a new type parameterization system with Turing machines then you're going to have a bad time.
Ironically, large-cardinals and related devices give set theory a rather type-theoretic "flavor". One can definitely explore the very same questions of 'consistency/existence, size or "order" of objects, and raw mathematical explanatory power of theories' with type-theoretic tools, and arguably do it quite more elegantly, because the related formal devices (e.g. "universes") arise more naturally in a type-theoretic context.