
This was very interesting.

As a mathematician with no comp-sci type knowledge, my only understanding of inheritance is the "is a" rule. Using this, I realized that a subtype of the set of functions from Dog to Dog must be a set of functions such that each function could be treated as a function from Dog to Dog under an appropriate restriction. This would be the only way for such a set to satisfy what felt like the "is a" inheritance rule.

In other words, a set of functions from A to B where Dog is contained in A and B is contained in Dog would be a subtype of the set of functions from Dog to Dog. So Animals -> Greyhound works.
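
In code, the same claim looks like this (a Scala sketch, class hierarchy purely illustrative): Scala's function type Function1[-A, +B] is contravariant in its argument and covariant in its result, so an Animal => Greyhound can be used wherever a Dog => Dog is expected.

    class Animal
    class Dog extends Animal
    class Greyhound extends Dog

    object VarianceSketch {
      // accepts any Animal and always produces a Greyhound
      val makePuppy: Animal => Greyhound = _ => new Greyhound

      // compiles, because (Animal => Greyhound) <: (Dog => Dog)
      val dogToDog: Dog => Dog = makePuppy
    }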




Can you figure out what the functor is? As another mathematician, if they're saying "contravariant" I expect to see something like a Galois functor. This kind of looks like a pullback or an evaluation functor, but I can't even see the categories it might be between. Is it the Hom functor?


See my other comment about co/contravariance as monotonicity/antimonotonicity. Since monotonicity is just functoriality when your categories are posetal, covariance is a special case of functoriality.

More generally, you can see

   f(x) = Int -> x
   g(x) = x -> Int
as, respectively, co- and contra-variant endofunctors on the category whose objects are types and whose arrows are definable functions in your programming language. And they're both just special cases of the internal hom functor,

   hom : C^op * C -> C
   hom(x,y) = x -> y
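Concretely, the arrow parts of those two functors are just post- and pre-composition. A Scala sketch (mapF and contramapG are illustrative names, not a standard API):

    // covariant:  f(x) = Int => x;  an arrow k: A => B lifts by post-composition
    def mapF[A, B](k: A => B): (Int => A) => (Int => B) =
      h => k.compose(h)                  // i.e. x => k(h(x))

    // contravariant:  g(x) = x => Int;  an arrow k: A => B lifts by
    // pre-composition, so its direction is reversed
    def contramapG[A, B](k: A => B): (B => Int) => (A => Int) =
      h => h.compose(k)                  // i.e. x => h(k(x))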


Wow, that's a new word, "posetal". Does that mean "can be partially ordered"?

What is "Int"? Integers? Integral? Int... ernal? ... types?

What's with the notation, anyway? Why use "<:" instead of "<" like you would for any other transitive anti-symmetric relation?

"Anti-monotone" is weird language. I would say "monotone" in general and "increasing" or "decreasing" in particular.

What does any of this have to do with programming languages? I mean, types are purely abstract, right? Do they have to be part of a programming language? I thought types were merely sets ordered by inclusion.

If programming languages are not something that can be abstracted away, what's a programming language in this context? A set (class?) of types and functions on those types?

What is a definable function? Are some functions not definable? Is it something like "there exists a Turing machine such that..."?

I don't know why computer people's category theory always looks so foreign to me. Sometimes I feel like we're in completely different worlds with vaguely similar-looking language. Category theory for me was mostly about homological algebra, but I don't think computer people care very much about the snake lemma or chasing elements across kernel and cokernel diagrams.


> What is "Int"? Integers? Integral? Int... ernal? ... types?

Int is the/a type of integers (possibly modulo 2^32), but just being used as an example of some arbitrary fixed type. "f(x) = Int -> x" is an example of a functor on the category of types in the same way that "f(x) = 4 + x" is an example of a function on the integers.

> What's with the notation, anyway? Why use "<:" instead of "<" like you would for any other transitive anti-symmetric relation?

I suspect computer people didn't initially realise they were the same... sort? of thing. Failing to properly distinguish between values and types causes a lot of confusion that one has to put a lot of effort into training oneself out of; it then becomes really confusing to find out that some things are polymorphic over the two (polykinded).

> I mean, types are purely abstract, right? Do they have to be part of a programming language?

One of the foundational definitions that distinguishes a type from something that isn't a type is that types are associated to terms in a language. I'm not sure how useful that is when we're talking about types in an abstract setting though.

> I thought types were merely sets ordered by inclusion.

Types can be larger than sets, and more importantly their equivalence doesn't behave much like set equivalence in the setting where we normally use them. I suppose you could identify a type with the set of terms of that type and that might work (though I'd worry about circularity), but it doesn't feel like a natural way to work with types.

> If programming languages are not something that can be abstracted away, what's a programming language in this context? A set (class?) of types and functions on those types?

I think it's a term reduction system, so a set of terms (and a term is a concept with a bit of structure, though I'm unable to find a sensible standalone definition) and a finite set of reduction rules.


> I don't know why computer people's category theory always looks so foreign to me. Sometimes I feel like we're in completely different worlds with vaguely similar-looking language.

Well, another way to think about it is that we are using the same language to talk about very different things. For example, in homological algebra you are always working in (at least) a pointed category. In computer science a lot of the things we are interested in are already reflected in the partial order induced by a category (i.e., subtyping). If your category has a zero object the latter is trivial and so we're pretty much never looking at pointed categories.


> In computer science a lot of the things we are interested in are already reflected in the partial order induced by a category (i.e., subtyping).

If by "the partial order induced by a category", you mean 'let A <= B iff Hom(A,B) is nonempty', then that's not subtyping. That's "does there exist a function from A to B". Just because I can write a function from Int to String doesn't mean Int is a subtype of String.

I tend to think of subtyping as an additional structure: a PL comes equipped with a particular notion of subtype. Say C is the category of types and functions. Then subtyping is a partial order on types equipped with an inclusion functor into C. That is, a map (coerce : (A <: B) -> Hom(A,B)) that chooses for any pair A,B of types such that A <: B a coercion function from A to B. Moreover, coercing from A to A must be the identity, and if A <: B <: C, then coercing from A to C is the same as coercing from A to B and then from B to C.
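
A rough Scala rendering of that extra structure (Sub, refl, and trans are hypothetical names, not anything from a standard library):

    // a value of type Sub[A, B] witnesses A <: B by exhibiting the chosen coercion
    trait Sub[A, B] {
      def coerce(a: A): B
    }

    object Sub {
      // coercing from A to A is the identity
      def refl[A]: Sub[A, A] =
        new Sub[A, A] { def coerce(a: A): A = a }

      // coercing A to C is coercing A to B and then B to C
      def trans[A, B, C](ab: Sub[A, B], bc: Sub[B, C]): Sub[A, C] =
        new Sub[A, C] { def coerce(a: A): C = bc.coerce(ab.coerce(a)) }
    }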


A posetal category has at most one arrow in each homset – i.e. it’s a poset dressed up as a category.

A definable function is one that can be defined in the programming language in question. There's a lot more to definability than Turing machines once you consider higher types: that is, function types whose arguments are themselves functions. John Longley has written on this extensively.


Thanks. Is there something to read that puts all of this together, starting from a common base? I really don't know where to start.


I know what you mean about not knowing where to start! I think “all of this” is roughly what theoretical computer scientists call “Theory B”, or at least a large chunk of it, and where to start depends on which specific things you find most interesting.

Benjamin Pierce’s _Types and Programming Languages_ is a relatively accessible introduction with an emphasis on types, but of course it’s just an overview of a pretty large research area.


Because S <: S in most (all?) type systems with subtyping.


Then use \leq, like any other reflexive, antisymmetric, transitive relation.


> Wow, that's a new word, "posetal". Does that mean "can be partially ordered"?

Posetal means that Hom(A,B) has at most one element (the category is "thin"), and that any two isomorphic objects are equal (the category is "skeletal"; corresponds to antisymmetry). So the category can be seen as a poset, where (A <= B) iff Hom(A,B) is nonempty.

> What is "Int"? Integers? Integral? Int... ernal? ... types?

Integers. I was just picking an example type, as lmm says. I could have said: "Fix a type A, and consider f(B) = A -> B".

> What's with the notation, anyway? Why use "<:" instead of "<" like you would for any other transitive anti-symmetric relation?

Tradition. <= is also perfectly fine. Why do set theorists write \subseteq instead of \leq? \subseteq is also a reflexive, transitive, antisymmetric relation.

> "Anti-monotone" is weird language. I would say "monotone" in general and "increasing" or "decreasing" in particular.

Yeah, in analysis "monotonically increasing/decreasing" is more common. I find it more useful to explicitly draw out the fact that one is the reversal of the other; also "monotone" is more concise than "monotonically increasing". shrug

> What does any of this have to do with programming languages? I mean, types are purely abstract, right? Do they have to be part of a programming language? I thought types were merely sets ordered by inclusion.

Well, the OP was about types in programming languages. Types in PLs are not always best thought of as sets; for example, the equation

    a = (a -> Bool) -> Bool
has solutions in some type theories! Interpreted in the naive way, this would be a set equal to the powerset of its powerset, which is impossible. This conundrum leads to the study of Scott domains, for example.
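
For instance, a type of essentially that shape (up to the constructor wrapping) is perfectly definable in Scala, even though the naive set-theoretic reading is impossible; Cont is just an illustrative name:

    // a Cont wraps a function from (Cont => Boolean) to Boolean,
    // i.e. a solution (up to iso) of  a = (a -> Bool) -> Bool
    case class Cont(run: (Cont => Boolean) => Boolean)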

> If programming languages are not something that can be abstracted away, what's a programming language in this context? A set (class?) of types and functions on those types?

I'm being deliberately vague on that front. How to define "programming language" is not something widely agreed upon. But similar patterns, such as co- and contra-variance, recur.

> What is a definable function? Are some functions not definable? Is it something like "there exists a Turing machine such that..."?

I just meant "any function you can write in the programming language you're considering". As robinhouston points out, not all functions will be definable, and "Turing-computability" is a bit vague when talking about functions that take other functions as arguments (since TMs operate on bitstrings, so how you encode functions becomes relevant). Constructive mathematics studies which things are still definable/provable if you insist on computability; "constructive" and "computable" usually mean roughly the same thing.

> I don't know why computer people's category theory always looks so foreign to me. Sometimes I feel like we're in completely different worlds with vaguely similar-looking language. Category theory for me was mostly about homological algebra, but I don't think computer people care very much about the snake lemma or chasing elements across kernel and cokernel diagrams.

Yes. Category theory is a very general framework, and different bits of it become relevant depending on what you're studying. Although there are sometimes deep connections between unexpected regions, such as between computation and topology; I don't really understand that one myself.


You keep using the word "just" in what feels like an unusual setting.


I use "just" to mean "exactly"; I say "X is just Y" to mean "X is exactly Y; when you have Y, you have X and vice-versa". I guess it might be kind of off-putting, because "just" implies the connection is boring or obvious, which is not what I mean to say.



