My problem with category theory (my limited study of it, several years ago) was that it describes and defines a list of properties, but those properties don't combine to reveal any unexpected, exciting results.
Again, with my abstract algebra example from above: after just a couple of basic abstract algebra definitions, you learn about subgroups. Simple enough, and not particularly exciting so far. But then you quickly reach Lagrange's Theorem, which shows that in a finite group, the order of every subgroup divides the order of the parent group. And that means... if the group has a prime number of elements, then it can't contain any (non-trivial, proper) subgroups at all! That's super cool and not at all obvious from the original definitions of groups and subgroups. And it keeps going from there, with a brain-punishing amount of results that all just emerge from the basic definitions of a group, ring, and field.
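You can even brute-force that consequence of Lagrange's Theorem (a quick sketch of my own, not a serious implementation): enumerate every subset of Z_n that contains 0 and is closed under addition mod n — for a finite group, closure alone is enough to make a subset a subgroup.

```typescript
// Brute-force the subgroups of the cyclic group Z_n: a subset containing 0
// that is closed under addition mod n (closure suffices in a finite group).
function subgroupsOfZ(n: number): number[][] {
  const result: number[][] = [];
  for (let mask = 0; mask < 1 << n; mask++) {
    const s = [...Array(n).keys()].filter(i => ((mask >> i) & 1) === 1);
    const inS = new Set(s);
    if (!inS.has(0)) continue;
    const closed = s.every(a => s.every(b => inS.has((a + b) % n)));
    if (closed) result.push(s);
  }
  return result;
}

console.log(subgroupsOfZ(5)); // prime order: only {0} and all of Z_5
console.log(subgroupsOfZ(6).length); // composite order: 4 subgroups
```

For n = 5 (prime) only the trivial subgroup and the whole group survive; for n = 6 you also get {0, 3} and {0, 2, 4}, matching the divisors of 6.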
In contrast, category theory just felt empty. This is the definition of a functor. This is a monad. Here are the different kinds of morphisms. Etc.
Dunno, maybe I just needed to keep reading. But my sense from flipping forward through my CT books is that it was mostly just more concept definitions.
>My problem with category theory (my limited study of it, several years ago) was that it describes and defines a list of properties, but those properties don't combine to reveal any unexpected, exciting results.
IMO, the main value of category theory is unifying existing math knowledge in one theory: it helps you see connections between seemingly unrelated areas of math. In some sense, it's pure abstraction.
- An operation may be "functorial", meaning that it preserves more structure than perhaps originally thought. For instance, the "fundamental group" operation is indeed functorial, which means that it acts in a nice way on continuous functions as well as on topological spaces. Other examples are the tensor product, vector-space duality, forming function spaces in certain categories, etc.
- Two categories may be isomorphic, but this is trivial.
- Two categories may be equivalent, which while a weaker notion than isomorphism, is sufficient for most things which can be expressed in categorical language to be true for both categories. This is helpful when one category is well-understood and the other one is an object of present interest. (One application is showing that the category of representations of a fixed quiver Q is Krull-Schmidt, by showing that it's equivalent to another category with the Krull-Schmidt property).
- A functor between two categories may admit a left-adjoint. It then immediately preserves all limits (it's "continuous") which immediately means that a great deal of structure gets preserved by it.
- A functor between two categories may preserve all limits. It may (under some circumstances, expressed by the "adjoint functor theorem") therefore admit a left adjoint. This may be a non-trivial fact of interest in its own right. It's related to dualities in optimisation and game theory.
- There are isolated results like the Seifert–van Kampen Theorem (which states that the Fundamental-Group functor preserves certain pushouts) which would be difficult to express without categorical language.
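In programming terms (my illustration, not the parent's), the functoriality in the first bullet is exactly the `map` law for a container type: a functor acts on arrows (functions) as well as on objects (types), and preserves composition.

```typescript
// A functor acts on objects (types) and on arrows (functions), the way the
// fundamental group acts on spaces and on continuous maps. For arrays,
// "acting on arrows" is .map, and functoriality is the law:
//   xs.map(x => g(f(x)))  equals  xs.map(f).map(g)
const f = (n: number): number => n + 1;
const g = (n: number): string => n.toString();

const xs = [1, 2, 3];
const composedFirst = xs.map(x => g(f(x)));
const mappedTwice = xs.map(f).map(g);

console.log(composedFirst, mappedTwice); // both are ["2", "3", "4"]
```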
Ultimately, Category Theory appears to be a language for compressing complicated facts about structure-preservation of operations.
Category theory is helpful in advanced algebra, and helpful too in advanced topology, and is in its absolute element in any area which combines the two, like algebraic topology and algebraic geometry. In the latter two areas, you've got lots of functors between algebraic categories, lots of functors between topological categories, and even functors going between algebraic categories and topological categories.
There's also categorical logic, which is where the CS-adjacent stuff seems to be found. But this is of little interest to everyday programming, and is very forbidding for people who lack the requisite mathematical maturity. Only the most dedicated should enter its harsh plains, and should expect to gain nothing without tremendous efforts and sacrifice.
Where do adjoint functors occur in CS? They occur in advanced algebra, and they occur in topology, but where else? And indeed, the fact that they preserve limits/colimits may help speed up communication and thinking. But I'm not seeing CS connections here.
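(For what it's worth, the textbook CS example, offered here as an aside: currying. The product functor (-, A) is left adjoint to the function-space functor A -> -, and `curry`/`uncurry` are the two directions of the bijection.)

```typescript
// The currying adjunction: functions (c, a) -> b correspond exactly to
// functions c -> (a -> b). curry and uncurry witness the bijection.
const curry =
  <C, A, B>(h: (c: C, a: A) => B) =>
  (c: C) =>
  (a: A): B =>
    h(c, a);
const uncurry =
  <C, A, B>(h: (c: C) => (a: A) => B) =>
  (c: C, a: A): B =>
    h(c)(a);

const addPair = (x: number, y: number): number => x + y;
// the round trip is the identity, witnessing the bijection on this example
console.log(uncurry(curry(addPair))(2, 3)); // 5
```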
>The first example is trivial, and serves as an illustrative but useless example (which I don't need).
Yep. Most of the examples of adjoints are trivial if you know the field of math where they are used. The interesting part is why this happens almost everywhere.
Another one is the free monoid, with the pair of forgetful/free functors. It sounds a bit mathematical, but for a type T the free monoid is List<T> in a programming language with generics.
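A sketch of that point in TypeScript (my own illustrative names): the free monoid on T is T[], and the universal property of the free/forgetful adjunction says any plain function f: T -> M into a monoid M extends uniquely to a monoid homomorphism T[] -> M. That unique extension is just a fold (Haskell's foldMap).

```typescript
// A monoid: an identity element plus an associative binary operation.
interface Monoid<M> {
  empty: M;
  concat: (x: M, y: M) => M;
}

// The universal property of the free monoid T[]: the unique extension of
// f: T -> M to a monoid homomorphism T[] -> M, implemented as a fold.
function extend<T, M>(m: Monoid<M>, f: (t: T) => M): (ts: T[]) => M {
  return ts => ts.reduce((acc, t) => m.concat(acc, f(t)), m.empty);
}

const sum: Monoid<number> = { empty: 0, concat: (x, y) => x + y };
console.log(extend(sum, (n: number) => n)([1, 2, 3, 4])); // 10
```

The same `extend` works for any monoid — swap in string concatenation or max and only the `Monoid` value changes.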
As a general (e.g. React) programmer, there's not a lot of value. However, as I said, if you work in some areas, e.g. programming languages or logic, it might even be a requirement to be productive.
- Adjoints preserve limits/colimits.
- Adjoint functors give rise to a monad.
- They are connected to universal morphisms
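As a concrete instance of the second bullet (my illustration, continuing the free monoid example): composing the free-monoid functor (T goes to T[]) with the forgetful functor yields the list monad — the adjunction's unit gives the monad's `return`, and its counit (evaluating a word of words by concatenation) gives `join`.

```typescript
// The list monad from the free ⊣ forgetful adjunction:
// unit embeds a value as a one-element word; join flattens a word of words.
const unit = <T>(x: T): T[] => [x];
const join = <T>(xss: T[][]): T[] => xss.flat();

console.log(join([unit(1), unit(2), [3, 4]])); // [1, 2, 3, 4]
```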