I think the real insight of category theory is that you're already doing it, e.g., every time you sketch a whiteboard diagram.
Category theory is the language of diagrams; it studies math from that perspective. However, it turns out that category theory corresponds closely to type theory (i.e., what computers use); this is the Curry–Howard–Lambek correspondence. That correspondence is why diagrams can reliably represent the semantics of type-theory expressions.
My workflow is applied category theory: develop a ubiquitous language with clients, convert that into a diagram mapping classes of things and their relationships, then translate that diagram into typed statements expressing the same semantics (the last step is sketched below).
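For instance, here's a hypothetical Haskell sketch of that translation step. The domain names (`Customer`, `Order`, `places`) are invented for illustration, not from any real client project; the point is that each box and labeled edge in the diagram becomes a typed declaration.

```haskell
-- Boxes in the diagram become types.
newtype CustomerId = CustomerId Int deriving (Eq, Show)

data Customer = Customer { customerId :: CustomerId, customerName :: String }
data Item     = Item     { sku :: String, qty :: Int }
data Order    = Order    { placedBy :: CustomerId, items :: [Item] }

-- The labeled edge "Customer places Orders" becomes a typed function
-- with the same direction and the same semantics.
places :: Customer -> [Order] -> [Order]
places c = filter ((== customerId c) . placedBy)
```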
The “category theory types” are just names for patterns in those diagrams.
Trying to understand them without first getting that foundational relationship is a path to frustration, much like reading GoF before grokking OOP.
> However, it turns out that category theory corresponds closely to type theory (i.e., what computers use).
Just to elaborate on this a little: a program of type B, in a context of variables of types A1, A2, ..., An, can be modeled as an arrow (A1 * A2 * ... * An) -> B. (More elaborate type theories get, well, more elaborate.) The various ways you might put programs together (e.g. if-expressions, loops, ...) become various ways you can assemble arrows in your category. Sequential composition is the most fundamental, since it describes dataflow from the output side of one program to the input side of another; the other ways of composing programs appear as additional structure on the category.
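To make that concrete, here's a minimal Haskell sketch, assuming plain functions as the arrows of the category; the `area` and `classify` names are just illustrations:

```haskell
import Control.Category ((>>>))

-- A "program of type Double in a context of two variables" modeled as
-- an arrow from the product of the context types to the result type.
area :: (Double, Double) -> Double
area (w, h) = w * h

classify :: Double -> String
classify x = if x > 100 then "large" else "small"

-- Sequential composition: the output side of one program feeds the
-- input side of the next. (>>>) is left-to-right arrow composition.
describe :: (Double, Double) -> String
describe = area >>> classify

main :: IO ()
main = putStrLn (describe (12, 10))  -- prints "large"
```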
Categories take "semicolon" as the most primitive notion, and that's really all a category is. In fact, if you've heard monads called "the programmable semicolon", it's because any monad `m` induces a related category (its Kleisli category) whose effectful programs `a ~> b` are really pure programs `a -> m b`.
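Here's a minimal Haskell sketch of that, using the Maybe monad (failure as the effect); the `parseInt`/`recipOf` names are illustrative. Kleisli composition `(>=>)` plays the role of the semicolon for effectful programs:

```haskell
import Control.Monad ((>=>))
import Text.Read (readMaybe)

-- Two effectful programs String ~> Int and Int ~> Double, written as
-- pure functions into the Maybe monad.
parseInt :: String -> Maybe Int
parseInt = readMaybe

recipOf :: Int -> Maybe Double
recipOf 0 = Nothing
recipOf n = Just (1 / fromIntegral n)

-- Their Kleisli composite, String ~> Double: run parseInt, then feed
-- its result to recipOf, short-circuiting on Nothing.
parseRecip :: String -> Maybe Double
parseRecip = parseInt >=> recipOf

main :: IO ()
main = do
  print (parseRecip "4")  -- Just 0.25
  print (parseRecip "0")  -- Nothing
  print (parseRecip "x")  -- Nothing
```

The identity arrow is `return`, and `(>=>)` is associative, which is exactly what makes this a category rather than just a composition trick.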