Category Theory Illustrated – Sets (abuseofnotation.github.io)
218 points by boris_m on Sept 13, 2023 | 93 comments



Nice!

Note that this page has no category theory yet, since it explains sets. So if you already know sets, set products, etc., and want to learn about category theory, my advice is to go directly to the next chapter, more specifically to this section:

https://abuseofnotation.github.io/category-theory-illustrate...

which uses set theory terms to define the category-theoretic way of defining products (the corresponding "universal property").


I really like this approach, but it contains some confusing mistakes. For example, unless I'm very much mistaken, the illustration of the initial object is backwards: https://abuseofnotation.github.io/category-theory-illustrate...


The author said he is just starting on the book, so he's not claiming it's perfect. And the beauty of GitHub and open source is that you can fix mistakes with a pull request. For example, the svg is here: https://github.com/abuseofnotation/category-theory-illustrat...


> For example, unless I'm very much mistaken, the illustration of the initial object is backwards: https://abuseofnotation.github.io/category-theory-illustrate...

You are right. (Curiously, the picture of the terminal object is correct, so they didn't just switch them!)


Looks fine to me. What's wrong with it?


Author here. It was backwards, I fixed it when I saw the comment. Feel free to report any other mistakes you see on https://github.com/abuseofnotation/category-theory-illustrat... or email.


Seems you might have fixed it before I saw it! Thanks!


This short YouTube video really helped me understand the big picture of category theory, with concrete examples.

The Mathematician's Weapon | An Introduction to Category Theory, Abstraction and Algebra https://www.youtube.com/watch?v=FQYOpD7tv30


>In particular, a set can contain itself.

There are many kinds of mathematics; this is an unusual one.

"In Zermelo–Fraenkel set theory, the axiom of regularity and axiom of pairing prevent any set from containing itself."

https://en.wikipedia.org/wiki/Universal_set


It is mentioned two sentences earlier that this is the case in naive set theory.


This gives rise to Russell's Paradox:

Does the set of "all sets that do not contain themselves" contain itself?
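The paradox can be sketched in code if we model a "set" as a membership predicate (a loose analogy, not actual set theory; the function name is mine):

```python
# Model a "set" as a membership predicate: s(x) is True iff x is in s.
def russell(s):
    # Russell's set contains exactly those sets that do not contain themselves.
    return not s(s)

# Asking whether Russell's set contains itself demands
# russell(russell) == not russell(russell): there is no consistent answer,
# which here shows up as unbounded recursion.
try:
    russell(russell)
    answer = "consistent"
except RecursionError:
    answer = "no consistent answer"

print(answer)  # -> no consistent answer
```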


You can have set theories that allow sets to contain themselves without allowing Russell's Paradox. You can read about non-well-founded set theory[1] if you're curious.

[1]: https://en.wikipedia.org/wiki/Non-well-founded_set_theory


I like the diagram of "the set containing itself". It illustrates non-well-foundedness nicely.


Which diagram are you referring to?



Resolving this paradox is discussed in TFA as being the founding rationale for the Zermelo–Fraenkel set theory in fact.


simple: there is no such set.


"And they might be right. But mathematical functions have one big advantage over non-mathematical ones — their type signature tells you everything that the function does. This is probably the reason why most functional languages are strongly-typed."

I'm confused about this statement. Types give you important contextual information about the function in a summarized form, but surely we can have two functions with the same type signature that perform different mappings.


> surely we can have two functions with the same type signature that perform different mappings.

Yes - and you can count the mappings! Enums are sometimes referred to as 'sum types' because you can just add up the number of different states they can be in. Structs are sometimes called 'product types' because you can calculate the number of states they can be in by multiplying the number of states of their members. And functions are 'exponential types'.
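The counting can be sketched in a few lines of Python (the names `Bool`, `pairs`, `functions` are mine, just for illustration):

```python
from itertools import product

Bool = [False, True]               # a two-state "enum": a sum of two unit types
pairs = list(product(Bool, Bool))  # a struct of two Bools: 2 * 2 states

# A function Bool -> Bool is one output choice per input,
# so there are 2 ** 2 = 4 distinct mappings: an "exponential type".
functions = list(product(Bool, repeat=len(Bool)))

print(len(Bool), len(pairs), len(functions))  # -> 2 4 4
```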


This might be true in a programming language, but it's not for mathematical functions, which is what they refer to in the first half of this paragraph.

I think what they mean is that two functions won't have the exact same argument and result types, though in practice this isn't normally true for computer systems. Although there's an argument that if two functions do have the exact same signature, maybe you don't need two functions, but rather one function made more robust.


> A function is a relationship between two sets

Ok, then what is a relationship? There's a whole theory of relations, and I'd rather not dive into that for the article. Also, drawing arrows can give misleading intuition.

It's better to define a function as a set of pairs, then you don't need to use anything else not introduced yet.
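A minimal sketch of that definition in Python (names are hypothetical): a function from A to B is a set of pairs in which each element of A appears exactly once as a first coordinate, and application is just lookup:

```python
# A function A -> B as a set of pairs: every element of the domain
# appears exactly once as a first coordinate.
A = {1, 2, 3}
square = {(1, 1), (2, 4), (3, 9)}

def is_function(pairs, domain):
    firsts = [a for (a, _) in pairs]
    return sorted(firsts) == sorted(domain)

def apply_fn(pairs, x):
    # Application is just looking up the unique pair whose first element is x.
    return next(b for (a, b) in pairs if a == x)

print(is_function(square, A), apply_fn(square, 3))  # -> True 9
```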


> It's better to define a function as a set of pairs, then you don't need to use anything else not introduced yet.

If you're looking towards category theory, it may arguably be better to think in terms of abstract arrows as much as possible, so as not to get confused by non-"concrete" categories where there's no obvious "function semantics" for morphisms.


Is there a simple example you could use to illustrate the point? This is interesting.


Let (X,≤) be a partially ordered set. Define a category C whose objects are the elements of X, while for the morphisms there is a single arrow x→y iff x≤y. Those are called posetal categories and are often used as examples.
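A quick sanity check of why (X,≤) forms a category, sketched in Python for a small X (my own toy example): identity arrows come from reflexivity, composition from transitivity:

```python
# Objects are the elements of X; there is one arrow x -> y iff x <= y.
X = [1, 2, 3]
hom = {(x, y) for x in X for y in X if x <= y}

# Identity arrows exist because <= is reflexive ...
assert all((x, x) in hom for x in X)

# ... and composable arrows compose because <= is transitive.
assert all((x, z) in hom
           for (x, y) in hom
           for (y2, z) in hom if y2 == y)

print(sorted(hom))  # -> [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
```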



That makes perfect sense and is very helpful. Thank you.


A relationship between two sets is a set of pairs.


Set theory has the big unexpected results of the reals being bigger than the natural numbers, the independence of the Axiom of Choice, and the undecidability of the Continuum Hypothesis. What are some similarly big results in category theory, for a novice?


I'd say the Yoneda lemma should be in there. It's hard to explain without going through all the definitions but very broadly speaking it gives a precise way to characterize an object by its relations to other objects.

Though mostly I consider category theory useful not for its results but because its concepts generalize well. If you can relate something to a category then most of the concepts a category has (functors, limits etc.) will have some useful meaning. This makes it easy to come up with good concepts and gives some of their properties for free, which honestly is more practically useful than some clever theorem.
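For reference, the standard statement is compact (this is the usual textbook formulation, not something from the article):

```latex
% Yoneda lemma: for a locally small category C, an object A of C, and a
% functor F from C to Set, natural transformations out of the hom-functor
% Hom(A,-) correspond exactly to elements of F(A):
\[
  \mathrm{Nat}\!\big(\mathrm{Hom}_{\mathcal{C}}(A,-),\, F\big) \;\cong\; F(A)
\]
```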


> I'd say the Yoneda lemma should be in there. It's hard to explain without going through all the definitions but very broadly speaking it gives a precise way to characterize an object by its relations to other objects.

To add a bit to that, Yoneda's lemma says that you know everything about an object if you know the ways that it can be mapped to other objects. The "co-Yoneda's lemma", while often less useful in practice (in my practice, anyway), is maybe easier to understand in this intuitive way: you know everything about an object if you know the ways that other objects can be mapped to it, which I have heard phrased as something like "you can learn everything about an object by probing it with other objects."


Coyoneda comes up in algebraic effects systems as the "Freer Monad."

`Free (Coyoneda f)` gives you `Freer`, which allows you to build Monads without even a `Functor` instance for `f`.


Lawvere's fixed point theorem as a generalisation of Cantor's theorem is a big result.

https://ncatlab.org/nlab/show/Lawvere's+fixed+point+theorem

Emily Riehl:

"The author is told with distressing regularity that 'there are no theorems in category theory' ...

Sadly, the majority of the theorems that are personal favorites of the author were excluded because their significance is more difficult to explain."

(long list of theorems)

https://math.jhu.edu/~eriehl/161/context.pdf


One interesting thing is that category theory provides a way to precisely describe a Most General Unifier. It is an example of a coequalizer.

This isn't a really basic and accessible result, but IMO it gives a good flavour of what category theory is suitable for: formally describing constructions we didn't previously have the tools to express precisely.

https://www.sciencedirect.com/science/article/abs/pii/B97801...


Yoneda Lemma: an object is entirely determined by its relationships to other objects.


Algebra and geometry are the same thing: formalizing this was the motivation, originally.

Semantics is the topology of your diagrams.


Does anyone else study Category Theory in the hope of finding Revelation/Truth? I'm mostly kidding but for some reason CT scratches a different kind of itch.


I've watched some lectures about CT (and they flew over my head pretty quickly), and I did get the feeling that, on one hand, there is some deep insight there (that I was barely able to glimpse), but on the other hand the eagle-eye view that CT gives seems to be missing too many details.


Physics says it's all probabilities. CT says it's all relationships.

CT is amusingly associated with the term "abstract nonsense", referring to CT's ability to prove things at such a high level that it provides no insight into the goings-on in the low-level details.

https://math.stackexchange.com/questions/823289/abstract-non...


The term "abstract nonsense" was actually coined in context of category theory: https://en.wikipedia.org/wiki/Abstract_nonsense#History


As much as I like philosophy of mathematics, I never feel quite at ease with sets being some universal foundation.

Lists, sets, graphs, all seem quite fundamental to us humans, but where in nature does one observe these weird things? A cave or a jug are highly complex things. Perhaps molecules resemble a graph, but if I understand physics correctly, atoms move like crazy and it's almost accidental that the graph structure is somewhat stable in most molecules.

This led me to believe that these fundamental containers are probably a byproduct of how our brains work, rather than being fundamental outside of them.

Thanks to ChatGPT, I now know that this makes me a mathematical fictionalist, or mathematical anti-realist.

Anyone care to talk me out of this? :)


The idea of sets being the universal foundation of math is as arbitrary as any other bit of math. Which is to say, all of it is arbitrary.

Just as you ask, "where in nature does one observe these weird things?" I would ask: where in nature does one find a one? Or a pi? Or any other number, really. Where do you find a triangle? Or a coordinate? Or any of the mathematical constructs we use day to day.

Sure you can point at something triangle shaped and go "there!" but is it _really_ a triangle? Or just an approximation? Sure you can count one of something, but that's not the same as the number one. Just like you can't have pi of something.

All of math is just a model that is surprisingly applicable to the real world.

All of math is a byproduct of our brains. It doesn't exist, out there, in some Platonic World. Anyone who thinks so, I'm looking at you Max Tegmark, is mistaking the map for the territory.


> All of math is a byproduct of our brains. It doesn't exist, out there, in some Platonic World. Anyone who thinks so, I'm looking at you Max Tegmark, is mistaking the map for the territory.

I could brashly say the exact opposite and would have proven just as much as you. Fictionalists often attempt to shunt the Platonic realm into an ill-conceived emanation of the mental realm (which just so happens to itself be an accident of matter). Somehow all of this works by virtue of following a kind of mathematical logic that just so happens to not exist or something. I suspect the fundamental problem here is some kind of neurosis that psychologically compels people to reduce the quality of their thought until hard problems disappear. I hope we one day are able to build a catalog of all the ways thought goes wrong so as to prevent such nonsense from proliferating in at least some section of the world.

I think a reasonably compelling way to teach yourself how to actually see The Problem (tm) is to view it from the perspective of Roger Penrose's three worlds ( https://hrstraub.ch/en/the-theory-of-the-three-worlds-penros... ) and actually think through the implications in a contemplative, meditative way over the course of several hours. Any analysis of this issue that doesn't involve a sustained look at both logic and phenomenology is a waste of time.


You could brashly say that the Platonic world does exist? I must be from Missouri, show it to me! Do that and you can bring me to your, and Max Tegmark's, side. Show me a Tree that has the pure essence of tree, show me a One, show me the pure ideal of a triangle.

I do take umbrage at your insinuation that I have not actually thought "through the implications in a contemplative, meditative way over the course of several hours" on this subject. I have spent many years, and not a few semesters of college, contemplating this very subject.

Consider this, where would your platonic world go if there were no sentient beings in the universe? Would it sit there, on some "higher plane" awaiting discovery by no one? Would the pure idea of a Tree get lonely? We're very into "does a tree fall in the forest with no one to hear it does it make a sound?" territory, however I think this is extremely important when trying to decide what is Real.

Plato, a man who lived ~2,400 years ago, decided that ideas were Real with zero proof, and I'm supposed to just accept his word on this? This does strike me as extremely life-centric, for lack of a better, all-encompassing term. If all life in the universe disappeared, the universe itself would not suddenly vanish in a puff of smoke. Stars would still fuse the elements, black holes would still gobble up matter, the earth would continue to orbit the sun until it's consumed by the sun or disrupted by some massive interstellar traveler. But the world of ideas wouldn't exist, because there would be nothing to think of the ideas. Mathematics wouldn't exist, because there would be nothing to conceive of it.

If you assume the platonic realm exists, sure, I would grant you all of Penrose's Three Worlds; I would grant Tegmark's belief that somewhere out there is a physical Platonic realm. I'd also probably believe in a lot of other things without evidence as well.

But, maybe I'm just from Missouri. You're gonna have to show me.


Playing devil's advocate here. Mathematics and concepts can be shown in the regular way: in books, or in configurations of bits. Their meaning is generally lost though, unless there is someone to interpret it.

I think we should be careful about the contexts that we are discussing things in.

If we consider, for example, a string of text that contains the King James Bible, and we assume that the string "exists", that does not imply that the actual stories in the text exist, let alone that the characters featuring in it exist.

In the above example, existence has three different meanings.

It would be great if philosophy would be able to use strict type checking on their arguments :)

It would also be nice if I could simply state my philosophical dependencies in a plain text file.


I do agree that they can be "shown" or demonstrated; I don't think that's what's at issue here. And to be specific, I chose the phrase Platonic World, or Platonic Realm, precisely because it is the philosophical notion that ideas are real in the sense that the keyboard I am typing this message on is real: that those ideas exist independent of humans, and indeed independent of any living/sentient existence.

I called out Max Tegmark partially because I find it humorous, and specifically because he has said he believes that there exists a world where math is real, or maybe that the world is only math. It stems partly from his view of the multiverse. He makes an easy "punching bag" for this sort of thing.

So, yes, existence has many meanings in your example. I specifically called out the one where ideas have a real existence independent of our reality or any subjects that operate within it.


I believe we are in agreement on this. Intuitively, and under the assumption that the physical world exists (although I have no clue why or how it exists), I feel that mathematics is formed inside of the physical world. The experience of being able to practice mathematics probably emerges from brain structures which have evolved in the physical world.

Still, there are some problems that I run into when taking these thoughts further. How, for instance can one apply deductive reasoning or apply Occam's razor in a context where these are not available?

I am also intrigued by your earlier remark that "math is just a model that is surprisingly applicable to the real world." (emphasis mine). This brings to mind "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". Perhaps there is an easy way out for believers of anti-realism.

Would it be an interesting hypothesis to say that the real (physical) world that we observe is limited exactly by the way that our sensors and brains take shape in it? I'd like to think of this as the antithesis to "in the beginning there was nothing" -- I'd rather think that outside our physical world "there is all and everything"; we just seem to be able to reflect only on part of it. The unreasonable effectiveness of mathematics hints at a correlation between how brains work and what physical laws there are. Perhaps Emmy Noether's ideas on symmetry may lead to some clues here as well.

In this way it would not be surprising at all that mathematics is applicable to the real world, as it is so almost by definition. This is obviously not the same interpretation as Max Tegmark's, but it does hint at some kind of interplay between a mathematical world and a physical world.

Unfortunately, I can only make this theory work for myself intuitively. I have no grasp on what it means that the physical world is part of something bigger. In a way, it seems to be moving the goalposts, similar to how some people believe we are somewhere in a nested series of simulations. And I feel quite uncomfortable in using logic, concepts, abstractions and what have you, which are part of the human brain context, and possibly not of the context that I magically believe our physical world to reside in.


Something one should consider is that while math is surprisingly applicable, and as you correctly picked up I was making a reference to the "Unreasonable Effectiveness" quote, it is never exact.

Newton's laws are enough for us to fling rockets and robots to Mars, but they are not good enough for us to create our GPS system. And Relativity is amazingly good, but still not good enough to model black holes, dark matter, and dark energy. The breadth of equations in Quantum Mechanics are also supremely successful, and yet they don't work well in the realm of Relativity. The Standard Model doesn't know what dark matter or dark energy is.

So yes, all of this math we have is Unreasonably Effective. But it's still a model, and a model that is not 100% correct. We have gaps in our models and as we figure out better and better approximations for them we move to them.

In my first post I made a small comment about those who are Platonist "mistaking the map for the territory". This is a logical fallacy where one is confusing/conflating the semantics (in this case mathematics) with what it represents, reality.

Math, and by extension logic and any other model, or heuristic, that we use to make our way through this world is the map, it is amazingly effective. Just because a map is not the territory does not mean it's not useful.


Thank you for that reference!

It is indeed not so simple. If I continue my thought experiments about sets not being universal or foundational, I run into a myriad of problems.

For one, how can one reason about anything when rejecting concepts? How can one conclude anything when rejecting logic? How can one infer anything when rejecting time?

These problems seem to point to some recursive or symmetric (or circular as the article suggests) dependency between the realist and non-realist perspectives.

I don't yet fully understand why there would have to be three worlds -- I'd intuitively say that two (e.g. physical and mental) suffice. The platonic world might simply follow from the mental one, or vice versa. I'll put in several hours of thought and report back in the next post that touches upon this subject.

I concluded for myself that logic, science, nor philosophy are going to be of much help with this. I therefore turned to contemporary art, where such thought still has some kind of validity. Let's see where that leads me :)

Edit: It seems that the "three world" idea is originally an idea by Karl Popper. Wikipedia [1] explains this in some detail, from which it becomes clear why the thought experiment has three, not two worlds.

[1] https://en.m.wikipedia.org/wiki/Popper%27s_three_worlds


Part of the reason that you're going to run into trouble with sets being universal, or foundational, is rightly found in a theorem that was birthed from set theory: Gödel's Incompleteness Theorem[0].

I will confess that when I took discrete math in college, I was seduced by set theory. I literally had the naive thought to myself: "you could prove all of math with just sets!" As my education continued I found out, much to my chagrin (and Hilbert's[1]), that I was very, very mistaken.

This hasn't stopped everyone from trying to continue Hilbert's program, though with a bit more limited scope[2][3].

By and large, all math is undergirded by a set of axioms that have to be taken as true with no proof. Even as far back as Euclid's Elements, the text basically starts with a set of axioms and proceeds from there. Strange that something so real must have a bunch of rules given as true with no proof of their validity beyond "well, everyone can see that it's true".

In a final(-ish) dig, I'll just say: leave it up to Penrose to take Popper's sensible cosmology and turn it into a quasi-religious one.

[0] https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_... [1] https://en.wikipedia.org/wiki/Hilbert%27s_program [2] https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_t... [3] https://en.wikipedia.org/wiki/Reverse_mathematics


I wonder if you could prove it all with G. Spencer Brown's Laws of Form?


> I could brashly say the exact opposite and would have proven just as much as you.

go ahead


I prefer the HoTT/univalent foundations approach, which is about equivalences, which seem to me to capture a deep essence of what mathematics is about. Arithmetic starts with the idea that you can make some marks on the paper and manipulate them according to certain rules, and this somehow corresponds to how many sheep are in your field or what have you. Geometry starts with the idea that you can record some angles and then after your fields have flooded you can put the boundary markers back in places that are somehow the same as the places they originally were. Lists, sets and graphs are formal systems with arbitrary rules - but, crucially, they're equivalent to things we care about, and we can lift results about those formal systems (where they're easy to calculate) back along that equivalence to be facts about those real things (where they're useful).


My answer would be: when we distinguish something from something else. This is the root of all logic and cognition.


I like this answer. I've been reading "Gödel's Proof", and where the definition of "tautology" in the main text makes use of the concepts of True and False, there is an appendix explaining that you can arrive at the same result without those concepts, just by treating things as belonging to one class vs. another (there is a one-to-one correspondence with True and False, but the meaning is arbitrary).
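A rough sketch of that idea in Python, with two arbitrary class labels standing in for True/False (my own encoding, not the book's):

```python
# Two arbitrary classes stand in for True/False; a formula is a
# "tautology" if it lands in K1 under every assignment of its variables.
K1, K2 = "K1", "K2"

def neg(x):
    return K2 if x == K1 else K1

def disj(x, y):
    return K1 if K1 in (x, y) else K2

# "p or not p" lands in K1 for both possible classes of p.
print(all(disj(p, neg(p)) == K1 for p in (K1, K2)))  # -> True
```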


"An engineer, a physicist and a mathematician have to build a fence around a flock of sheep, using as little material as possible. The engineer forms the flock into a tight circular shape and constructs a fence around it. The physicist builds a fence with an infinite diameter and pulls it together until it fits around the flock. The mathematician thinks for a while, then builds a fence around himself and defines himself as being outside.”


I think sets are fundamental to counting. An example: how many trees are in the forest? It depends on where you draw the boundary for "tree". An aspen grove, for example, has many stems of one large interconnected tree.


Try to count trees from the perspective of the sun, or from the perspective of an electron.

Trees are made up of billions of atoms, in extremely differing configurations, which can only be appreciated by machinery that is able to abstract it into the concept of a tree. To the rest of the universe it's just a bunch of atoms, with no clearly defined boundary.

So, counting is not necessarily a fundamental thing either. At least, according to how I like to interpret things :)


Excellent points. I think we are viewing "fundamental" differently. Instead of fundamental to the universe, I see counting as fundamental to the usefulness of math, which in itself is just an abstraction of the real world. (Whole Plato's cave thing)


> where in nature does one observe these weird things?

That's like asking "where in nature does one observe numbers"?

Literally everywhere.

Your jug partitions water into the set inside the jug and the set outside it.

Caves are a subset of rock formations.


All that partitioning and counting can only be performed by a system that is able to abstract and differentiate between different abstractions. Brains and computers appear to be good at this, but other than that, I don't see many other structures that are capable of doing this.

So, no, I don't observe numbers in the universe, if I take on the perspective of, say, a rock.


Um what?

Are you saying you want something that can be logically reasoned by a rock?


It's obviously not logical, and possibly not even reasonable, but it seems like a viable way to get out of the frame of reference from a human being.

I'm fine if this way of thinking is discarded as nonsense. I'd be happy to find more constructive ways to continue with this.


> I never feel quite at ease with sets being some universal foundation

That's not a big issue nowadays since there is univalent foundations, which is based on ∞-groupoids instead of sets.


Sets being the foundation is a historical accident. It is convenient and well-studied, and if you can encode your objects and axioms in set theory, you have a sure footing (at least as sure as it gets) with respect to consistency.

Otherwise, of course, lists, graphs, etc. are all objects on their own and exist independently of any encoding as sets.


I still don't know how category theory will help me as a programmer.

I understand, and agree, that small functions, composed, are easier to understand and maintain, easier to port and easier to build upon than large monolithic functions.

But, aside from that, I'm not sure of the tangible day-to-day benefits of reciting parts of category theory.

I admit I don’t know category theory in much detail but I just can’t see the tangible benefit. Any hints would be appreciated.


I studied category theory for a while, and frankly it plays zero role in my day-to-day programming. It would maybe be a different story if I were writing libraries in Haskell or something, but as it is, it’s really not that relevant to me.

Programmers use monads all the time without knowing it, but there is little need to understand the deep math-y concepts or proofs that underlie them.


I worry this is often the case. I like the idea of learning maths for maths sake. And I would love the time to do that.

But knowing I have a limited amount of time, I often suspect a lot of people extolling the benefits of understanding the mathematical underpinning of concepts are more showing off and in love with their own understanding than offering real benefits to programmers.


Before I learned programming, I thought it had a lot to do with mathematics. I mean both look pretty formal. I was surprised to learn that mathematics played hardly a role in ordinary software development. If you happen to write a physics simulation, sure, you need math, but that is because of physics, not because of programming being inherently mathematical.

When I later learned more mathematics, I was also surprised to learn how informal everything is compared to programming: Mathematicians use fancy symbols, but also ambiguous, idiosyncratic, or inconsistent notation, and they always write proofs where a lot is left unspecified because the intermediate steps are assumed to be obvious. In software development it's the opposite: the compiler has to understand everything.


I think that depends what the math is they’re saying to learn, does it not?


It's useful when you have extremely structured data. Think of implementing a programming language, for example: the input is extremely structured, but you need to handle so many things in so many structured ways.

For wishy-washy data, like you have in almost every other case, it isn't useful, unless you want to solve those problems by making a programming language. But in most cases languages already exist for those problems, like SQL for relational data, so you don't have to solve them yourself.


It won’t really help you in day to day programming. Studying it for that reason will yield disappointment.

I see three reasons to learn it if you are a programmer:

1. You find it fun and interesting

2. You work in a language such as Haskell or PureScript that uses many aspects of category theory in its libraries, or Scala, which has CT-inspired third-party libraries, or languages in general that explicitly mention monads and functors. Languages that implement these concepts but don't name them also count here. Understanding that all these things are based on quite easy-to-understand CT concepts helps to unify them and makes their purpose more clear. But you can also just look at the type signatures!

3. A bit more tenuous but I believe studying CT helps train your brain to be more mindful of structure and especially composition of structure.


I think three would be the main reason for me. Although I'm not sure if I even deal with that much data which would benefit from that kind of insight. But who knows.


I think Category Theory is useful in that it provides abstractions for many areas of mathematics that used to be considered unrelated to each other, or quite distant. It studies interactions between entities, rather than the details of the entities themselves.

I think it's quite useful in functional programming languages and gives a lot of insight on how to organise things that in the beginning seem totally unrelated.


I haven’t seen any examples of that in practice. And I was deeply into category theory for a while.


The Langlands program?


It's design patterns for functional programming.

You can quickly recognize common patterns, e.g. monads, functors, bijections.
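For instance, the monad pattern shows up as chained computations that short-circuit on failure; a minimal Python sketch (function names are mine, with None standing in for Nothing):

```python
# The monad pattern as chained computations that short-circuit on
# failure; None plays the role of Nothing in a Maybe type.
def bind(value, f):
    return None if value is None else f(value)

def safe_div(x, y):
    return None if y == 0 else x / y

ok = bind(bind(20, lambda v: safe_div(v, 2)), lambda v: safe_div(v, 5))
bad = bind(bind(20, lambda v: safe_div(v, 0)), lambda v: safe_div(v, 5))

print(ok, bad)  # -> 2.0 None
```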


Ed Kmett talk a few years ago: https://youtu.be/HGi5AxmQUwU?si=zSktsfqBNup2veBE&t=29m7s (starting at a relevant moment, relating inverse semigroups to LSP)


Haskell uses some concepts from category theory, but even there (as people pointed out in a Haskell thread a while ago) it is not actually necessary to understand the category theory behind them. E.g. to be able to use monads. If category theory isn't even helpful for Haskell programming, we can be pretty sure that it doesn't help at all with more common languages.


A math major named Dave Beazley is also figuring it out: https://www.dabeaz.com/bits/ct.html. You are in good company.


It won't. I have studied it enough to realise that it doesn't give me anything new. I have learned a lot more studying (Martin-Löf) type theory, which, by the way, is also a solid foundation of math.


I'm surprised to see no mention of topos theory on this page. The sense in which category theory is a generalisation of set theory is pretty weak, imo, until you bring in concepts like subobject classifiers. That isn't the only thing topoi generalise, but it is a pretty significant one.


If you're interested in category theory, I have compiled a list of resources quite recently: https://github.com/madnight/awesome-category-theory


Disclaimer: I don't know category theory and I only skimmed the linked page :)

This looks great - I love the illustrations, and as far as I can tell the information is solid! I've got it in my Pocket list and am looking forward to reading it on the bus.

A while back there was a "Group Theory Coloring Book" that someone posted here. I was kinda hoping that this link would be another one of those. (Spoiler: it's an illustrated explanation of category theory - which is great! - not a coloring book).

Sorry in advance for hijacking this post, but it's kinda, sorta related to ask: Does anyone have a link to 'fun math/STEM-themed coloring books'?


New to set/category theory here, so this is most likely a failure on my part to understand what the diagram is actually depicting, but: why does the identity function diagram contain two sets? Shouldn't it be just the one set with one arrow turning back on itself?


They are not two sets; they are two diagrams depicting the same set. You can present it like this or in the way you mentioned.


One of the most powerful things I learned in topology was that functions can be viewed as sets of ordered pairs with the restriction that the first item in each ordered pair can only appear once.
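That view can be made concrete in a few lines of Python. This is a minimal sketch, and `is_function`/`apply` are hypothetical helper names chosen for illustration:

```python
# A function X -> Y can be encoded as a set of (input, output) pairs,
# with the restriction that each input appears as a first item at most once.
def is_function(pairs: set) -> bool:
    inputs = [p[0] for p in pairs]
    return len(inputs) == len(set(inputs))

square = {(1, 1), (2, 4), (3, 9)}
not_a_function = {(1, 1), (1, -1), (2, 4)}  # 1 maps to two different outputs

assert is_function(square)
assert not is_function(not_a_function)

# Applying the function = looking up the unique pair with that first item.
def apply(pairs: set, x):
    return next(y for (x2, y) in pairs if x2 == x)

assert apply(square, 3) == 9
```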


Relations between sets are generalizations of functions, which is another way to realize that. I think this was taught in my first CS semester.

Also, in the context of automata theory, functions = deterministic and relations = nondeterministic.
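A minimal Python sketch of that function-vs-relation distinction, framed as deterministic vs nondeterministic transitions. The two-state automaton here is made up for illustration:

```python
# A relation R subset of X x Y generalizes a function: an input may relate
# to zero, one, or many outputs. In automata terms, a deterministic
# transition function yields exactly one next state, while a
# nondeterministic transition relation yields a set of next states.
det = {("q0", "a"): "q1", ("q1", "a"): "q0"}              # function
nondet = {("q0", "a"): {"q0", "q1"}, ("q1", "a"): set()}  # relation

def step_det(state, symbol):
    return det[(state, symbol)]

def step_nondet(states, symbol):
    # Follow every possible transition from every current state.
    return set().union(*(nondet.get((s, symbol), set()) for s in states))

assert step_det("q0", "a") == "q1"
assert step_nondet({"q0"}, "a") == {"q0", "q1"}
assert step_nondet({"q1"}, "a") == set()   # zero successors is allowed
```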


One thing I would like to understand is what limitations of set theory made it necessary to invent/discover category theory. Can someone enlighten me? What do categories let us do that we can’t do with sets?


Category theory was not a response to any limitations of set theory, but rather a collection of new abstractions, still grounded in set theory (originally anyway). The first paper introducing these abstractions was by Eilenberg and Mac Lane [1], who formalized for the first time the idea of natural mappings between mathematical objects.

For a long time prior to E&M, mathematicians had used an informal notion of “natural” or “canonical” mapping, which meant something like one special mapping out of several available ones. Especially important is the idea of natural isomorphisms. Just knowing that two objects are isomorphic is often not good enough to prove results about them because you have to make a choice about which isomorphism of several you’re using, and you might have to make such an arbitrary choice about infinitely many pairs of objects all at once. Having a canonical choice solves this problem.

Prior to E&M, mathematicians couldn’t formalize this idea of canonical choice. They would hand wave about how natural their choice of isomorphism was and how this allowed them to avoid making arbitrary choices. Then E&M defined categories, functors, and natural transformations to formalize this idea of naturality. Their motivation was algebraic topology, but the abstractions they defined turned out to be extremely broadly useful across much of mathematics.

[1] https://www.ams.org/journals/tran/1945-058-00/S0002-9947-194...


Ah, this is right on the money in terms of the level of explanation I was looking for. Thank you so much for that, and thank you for the ref, which I will read during my long wait at the DMV today :)


Jean-Pierre Marquis' article may be helpful:

https://plato.stanford.edu/entries/category-theory/

>what limitations of set theory made it necessary to invent/discover category theory?

Category theory did not start as an alternative to set theory.

But: "Category theory even leads to a different theoretical conception of set and, as such, to a possible alternative to the standard set theoretical foundation for mathematics."

>What do categories let us do that we can’t do with sets?

"At minimum, it is a powerful language, or conceptual framework, allowing us to see the universal components of a family of structures of a given kind, and how structures of different kinds are interrelated"

Some category theory constructions like adjoints and monads are higher level and more powerful than basic set theory constructions like power set.

"The number of mathematical constructions that can be described as adjoints is simply stunning."


Thank you!


I know how it relates to monoids, rather than to sets. For example, you cannot multiply together just any two matrices (as you can with monoid elements); they need to have compatible dimensions. In category theory this corresponds to the composition of morphisms: the objects are the numbers of rows/columns and the morphisms are the matrices.
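That matrix example can be sketched in Python. This toy `compose` follows the common convention that an m-by-n matrix is a morphism n -> m, so composition is defined exactly when the inner dimensions agree:

```python
# Sketch of the category of matrices: an m-by-n matrix is a morphism
# n -> m, and composition (matrix multiplication) is only defined when
# the "objects" (dimensions) line up, just like morphism composition.
def compose(A, B):
    # A: m x n, B: n x p; result: m x p. Dimensions must agree, like
    # composing f: b -> c and g: a -> b into (f after g): a -> c.
    if len(A[0]) != len(B):
        raise ValueError("dimensions don't match: composition undefined")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

f = [[1, 2], [3, 4]]        # 2x2: morphism 2 -> 2
g = [[1, 0, 1], [0, 1, 1]]  # 2x3: morphism 3 -> 2

assert compose(f, g) == [[1, 2, 3], [3, 4, 7]]

# Composing the other way is undefined: g is 2x3, f is 2x2, and 3 != 2.
try:
    compose(g, f)
    assert False, "should have raised"
except ValueError:
    pass
```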



