Is there any Category Theory tutorial that illustrates that we need this apparatus to solve some real problem? For example, for Group Theory there is an excellent book 'Abel's theorem in problems and solutions' that does exactly that.
> that illustrates that we need this apparatus to solve some real problem?
I think your question is wrong in a sense. Category theory is one of several languages of mathematics, and there are analogies between them. It's kinda like asking "is there a computer program that has to be written in C?"
So I think there is an element of taste in whether you prefer category theory to some other (logical) language. That said, just like in programming, it's still useful to know more than one language, because they potentially have different strengths.
Modern algebraic topology, especially homological algebra, more or less requires category theory... intro textbooks such as Rotman's will contain primers on category theory for this reason.
I'd say Grothendieck's proofs of the Weil Conjectures are a good example. His proof uses étale cohomology, and the definition of étale cohomology uses Category Theory in a fundamental way. From the étale cohomology Wikipedia page https://en.wikipedia.org/wiki/%C3%89tale_cohomology
"For any scheme X the category Et(X) is the category of all étale morphisms from a scheme to X. It is an analogue of the category of open subsets of a topological space, and its objects can be thought of informally as "étale open subsets" of X. The intersection of two open sets of a topological space corresponds to the pullback of two étale maps to X. There is a rather minor set-theoretical problem here, since Et(X) is a "large" category: its objects do not form a set."
There's a lot of advanced math in that paragraph, but it should be clear that Category Theory is needed to define étale cohomology.
One application I like is the use of the Seifert-van Kampen theorem to prove that the fundamental group of the circle (S^1) is isomorphic to Z. Category theory is not strictly needed to prove this: you can compute pi_1(S^1) using R as a cover in a way that is purely topological (see Hatcher, "Algebraic Topology"). But if one states the Seifert-van Kampen theorem for groupoids (which uses category theory through the notion of a universal property/pushout), one can compute pi_1(S^1) largely algebraically just from the universal property - in fact you can go through the whole proof without mentioning a homotopy once (see tom Dieck, "Algebraic Topology", section 2.7).
This might not meet your criterion exactly, as one can extract a more topological proof and relegate the category theory to a non-essential role, but this requires some more effort and is a harder proof. So I do think it still illustrates that the category theoretic approach does add something beyond just a common language.
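For concreteness, the groupoid computation can be sketched roughly as follows (loosely following tom Dieck's treatment; the notation here is my own shorthand, not his):

```latex
% Cover S^1 by two open arcs U and V whose intersection has two
% contractible components; pick one basepoint a, b in each component.
% Van Kampen for the fundamental groupoid gives a pushout square:
%
%   \Pi(U \cap V) ----> \Pi(U)
%        |                |
%        v                v
%      \Pi(V)  ----->  \Pi(S^1)
%
% On the basepoint set \{a,b\}:
\Pi(U \cap V; \{a,b\}) \;\cong\; \text{the discrete groupoid on } \{a,b\},
\qquad
\Pi(U; \{a,b\}) \;\cong\; \Pi(V; \{a,b\}) \;\cong\; \mathcal{I},
% where \mathcal{I} is the groupoid with two objects and a unique
% isomorphism between them (each arc is contractible).
% The pushout is the free groupoid on a graph with two vertices and
% two edges between them, so the vertex group is free on
% (2 edges - 2 vertices + 1) = 1 generator:
\pi_1(S^1, a) \;\cong\; F(1) \;\cong\; \mathbb{Z}.
```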
As far as I understand, fundamental groups were defined by Poincare in 1895. And functors in category theory are a generalisation of this idea (i.e. proving something for fundamental groups and then relating this back to topological spaces). So your example sounds backwards to me.
Putting topology aside, and recognizing that 'ease' is subjective, imo Moggi's use of monads to model the denotational semantics of I/O in lazy functional languages such as Haskell is a common textbook example; the creators of Haskell had tried many solutions that did not work in practice before monads cracked it open. Even now this solution is more widely adopted than the alternatives (streaming I/O, linear I/O types, etc) and Moggi's paper remains a classic.
It's annoying that you need so much math to understand the utility of Category Theory. I learned a bunch of Category Theory before I ever saw it used in a useful way.
Grothendieck wrote modern Algebraic Geometry in the language of Category Theory. This is the first time I saw Category Theory really used in a useful way. Grothendieck's proofs of the Weil conjectures are, I would say, a good example of using Category Theory to solve a famous problem. Category Theory is used to define and work with étale cohomology, and étale cohomology plays a fundamental role in Grothendieck's proofs of the Weil conjectures.
That's not an application of category theory. The important theorem here is that the fundamental group is a functor, plus computations of the fundamental group of the disc and the circle. But that's a theorem from topology, not a theorem from category theory. Category theory is merely used as a language, to give the proof a structure.
By that I don't want to say category theory is useless. But regarding the video it's neither necessary, nor an application of category theory.
I found the series of Category Theory lectures by Bartosz Milewski[1] extremely helpful and approachable. It introduces the abstract concepts of category theory while giving concrete examples of those concepts and tying some key concepts back to properties of types in programming languages.
I haven't dug far into CT. I'm slowly making my way through Modern Foundations of Mathematics (Richard Southwell) [1] that was posted here recently.
That said, two comments:
1) The definition of a category is just objects, arrows, and composition. If you're looking for more features, you might be disappointed. (If you've grown up with 'methods' rather than arrows, then you don't necessarily have composition.)
Writing your logic with objects and arrows is just damn pleasant. If I have bytes, and an arrow from bytes to JSON, then I have JSON. If I also have an arrow from JSON to a particular entry in the JSON, then by the property of composition, I have an arrow from bytes to that entry in the JSON.
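That bytes-to-entry composition can be sketched in Java (names are hypothetical, and a plain Map stands in for a real JSON library to stay self-contained):

```java
import java.util.Map;
import java.util.function.Function;

public class Compose {
    public static void main(String[] args) {
        // Arrow from bytes to "JSON" (a Map here, so no JSON library needed).
        Function<byte[], Map<String, String>> parse =
            bytes -> Map.of("name", new String(bytes));

        // Arrow from the parsed document to one entry in it.
        Function<Map<String, String>, String> getName =
            json -> json.get("name");

        // Composition gives a single arrow from bytes to the entry.
        Function<byte[], String> bytesToName = parse.andThen(getName);

        System.out.println(bytesToName.apply("alice".getBytes())); // alice
    }
}
```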
2) The various CT structures are re-used over and over and over again, in wildly different contexts. I just read about 'logict' on another post. If you follow the link [2] and look under the 'Instances' heading, you can see it implements the usual CT suspects: Functor, Applicative, Monad, Monoid, etc. So I already know how to drive this unfamiliar technology. A few days ago I read about 'Omega' on yet another post - same deal [3]. What else? Parsers [4], Streaming IO [5], Generators in property-based-testing [6], Effect systems [7] (yet another thing I saw just the other day on another post), ACID-transactions [8] (if in-memory transactions can count as 'Durable'. You don't get stale reads in any case).
They're also widespread in other languages: Famously LINQ in C#. Java 8 Streams, Optionals, CompletableFutures, RX/Observables. However these are more monad-like or monad-inspired rather than literally implementing the Monad interface. So you still understand them and know how to drive them even if you don't know all the implementation details.
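For instance, Java's Optional and Stream each expose the same map/flatMap shape, which is what makes them feel "monad-like" even without a shared Monad interface (a small sketch, requires Java 16+ for toList()):

```java
import java.util.List;
import java.util.Optional;
import java.util.stream.Stream;

public class MonadLike {
    public static void main(String[] args) {
        // The same 'map' shape on two unrelated containers:
        Optional<Integer> len = Optional.of("hello").map(String::length);
        List<Integer> lens = Stream.of("a", "bb").map(String::length).toList();

        // And the same 'flatMap' shape:
        Optional<Integer> parsed =
            Optional.of("42").flatMap(s -> Optional.of(Integer.parseInt(s)));
        List<Integer> pairs =
            Stream.of(1, 2).flatMap(n -> Stream.of(n, n * 10)).toList();

        System.out.println(len.get());    // 5
        System.out.println(lens);         // [1, 2]
        System.out.println(parsed.get()); // 42
        System.out.println(pairs);        // [1, 10, 2, 20]
    }
}
```

Knowing the monadic pattern means you already know how to drive both APIs, even though neither implements a common interface.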
However, what's lacking (compared to Haskell) is the library code targeting monads. For example, I'm always missing something in Java Futures that should be right there: an arrow I can use to get from List<Future<T>> to Future<List<T>>. In Haskell that code ('sequence') would belong to List (in this case 'Traversable' [9]), not Future, as it can target any Monad. This avoids an n*m implementation problem: i.e. List and logict don't need to know about each other, vector and Omega don't need to know about each other, etc.
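A hand-rolled version of that missing 'sequence' arrow, sketched with CompletableFuture (in Haskell this is just Traversable's sequence; in Java every project tends to re-implement something like it):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class Sequence {
    // List<Future<T>> -> Future<List<T>>: completes when all inputs complete.
    static <T> CompletableFuture<List<T>> sequence(
            List<CompletableFuture<T>> futures) {
        return CompletableFuture
            .allOf(futures.toArray(new CompletableFuture[0]))
            .thenApply(ignored -> futures.stream()
                .map(CompletableFuture::join) // safe: all are done by now
                .toList());
    }

    public static void main(String[] args) {
        List<CompletableFuture<Integer>> fs = List.of(
            CompletableFuture.completedFuture(1),
            CompletableFuture.completedFuture(2));
        System.out.println(sequence(fs).join()); // [1, 2]
    }
}
```

Note that this version is hard-wired to CompletableFuture; the Haskell version is written once against the Monad interface and works for any of them.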
Perhaps not in Java but in C# it is quite common to do
var tasks = names.Select(GetThingAsync); // IEnumerable<Task<Thing>>
var things = await Task.WhenAll(tasks);  // Thing[]
You can mix and match with other LINQ methods. There are other useful methods like Parallel.ForEachAsync, .AsParallel (Java has a similar abstraction), and IAsyncEnumerable for asynchronously "streaming back" a sequence of results, which let you compose tasks easily.
Maybe what you wrote is clear to other LINQ users, but as a short feedback, your point is not easily understandable for people who don't know much about it.
As in, from the outside it looks like you're way deep into this, but it might need some more work to reach the (assumed) target audience.