I think Rich's innovation here is extremely clever and quite subtle, but it's pretty Clojure-specific, both in terms of the problem it solves and the way it solves it.
What's clever is recognizing the "kernel" of a function like map. Hickey answers the question: if we take a list-processing/decimating function like count-if or map and express it with reduce, what is the reducer function we will need? And how is that reducer derived from, or related to, the function we pass in, like the mapping function in map? He then makes those functions, when called with one less argument, return the transformation that builds that reducer.
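To make that concrete, here is a rough sketch in Clojure of the map kernel; the name mapping is made up for illustration and is not the actual clojure.core implementation:

```clojure
;; The reducer "kernel" of map: given the mapping function f, produce a
;; function that transforms any reducing function rf into one that maps
;; f over each input before handing it to rf.
(defn mapping [f]
  (fn [rf]
    (fn
      ([] (rf))                                   ; init
      ([result] (rf result))                      ; completion
      ([result input] (rf result (f input))))))   ; step

;; Clojure 1.7+ builds this into map itself: called with one less
;; argument (no collection), map returns this kind of function.
(reduce ((mapping inc) +) 0 [1 2 3])   ;=> 9
(reduce ((map inc) +) 0 [1 2 3])       ;=> 9
```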
Now what if this is done to reduce itself? What should a reduced-arity (reduce fun) return? I think that is a no-op: nothing needs to be done to fun to make it reduce under reduce, so (reduce fun) -> fun. That is, to transduce the reducer fun, an arbitrary function, into a function X such that (reduce X list) will do the job of (reduce fun list), we simply take X = fun.
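For what it's worth, reduce in Clojure has no actual one-argument arity, so this stays a thought experiment; but the "do nothing to the reducer" transform it points at is just identity:

```clojure
;; Transducing with identity as the transform leaves the reducing
;; function untouched, so it behaves exactly like plain reduce.
(transduce identity + 0 [1 2 3 4])  ;=> 10
(reduce + 0 [1 2 3 4])              ;=> 10
```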
The overall pattern of map-reduce is possible in most languages, certainly any language that has closures, and the idea of composing multiple reducing functions together comes up in many of them. Without closures you could get something similar by walking an accumulating object through multiple loops, but that is awkward and ugly. With closures, for example in JavaScript, you can do it cleanly by composing the closures together, as in the sketch below. What transducers do is formalize this pattern as an idiomatic part of Clojure.
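Sketched here in Clojure for consistency with the rest of the thread, though the same shape translates directly to JavaScript; the helper names are made up, and nothing transducer-specific is used, just plain closures and comp:

```clojure
;; Hand-rolled reducer transformers, composed as ordinary closures.
(defn mapping*   [f]    (fn [rf] (fn [acc x] (rf acc (f x)))))
(defn filtering* [pred] (fn [rf] (fn [acc x] (if (pred x) (rf acc x) acc))))

;; Compose them; the leftmost transform sees each input first.
(def xform (comp (mapping* inc) (filtering* even?)))

(reduce (xform conj) [] [1 2 3 4 5])  ;=> [2 4 6]
```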
In languages with more leeway on execution order, this problem can be made to go away. For example, Haskell's stream fusion combines back-to-back calls to list processing functions into a single iteration over the list.
In case anyone is unaware, LINQ lets you write queries as a series of function calls, which it captures as Expression objects forming an in-memory AST.
There is also the Dynamic Language Runtime (DLR), which, as a generalization of LINQ expression trees to support statements, is pretty powerful for building these kinds of tools on .NET. I use it often in my work.
Sorry, I should have been clearer. The rewriting happens in GHC, but it's the application programmer who creates the rules. So, stream fusion isn't implemented by the compiler AFAIK.
The thing is, it's something of a redefinition of what idiomatic Clojure is. Overloading arity to do radically different things isn't idiomatic Clojure, but transducers do it at two levels. Equally, functions are expected to pass state explicitly (e.g. the old-school drop) rather than hold it implicitly (e.g. the transducer version of drop).
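A simplified sketch (not the actual clojure.core source) of what those two levels and the implicit state look like for a drop-style transducer; drop-xf is an illustrative name:

```clojure
;; Level 1 of the arity overloading: the outer function, called without
;; a collection, returns a transducer. Level 2: the function it builds
;; overloads arity again for init / completion / step.
(defn drop-xf [n]
  (fn [rf]
    (let [remaining (volatile! n)]   ; state held implicitly in a volatile
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result input]
         (if (pos? @remaining)
           (do (vswap! remaining dec) result)   ; still dropping
           (rf result input)))))))              ; pass through

(into [] (drop-xf 2) [1 2 3 4 5])  ;=> [3 4 5]
(into [] (drop 2)    [1 2 3 4 5])  ;=> [3 4 5]
```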