One thing I think will eventually come out of linear logic is that in classical linear logic there's no intrinsic division between inputs and outputs. In LL, implication, which corresponds to functions in programming, is decomposed into a linear implication, which is pure application, and an exponential, which handles copying. But even further than that, classical linear logic has a symmetric connective called par, in terms of which implication can be expressed: A -o B is the same as (not A) `par` B.
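To make the first half of that decomposition concrete, here is a minimal sketch using GHC's LinearTypes extension (GHC 9.0 or later). `%1 ->` plays the role of linear implication (-o), and the `Ur` wrapper defined here stands in for the exponential !A; the names `Ur`, `dup` and `fromLinear` are just my own for illustration:

```haskell
{-# LANGUAGE LinearTypes #-}
{-# LANGUAGE GADTs #-}

module Main where

-- `Ur` mimics !A: its field is unrestricted (plain ->), so the payload
-- may be copied or discarded even when the wrapper itself is linear.
data Ur a where
  Ur :: a -> Ur a

-- Linear function: the argument must be consumed exactly once.
swap :: (a, b) %1 -> (b, a)
swap (x, y) = (y, x)

-- Copying is only possible through the exponential wrapper.
dup :: Ur a %1 -> (a, a)
dup (Ur x) = (x, x)

-- An ordinary (intuitionistic) function A -> B factors as !A -o B.
fromLinear :: (Ur a %1 -> b) -> (a -> b)
fromLinear f x = f (Ur x)

main :: IO ()
main = print (fromLinear dup (3 :: Int))   -- prints (3,3)
```

The point is that the ordinary arrow A -> B is recovered as !A -o B, so copying lives in the exponential rather than being baked into the function arrow itself.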
It's this symmetry that intrigues me, because it points to a way of programming beyond functional programming that still maintains its fundamental good properties.
I recommend looking at fusion calculi, which are based on the idea of input/output being symmetric. They were invented independently by B. Victor together with J. Parrow, and at the same time by Y. Fu. See [1] for some links. NB this is an old page, and I don't think anyone currently works on this.
When it was first released, Microsoft's BizTalk was based on a generalisation of fusions. (L. Wischik, the author of [1], was on the BizTalk team.)
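To make the symmetric input/output idea a bit more concrete, here's a toy sketch (my own illustration, not the actual fusion calculus semantics) of the core reaction: when an output u&lt;x&gt; and an input u(y) meet on the same channel, the result is that x and y get fused, i.e. equated, rather than a value flowing from a designated sender to a designated receiver. Which side is the "input" and which the "output" makes no difference:

```haskell
import qualified Data.Map.Strict as M

type Name = String

-- Fusion environment: maps a name to another name it has been fused with.
type Fusions = M.Map Name Name

-- Follow the chain to a representative name.
find :: Fusions -> Name -> Name
find env n = maybe n (find env) (M.lookup n env)

-- Fuse two names; note that neither side is privileged.
fuse :: Name -> Name -> Fusions -> Fusions
fuse a b env
  | ra == rb  = env
  | otherwise = M.insert ra rb env
  where
    ra = find env a
    rb = find env b

data Action
  = Out Name Name   -- u<x>: offer the name x on channel u
  | In  Name Name   -- u(y): accept into the name y on channel u
  deriving (Show)

-- The core interaction, sketched: an Out and an In on the same channel
-- react, and their parameters are fused. Swapping the two sides changes
-- nothing, which is the symmetry being talked about.
step :: Action -> Action -> Fusions -> Maybe Fusions
step (Out u x) (In v y) env | find env u == find env v = Just (fuse x y env)
step (In v y) (Out u x) env | find env u == find env v = Just (fuse x y env)
step _ _ _ = Nothing

main :: IO ()
main = print (step (Out "u" "x") (In "u" "y") M.empty)
-- Just (fromList [("x","y")]): x and y are now the same name
```

In the real calculi the fusion is applied as a substitution over the whole process and interacts with scoping, but the symmetry of the basic step is the point here.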
Linear logic is the shit. I’ve been reading up on it and I believe the 21st century will be built on top of linear logic. For example, I have a hunch that probability doesn’t need to be founded on measure theory, but can be founded on linear logic instead.
Super interesting! I had never heard of linear logic before you mentioned it, so my first question was: how can it be described in terms of category theory? So I googled it, and apparently there is a lot to be said. The first link (a PDF) is over 200 pages!
I am not the dude you talked to, but I looked at your profile and found multiple points of overlap between your interests and mine.
I think that if I knew you IRL we would be good friends. Or maybe sworn enemies, because we might share interests but have really, really different opinions on them, to the point where we could not reconcile our differences :P
But I live in a different country altogether, so the point is kind of moot I guess. (I live in Norway.)
How does one control runtime operations with this? OK, the compiler/optimizer might have a notion of "fastest execution possible" (which most likely has some local minima). But what about the latency/bandwidth trade-off? What do I do if I need the lowest possible latency? Or the highest possible bandwidth?