A Taste of Linear Logic (1993) [pdf] (ed.ac.uk)
146 points by andrewflnr on July 30, 2018 | 62 comments



Here's a great talk by Simon Peyton Jones on integrating linear types into Haskell: https://www.youtube.com/watch?v=t0mhvd3-60Y


One of the most amazing things about linear logic is that "classical" linear logic has a direct constructive reading.

This has some interesting applications in constructive mathematics (https://arxiv.org/abs/1805.07518) and programming (http://homepages.inf.ed.ac.uk/wadler/papers/multiparty/multi...).


The Shulman paper was the first instance I’ve seen of someone properly using par in their logical modelling.

I’ve been slowly working on some applied linear logic stuff for my day job for the past two years. I definitely think a lot of work has missed out on what happens if you have both implication and par native to your system. You get a really nice kernel for a concurrent functional programming language, with nice resource safety.


I couldn't agree more; classical linear logic leads to surprisingly concise definitions.

Your day job sounds fascinating, would you mind expanding on it a bit?


Linear logic is the shit. I've been reading up a lot on linear logic, and on certain things I have a hunch it closely corresponds to, and I'm convinced it's the formalism for the 21st century.

Basically, it's like classical logic but with two players. From player 1's point of view, player 2's AND has different semantics than his own AND.

It's the logic of state developing over time. In classical logic, a statement that is true has been true since negative eternity and will remain true until eternity.

On the other hand, linear logic deals with statements like "I have a dollar" which can be spent on an ice cream. Therefore, at some point, "I have a dollar" was false, then it became true, and then it became false again.
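In linear-logic notation (a standard textbook rendering of that example, not something from the article): linear implication ⊸ consumes its premise, so

$$\text{dollar},\ \ \text{dollar} \multimap \text{iceCream} \ \vdash\ \text{iceCream}$$

and after the entailment the dollar is gone; you cannot also keep the dollar around, the way classical logic happily would.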

It's also the logic that captures the difference between a reference and a value which is weirdly important. Unix's "everything is a file" should actually be recast as "everything is a reference".

The Rust borrow checker is based on a subset of linear logic and it's the secret sauce of said compiler. It can reason statically about dynamic systems.
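A minimal Rust sketch of that flavour of reasoning (an illustration of move semantics only, not how rustc actually implements the borrow checker; the types are made up):

    // `Dollar` is a resource: it has no Copy/Clone, so it moves rather than copies.
    struct Dollar;
    struct IceCream;

    // Consumes the dollar; the caller gives up ownership.
    fn buy_ice_cream(_money: Dollar) -> IceCream {
        IceCream
    }

    fn main() {
        let dollar = Dollar;
        let _treat = buy_ice_cream(dollar); // `dollar` is spent here
        // buy_ice_cream(dollar);           // error[E0382]: use of moved value
    }

The compiler statically tracks that the resource flows through the program at most once, which is the "reason statically about dynamic systems" mentioned above.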

I'll make some claims that I can support with papers if need be, so ask if you want proof. Currently, probability is based on measure theory; however, linear logic can also be used (Vovk bases probability on game theory, which is essentially linear logic).

Generative adversarial networks (which are built on minimax) are connected to linear logic, as minimax is very linear-logical.

I would go as far as to say that mathematics, in addition to the axiom of choice, also needs an axiom of adversarial choice. The axiom of choice assumes that you can always pick the best possible option; however, you need to be able to account for your worst possible options as well. All mathematical optimization falls outside of classical logic.

The funniest thing is that this idea, the duality of things, is very old. Dialectics is very firmly rooted in Western philosophy (Aristotle, Kant, Hegel, you know the usual suspects), however no one ever extended it to a formal logic.

Karl Marx (I hate Marx but realize that his ideas might have influenced people, so knowing why he was wrong is worth something) also talked about a similar idea (https://en.wikipedia.org/wiki/Dialectical_materialism). In particular, Engels's summary is worth exploring: https://en.wikipedia.org/wiki/Dialectical_materialism#Engels...

They are essentially talking about linear logic; however, the third rule is very much wrong: negation of negation is not part of a logic of two opposites.

It can also be used as the foundation of probability instead of the more traditional measure theory.

I can talk about this for hours.


You make a lot of strong claims, and express them with supreme confidence, but without providing much in the way of evidence. Some of what you say is complete nonsense (e.g. "All mathematical optimization falls outside of classical logic", or "[Marx and Engels] are essentially talking about linear logic").

   I can talk about this for hours.
I recommend talking less about linear logic, and spending more time doing the maths, so that you actually understand LL, which is a precondition to making reasonable judgements about the pros and cons of LL.


What do you want evidence for? And like what form of evidence do you accept?

> All mathematical optimization falls outside of classical logic

How so?

> I recommend talking less about linear logic, and spending more time doing the maths, so that you actually understand LL,

Cool, I don't remember asking for your suggestions.


   How so?
How about you show how even one of the well-known techniques for mathematical optimisation (see e.g. [1] for an overview) is not expressible in the currently accepted foundations of mathematics (such as ZF, PA, MLTT, HoTT), or at least point to peer-reviewed papers in credible mathematical publications that give evidence that "[a]ll mathematical optimization falls outside of classical logic". Or even weaker, point to Girard's written works where he claims that "[a]ll mathematical optimization falls outside of classical logic".

"Extraordinary claims require extraordinary evidence."

[1] https://en.wikipedia.org/wiki/Mathematical_optimization


I'm trying to have an informal discussion about things that aren't established science. Linear logic is from 1987. How many people do you think are familiar with it?

HoTT states nothing about optimization. None of the foundations of mathematics really tackle optimization.

I have a claim that you may or may not accept.

There is such a thing as linear types (they are similar to Rust types). There's this language called ATS which is basically OCaml with C-like performance. The performance gain is thanks to linear types: the compiler can statically reason about performance and statically optimize code. This isn't unrelated to mathematical optimization. Linear types let you truly distinguish between values and references.

Your standard for proof is too high and there's only one particular type of proof that you accept unquestioningly. I'm still not convinced that you've looked into this for long enough to really see how they are related. Before we continue, tell me something about linear logic, or how you understand it. Maybe helping you understand linear logic would be more productive than repeatedly invoking a couple of memes. It's quite naive to think that our current understanding of math is really "all there is".

> "Extraordinary claims require extraordinary evidence."

I'm allowed to make whatever claims I want. You are allowed to disagree.


   trying to have an informal discussion 
That's fine, but you express yourself in a way that sounds like you have a deep understanding of LL and its relationship to other subjects such as programming languages and economic games. You have gotten a lot of pushback here for this reason; I'm just expressing myself more bluntly than others. Sorry, but you don't currently have this knowledge, and that might be misleading to others. I recommend taking this criticism constructively and deepening your knowledge of logic (linear or otherwise), and there is only one way of doing this, which is to go beyond intuition and informality and "do the math".

    How many people do you think are familiar with it?
A lot on HN. Including myself, having written several papers on the subject, and discussed LL on numerous occasions with the world's leading researchers in the area, including Girard himself and several of his students.

   HoTT states nothing about optimization. 
   None of the foundations of mathematics 
   really tackle optimization.
This is a complete misunderstanding of the foundations of mathematics. ZF(C) doesn't mention the natural and real numbers either. You can construct the natural, rational, and real numbers in all foundations of mathematics (this is typically done in undergraduate maths; see e.g. [1], where I learned the construction). Notions of minimum/maximum, as well as related concepts, can be expressed too. This means that all mathematical optimisation as conventionally understood is trivially expressible in e.g. ZFC.

   There is such a thing as linear types
I know, I coauthored some of the early papers on linear types.

   Linear types let you truly distinguish 
   between values and references.
No, linear types let you distinguish between values that are guaranteed to be used exactly once and values that carry no such guarantee. Note that Rust (at least last time I looked -- the language evolves fast) has affine rather than linear types (guaranteeing use at most once).
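A small Rust sketch of the difference (hypothetical `Token` type, for illustration only):

    struct Token; // no Copy/Clone: usable at most once

    fn consume(_t: Token) {}

    fn main() {
        let a = Token;
        consume(a);
        // consume(a); // rejected: `a` was already moved (no duplication)

        let _unused = Token;
        // `_unused` is silently dropped at the end of scope. Rust accepts
        // this, which is the affine "at most once" discipline; a truly
        // linear type system would reject the unused resource.
    }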

   that our current understanding of math is really 
   "all there is".
Nobody says this; indeed, the opposite is an immediate consequence of Goedel's incompleteness results.

[1] Yiannis Moschovakis, Notes on Set Theory (https://www.amazon.com/Notes-Theory-Undergraduate-Texts-Math...)


Can you post your papers on linear logic?

> Notions of minimum/maximum, as well as related concepts, can be expressed too.

Exactly. There are two types of minimum and maximum, something that isn't expressible in any of the current foundations of mathematics. There is your minimum and maximum, and my minimum and maximum. This is exactly what linear logic is about.


   Can you post your papers 
   on linear logic?
That would de-anonymise me on HN, which I want to avoid.


> Notions of minimum/maximum, as well as related concepts, can be expressed too.

Show me a peer-reviewed paper showing this. Extraordinary claims bla bla bla. Do you feel like we have progressed anywhere?


Every standard undergraduate textbook on set theory does this. I quoted Moschovakis' fine book. It's trivial, once the core concepts are in place.


Gimme a sketch then.


This is why I come to HN.


> The axiom of choice assumes that you can always pick the best possible option; however, you need to be able to account for your worst possible options as well. All mathematical optimization falls outside of classical logic.

Not sure what you mean here. The axiom of choice just says that the Cartesian product of a set of nonempty sets is nonempty. Can you explain what you mean by "best possible option" and "worst possible options"?


Imma paraphrase Wikipedia. You are right but I find the informal definition to be more illuminating. “Informally put, the axiom of choice says that given any collection of bins, each containing at least one object, it is possible to make a selection of exactly one object from each bin, even if the collection is infinite”. Now imagine that there’s an opponent who decides the object selection of certain bins.

This is a silly example, but compare playing chess single-player vs multiplayer. In single-player chess, you still have two sides, and you are still trying to make your side win, but you decide all the moves (even for the other party). IRL, a lot of choices are outside of your control. So in some sense, you need to account for your opponent's best moves (which are generally the worst moves for you).


So where is the part about the “best choice”?

Two-player games to me are all about quantifiers. If the opponent plays, it means we have to prove a “for all”. (This includes all worst-case scenarios, obviously.) And our own plays are “exists” (just the best option at that time). No axiom of choice involved, just FO logic.
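Spelled out in the usual game-semantics reading (a standard presentation, sketched informally): for a formula

$$\forall x\, \exists y\ \varphi(x, y)$$

the opponent chooses a witness for each ∀ and we choose one for each ∃; the formula holds iff we have a winning strategy, i.e. a way of picking a good y for every x the opponent throws at us.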


Your formulation doesn't work well if games are allowed to be infinite, because standard propositional logic doesn't handle formulae of infinite length.

There is something called the Axiom of Determinacy, which states that certain infinite two-player games of perfect information are determined: one of the players has a winning strategy. The Axioms of Choice and Determinacy are incompatible; in ZF, if one of them holds, the other can be proven false.
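For reference, the usual statement (my paraphrase of the standard formulation): for every payoff set $A \subseteq \omega^{\omega}$, the game $G(A)$, in which two players alternately pick natural numbers forever and player I wins iff the resulting infinite sequence lands in $A$, is determined, i.e. one of the players has a winning strategy.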

I don't know if this is related to what GP was talking about, but it reminded me.


Neither propositional logic nor first-order logic (FOL) allows formulae of infinite length.

While there are logics with formulae of infinite length (see [1, 2] for an overview), I would not consider them logics on par with propositional logic or FOL. Why? Because as a finite human you cannot actually write down infinite formulae! Instead, you use some finite abbreviation mechanism (typically some variant of FOL, extended with set-theoretic axioms like ZFC) so you can denote (in a finite way) those infinite formulae. I'd suggest seeing infinite formulae as a semantic gadget that is sometimes useful in model-theoretic investigations.

It's not surprising that we encounter questions independent from ZFC (or whatever your preferred foundation of mathematics) when we look at infinite games. It's but an instance of our inability to define infinite sets except in an impredicative way (e.g. the usual inductive definition of the natural numbers as the least set closed under successor).

[1] https://en.wikipedia.org/wiki/Infinitary_logic

[2] https://plato.stanford.edu/entries/logic-infinitary/


You can convert any terminating formula (including recursive ones) to a finite logical representation in second-order logic with an exponential number of terms. This is done by use of Peano numbering.

Proving whether the given (inductive or recursive) formula is actually terminating is the halting problem though.

Linear logic is closely related to martingales too.


I'm not sure I can follow you here. I have said that you need to express infinitary formulae in a finitary form.

What's the connection with martingales?


Are you familiar with the minimax algorithm? Google for how it’s related to linear logic.

For all is a crutch. There exists is much more manageable. However, you need both the worst and the best “there exists”.


I see. That makes some sense now. Thanks!


I’m having difficulty following your comments about adversarial choice as related to the axiom of choice. What you’re saying doesn’t sound well-defined (in the mathematical sense).


It's actually very well defined (or which part are you confused by?). I'm stating these things informally, as a formal treatment is right now beyond me and it also doesn't really exist. Look up Chu spaces, especially their relationship to linear logic. This paper is pretty informative:

https://www.semanticscholar.org/paper/Chu-spaces-as-a-semant...

Fundamentally, think of the minimax algorithm. You have two sides, each optimizing for victory.
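For concreteness, here is the algorithm in question (a minimal sketch over a toy game tree made up for illustration; nothing here is specific to Chu spaces):

    // A toy game tree: leaves carry a score from the maximizing player's
    // point of view; inner nodes list the positions reachable in one move.
    enum GameTree {
        Leaf(i32),
        Node(Vec<GameTree>),
    }

    // Minimax: the maximizing player picks the child of greatest value,
    // the minimizing player (the adversary) picks the least.
    fn minimax(tree: &GameTree, maximizing: bool) -> i32 {
        match tree {
            GameTree::Leaf(score) => *score,
            GameTree::Node(children) => {
                let values = children.iter().map(|c| minimax(c, !maximizing));
                if maximizing {
                    values.max().unwrap()
                } else {
                    values.min().unwrap()
                }
            }
        }
    }

    fn main() {
        use GameTree::{Leaf, Node};
        let tree = Node(vec![
            Node(vec![Leaf(3), Leaf(5)]),  // opponent will answer with 3
            Node(vec![Leaf(-4), Leaf(9)]), // opponent will answer with -4
        ]);
        assert_eq!(minimax(&tree, true), 3); // we pick the branch worth 3
    }

The "best choice" is the maximizing player's move; the "adversarial choice" is the minimizing player's.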


I don't see how Pratt's paper justifies any of your claims.


Can you give me the summary of the paper?


"It's actually very well defined" and "a formal treatment... doesn't really exist" seem to me to be contradictory statements.


It's a path that unifies several very formally related things. The unification itself isn't fully formalized; you need to look at the symmetries in the well-formalized subparts.


> It's the logic of state developing over time.

You may already be aware of this, but various kinds of temporal logic allow one to capture very complex predicates for the evolution of a state machine (much like your person with a dollar).

If you have some familiarity with state machines (here, Kripke structures), it should be approachable; a small example follows the links below.

Some useful links:

1. https://www.cs.cmu.edu/~emc/15414-s14/lecture/ModelChecking....

2. https://plato.stanford.edu/entries/logic-temporal/
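As mentioned above, a small example of the flavour (a standard LTL-style property, not taken from the linked notes): the dollar scenario can be phrased as

$$\mathbf{G}\,(\text{haveDollar} \rightarrow \mathbf{F}\,\text{haveIceCream})$$

read: globally (in every state), if I have a dollar then in some future state I have an ice cream. Model checkers verify such formulas against Kripke structures.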


I'm aware of temporal logic (besides temporal logic, there's also temporal linear logic). I do find it to be more useful in certain situations; however, you do lose some generality, and I really enjoy that generality. For example, idk if you could build a better borrow checker by using TLL rather than LL. The second I said this, I realized that I don't know whether you could.


"It's the logic of state developing over time. In classical logic a statement can only be true since negative eternity and it will be true until eternity."

You can fully encode change of state against time with classical logic; it is a fundamental underpinning of solvers for digital synthesis (specifically, the extension of SAT solvers to timing analysis).

It's been used, in production, for decades now.
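For anyone curious, the standard trick (the usual bounded-model-checking encoding, sketched from memory): make one copy $s_t$ of the state variables per time step and unroll the transition relation $T$ for $k$ steps,

$$I(s_0)\ \wedge\ \bigwedge_{t=0}^{k-1} T(s_t, s_{t+1})\ \wedge\ \neg P(s_k)$$

A satisfying assignment is then a $k$-step trace from an initial state (satisfying $I$) to a violation of the property $P$, so a plain classical SAT solver ends up reasoning about state over time.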

Are you sure this works the way you think it does? I might be wrong, but I'm getting a very low signal-to-noise-ratio vibe from your comment.


Ok, so I actually haven't explored the SAT solver connection too much, but my gut tells me that linear logic can solve the same thing with better performance. I think the fundamental difference is that SAT solvers aren't differentiable. SAT solvers are fundamentally stateful, whereas linear logic is very dynamic.


Be careful in mixing popular culture and mathematics.


I've heard this before, however I disagree. In fact, I would argue that most of the people I really admire don't put mathematics on a pedestal but instead treat it as just another human endeavor, and as such it can be compared with other human endeavors.


If it's Marx's thought that's being referred to as "popular culture", that's a pretty inaccurate characterisation. It's bona fide political philosophy and/or political economy.

Even a soft humanities subject like political philosophy can, unexpectedly, be a topic for mathematical study, e.g. Arrow's theorem.

You're right not to believe in policing disciplinary boundaries.

I think the problem here is more that you haven't demonstrated just how Marx's thought resembles linear logic, so the idea might seem a little crazy, maybe a product of an idée fixe. If it's not something mathematicians would be impressed by (in terms of rigour or depth), and not something Marxists care about (in terms of philosophical implications) it's not going to lead anywhere.


It's a pain to learn that one must take care to avoid confusing math with everything else. Unexpectedly, sure. Safe languages... sentimentality. Thought processes... everything it can be compared against. Here be dragons, tedious caution required.


> It's a pain to learn that one must take care to avoid confusing math with everything else.

Why not?


It's easy to get them flipped around. Inside of mind versus the outside.

Axiom of choice, choosing the right thoughts to have versus to speak.

If you don't understand, you don't want to. It's a perpetual type of terror that often has nothing to do with reality, but you can never be sure.


I have no clue what you are talking about, I'm sorry.


Yup. That's why.


For that particular point, yes, a demonstration would suffice.


> Even a soft, humanities subject like political philosophy, can, unexpectedly, be a topic for mathematical study, e.g. Arrow's theorem.

I'm aware but idk how this is relevant.

> If it's not something mathematicians would be impressed by (in terms of rigour or depth), and not something Marxists care about (in terms of philosophical implications) it's not going to lead anywhere.

I don't care? I'm not trying to impress anyone and I don't care what Marxists care about.


Did you read the link on dialectical materialism?

> It's bona fide political philosophy and/or political economy.

It’s above all a description of the development of science.


I don't think this point has to do with the "mathematical pedestal"; rather, mathematics should be precise. What I mean is: don't philosophise in a manner in which disparate concepts are equated.


"Proofs of programs are too boring for the social process of mathematics to work."

- Richard DeMillo, Richard Lipton, and Alan Perlis, 1979

"... So don't rely on social processes for verification"

- David Dill, 1999


Interesting post; it would be fun to read an expanded article about that. I think Aristotle incorporated time and space in logic, like "X is true at time Y and place Z". Linear logic sounds interesting and useful, I need to check it out.

I think Hegel's negations are part of breaking things apart from an omnipresent One, so it basically means that things get defined as separate entities once they negate other things. "Negation of negation" does not mean that the same thing is negated twice; rather, it means something new is emerging and something old is disappearing. Maybe you can understand why negation of negation is true in this sense if you consider that things change in relation to time and space.


I would be very interested to hear more about the connections between linear logic on the one hand, and probability and game theory (and by extension, GANs) on the other. Could you expand a little on those claims, or provide references?

(Edited for grammatical clarification.)


GANs are based on a minimax algorithm (as mentioned in the original GAN paper). Linear logic has very minimaxy semantics.


I was actually more interested in a reference making an explicit construction of a probability theory using linear logic, or if that's unavailable, then some more on the LL -> games -> probability route.

Edit: I found this comment [1] by you 29 days ago. Would you mind expanding on the "more connections" you mention? Thanks.

[1] https://news.ycombinator.com/item?id=17432612


I'd actually like to see a coherent explanation, e.g. paper reference, of how the Minimax [1] algorithm relates to Girard's linear logic.

[1] https://en.wikipedia.org/wiki/Minimax


Now we are talking! Check out the paper "Stochastic Interaction and Linear Logic" by Lincoln, Mitchell, and Scedrov.


As far as I understand, that paper talks only about MALL, which is a small propositional fragment of LL; in particular, the exponentials are missing. Extending models for small fragments to full LL has always been difficult. Note that the Lincoln et al. paper is more than two decades old. Has anyone extended it to full LL?

Note also that this paper doesn't seem to be talking about Minimax.


What is the semantics of Linear Logic?


You have the additive operators (and/with, or/plus) and the multiplicative operators (times, par), as well as the exponential ones (of course, why not).

The exponentials are the genuinely non-linear part: they are what let a hypothesis be reused or discarded, where otherwise everything must be used exactly once.

The closest CS equivalent are categorial grammars (including combinatory ones); the difference is that pure linear logic is commutative while these, in general, are not.
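For reference, the standard symbols behind those names (Girard's notation; linear implication is definable from par and negation):

    &   with        additive conjunction
    ⊕   plus        additive disjunction
    ⊗   times       multiplicative conjunction
    ⅋   par         multiplicative disjunction
    !   of course   exponential (allows reuse)
    ?   why not     exponential (dual of !)
    ⊸   lollipop    linear implication, A ⊸ B = A^⊥ ⅋ B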


It has several interpretations. One is close to game theory, one is related to ownership, one is related to probability, one is related to interfaces. These are all the same idea.


Are they for full LL or just small fragments (like MALL)? And what kind of completeness properties are known to hold or not to hold for those models?


I too dislike Marx, but I find myself thinking about rhetoric and sophistry and discourses and a whole other bunch of critical theory in a good light, specifically for how it seems to produce much more descriptive theories of how we come to believe things.

Aligned with linear logic, I find it productive to think in the abstract about the dialogues I'm having, in terms of what I would have to say to refute a claim given the opposing claim's form. I have found this oddly more productive at coming up with things to check or validate than asking purely, "what is the truth of the matter?"


Why not go one step further to relevance logic and model human thinking with that? ;-)


Human thinking models itself. Logic is just one way of avoiding the insanity of how crazy the experience of life can actually be. Relevance logic though, clever.



