
Leave it to a Haskell coder to make an incomprehensible mess of matrices under addition.



Matrices under addition? I know very little of Haskell, but I think this blog post is quite interesting; it's not obvious to me how you would translate the 'join' operator (->) into a matrix formulation that preserves nice structure (you'd have to be weird about dimensions and such).

Sure, the 'add' operation (+) is nice, since in the matrix representation it's just an OR over F_2 (e.g. 0+0=0, 1+0=0+1=1+1=1 [1]), but even this involves some bending-over-backwards: you still have to extend the dimensions of the matrices in question for the sum to even make sense (think about, say, (1->2) + (3->4)).

The minimal axioms are quite elegant, if you ask me.

------

[1] Over F_2, this isn't quite the same addition; I define

a+b ≡ (a⊕b) ⊕ (a·b)

where ⊕ is XOR and · is AND, i.e. the usual addition and multiplication on F_2.
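
(Quick check: 1+1 = (1⊕1) ⊕ (1·1) = 0 ⊕ 1 = 1, so this '+' is entrywise OR rather than F_2 addition, where 1+1 would be 0.)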

EDIT: I didn't mean to say these things can't be done per se, but rather that in the linear-algebraic picture a simplistic approach doesn't work: you have to define a universe of possible vertices and a mapping from these into indices.


It's not really 'extending the dimensions of the matrices' -- we live in a universe with an infinite number of distinct vertices, so the matrices are necessarily of infinite dimension. It's only for the sake of notation that we project them to finite dimensions.

  A:  x x x   B:  y y y
      x x x       y y y
      x x x       y y y

  A join B:

      x x x 0 0 0   0 0 0 0 0 0   0 0 0 1 1 1   x x x 1 1 1
      x x x 0 0 0   0 0 0 0 0 0   0 0 0 1 1 1   x x x 1 1 1
      x x x 0 0 0 + 0 0 0 0 0 0 + 0 0 0 1 1 1 = x x x 1 1 1
      0 0 0 0 0 0   0 0 0 y y y   1 1 1 0 0 0   1 1 1 y y y
      0 0 0 0 0 0   0 0 0 y y y   1 1 1 0 0 0   1 1 1 y y y
      0 0 0 0 0 0   0 0 0 y y y   1 1 1 0 0 0   1 1 1 y y y
This becomes much less accessible when it's dressed up as a novel 'algebra of graphs' and obscured in Haskell syntax.


Your comments seem to be nothing more than "I understand infinite-dimensional linear algebra better than Haskell", which is fine but not insightful.

I would argue that having to drag around the notion of everything being indices + projection tuples constructed from a universal, infinite-dimensional matrix is needlessly complex compared to the presented version.

This only seems to get worse when you generalize to multiple edges between a pair of vertices, with your rank 3 infinite tensor being n x n x m, and requiring a canonical way to index edges.


That's fine, but then in the 'infinite vertices' case we require an associated numbering scheme and an ordering over vertices, which requires a bijection from whatever set you're working with into an index labeling (⊆ ℕ), etc., when this could just be abstracted away. In general, I don't think this is any cleaner than the few axioms given, whose structure we can attack with the tools of semirings.

Overall, I think both have their uses: matrices in the area of computation, and abstract algebras where manipulations are straightforward. In particular, I'm interested here since the axioms can be written as a language and operations can be done over finite sets, which may yield some nice computability results over this 'language of graphs.'


LolWolf, thank you very much for 'defending' the algebra much better than I could possibly have defended it myself :-)

Just to add: I don't understand why we should compare the algebra and the matrix encoding suggested above. The latter is just one possible implementation (a model) of the former. It's a good, useful model. But the point of having an algebra is that we can abstract away from the details of a particular implementation and instead focus on the laws that all such implementations satisfy. Sometimes these laws can be interesting by themselves (at least they are interesting to me).


I agree: I don't really see what the obsession is with linear-algebra-based graph theory. Yes, it's cool, yes, it has interesting results, but it's not everything, and we've got tools for semigroups and general algebras that may be useful here, along with natural generalizations that may not be obvious in the linear-algebraic picture.

Anyways, thanks for the interesting blog post!


Almost every time I've implemented a graph algorithm professionally, it's been using matrix operations because they came with a GPU implementation and smashed the performance of a naive implementation.

But from a design and theory standpoint, I prefer to work independent of the representation as much as possible, and only switch in the matrix language as an implementation of a type in already working code. (Sometimes this isn't quite possible and you need to, e.g., reorder operations to make it efficient to calculate in your representation.)

So I would say that linear algebra is a big deal for implementations, if not as much of one theoretically. (Mostly because my computer has really fast linear algebra chips/code, and not so much support for random graph traversal.)


Sure, I agree; in this case I refer to "tools" as theoretical tools rather than computational ones.

There's no debate on the merits of computational tractability by converting problems to linear algebra ones; I was speaking from a purely graph-theoretic standpoint.


Thanks LolWolf! You mentioned a few interesting ideas about monotonicity/fixed-point theorems & computability/hardness and I'd be very curious to know if you come up with anything on these fronts -- please drop me an email if you do!


Certainly will; thanks again for the interesting post!


Just a side note: The join of two graphs does correspond to a reasonably nice matrix operation: take the two adjacency matrices A and B (of dim m and n respectively, say), then form an (n+m)x(n+m) block matrix:

A J

J B

where J is the all one matrix (of the appropriate sizes to make the above matrix square).
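
A minimal sketch of that construction in Haskell, using plain lists of lists for adjacency matrices (a throwaway representation of mine, not anything from the post), and assuming the two graphs are on disjoint vertex sets:

  type Matrix = [[Int]]

  -- Block matrix [A J; J B], where J is the all-ones block of matching size.
  joinAdj :: Matrix -> Matrix -> Matrix
  joinAdj a b =
      [ rowA ++ replicate n 1 | rowA <- a ] ++
      [ replicate m 1 ++ rowB | rowB <- b ]
    where
      m = length a  -- dimension of A
      n = length b  -- dimension of B

For example, joinAdj [[0,1],[1,0]] [[0]] gives [[0,1,1],[1,0,1],[1,1,0]]: joining an edge with a single vertex yields the triangle.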

Further aside: this block form might seem unnatural, but it pops up all the time in adjacency matrices of graphs. For example, bipartite graphs are exactly those whose adjacency matrix can be written (after reordering vertices) in the form

0 A

B 0

(where B = transpose(A))


Yep, I definitely agree! There are plenty of interesting results in spectral graph theory which make use of operations like these (esp. when computing Laplacians, spectral partitions, etc.), but, like most tools, this has limits: while we can represent many groups and associated structures as linear-algebraic operations, there are subtle results that emerge from an axiomatic treatment which aren't as clear when working in the linear-algebraic picture.


Like what?


Prove that a topological sort exists for a given graph using only linear-algebraic properties of the adjacency matrix or the Laplacian.

Or even: give a proof that a graph is a DAG iff there exists a topological sort, using only linear-algebraic operations on the adjacency matrix.

The point is, I'm sure you can do this; in some sense, every operation on a graph can be performed on its adjacency matrix instead, but why make it so complicated when there are easier pictures to work with?

Why perform surgery with a chainsaw when a scalpel will do without such a mess?


https://cs.uwaterloo.ca/journals/JIS/VOL7/Sloane/sloane15.pd...

I get that lin alg maybe isn't your picture of elegance, but you can't say that proof is laborious or esoteric. Any intro-level LA course is enough to grok it.
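
For what it's worth, here's the standard linear-algebraic characterization I assume is in play (I'm not claiming it's exactly the linked paper's argument): a digraph on n ≥ 1 vertices is a DAG iff its adjacency matrix A is nilpotent, i.e. A^n = 0, because the (i,j) entry of A^k counts walks of length k from i to j. A sketch in Haskell with naive list-of-lists matrices:

  import Data.List (transpose)

  type Matrix = [[Int]]

  mmul :: Matrix -> Matrix -> Matrix
  mmul a b = [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

  -- A digraph on n >= 1 vertices is acyclic iff A^n is the zero matrix,
  -- since a nonzero entry of A^n would witness a walk long enough to repeat a vertex.
  isDAG :: Matrix -> Bool
  isDAG a = all (all (== 0)) (iterate (mmul a) a !! (n - 1))
    where n = length a

  -- isDAG [[0,1],[0,0]] == True   (single edge 1 -> 2)
  -- isDAG [[0,1],[1,0]] == False  (2-cycle)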


No, linear algebra is in fact what I work with in my research on spectral graph theory and graph partitions for approximation algorithms; but I do think there are different tools for different jobs.

There are plenty of interesting results where linear algebra shines and is beautiful (Kirchhoff's theorem for counting spanning trees, random walks on lattices as electrical networks, harmonic functions on graphs, topological data analysis), but there are plenty of statements where linear algebra is a hammer that's wholly unnecessary: statements that are already simple or elegant using other descriptions of graphs.

Though: interesting paper you linked to, thanks for that.


rjtobin, sorry, I don't understand: what happens if the two graphs have common vertices?

For example, in the algebra I described, you can do 1 -> (1 + 2) and this represents a graph with vertices {1,2} and two edges (1,1) and (1,2). How would that work with your matrix join?


I didn't really think about that case. In full generality, the corresponding (n+m-k)x(n+m-k) matrix can be written as the matrix whose top-left (n-k)x(n-k) block is A\B, whose bottom-right (m-k)x(m-k) block is B\A, and where everything else is 1.

Note that both involve relabelling vertices. Many graph theorists do this freely (it's the same graph), but perhaps you have some application in mind where you don't want to do this.


Thank you for clarifying. I see how your construction works now.


With an obvious notion of negation of graphs, wouldn't the `connect` operation be \g1 g2 -> not $ (not g1) `overlay` (not g2)? (I've already asked about lattice theory elsewhere (https://news.ycombinator.com/item?id=13125334 ); this thought makes me wonder if there's any connection to modal logic https://en.wikipedia.org/wiki/Modal_logic, where modalities are often defined in this sort of "de Morgan pair".)

EDIT: I originally typo'd and proposed the above as a definition of the `overlay` operation, which would be both a pointless circularity and wrong.


JadeNB: This works but I believe only in the case with undirected graphs and if the graphs being connected do not have common vertices.

So, indeed there is a sort of de Morgan law:

a -- b = !(!a + !b) where ! is the graph edge-complement, a and b do not have common vertices, and -- is a commutative version of ->.
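
For anyone who wants to poke at it, here's a concrete check of that law in a throwaway set-based model (a plain vertex set plus a symmetric edge set; the type and names are mine, not the post's), on loop-free graphs with disjoint vertex sets:

  import qualified Data.Set as Set

  data G = G { vs :: Set.Set Int, es :: Set.Set (Int, Int) } deriving (Eq, Show)

  overlayG :: G -> G -> G
  overlayG (G v1 e1) (G v2 e2) = G (Set.union v1 v2) (Set.union e1 e2)

  -- commutative "--": overlay plus all edges between the two vertex sets (both directions)
  connectSymG :: G -> G -> G
  connectSymG a b = overlayG (overlayG a b) (G Set.empty cross)
    where cross = Set.fromList
            [ p | x <- Set.toList (vs a), y <- Set.toList (vs b), p <- [(x, y), (y, x)] ]

  -- "!": edge-complement (no self-loops) relative to the graph's own vertex set
  complementG :: G -> G
  complementG (G v e) = G v (Set.fromList
    [ (x, y) | x <- Set.toList v, y <- Set.toList v, x /= y, (x, y) `Set.notMember` e ])

  -- a = 1 -- 2, b = two isolated vertices 3 and 4 (disjoint from a)
  checkDeMorgan :: Bool
  checkDeMorgan = connectSymG a b == complementG (overlayG (complementG a) (complementG b))
    where a = G (Set.fromList [1, 2]) (Set.fromList [(1, 2), (2, 1)])
          b = G (Set.fromList [3, 4]) Set.empty
  -- checkDeMorgan == True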

(Responding so late, because I was also "submitting too fast"!)


> JadeNB: This works but I believe only in the case with undirected graphs and if the graphs being connected do not have common vertices.

Agreed; I thought that your `overlay` was the "(forced) disjoint union". I had missed that "(vertex x) `overlay` (vertex y)" was not isomorphic to "(vertex x) `overlay` (vertex x)". I don't even know what negation would mean for directed graphs.

(By the way, your code doesn't seem to give any way of producing inhabitants of `Vertex g`. Since the displayed code only ever uses `Vertex g` in order to promote it immediately to `Graph g`, does it make sense just to have them as a separate type? (Maybe I'm misreading; it's been a while since I've Haskell'd.) Also, am I wrong to read `vertex` as a kind of `return`? (I hope that I am not wrong to think that graphs should serve as just as good instances of monads as lists, if not better.))

While we're talking: thanks for this article; I enjoyed it a lot. My one suggestion is that I think that what you call a "clique" is usually called a "complete graph", with "clique" (implicitly, in G) reserved for subgraphs of G that happen to be complete. https://en.wikipedia.org/wiki/Clique_(graph_theory)


> By the way, your code doesn't seem to give any way of producing inhabitants of `Vertex g`.

In fully polymorphic code (like clique) you indeed can't produce new inhabitants, or do anything else with the values of type `Vertex g`. However, if you know something about `Vertex g` then you can do certain things. For example, if we know that the type of vertices has an `Ord` instance then we can compare them (as we do in `Relation`). And in very concrete code, e.g. operating on `Basic Int` values, you can create new inhabitants freely -- any number will do.

> Also, am I wrong to read `vertex` as a kind of `return`?

It is indeed very much like `return` or `pure`! We use it to inject a type into a sort of container type, in this case, into a graph type.

> I hope that I am not wrong to think that graphs should serve as just as good instances of monads as lists, if not better

Many `Graph` instances are `Monad`s. For example, `Basic` is a `Monad` (and many other things too). But on the other hand `Relation` is not a `Monad`, because it imposes the `Ord` constraint on the underlying type (and Haskell monads are not allowed that).
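
To make that concrete, here's a minimal sketch of a deep-embedding graph type with a `Monad` instance; the constructors loosely mirror the post's `Basic`, but this is my guess at a definition rather than the post's exact code:

  data Basic a = Empty
               | Vertex a
               | Overlay (Basic a) (Basic a)
               | Connect (Basic a) (Basic a)
               deriving Show

  instance Functor Basic where
    fmap _ Empty         = Empty
    fmap f (Vertex x)    = Vertex (f x)
    fmap f (Overlay p q) = Overlay (fmap f p) (fmap f q)
    fmap f (Connect p q) = Connect (fmap f p) (fmap f q)

  instance Applicative Basic where
    pure = Vertex
    gf <*> gx = gf >>= \f -> fmap f gx

  instance Monad Basic where
    -- bind substitutes a whole graph for each vertex and flattens the result
    Empty       >>= _ = Empty
    Vertex x    >>= f = f x
    Overlay p q >>= f = Overlay (p >>= f) (q >>= f)
    Connect p q >>= f = Connect (p >>= f) (q >>= f)

Here `pure` plays the role of `vertex`, and the monad's join flattens a graph whose vertices are themselves graphs.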

> While we're talking: thanks for this article; I enjoyed it a lot.

Thank you! I didn't expect so much interest in it, so I'm a bit overwhelmed to be honest :)

> My one suggestion is that I think that what you call a "clique" is usually called a "complete graph", with "clique" (implicitly, in G) reserved for subgraphs of G that happen to be complete.

Thanks, indeed "clique" may be not the best name... I decided to use it, because I would like to reserve "complete graph" to mean something like `clique [1..]` i.e. the complete graph on the whole universe. If we look from this perspective then something like `clique [1..10]` may look like a clique within some bigger graph. Anyway, I admit this may be confusing.


> Many `Graph` instances are `Monad`s. For example, `Basic` is a `Monad` (and many other things too). But on the other hand `Relation` is not a `Monad`, because it imposes the `Ord` constraint on the underlying type (and Haskell monads are not allowed that).

Indeed, I did say that it was a monad and not a `Monad`; math doesn't suffer from these implementation constraints. :-)

> Thanks, indeed "clique" may be not the best name... I decided to use it, because I would like to reserve "complete graph" to mean something like `clique [1..]` i.e. the complete graph on the whole universe. If we look from this perspective then something like `clique [1..10]` may look like a clique within some bigger graph. Anyway, I admit this may be confusing.

Oh, that makes more sense. Maybe it's worth even making just a throwaway remark to that effect?


> Indeed, I did say that it was a monad and not a `Monad`; math doesn't suffer from these implementation constraints. :-)

Ah, I see :-) Indeed, from the mathematical standpoint we can both inject a vertex into the graph type with `vertex`, and flatten a graph whose nodes are graphs (the monad `join`, not to be confused with the graph `join` operation that I'm using). I hadn't given this much thought before, thanks for noticing this.

> Oh, that makes more sense. Maybe it's worth even making just a throwaway remark to that effect?

I've added the following remark: "we will call such cliques covering the whole universe complete graphs". Hopefully this helps to clarify the naming convention.


I'm afraid I don't quite understand what the negation of a graph is in this case and what you mean by 'overlay'; could you perhaps elaborate a bit?


overlay or + is the operation from the article; not is the graph with the complementary edge set (i.e. `not (V, E) = (V, E \ (V × V))`)


Yup, that's what I meant (except that I think your set complement is backwards). Sorry for not replying earlier; I was "submitting too fast."


> I think your set complement is backwards

It is. Unfortunately I can't fix it now.


> Leave it to a Haskell coder to make an incomprehensible mess of matrices under addition.

Since we are supposed to explain downvotes: this seems to me like useless anti-snobbery. Even a minor variation like "Although I believe that I understand matrix addition well, I had difficulty understanding the translation into Haskell" seems more constructive. (On the other hand, CTRL+Fing the article, which I haven't yet read, doesn't show any mention of addition. Are you pointing out that this is matrix addition in a cumbersome disguise? That, too, would have been more useful, without even having to make any linguistic slurs.)


> Since we are supposed to explain downvotes

We aren't. If anything, we're encouraged not to talk about vote meta stuff. You can just reply if you want to reply and vote however you want (or not at all), independently of that.


Yes, this is matrix addition in a cumbersome disguise.

Haskell is a fine language; it's the authors I can't stand. I'm anti-snobbery because snobbery keeps people from understanding stuff. Programming languages are about building a standard and a foundation for common understanding -- Haskell articles are all CV-fodder about byzantine type algebras.


I'm sad that you found my blog post snobbish. If you point out specifically which part you can't stand I'll see if I can make it better. But I'm afraid I can't remove the algebra part, as there will be nothing left!

Haskell is certainly not essential to understand or explain the algebra, but having an implementation (in any language!) helps to play with the algebra and experiment with possible models. I am a beginner in Haskell, and that's my current language of choice, as I'm keen to get better at it.

To respond to your comment about matrices: I'm pretty sure one can come up with a matrix-based model (instance) of the algebra. So, no arguments here. In fact, I'd be very interested to see such an instance, to add it to other instances I've found.


I didn't find the post itself to be snobbish. I liked the tone and pacing, I think that your algebra makes for an interesting topic, and it was laudable for you to write about it.

What I dislike is the bait-and-switch, wherein the lede implies a digression on a math topic, then the bulk of the content is going over mundane Haskell code.

I don't care if Haskell coders want to get together and code golf their homomorphisms all day, but I find it exploitative to package that up as an excursion into algebra. As someone who's interested in math, but honestly couldn't care less about Haskell, these posts are all noise to me.


In case you missed the tags right under the title: "coding, math; algebra, haskell." I wouldn't call this a bait-and-switch. (I was happy enough to learn that graph join and graph union distribute, and that they satisfy a certain algebraic relation. Though I wouldn't have minded more math.)

Sometimes I find that code works as a clear notation for math. Maybe you just have something against Haskell, but all of this could have been written in any other modern language without too much effort, and I find Haskell to be more on the mathy end. (By the way, I can't tell if you're against Haskell or homomorphisms!)

And I think this really is an excursion into algebra, more on the general algebraic structures end of things. One might say that this describes an algebraic theory of graphs, where your matrix representation is a particular construction which satisfies the theory. The very end, the "Basic" type, feels quite a lot like reading a book on general algebraic structures.

I have some questions about this algebra, like how decompositions work, and maybe I'll think about them sometime.


I do agree with the anti-snobbery part, but, again, as someone who totally skipped over the Haskell parts (which were mostly gibberish to me), I still think the article was quite enlightening.

Of course, it's also a matter of the audience you're picking: I wouldn't talk about optimization algorithms, convexity, and computational hardness to an audience of 10-year-olds first learning to program (even though it underlies everything they will do in any first programming class that can be done efficiently), but I also wouldn't discuss introductory programming topics and the definition of a Turing machine with an audience of theoretical computer scientists, even if that's what we end up talking about in a very abstracted sense.


Ironically, I think you failed to comprehend the article. Adjacency matrices (presumably what you are referring to, though they were not mentioned in the article) are neither efficient nor algebraically convenient. No one represents graphs as matrices in computer science, and graph theorists rarely touch representation theory. It's not that useful when the natural representations are normally much simpler and easier to work with.



Not sure it's fair to say that graph theorists rarely use matrices to represent graphs: for many problems in graph theory some of the simplest solutions involve representing the graph as a matrix (e.g. the Hoffman-Singleton theorem).


> No one represents graphs as matrices in computer science, and graph theorists rarely touch representation theory.

Are you conflating linear algebra and representation theory, or did you really mean to refer to representation theory in particular? The notion of a quiver (https://en.wikipedia.org/wiki/Quiver_(mathematics) ) seems to me to be a unification of the two fields; I'm not a graph theorist, but I have a hard time imagining that they don't at least consider these structures.


Oh, yes, we should definitely draw lessons from academic mathematicians about what is accessible and convenient to understand. THAT's a field that's never had a problem connecting to laymen.


Hard to imagine doing anything interesting if you've got to appeal to laymen with every word.



