
JadeNB: This works, but I believe only for undirected graphs, and only if the graphs being connected have no common vertices.

So, indeed there is a sort of de Morgan law:

a -- b = !(!a + !b) where ! is the graph edge-complement, a and b do not have common vertices, and -- is a commutative version of ->.
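As a quick sanity check, here is the law in a throwaway Haskell model (the representation and names below are my own, not from the article): an undirected, loop-free graph as a pair of a vertex set and an edge set.

```haskell
import Data.Set (Set)
import qualified Data.Set as Set

-- Throwaway model: an undirected, loop-free graph as a vertex set plus an
-- edge set, each edge stored with its endpoints in ascending order.
type UG = (Set Int, Set (Int, Int))

edge :: Int -> Int -> (Int, Int)
edge x y = (min x y, max x y)

vertices :: [Int] -> UG
vertices xs = (Set.fromList xs, Set.empty)

-- overlay (+): union of vertices and edges
ovl :: UG -> UG -> UG
ovl (v1, e1) (v2, e2) = (Set.union v1 v2, Set.union e1 e2)

-- commutative connect (--): overlay, plus all edges between the two sides
conn :: UG -> UG -> UG
conn g@(v1, _) h@(v2, _) = (vs, Set.union es cross)
  where
    (vs, es) = ovl g h
    cross    = Set.fromList [ edge x y | x <- Set.toList v1, y <- Set.toList v2 ]

-- edge complement (!), taken within the graph's own vertex set
neg :: UG -> UG
neg (v, e) = (v, Set.difference allPairs e)
  where allPairs = Set.fromList [ edge x y | x <- Set.toList v
                                           , y <- Set.toList v, x < y ]
```

For example, with `a = conn (vertices [1]) (vertices [2])` (a single edge) and `b = vertices [3, 4]` (two isolated vertices), `conn a b == neg (ovl (neg a) (neg b))` holds; with overlapping vertex sets it can fail, as noted above.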

(Responding so late, because I was also "submitting too fast"!)


> JadeNB: This works, but I believe only for undirected graphs, and only if the graphs being connected have no common vertices.

Agreed; I thought that your `overlay` was the "(forced) disjoint union". I had missed that "(vertex x) `overlay` (vertex y)" was not isomorphic to "(vertex x) `overlay` (vertex x)". I don't even know what negation would mean for directed graphs.

(By the way, your code doesn't seem to give any way of producing inhabitants of `Vertex g`. Since the displayed code only ever uses `Vertex g` in order to promote it immediately to `Graph g`, does it make sense just to have them as a separate type? (Maybe I'm misreading; it's been a while since I've Haskell'd.) Also, am I wrong to read `vertex` as a kind of `return`? (I hope that I am not wrong to think that graphs should serve as just as good instances of monads as lists, if not better.))

While we're talking: thanks for this article; I enjoyed it a lot. My one suggestion is that I think that what you call a "clique" is usually called a "complete graph", with "clique" (implicitly, in G) reserved for subgraphs of G that happen to be complete. https://en.wikipedia.org/wiki/Clique_(graph_theory)


> By the way, your code doesn't seem to give any way of producing inhabitants of `Vertex g`.

In fully polymorphic code (like clique) you indeed can't produce new inhabitants, or do anything else with values of type `Vertex g`. However, if you know something about `Vertex g` then you can do certain things. For example, if we know that the vertex type has an `Ord` instance then we can compare vertices (as we do in `Relation`). And in very concrete code, e.g. operating on `Basic Int` values, you can create new inhabitants freely -- any number will do.
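To make this concrete, here is a sketch of the type class together with a `Relation`-style instance (my reconstruction, so details may differ from the code in the article):

```haskell
{-# LANGUAGE TypeFamilies #-}
import Data.Set (Set)
import qualified Data.Set as Set

-- My reconstruction of the core type class from the article.
class Graph g where
  type Vertex g
  empty   :: g
  vertex  :: Vertex g -> g
  overlay :: g -> g -> g
  connect :: g -> g -> g

-- Fully polymorphic code can only rearrange the vertices it is given:
clique :: Graph g => [Vertex g] -> g
clique = foldr (connect . vertex) empty

-- A concrete model knows more: binary relations require `Ord` on vertices.
data Relation a = Relation { domain :: Set a, relation :: Set (a, a) }
  deriving (Eq, Show)

instance Ord a => Graph (Relation a) where
  type Vertex (Relation a) = a
  empty    = Relation Set.empty Set.empty
  vertex x = Relation (Set.singleton x) Set.empty
  overlay (Relation d1 r1) (Relation d2 r2) =
    Relation (Set.union d1 d2) (Set.union r1 r2)
  connect (Relation d1 r1) (Relation d2 r2) =
    Relation (Set.union d1 d2)
             (Set.unions [ r1, r2
                         , Set.fromList [ (x, y) | x <- Set.toList d1
                                                 , y <- Set.toList d2 ] ])
```

Here `clique [1, 2, 3] :: Relation Int` yields the domain {1,2,3} with edges (1,2), (1,3) and (2,3), and concrete vertex types like `Int` let you conjure up new inhabitants freely.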

> Also, am I wrong to read `vertex` as a kind of `return`?

It is indeed very much like `return` or `pure`! We use it to inject a value of the underlying type into a sort of container type, in this case a graph type.

> I hope that I am not wrong to think that graphs should serve as just as good instances of monads as lists, if not better

Many `Graph` instances are `Monad`s. For example, `Basic` is a `Monad` (and many other things too). But on the other hand `Relation` is not a `Monad`, because it imposes the `Ord` constraint on the underlying type (and Haskell monads are not allowed to do that).
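For the curious, here is what such an instance can look like, assuming `Basic` is the obvious deep embedding with four constructors (my sketch, not the exact code from the article); `>>=` substitutes a whole graph for every vertex, and `return` is just `Vertex`:

```haskell
-- Deep embedding of graph expressions (my reconstruction).
data Basic a = Empty
             | Vertex a
             | Overlay (Basic a) (Basic a)
             | Connect (Basic a) (Basic a)
             deriving (Eq, Show)

instance Functor Basic where
  fmap _ Empty         = Empty
  fmap f (Vertex x)    = Vertex (f x)
  fmap f (Overlay g h) = Overlay (fmap f g) (fmap f h)
  fmap f (Connect g h) = Connect (fmap f g) (fmap f h)

instance Applicative Basic where
  pure = Vertex
  Empty       <*> _ = Empty
  Vertex f    <*> g = fmap f g
  Overlay f g <*> h = Overlay (f <*> h) (g <*> h)
  Connect f g <*> h = Connect (f <*> h) (g <*> h)

-- (>>=) replaces every vertex with a whole graph.
instance Monad Basic where
  Empty       >>= _ = Empty
  Vertex x    >>= f = f x
  Overlay g h >>= f = Overlay (g >>= f) (h >>= f)
  Connect g h >>= f = Connect (g >>= f) (h >>= f)
```

Note that the derived `Eq` here is structural equality of expressions, not equality modulo the graph axioms.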

> While we're talking: thanks for this article; I enjoyed it a lot.

Thank you! I didn't expect so much interest in it, so I'm a bit overwhelmed to be honest :)

> My one suggestion is that I think that what you call a "clique" is usually called a "complete graph", with "clique" (implicitly, in G) reserved for subgraphs of G that happen to be complete.

Thanks, indeed "clique" may not be the best name... I decided to use it because I would like to reserve "complete graph" to mean something like `clique [1..]`, i.e. the complete graph on the whole universe. If we look from this perspective then something like `clique [1..10]` may look like a clique within some bigger graph. Anyway, I admit this may be confusing.


> Many `Graph` instances are `Monad`s. For example, `Basic` is a `Monad` (and many other things too). But on the other hand `Relation` is not a `Monad`, because it imposes the `Ord` constraint on the underlying type (and Haskell monads are not allowed to do that).

Indeed, I did say that it was a monad and not a `Monad`; math doesn't suffer from these implementation constraints. :-)

> Thanks, indeed "clique" may not be the best name... I decided to use it because I would like to reserve "complete graph" to mean something like `clique [1..]`, i.e. the complete graph on the whole universe. If we look from this perspective then something like `clique [1..10]` may look like a clique within some bigger graph. Anyway, I admit this may be confusing.

Oh, that makes more sense. Maybe it's worth even making just a throwaway remark to that effect?


> Indeed, I did say that it was a monad and not a `Monad`; math doesn't suffer from these implementation constraints. :-)

Ah, I see :-) Indeed, from the mathematical standpoint we can both inject a vertex into the graph type with `vertex`, and flatten a graph whose nodes are graphs (the monad `join`, not to be confused with the graph `join` operation that I'm using). I haven't given this much thought before, thanks for noticing this.

> Oh, that makes more sense. Maybe it's worth even making just a throwaway remark to that effect?

I've added the following remark: "we will call such cliques covering the whole universe complete graphs". Hopefully this helps to clarify the naming convention.


rjtobin, sorry, I don't understand: what happens if the two graphs have common vertices?

For example, in the algebra I described, you can do 1 -> (1 + 2) and this represents a graph with vertices {1,2} and two edges (1,1) and (1,2). How would that work with your matrix join?


I didn't really think about that case. In full generality, the corresponding (n+m-k)x(n+m-k) matrix can be written as the matrix whose top-left (n-k)x(n-k) block is A\B, whose bottom-right (m-k)x(m-k) block is B\A, and where everything else is 1.

Note that both involve relabelling vertices. Many graph theorists do this freely (it's the same graph), but perhaps you have some application in mind where you don't want to do this.


Thank you for clarifying. I see how your construction works now.


solidangle: Indeed, there are a lot of graph algebras out there, and I looked at many of them, including the Kleene Algebras. So far I haven't found any algebra with the decomposition axiom, but if you come across one, please let me know!

Why do I need the decomposition axiom? I haven't actually provided a motivation for it (or for why I defined overlay and connect the way I did). So, here it is: just like LolWolf, I wanted to represent arbitrary graphs and only graphs using expressions, so the expression x -> y -> z should be just a graph, and therefore I should be able to construct it from smaller pieces. But what should these pieces be? We can't make it equal to x -> y + y -> z, because then our definition of -> would be partial: it's not always possible to find 'the last node' (in this case y) that we need to connect to z (e.g. the left-hand side may be a cycle). The only choice that seems to work is to connect all vertices on the left-hand side to all vertices on the right-hand side, which leads directly to the decomposition axiom.
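To make this concrete, the decomposition axiom x -> y -> z = x -> y + x -> z + y -> z can be checked in a throwaway model that interprets an expression as a pair of a vertex set and a directed edge set (the encoding and operator names below are mine, just for experimenting):

```haskell
import Data.Set (Set)
import qualified Data.Set as Set

-- Throwaway model: a directed graph as (vertex set, edge set); the operator
-- names (+++) and (-->) are mine, standing in for + and ->.
type G = (Set Int, Set (Int, Int))

infixl 6 +++
infixl 7 -->

v :: Int -> G
v x = (Set.singleton x, Set.empty)

-- overlay (+): union of vertices and edges
(+++) :: G -> G -> G
(v1, e1) +++ (v2, e2) = (Set.union v1 v2, Set.union e1 e2)

-- connect (->): overlay, plus an edge from every left vertex to every right one
(-->) :: G -> G -> G
g@(v1, _) --> h@(v2, _) = (vs, Set.union es cross)
  where
    (vs, es) = g +++ h
    cross    = Set.fromList [ (x, y) | x <- Set.toList v1, y <- Set.toList v2 ]
```

In this model `v 1 --> v 2 --> v 3` and `v 1 --> v 2 +++ v 1 --> v 3 +++ v 2 --> v 3` produce the same pair, and `v 1 --> (v 1 +++ v 2)` gives vertices {1,2} with edges (1,1) and (1,2), matching the example I gave above.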


LolWolf, thank you very much for 'defending' the algebra much better than I could possibly have defended it myself :-)

Just to add: I don't understand why we should compare the algebra and the matrix encoding suggested above. The latter is just one possible implementation (a model) of the former. It's a good, useful model. But the point of having an algebra is that we can abstract away from the details of a particular implementation and instead focus on the laws that all such implementations satisfy. Sometimes these laws can be interesting by themselves (at least they are interesting to me).


I agree: I don't really see what the obsession is with linear-algebra based graph theory. Yes, it's cool, yes it has interesting results, but it's not everything and we've got tools for semigroups and general algebras that may be useful here, along with natural generalizations that may emerge which may not be obvious in the linear-algebraic picture.

Anyways, thanks for the interesting blog post!


Almost every time I've implemented a graph algorithm professionally, it's been using matrix operations because they came with a GPU implementation and smashed the performance of a naive implementation.

But from a design and theory standpoint, I prefer to work independent of the representation as much as possible, and only switch in the matrix language as an implementation of a type in already working code. (Sometimes this isn't quite possible and you need to, e.g., reorder operations to make it efficient to calculate in your representation.)

So I would say that linear algebra is a big deal for implementations, if not as much of one theoretically. (Mostly cause my computer has really fast linear algebra chips/code, and not so much for random graph traversal.)


Sure, I agree; in this case I refer to "tools" as theoretical tools rather than computational ones.

There's no debate on the merits of computational tractability by converting problems to linear algebra ones; I was speaking from a purely graph-theoretic standpoint.


Thanks LolWolf! You mentioned a few interesting ideas about monotonicity/fixed-point theorems & computability/hardness, and I'd be very curious to know if you come up with anything on these fronts -- please drop me an email if you do!


Certainly will; thanks again for the interesting post!


I'm sad that you found my blog post snobbish. If you point out specifically which part you can't stand I'll see if I can make it better. But I'm afraid I can't remove the algebra part, as there will be nothing left!

Haskell is certainly not essential to understand or explain the algebra, but having an implementation (in any language!) helps to play with the algebra and experiment with possible models. I am a beginner in Haskell, and that's my current language of choice, as I'm keen to get better at it.

To respond to your comment about matrices: I'm pretty sure one can come up with a matrix-based model (instance) of the algebra. So, no arguments here. In fact, I'd be very interested to see such an instance, to add it to other instances I've found.


I didn't find the post itself to be snobbish. I liked the tone and pacing, I think that your algebra makes for an interesting topic, and it was laudable for you to write about it.

What I dislike is the bait-and-switch, wherein the lede implies a digression on a math topic, then the bulk of the content is going over mundane Haskell code.

I don't care if Haskell coders want to get together and code golf their homomorphisms all day, but I find it exploitative to package that up as an excursion into algebra. As someone who's interested in math, but honestly couldn't care less about Haskell, these posts are all noise to me.


In case you missed the tags right under the title: "coding, math; algebra, haskell." I wouldn't call this a bait-and-switch. (I was happy enough to learn that graph join and graph union distribute, and that they satisfy a certain algebraic relation. Though I wouldn't have minded more math.)

Sometimes I find that code works as a clear notation for math. Maybe you just have something against Haskell, but all of this could have been written in any other modern language without too much effort, and I find Haskell to be more on the mathy end. (By the way, I can't tell if you're against Haskell or homomorphisms!)

And I think this really is an excursion into algebra, more on the general algebraic structures end of things. One might say that this describes an algebraic theory of graphs, where your matrix representation is a particular construction which satisfies the theory. The very end, the "Basic" type, feels quite a lot like reading a book on general algebraic structures.

I have some questions about this algebra, like how decompositions work, and maybe I'll think about them sometime.


I'm the author of the blog post -- thank you for your comment! I was wondering whether anyone would be interested in a different version of the algebra that models hypergraphs with k-edges. For example, you can change the decomposition axiom to: a.b.c.d = a.b.c + a.b.d + a.c.d + b.c.d, which allows you to have an algebraic structure with nodes, 2-edges (pairs of nodes), and 3-edges (triples of nodes). I can write about this in a follow-up blog post if you are interested. (And I'd love to hear further thoughts from you on the connection to the boundary operator -- do get in touch by email!)

EDIT: Used dots (.) for the connect operator so as not to mess up the formatting.

