Perhaps I was wrong to expect an explicit discussion of spectral graph theory? Anyway, these are always interesting topics to me.
On a related note: Gilbert Strang (MIT) has a relatively new class on linear algebra (of course!) and learning from data: "Matrix Methods in Data Analysis, Signal Processing, and Machine Learning"
> https://ocw.mit.edu/courses/mathematics/18-065-matrix-method...
You are most definitely not wrong. I would have expected some mention of the spectrum of a graph beyond simply defining what the eigenvalues are.
Incidentally, one interesting thing about the graph spectrum is that it's actually not a very interesting invariant. Among other things, almost all trees are cospectral, as proven in the following paper by Allen Schwenk in 1973: https://www.researchgate.net/publication/245264768_Almost_al...
> Among other things, almost all trees are cospectral
This does not really mean that the spectrum is not interesting, does it? Only that the spectrum generally can't tell trees apart. Sure, there's not much variability inside the class of trees, but almost no graphs are trees. After all, the adjacency matrix of a tree is extremely sparse: only n-1 edges.
You're right, of course. I should have said that the spectrum isn't all that useful for telling two graphs of the same order apart. Certain eigenvalues are frequently useful by themselves (e.g. the largest, second largest, and smallest eigenvalues often contain some information).
But, speaking of information, consider this: the adjacency matrix of a simple, loopless graph of order n is a symmetric n x n matrix with all zeroes on the diagonal. That means the adjacency matrix contains (literally) n(n-1)/2 bits of information, while the number of connected graphs of order n grows exponentially as n -> infinity [0]. That implies that for the purpose of telling two graphs of the same order apart, the spectrum is of fairly limited utility, which is what I was thinking of when I wrote that it was not an interesting invariant.
I don't think that follows. Since the adjacency matrix contains O(n^2) _bits_ of information, the number of adjacency matrices of graphs is O(2^(n^2)). This makes sense: graphs with vertex labels {1,...,n} are in bijection with symmetric n x n binary matrices with zero diagonal. It's difficult to quantify how much information is lost when we reduce this matrix to its spectrum, since the eigenvalues are real numbers, but there are O(n) degrees of freedom, and in principle plenty of room to distinguish any two graphs.
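If you want to poke at this numerically, here's a tiny sketch (n = 4 only, purely illustrative and not from the pdf) that enumerates every labeled graph and counts how many distinct spectra show up:

    # Enumerate all 2^(n(n-1)/2) labeled simple graphs on n vertices and count
    # how many distinct adjacency spectra (rounded) they produce.
    from itertools import combinations, product
    import numpy as np

    n = 4
    possible_edges = list(combinations(range(n), 2))   # n(n-1)/2 vertex pairs
    spectra = set()
    total = 0
    for bits in product([0, 1], repeat=len(possible_edges)):
        A = np.zeros((n, n))
        for (i, j), b in zip(possible_edges, bits):
            A[i, j] = A[j, i] = b
        spectra.add(tuple(np.round(np.linalg.eigvalsh(A), 6)))
        total += 1

    print(total, "labeled graphs,", len(spectra), "distinct spectra")
    # The spectrum collapses all relabelings (and any cospectral mates) together.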
Of course, as a sibling comment notes, unlabeled graphs are often more interesting, for which one has to look at equivalence classes of adjacency matrices. The spectrum is nice here because it is automatically invariant to vertex permutations and hence a graph isomorphism invariant. The existence of nonisomorphic cospectral graphs is not immediately obvious, although there are simple examples, and that there are infinite families of cospectral pairs is even less obvious. So at the very least, it's interesting that the spectrum is not a complete invariant, and that it does work well for certain classes of graphs.
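For the curious, the standard smallest example is the star K_{1,4} versus the 4-cycle plus an isolated vertex, which is easy to check numerically (a quick sketch, nothing to do with the linked pdf):

    # Smallest pair of nonisomorphic cospectral graphs: K_{1,4} and C_4 + K_1.
    import numpy as np

    star = np.zeros((5, 5))                  # K_{1,4}: vertex 0 joined to 1..4
    star[0, 1:] = star[1:, 0] = 1

    c4_plus_k1 = np.zeros((5, 5))            # 4-cycle on 0..3, vertex 4 isolated
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
        c4_plus_k1[i, j] = c4_plus_k1[j, i] = 1

    print(np.round(np.linalg.eigvalsh(star), 6))        # [-2.  0.  0.  0.  2.]
    print(np.round(np.linalg.eigvalsh(c4_plus_k1), 6))  # [-2.  0.  0.  0.  2.]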
I don't understand why you are linking to something about labelled graphs when people are (presumably) trying to distinguish between graphs up to isomorphism, i.e. after modding out the labeling.
The term "labeled graph" just means a graph with each node labeled differently (but arbitrarily). It allows for reasoning about and enumerating the vertex set. It's a typical assumption to make in the context of graph isomorphism.
It doesn't relate to machine learning (which is what I assume you mean).
Isomorphism of a pair of graphs usually refers to isomorphism of their unlabelled equivalents.
Yes, the concrete expression of the isomorphism would be as a mapping between the labels.
Given that the paper linked to is by Brendan McKay et al, it seems reasonable to mention that nauty works by (efficiently) finding all permutations of the labellings that result in an automorphism of the graph.
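The brute-force version of "permutations of the labellings that result in an automorphism" is easy to write down, and makes it clear what nauty is computing. Nauty itself prunes this search with canonical refinement and is vastly faster; this is just for illustration:

    # Brute force: keep every relabeling that maps the edge set onto itself.
    from itertools import permutations
    import numpy as np

    def automorphisms(A):
        n = len(A)
        found = []
        for perm in permutations(range(n)):
            P = np.eye(n, dtype=int)[list(perm)]
            if np.array_equal(P @ A @ P.T, A):   # adjacency preserved
                found.append(perm)
        return found

    # 4-cycle: its automorphism group is dihedral of order 8
    C4 = np.array([[0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]])
    print(len(automorphisms(C4)))   # 8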
> Perhaps I was wrong to expect an explicit discussion of spectral graph theory?
I mean, this is a link to a pdf titled "chapter 3" buried deep in the directories of someone's academic homepage. There's no context about who the author is (is it a student project from a course? part of a dissertation? notes for a draft of a book?), except that I think I can extract their name from the URL. It's thus pretty hard to tell who the desired audience is or what the aim of it is.
So whether one should expect one thing or another from this random pdf... I mean, without knowing more, what is there to reasonably expect?
For instance (and I can imagine this because I've taught an undergraduate course on graph theory, among other undergrad math courses), if these are notes from a course, then the instructor may have good, context-specific reasons for focusing on some topics while not going into detail on others. But, as I said above, it's really impossible to know, because this has just been plunked on HN with no context, as if that is somehow valuable.
I think the desired audience is the student, professor, and students' peers in University of Utah's MATH 2270 Linear Algebra class. If you walk up the URL, the context presents itself more.
And true, these projects have limitations/constraints, and I am happy that the author took LA and learned more about LA and Graph Theory. I didn't know what to expect, but assumed Graphs + LA ==> Let's talk about spectral graph theory -- well, because I hoped for it.
The Laplacian of a graph also has cool applications in computer graphics, if anyone is curious (about further connections). Think of the mesh of an object as a graph and compute the Laplacian. (You get natural unsupervised segmentations of the mesh, for instance, if you treat it as a spectral clustering problem.) Look up resources by Keenan Crane. If you've played with 'smooth normals' in Blender, I believe it uses techniques like that as well.
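A toy version of the idea, using a plain graph Laplacian on a made-up graph rather than the cotangent Laplacian a real mesh pipeline would use; the mechanics are the same:

    # Two tight clusters joined by one bridge edge; the sign of the Fiedler
    # vector (eigenvector of the second-smallest Laplacian eigenvalue) splits
    # the graph at the bridge.
    import numpy as np

    edges = [(0, 1), (0, 2), (1, 2),    # cluster one
             (3, 4), (3, 5), (4, 5),    # cluster two
             (2, 3)]                    # the bridge
    n = 6
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1

    L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    print(fiedler > 0)                  # {0,1,2} vs {3,4,5}, up to a sign flip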
I opened this pdf expecting to see a section on total unimodularity (TU).
For example, if an undirected graph is bipartite, its node-edge incidence matrix is TU. Any linear program with a TU constraint matrix and integer right-hand side is guaranteed to have an integer optimal solution, so the corresponding integer program can be relaxed to a linear program without losing anything. Thus, problems like bipartite matching and maximum flow can be solved directly with the simplex method.
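A small sketch of what that buys you in practice (made-up 3+3 bipartite graph, scipy's LP solver, no integrality constraints anywhere):

    # Maximum bipartite matching as a plain LP.  The node-edge incidence matrix
    # is TU, so the LP optimum is already integral -- no rounding, no branching.
    import numpy as np
    from scipy.optimize import linprog

    edges = [(0, 3), (0, 4), (1, 4), (2, 4), (2, 5)]   # left {0,1,2}, right {3,4,5}

    A_ub = np.zeros((6, len(edges)))                   # each vertex matched <= once
    for e, (u, v) in enumerate(edges):
        A_ub[u, e] = 1
        A_ub[v, e] = 1

    res = linprog(c=-np.ones(len(edges)),              # maximize matched edges
                  A_ub=A_ub, b_ub=np.ones(6), bounds=(0, 1))
    print(res.x)      # entries come out 0/1
    print(-res.fun)   # 3.0, the size of a maximum matching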
awesome - spent a couple of years deep in this space for my dissertation, using graph theory to speed up linear algebra computations; lots of room to extend our discrete result into approximate / continuous computation and boost AI, GPUs, etc.
a similar result came out around the same time using category theory as its basis; our work was intentionally simpler / more applied, partly to make it more accessible from an algorithmic perspective and also to put it into practice, etc.
That's really cool. These days similar strategies based on graph coarsening and vertex ordering are really popular for improving the sparsity pattern of preconditioners, e.g. incomplete LU decompositions for iterative linear solvers.
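A tiny example of the ordering side of that, assuming scipy: scramble the labels of a path graph (a tridiagonal matrix) and let reverse Cuthill-McKee squeeze the bandwidth back down, which is roughly what you want before an incomplete factorization:

    # Reverse Cuthill-McKee: reorder a sparse symmetric matrix to shrink its
    # bandwidth, so an incomplete LU has less fill to drop.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    n = 100
    chain = sp.diags([1, 2, 1], [-1, 0, 1], shape=(n, n), format='csr')  # path graph
    shuffle = np.random.default_rng(0).permutation(n)
    A = chain[shuffle][:, shuffle]           # same graph, scrambled vertex labels

    perm = reverse_cuthill_mckee(A, symmetric_mode=True)
    B = A[perm][:, perm]

    def bandwidth(M):
        i, j = M.nonzero()
        return int(np.abs(i - j).max())

    print(bandwidth(A), "->", bandwidth(B))  # something large -> back to ~1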
yeah, there's so much room to extend this work in terms of graph coarsening / sparsifying, especially for progressive / approximate approaches
there's lots of overlap with entropy binning as well, and likely some sort of combo for a win -- figure out "max frequency" with some sort of FFT / wavelet-like meta technique, or somehow bin edges based on entropy contribution, etc.
or, lots of pathways to make it continuous / numerical in a JPEG-like way
basically JPEGs for graphs -- with big impacts in AI / ML computes
Interesting. I write a lot of code that deals with graphs in engineering data (pipelines, circuits, etc.), and typically I use somewhat the reverse logic to this: 'hyper' vertices, i.e. a vertex can be a grouping of vertices.
Perhaps, we can broaden the scope and include numerical/randomized linear algebra.
I come from a machine learning background, and the most recent directions I am aware of are applications of Krylov subspace methods to very large numbers of data points, and low-precision training (both in the context of Gaussian processes).
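A stripped-down sketch of that flavor of method (made-up RBF kernel data; real GP libraries layer preconditioning, batching, and mixed precision on top of this):

    # Krylov idea in a GP setting: solve (K + sigma^2 I) x = y with conjugate
    # gradients, touching the kernel matrix only through matrix-vector products.
    import numpy as np
    from scipy.sparse.linalg import cg, LinearOperator

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                        # training inputs
    y = rng.normal(size=500)

    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    K = np.exp(-0.5 * d2)                                # RBF kernel, lengthscale 1
    sigma2 = 0.1

    op = LinearOperator((500, 500), matvec=lambda v: K @ v + sigma2 * v,
                        dtype=np.float64)
    x, info = cg(op, y)                                  # info == 0 means converged
    print(info, np.linalg.norm(K @ x + sigma2 * x - y))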
I teach linear algebra in a pure math department. We like to think that freeing people to daydream about linear algebra is our goal, that there's more to it than knowing which function calls to make in MATLAB.
Still, you could throw out our course and spend a semester teaching people how to really think about singular value decomposition. It's everywhere, critical for example in machine learning, yet pure math types think it's "applied".
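For instance, the one fact I'd want every student to really own (quick numpy sketch):

    # Eckart-Young: truncating the SVD gives the best rank-k approximation,
    # and the error is exactly the discarded singular values.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 80))

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 10
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # best rank-10 approximation

    print(np.linalg.norm(A - A_k))                     # Frobenius error
    print(np.sqrt((s[k:] ** 2).sum()))                 # the same number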
Riemannian geometry begins with an inner product at each point (though what's amazing about O'Neill's text is that he slips by you a categorical definition first at every step). As another route up the mountain, one could now understand parallel transport as a local version of the singular value decomposition, and work out the whole theory that way. This requires a fluid, intuitive grasp of singular values, rather than the textbook definition. We should teach that understanding.
> "What are recent developments in linear algebra?"
Have you heard of geometric algebra? It takes a while to wrap your head around, but it unifies a lot of mathematics in a very elegant way. For instance, you can view Euclidean geometry as a special case, and concepts like quaternions and rotations can be explained very naturally. There are a few good resources out there if you're interested.
Linear Programming methods (barrier/simplex) have been able to run faster and solve larger instances in great part due to algorithmic advances (the other contribution being faster hardware, more memory etc). Considering that LP is a fundamental tool for mixed/integer programming, these improvements are incredibly valuable.
Gurobi had a white paper mentioning how much their solvers' performance was improved by which techniques (I'll link it here if I can find it).
Depends on how you define "linear algebra." Representation theory, Lie groups, and Grassmannians can all be considered "linear algebra" and are active areas of math research, but they're probably not what you are looking for.
I don't care much about obscure (if beautiful) theoretical problems. I am looking for new developments that have shown promise of making a big practical impact.
For what it's worth, I am no linear algebra definition police so feel free to list what you think is reasonably relevant.
If you're checking that 17 isn't a typo, do a screen grab and doodle on the graph. For each vertex, write down how many ways you can get there in two steps from v1. There are the same number of ways to get back, so you sum the squares of these numbers. That's the dot product of (3,1,1,1,1,2) with itself: 9+1+1+1+1+4 = 17.
The (1,1) entry of the matrix product of A^2 with itself is exactly this calculation. Matrix multiplication is a dance one sees everywhere. It's crippling to teach young minds that matrix multiplication is only the composition of linear functions, when the dance shows up in so many other ways.
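Not the pdf's graph, but here's a small one cooked up so the first row of A^2 comes out (3,1,1,1,1,2), which makes the 17 easy to check by machine:

    # (A^2)[0] counts walks of length 2 from v1; the (1,1) entry of A^2 * A^2
    # is that row dotted with itself.
    import numpy as np

    edges = [(0, 1), (0, 4), (0, 5), (1, 2), (1, 5), (3, 4), (4, 5)]
    A = np.zeros((6, 6), dtype=int)
    for i, j in edges:
        A[i, j] = A[j, i] = 1

    A2 = A @ A
    print(A2[0])             # [3 1 1 1 1 2]
    print((A2 @ A2)[0, 0])   # 9 + 1 + 1 + 1 + 1 + 4 = 17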
Path counting is for example at the heart of automata theory in computer science. For a finite state machine, the question becomes "is there a path?" rather than "how many paths?" Think of 0 as false, and any positive integer 1, 2, 3, ... as true, and one sees this same dance in automata theory.
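Same dance, different semiring: replace (+, x) with (or, and) and "how many walks" turns into "is there a path":

    # Boolean matrix "multiplication": reachability instead of walk counting.
    import numpy as np

    A = np.array([[0, 1, 0],     # transitions of a tiny machine: 0 -> 1 -> 2
                  [0, 0, 1],
                  [0, 0, 0]], dtype=bool)

    def bool_matmul(X, Y):
        # (X @ Y)[i, j] is True iff some k has an edge i->k and an edge k->j
        return (X.astype(int) @ Y.astype(int)) > 0

    print(bool_matmul(A, A)[0, 2])   # True: a path of length 2 from state 0 to 2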
"Caught another mistake:
Section 3.2: 'First, an observation: the column sums of Q(G) are all zero' is wrong. Contradicts their definition of Q(G)."

True, but it's pretty obvious he was referring to a directed graph G, where Q(G) is defined by Q_ij = 1 if vertex v_i has an outgoing edge e_j, -1 if v_i has an incoming edge e_j, and 0 otherwise.
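Easy to sanity-check with a made-up little digraph:

    # Directed incidence matrix: +1 where an edge leaves a vertex, -1 where it
    # arrives.  Each column has exactly one +1 and one -1, so column sums are 0.
    import numpy as np

    arcs = [(0, 1), (1, 2), (2, 0), (2, 3)]
    Q = np.zeros((4, len(arcs)), dtype=int)
    for e, (tail, head) in enumerate(arcs):
        Q[tail, e] = 1      # outgoing
        Q[head, e] = -1     # incoming
    print(Q.sum(axis=0))    # [0 0 0 0]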
The accompanying book for Strang's 18.065 course has Part IV: Special Matrices, discussing the Laplacian and distance matrices noted in the posted paper. [TOC: https://math.mit.edu/~gs/learningfromdata/dsla_toc.pdf]