I cannot recommend Alan Macdonald's two books enough (Linear and Geometric Algebra; and Vector and Geometric Calculus). They replace reams of texts on complex numbers, quaternions, octonions, spinors, etc. with a single unifying concept. (They don't replace those concepts themselves; and, for sure, if you want to learn quantum mechanics you're going to need them; but they make getting your head around them so much easier.)
One issue is that most of the "online" references are crap. Macdonald's presentation is a wonderful slow ramp: it starts with really basic linear algebra and then just slowly segues into geometric algebra.
I firmly believe that in 100 years we just won't be teaching 'bare' linear algebra, complex numbers, quaternions, ... we'll just teach GA and then introduce the physical concepts separately.
I spent around two months being completely enamoured by geometric algebra until I realised it flies in the face of some pretty strong trends of mathematics in the last couple of decades.
How come? Well, we’ve been moving towards more and more powerful generalisations and unifying concepts over the years. First, it was the various flavours of algebraic this-and-that, then category theory, then today, people are trying to do hot things with HoTT.
(of course math is much, much bigger than this, but this is the story I particularly have been watching unfold before me)
The trouble with geometric algebra is that it’s merely a special case of a much more general concept, namely tensor algebras. Tensor algebras are the most general algebra you can possibly get from things that behave like vectors; with GA you are stuck with vectors whose lengths are measured by a quadratic form (something squared), as opposed to some other power or some other crazy function like the taxicab metric.
Using geometric algebra feels like typeless programming because you’re combining everything into a single quantity called the multivector whose components aren’t straightforward to keep track of. Furthermore, you have to decide the dimension in advance because IIRC these types change when you add more dimensions (the formulas however do not, surprisingly).
That said, it’s really marvellous how compact things can get if you stay in 3+1 dimensions. Maxwell’s equations become literally a one-liner, and the horrible, horrible mess that is rotational dynamics becomes so much more tractable for mortals like me. One can literally do the entire physics undergrad curriculum using GA without breaking a sweat. Trouble is, you’d have to learn some basic algebraic theory and that’s a hard sell for people who are already afraid of math because “it’s too abstract”.
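As a concrete instance of the one-liner claim (this is a standard result of spacetime algebra, stated here in natural units): with the vector derivative $\nabla = \gamma^\mu \partial_\mu$, the field bivector $F = \mathbf{E} + I\mathbf{B}$, and the current vector $J$, all four of Maxwell's equations become

```latex
\nabla F = J
```

Because $\nabla$ acts through the full geometric product, this really is one equation rather than four stapled together with separate dot- and cross-product operators.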
Tensor algebras are the most general algebras you can get (categorically inclined people would call them a “free object” in a suitable category), but because of this they are almost never useful on their own. We pretty much always study specialisations of tensor algebras, of which GA (or more generally Clifford algebras) are an example.
There are no interesting formulas in a tensor algebra, because the whole idea of it being “most general” (every other algebra is a quotient) is that it has no relations. It is only when passing to more specific algebras (symmetric, exterior, GA, ...) that the subject really becomes interesting.
> There are no interesting formulas in a tensor algebra, because the whole idea of it being “most general” (every other algebra is a quotient) is that it has no relations. It is only when passing to more specific algebras (symmetric, exterior, GA, ...) that the subject really becomes interesting.
And yet you can still go a level further, with objects like those in Moon & Spencer's holor theory, which include generalizations of transformation rules beyond covariant and contravariant, generalized connections (similar, for example, to universal Christoffel symbols), effectively creating rules for tensors, pseudotensors and stranger array-like objects.
> And yet you can still go a level further, with objects like those in Moon & Spencer's holor theory, which include generalizations of transformation rules beyond covariant and contravariant, generalized connections (similar, for example, to universal Christoffel symbols), effectively creating rules for tensors, pseudotensors and stranger array-like objects.
> But this is incorrect, in general, for the definition of a tensor includes a specific dependence on coordinate transformation.
This is very much a physicist's definition of a tensor; a mathematician's definition includes no such thing (although one may prove results about co-ordinate transformations as a theorem). For example, there is no such thing, mathematically, as an intrinsically covariant or contravariant tensor, although, having chosen a fixed vector space $V$, one may certainly put a bi-grading on $T(V \oplus V^*)$ that makes that sort of distinction.
> Maxwell’s equations become literally a one-liner
There still must be a limit to this. For example, I do not find "dF = 0" especially enlightening. It's almost like saying that "42" is the answer to the question about life, universe, and everything...
I guess it depends on how you define “dF”. The less arbitrary the definition is, the more likely that you’ve found something profound by stating “dF = 0”.
Well, but if you need to define the gradient, curl and divergence and every term in the equations, the typical representation would be more than 20 lines. So I think that it's a fair comparison.
For those curious, ∇·F is called the 'interior derivative' and relates to divergence, whereas ∇∧F is called the 'exterior derivative' and relates to curl.
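In symbols, the geometric derivative splits by grade into exactly those two pieces:

```latex
\nabla F = \nabla \cdot F + \nabla \wedge F
```

For a vector field in 3D, the interior part is the usual divergence (a scalar) and the exterior part is the curl packaged as a bivector rather than an axial vector.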
GA and tensors are closely related. I would say that tensors are more like typeless programming, the fact that you pick an algebra in advance is similar to encoding your invariants in terms of types.
Tensors are cool and all but they are a pain to work with. GA is more intuitive for working with space.
> One can literally do the entire physics undergrad curriculum using GA without breaking a sweat. Trouble is, you’d have to learn some basic algebraic theory and that’s a hard sell for people who are already afraid of math because “it’s too abstract”.
How many of those people go through an undergraduate physics curriculum?
Surprisingly many. A lot of my professors tend to see math as a 'necessary evil' and only teach enough to be able to do the calculations, and this attitude certainly rubbed off on most of my classmates.
In fact, some of them even reject papers for being too rigorous (which does make sense if you think about it).
So what are Maxwell's equations in geometric algebra? In differential-form language they are dF = 0 and d⋆F = ⋆J (where ⋆ is the Hodge star). Does it get even easier?
I picked up Linear and Geometric Algebra because I was looking into how to do sensor fusion better and thought geometric algebra might make 3D rotations easier to understand. My conclusion is that it doesn't. I might as well stick with quaternions and try to grok them better. (It is sort of neat that quaternions are a special case of something else, but not actually helpful.)
In particular, there is nothing about how to actually do calculations on a computer, with the excuse that you will usually use a library for this. This isn't helpful to me because I'm doing embedded programming, and also because I don't feel like I really understand math unless I know how to do calculations well enough to write a library myself.
All this coordinate-free manipulation of symbols leaves me feeling like I didn't really get anywhere. I was hoping for a more intuitive understanding.
Does anyone have recommendations that are more oriented towards doing calculations?
I think that geometric algebra makes describing rotations quite easy - pick two vectors in the plane you want to rotate, at a suitable angle you desire to rotate through - but makes computing them more difficult. In dimensions higher than 2, it's always better (in terms of the number of multiplications and additions) to use the GA rotor to generate a matrix which applies the rotation. This is the same way one would use quaternions to track the orientation of an object in a scene, but when the rubber hits the road you want your graphics card to be doing matrix-vector multiplications to rotate, not quaternion conjugations.
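A sketch of that rotor-to-matrix step, using the standard unit-quaternion-to-matrix formula (a 3D rotor's scalar and bivector components correspond to a unit quaternion's components, up to sign conventions that vary between texts):

```python
import math

def rotor_to_matrix(w, x, y, z):
    """Convert a unit rotor/quaternion (w, x, y, z) to a 3x3 rotation
    matrix, so that rotating many vectors costs one matrix-vector
    product each instead of a sandwich product."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def apply(m, v):
    """Matrix-vector product for plain nested lists."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# A 90-degree rotation about the z axis: half-angle in the rotor.
half = math.radians(90) / 2
m = rotor_to_matrix(math.cos(half), 0.0, 0.0, math.sin(half))
print(apply(m, (1.0, 0.0, 0.0)))  # approximately (0.0, 1.0, 0.0)
```

The matrix build costs a handful of multiplications once; after that each rotated vector is 9 multiplications and 6 additions, which is what you want the GPU doing.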
GA also makes proving properties about reflections and rotations quite nice, but that’s not usually useful for actual computation (except in a learning environment).
I tried reading this book. There are no practical examples of how to calculate anything useful with geometric algebra in Chapter 4. This seems rather odd because Chapters 1-3 are very practical, including both equations and code examples.
I'm currently working through Naive Lie Theory https://www.amazon.com/Naive-Theory-Undergraduate-Texts-Math... by Stillwell and there's a lot about quaternions in the first chapter, including their standard matrix representation. You'd need to be familiar with some group theory to fully understand the 3D rotations but the matrix operations are clear enough to start coding with, I think.
I’m curious - what are you working on that brings you to sensor fusion?
There are probably less than a hundred people today outside the defense industry who would want to build a sensor fusion system on an embedded platform. I suspect that half or more of this population lives in the Bay Area.
It’s always fun to run into one of you out here :)
That probably made it sound more sophisticated than it is. I’m using Arduino and fooling around trying to make a musical instrument. I have accelerometer and gyroscope sensors and I’m trying to figure out the up vector and measure tilt while the player is moving it around.
This is probably overkill, but this is the most comprehensive treatment of the subject I've come across to date [1]. Note that the price on Amazon is ridiculous; the author has a very 90s-looking website with a much more reasonable price. It has a very sketchy-looking "enter credit card information here:" online store - I was nervous to buy from there, but I successfully got my copies, and the author emailed me to let me know he had personally dropped the books off at USPS :)
> There are probably less than a hundred people today outside the defense industry who would want to build a sensor fusion system on an embedded platform.
Unfortunately, people who promote it tend to sound like insufferable crackpots. Why is that? People who explain other cool things in math are never like that.
Although there are a few honest texts, almost every introduction to geometric algebra sounds as if they are introducing a new "school of thought" that will change mathematics forever. As if there was a worldwide conspiracy of powerful people who promote backwards concepts like exterior algebra, tensors and quaternions, and they want us to take the red pill. This may have sounded slightly funny the first time it was written, but it is an ugly, self-defeating meme right now.
To anybody who may be discouraged from learning geometric algebra due to the weird antics of its promoters: please, ignore the stupid rhetoric and study some Clifford algebra. There are a lot of neat ideas in there! (but of course, not only in there; you may still want to learn the other, well-known, equivalent definitions and notations for the same objects).
I've noticed that it's not math people who are promoting it but CS people or other engineers. E.g. those who took mostly cookbook classes (no offense intended) rather than theory classes. This tells me there is a large latent desire for deep understanding.
When I first encountered the term "geometric algebra" and read some of the articles, I couldn't understand what was new about the content. In college, we learned about tensor products, wedge products, Clifford algebras, and duals, and geometric algebra struck me as a subset of this core knowledge applied to three dimensions, where you can do things like take a wedge product of two vectors to get a 2-form and then take the dual to get back to a 1-form, and now you have a "geometric" (more of a linear algebra) interpretation of the cross product as a dual to an exterior product -- which is how we were taught what a cross product is.
But I think a lot of people who don't study this stuff were never taught these relationships, so bundling them up under a marketing name and letting them understand why these standard linear algebra relations behave as they do can really help the working engineer. This seems to be a condemnation of the rote-learning approach, where you are asked to memorize a bunch of stuff but are never told what an inner or exterior product really is. Then things like calling two vectors "orthogonal" if they have an inner product of zero seem mysterious.
By the way, this isn't just linear algebra. There are many deep connections behind well known algorithms in number theory, topology, calculus, algebraic geometry, (co)homology theories, that are ripe for a snazzy name and a presentation of some of the deeper ideas, so that people can start seeing connections between stuff like complex integrals and euler characteristics and winding numbers, holonomy groups and short exact sequences, etc.
So in this sense, I applaud the geometric algebra evangelists, even though this is all standard stuff that a math undergrad should learn, and the name "geometric algebra" is a completely made up term for what are well known mathematical structures.
It is a way of thought that will change mathematics; it is new to 99.9999% of people and isn't part of any curriculum yet. Since GA is such a powerful idea, it will have a big impact on mathematics for a long time to come, once more people find out about it. Once you learn it, you will naturally wonder why such a thing was never part of what people learn.
Exactly. I read one of the books a long time ago, and it was like 'we have this suppressed magic here', but somehow there were no hard (math) results. I understand it is easier to calculate the product of two rotations with GA; apart from that, is there anything (substantial) else?
Thanks for this! Now I understand what geometric algebra refers to.
When people talk about it, it always sounds like a simple generalization of exterior algebra. And so it is. You allow a formal sum of a 0-form, a 1-form, and so on up to an n-form; instead of x∧x being zero, it's |x|². I assume the textbooks show how to do all this in a basis-independent manner.
Can you do the same thing with chains, and come up with an even-more-generalised Stokes theorem?
There are two things I don't get.
Why start calling things 2-vectors, when you could call them 2-forms and people would know what you were talking about?
Also, when discussing 3D rotations, the article decomposes a vector u as the sum of two vectors a + b, where a·i = b∧i = 0, and i is a 2-vector. I'm fine with b∧a, and I know that there is an inner product for forms of the same degree, though I'd have to look up how it works. But how do you take the inner product a·i of a vector and a 2-form? I'd guess this is an abuse of notation whose meaning is obvious if you're used to it.
> When people talk about it, it always sounds like a simple generalization of exterior algebra. And so it is.
Sure, both are quotients of tensor algebras, but if anything, it's the other way around. Geometric algebra is to exterior algebra as inner product spaces are to vector spaces, or as Riemannian manifolds are to smooth manifolds. I'd say that vector spaces are more general than inner product spaces.
> Why start calling things 2-vectors, when you could call them 2-forms and people would know what you were talking about?
Because they're trying to appeal to the kind of physicist that doesn't know what a 2-form is. Besides, bivector is already established terminology, too.
> But how do you take the inner product a·i of a vector and a 2-form?
Yeah, this isn't great, since there are way too many notions of "inner product" in geometric algebra (see The Inner Products of Geometric Algebra: 10.1007/978-1-4612-0089-5_2). But I guess the intent is clear enough.
> Geometric algebra is to exterior algebra as inner product spaces are to vector spaces
To get from x∧x to |x|², you need a norm. And I guess linearity must break down somewhere unless that norm is compatible with an inner product. Thanks, I missed that point.
> Why start calling things 2-vectors, when you could call them 2-forms and people would know what you were talking about?
There is more to 2-forms. Also, to be precise, they are (alternating bilinear) functions from pairs of vectors to the scalar field (usually the reals). Sure, they form a (dual) vector space, but I think it would be more confusing to call GA elements k-forms.
I used resources from bivector for part of an implementation for excalidraw.com, and it did what I needed. It was an interesting lesson, though, in the difference between mathematical and software-engineering priorities. I didn't need to support arbitrary dimensions (only 2D). I wanted my code to be readable, and functions to have typed arguments (so a function like orthogonalLineThroughPoint should take a line and a point, and you shouldn't be able to pass the arguments in the wrong order). So I wrapped the hard-to-remember-and-understand operators in a friendlier API, but then it's a question whether I even need most of the "beauty" of the math, or should just unroll the few algorithms I need into the API. One counter-argument is applying transformations to all objects uniformly. So I left the implementation as is in the end, but I wouldn't be surprised if it got ripped out later to save app size (if that became a big concern).
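A minimal sketch of the typed-wrapper idea, in Python for illustration (the `Point`/`Line` types and the point-and-direction line representation are assumptions of mine, not excalidraw's actual API, which is TypeScript):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: float
    y: float

@dataclass(frozen=True)
class Line:
    # Hypothetical representation: an anchor point plus a direction.
    # A GA library would instead store a line as a single multivector.
    px: float
    py: float
    dx: float
    dy: float

def orthogonal_line_through_point(line: Line, p: Point) -> Line:
    """Line through p perpendicular to `line`: rotate the direction 90
    degrees. The type signature makes swapping the arguments an error."""
    return Line(p.x, p.y, -line.dy, line.dx)

axis = Line(0.0, 0.0, 1.0, 0.0)  # the x axis
perp = orthogonal_line_through_point(axis, Point(2.0, 3.0))
print(perp)  # direction (0, 1): a vertical line through (2, 3)
```

The GA operators (meet, join, dual) would live inside such functions; the caller only ever sees lines and points.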
Do you have some before-and-after code that you can share?
I've seen two other efforts in that general direction:
* "Let's Remove Quaternions from Every 3D Engine"[0], which was underwhelming: let's ask other people to rename existing code for no practical benefit?
* "Klein"[1], a projective geometric algebra library that is amazingly nice to use, and claims to be very fast.
Throughout history there has been resistance against the introduction of the number 0 into the numbering system (introduced by Arab mathematicians in the 9th century AD, based on their study of the works of Indian mathematicians).
Then there was resistance against the introduction of complex numbers (introduced by Italian mathematicians in the 16th century AD and later made popular by Euler in his famous formula).
And now we have resistance against geometric algebra (GA) in the 21st century AD. I think the main reason Heaviside (the inventor of the modern form of Maxwell's Equations) opposed the usage of quaternion equations utilizing "versors" is probably the difficulty of their manual calculation compared to his rather limited "tensor" based proposal. The irony is that Maxwell himself originally used quaternion-based algebra for his EM equations. Since we now have very powerful calculators/computers compared to the 19th century AD, this should be a non-issue for practitioners.
IMHO, people who work in electromagnetics (EM) have no choice but to adopt and work with GA due to the nature of EM waves, which (unlike other waves, e.g. sound) have an additional polarization property that is cumbersome to manipulate using conventional complex numbers.
Eventually, someone will come up with a better replacement for I/Q data and probably call it IQ2 data, where the extra Q represents quaternion.
Yet another Geometric Algebra introduction claiming GA is better than quaternions, yet ending (section "Problem Solved" near the end) with the exact same formulas.
Really I like GA and it sure brings a unified treatment for geometric calculations, but in these tutorials I would very much like to see examples of something quaternions can not do, e.g. involving exterior algebra, subspace intersections, duality etc.
Quaternions are left-handed, which is annoying but not a big deal by itself, but most importantly they collapse the scalar with the pseudoscalar. For regular applications of unit quaternions in 3D Euclidean space, such as rotations, screw displacements and dual quaternions, that is not much of an issue, since the scalar part either is initially zero anyway (e.g. translations and rotations), wholly contains the quadrature information after the operation (e.g. rotation), or is always kept separate from the vector quaternion (e.g. moments).
However if you want to see the quaternions as a vector subspace in problems of higher dimensions or in non-euclidean manifolds, for example path interpolation, finding integral solutions or representations, they fall short and it makes sense to use a Clifford algebra that is compatible with differential operations (e.g Lie) and gives consistent calculations across all dimensions.
A quaternion is a kind of multivector consisting of a scalar + a bivector (a bivector is an oriented magnitude with the orientation of a plane). You can get one by taking the quotient of two 3-dimensional vectors (a vector is an oriented magnitude with the orientation of a line). Note that a planar “complex number” is the same kind of object as a quaternion, just with the bivector part always oriented in the same plane.
When people limit themselves to Gibbs-style vectors or to quaternions as a primary formalism what they are doing is pretending that vectors and bivectors are the same kind of object, and this pretense leads to massive amounts of confusion. cf. https://en.wikipedia.org/wiki/Pseudovector
The biggest thing GA adds compared to quaternions is that the same technology generalizes to higher or lower dimensions and to pseudo-Euclidean spaces (and to generalized Möbius transformations, non-metrical contexts, to modeling points and circles as multivectors, etc.), and there are many vector identities which are awkward to express when you are pretending that vectors and bivectors are the same.
Another advantage is that you don’t need any arbitrary conventional rules about multiplication of {i, j, k}, since typically the basis is specified in terms of orthonormal unit vectors e.g. e₁, e₂, e₃, so that it becomes very obvious how basic bivectors e₁e₂, e₂e₃, e₃e₁ should multiply: (e₁e₂)(e₂e₃) = e₁e₃ = −e₃e₁, etc.
I agree with all of this, and to sibling reply as well.
My point is that these advantages are seldom (if at all) discussed in most online introductions to GA I've read so far, yet to me they're the real selling points of GA.
Would geometric algebra simplify a CAD program (or are there any CAD system that use geometric algebra)? Would learning geometric algebra make synthesis of mechanical linkages easier (planar/spherical/spatial)?
This proposal is different from what's being discussed here. Algebraic geometry is, roughly speaking, about understanding the solutions to systems of polynomial equations, while linear algebra is much simpler and gives complete descriptions of the solutions of systems of linear equations. Geometric algebra is a framework for working with vectors and related objects in an algebraic way---one can "multiply" two vectors together and extract information from this.
There is a simple mechanism to model multiplication of vectors represented using elements of an orthonormal basis. Each (multi)vector is a linear combination of terms, and each term is an ordered sequence of basis elements. Multiplication of (multi)vectors works as usual. Multiplication of terms simply concatenates the two terms, then canonicalizes using the identities vv = |v|^2 and uv = -vu. There is a cost: the number of terms in a (multi)vector is O(2^n) for an n-dimensional space, but that's OK.
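A sketch of exactly that algorithm, assuming a Euclidean metric (so vv = 1 for orthonormal basis vectors). A term is a (coefficient, blade) pair, where a blade is a tuple of basis indices, e.g. (1, 2) for e1 e2:

```python
def mul_term(a, b):
    """Multiply two terms over an orthonormal Euclidean basis:
    concatenate the index sequences, then canonicalize with
    e_i e_j = -e_j e_i (i != j) and e_i e_i = 1."""
    coeff = a[0] * b[0]
    seq = list(a[1] + b[1])              # concatenate the two terms
    i = 0
    while i < len(seq) - 1:
        if seq[i] > seq[i + 1]:
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            coeff = -coeff               # each swap flips the sign
            i = max(i - 1, 0)
        elif seq[i] == seq[i + 1]:
            del seq[i:i + 2]             # e_i e_i = 1
            i = max(i - 1, 0)
        else:
            i += 1
    return (coeff, tuple(seq))

print(mul_term((1, (1, 2)), (1, (2, 3))))  # (1, (1, 3)): (e1 e2)(e2 e3) = e1 e3
print(mul_term((1, (1, 2)), (1, (1, 2))))  # (-1, ()):   (e1 e2)^2 = -1
```

Multiplying full multivectors is then just distributing `mul_term` over every pair of terms and collecting like blades.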
And then I'm lost. How does this map back to geometry? What does '2 + 3xy + 5yz - 7xyz' mean geometrically speaking? What does multiplication mean geometrically speaking? Can't build an intuition, not even for R^2 or R^3 :(
The answer depends on the kind of Clifford/Geometric Algebra being used, the structure for the vector bases and the selected result of the quadratic forms. Those choices are mostly driven by the end application where the algebra will be used, with its similarities (obviously, they are all algebras) and particularities.
Thanks. Appreciate taking the effort to answer. What I'm trying to do is to gain an intuition of what kind of objects GA manipulates, then stash it for whenever I'll need that kind of manipulation, which, sadly, can be never.
Within that context, just grokking R^2 or R^3 would be great progress. To be fair, the article does present the fundamental identity, uv = u·v + u∧v, where u·v is the usual dot product (a scalar denoting relative projected size) and u∧v = |u||v| sin(θ) i (something related to the area of, and the plane induced by, u and v). The article even exemplifies with R^2 by defining i = xy, but then drops the ball: "i is interpreted to mean the plane generated by x and y". I can't grok this part :(
* This looks like (almost) a circular definition, defining uv in terms of the specific xy, where x and y are the two orthonormal R^2 vectors forming the basis. What is xy? Is it a scalar? A vector? A normal vector (and how does a normal vector work in R^2)? A tensor of some dimensions? For example, assuming x = (1,0) and y = (0,1), how does one compute i = xy?
* Plane generated how? a u + b v, given any two scalars a and b?
* When moving to R^3, i'd think uv / xy define a plane. What does uvw / xyz mean?
* The space of multivectors in Rⁿ is actually also a vector space, and it's 2ⁿ-dimensional. The standard basis of this vector space (in 2 dimensions) is {1, x, y, x ∧ y = xy}, and that's why we care about representing uv in terms of xy. Hopefully that explains why asking how to compute "i = xy" is a meaningless question. By analogy, `(3, 5) = 3x + 5y` in the same way `xy = (0, 0, 0, 1)`, but keeping track of the order of a 2ⁿ-element basis gets annoying really fast, so we just write `xy`.
* Yes, span{a, b}.
* u ∧ v ∧ w would represent a volume: the 3D subspace of Rⁿ spanned by u, v and w, together with a sign (i.e. an orientation) and a magnitude.
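A tiny worked example of the fundamental identity from earlier in the thread, in this basis: take $u = x$ and $v = x + y$. Then

```latex
uv = x(x + y) = xx + xy = 1 + xy
```

so the scalar part is $u \cdot v = 1$ and the bivector part is $u \wedge v = xy$; in coordinates on the basis $\{1, x, y, xy\}$, that product is the multivector $(1, 0, 0, 1)$.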
Thank you kind stranger. The technical machinery makes much more sense now. But the intuition still lacks.
An attempt. To map an R^2 vector into G^2 is trivial, (a,b) => a x + b y = (1, a, b, 0). The reverse is hard. s 1 + a x + b y + d xy = (s, a, b, o) => (?, ?). Or perhaps G^2 is a superstructure of R^2, and most elements of G^2 can only be projected (somehow? perhaps trivially?) to R^2, but with loss of information. Which leaves the grok question: what does a vector in G^2 actually represent? Perhaps scale + x position + y position + orientation angle, as the use of 'i' suggests? If so, then in R^3 xy, xz any yz represent rotations in the respective planes? Which leaves a bonus puzzle: what does (multiplying with) xyz represent?
FWIW, at this point it's likely I can compute the answers using the code in the OP and well crafted multivectors. Time permitting :)
Minor correction: the inclusion of the vectors is (a, b) ⇒ (0, a, b, 0), not (1, a, b, 0). Also, you have an inclusion of the scalars, which is x ⇒ (x, 0, 0, 0). And yeah, G^2 can't be projected back onto R^2 without losing a lot of information.
> what does a vector in G^2 actually represent?
If you can write a multivector x as the wedge product of k independent vectors (or equivalently, the geometric product of k orthogonal vectors), then x is called a k-blade. In this case, x has a very clear geometric interpretation:
* the magnitude of x is the volume of the parallelotope spanned by the vectors.
* the "direction" of x corresponds to the space spanned by the vectors — in concrete terms, that space is the set of vectors in R^2 that the map (v ⇒ v ∧ x) takes to zero.
* the "direction" also gives an orientation of the subspace. Orientations are defined indirectly: if you have two blades representing the same space, then they must be scalar multiples of one another. If that scaling factor is positive, then they have the same orientation, and if it is negative, then they have opposing orientations.
These three pieces of information are enough to uniquely identify x. Unfortunately, I'm not sure how to go from there to interpreting other multivectors, which are sums of blades.
> If so, then in R^3 xy, xz any yz represent rotations in the respective planes?
Yes, xy represents rotation on the xy plane, as long as you apply it as described in the article.
> Which leaves a bonus puzzle: what does (multiplying with) xyz represent?
This one should be fairly easy to prove: it represents a kind of dualization, in which the space represented by mi = mxyz is the orthogonal complement of the space represented by m.
> Multiplication of all three basis vectors (e1 ∧ e2 ∧ e3) produces a three-dimensional volume segment or trivector, with unit cubed volume. This trivector has some peculiar properties that represent the scale of the whole space; it is often called the pseudoscalar of the space, I. It performs the function of the imaginary number i, but extended into three dimensions. Multiplication by the trivector represents a 90-degree turn in all three dimensions. Therefore two such 90-degree turns in each of the three dimensions result in a total reversal of direction, which is equivalent to a simple negation, and thus I² = -1.
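The quoted claim $I^2 = -1$ follows in a couple of lines from the anticommutation rule $e_i e_j = -e_j e_i$ (for $i \neq j$) together with $e_i e_i = 1$:

```latex
I^2 = e_1 e_2 e_3 \, e_1 e_2 e_3
    = -\,e_1 e_2 e_1 \, e_3 e_2 e_3
    = +\,e_1 e_1 \, e_2 e_3 e_2 e_3
    = e_2 e_3 e_2 e_3
    = -\,e_2 e_2 \, e_3 e_3
    = -1
```

Each step moves one basis vector past a neighbour (flipping the sign) until it meets its duplicate and the pair cancels to 1.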