The main problem with GA is that if you want a unified transformation hierarchy, geometric algebra can quickly get very complicated: it requires projective GA to handle translation and dual quaternions to handle non-uniform scaling. Geometric algebra looks beautiful in simple rotation cases and explains very well concepts that feel incomplete in plain vector math, but for a real application like a generic game engine, you can't just code up a scene graph with unified transforms in a couple of hours. Not to mention that even the GA people don't all agree with each other on how to do the more advanced GA stuff (strange inversions, etc.).
Surprisingly, very few people talk about this on YouTube among all those GA tutorial videos, and you can find only scant information on the bivector site forum. In comparison, despite the matrix representation's weaker mapping to geometric concepts, it handles everything through a more unified interface without too many complications.
For that matter, the GA math leaves much to be desired. There probably exists an undiscovered better version of GA that handles translation and scaling better and that everybody could instantly agree on. Until that day, GA probably won't see much general usage.
Ok, standard 4x4 matrices also implement a projective (aka d+1) model (the 'w' coordinate is just the projective coordinate), so there's no difference from GA in that respect.
Setting up a unified transformation hierarchy is actually very easy, and again not really different from how you would approach it with matrices (plus, it's more performant). Simply swap the matrix for the appropriate versor.
Which (versors or matrices) are appropriate depends on the symmetry group you are interested in :
Orthogonal Group (just rotations : distance + origin preserving) in d dimensions -> use the geometric algebra R_d. (classically : complex numbers, quaternions)
Lorentz Group (rotations + boosts : spacetime distance + origin preserving) in d space dimensions and 1 time dimension -> use the geometric algebra R_{d,1}. (classically : Lorentz transformations)
Euclidean Group (translations + rotations : distance preserving) in d dimensions -> use the geometric algebra R_{d,0,1}. (classically : planar quaternions, dual quaternions)
Conformal Group (translations + rotations + dilations : angle preserving) -> use the geometric algebra R_{d+1,1}. (classically : linear fractional transformations)
General Linear Group (translations + rotations + shearing + ... : preserves parallelism/incidence) : use d+1 x d+1 matrices.
Working in a symmetry group that is 'too big' comes at a cost - both in algorithmic complexity and in numerical precision. If you only want translations/rotations but are using matrices, you'll have to resort to things like Gram-Schmidt or SVD to re-orthogonalize your matrices after doing numerical calculations. (in math terms, you have to project back onto the solution manifold - this is almost never trivial and often impossible).
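As a concrete sketch of that re-orthogonalization step (pure Python, illustrative helper names of my own): compose many small rotations so floating-point drift accumulates, then project the matrix back with Gram-Schmidt.

```python
# Sketch: drift and re-orthogonalization of a 3x3 rotation matrix.
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def gram_schmidt(M):
    """Re-orthonormalize the rows of M (project roughly back onto SO(3))."""
    ortho = []
    for r in [list(row) for row in M]:
        # Subtract projections onto already-fixed rows, then normalize.
        for q in ortho:
            d = sum(a * b for a, b in zip(r, q))
            r = [a - d * b for a, b in zip(r, q)]
        n = math.sqrt(sum(a * a for a in r))
        ortho.append([a / n for a in r])
    return ortho

# Compose many small rotations; floating-point error accumulates.
M = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
step = rot_z(1e-3)
for _ in range(10000):
    M = mat_mul(M, step)

M = gram_schmidt(M)

# After cleanup the rows are orthonormal again.
dot01 = sum(a * b for a, b in zip(M[0], M[1]))
norm0 = sum(a * a for a in M[0])
print(dot01, norm0)
```

The versor alternative avoids this: a quaternion only drifts off the unit sphere, and projecting back is a single normalization.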
It's nice of you to point out that the 4x4 matrix is projective in nature, and I understand that GA could potentially be more performant given its more compact usage of numbers.
But to really make it popular and understandable, a "simple" version of GA that handles translation, rotation & non-uniform scaling would really help, without the group theory concepts. Even better, present it in the context of a scene graph hierarchy, with a unified operator like "multiply".
Also, is it possible to collapse a series of such transforms into a single versor, like you can with matrices, without going into dual quaternion stuff? In generic game development, translation, rotation, and non-uniform scaling are all extremely basic things that cannot be handwaved away or called "too big".
Also, why the need of a dual (e12, e02, e01) to represent a point when in vector form it's just (e0, e1, e2)? This is just counterintuitive. This is what I mean by "quickly gets complicated", and it feels nearly as opaque as the cross product in vector math.
Just explaining my experience digging in GA for a couple of weeks.
That is correct - matrices are the natural companion of the general linear group. The GA representation exists and is called 'the mother algebra'; for general 4x4 matrices the GA equivalent is R_{4,4} (where general linear transformations are versors).
For other subgroups GA representations will also exist. (e.g. if one wants to include projective transformations as versors, that would be the projective group (which preserves the cross ratio of 4 points), and it has a GA representation in R_{3,3}).
These spaces quickly become so big that efficient numerical implementations (while not impossible with enough symbolic work at compilation time, see e.g. https://www.jeremyong.com/gal/) are difficult. Other advantages of GA do of course remain.
> But to really make it popular and understandable, a "simple" version of GA that handles translation, rotation & non-uniform scaling would really help, without the group theory concepts. Even better, present it in the context of a scene graph hierarchy, with a unified operator like "multiply".
The rich structure of GA (that ultimately follows from just one axiom extra) unifies a wide range of concepts and theories. Considering just one application, or one link, puts one at risk of arriving at a model that breaks these connections to other parts of mathematics. It seems unfair to expect to understand the why without considering the connections to Lie Groups, their associated geometries, differential forms, etc.
That said, I have some unpublished examples displaying and processing bvh (mocap) files that I'll try to cleanup and put online.
> Also, is it possible to collapse a series of such transforms into a single versor, like you can with matrices, without going into dual quaternion stuff? In generic game development, translation, rotation, and non-uniform scaling are all extremely basic things that cannot be handwaved away or called "too big".
Versors combine just like matrices (using just the ordinary product). (doing this for translations/rotations _are_ the dual quaternions, but you don't have to (and imho shouldn't) call them that.) Non-uniform scaling along your scene graph (as opposed to at the beginning (object space) or at the end (view space)) is usually frowned upon in professional game development. (it makes it impossible to correct matrices using Gram-Schmidt, and adds a lot of complexity to things like tracing hit rays etc.)
> Also, why the need of a dual (e12, e02, e01) to represent a point when in vector form it's just (e0, e1, e2)? This is just counterintuitive. This is what I mean by "quickly gets complicated", and it feels nearly as opaque as the cross product in vector math.
This is because geometry and group theory are intricately connected. When you use matrices, you represent elements with vectors and transformations with matrices - they're separate things. In Geometric Algebra, every element also _is_ a transformation. (a plane represents a reflection in that plane, a line represents a 180 degree rotation around that line, a point represents a point reflection in that point). So now there is a strong link. Whatever you use to represent reflections should also represent planes, same for rotations/translations and lines, or point reflections and points.
It is in fact very intuitive and simple, it's just different from what you're used to. For example, in 2D, given a point at euclidean position (3,4), here are the two mindsets:
* classic : it is a sum of three times the 'x' vector and four times the 'y' vector (and then you add in the homogeneous vector): '3x + 4y + w' (in memory : 3,4,1)
* GA : (3,4) is a system of equations, namely 'x=3' and 'y=4', or homogeneously : 'x-3=0' and 'y-4=0'. Such homogeneous linear equations are lines (in 2D), represented by vectors : 'e1-3e0' and 'e2-4e0'. Solving such a system of equations is just the outer product : '(e1-3e0) ^ (e2-4e0) = 3e20 + 4e01 + e12'. (in memory : 3,4,1)
So because it is on the bivector basis, this element (3e20 + 4e01 + e12) now represents both the point at (3,4) and a rotation of 180 degrees around that point, just like the line (e1-3e0) represents both the line 'x=3' and a reflection w.r.t. that line. For the same reason, the product of two lines will give you the rotation or translation between them, and the product of two points will always give you a translation.
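The outer-product step above is small enough to check directly. A minimal sketch (pure Python; the coefficient layout is my own assumption): grade-1 vectors as (e0, e1, e2) coefficient triples, and the outer product of two of them giving the (e01, e02, e12) bivector coefficients.

```python
# Sketch of the 2D PGA construction above: grade-1 vectors as
# (e0, e1, e2) coefficient triples, outer product -> (e01, e02, e12).
def wedge(a, b):
    """Outer product of two grade-1 vectors in 2D PGA.

    The coefficient of e_ij is a_i*b_j - a_j*b_i (antisymmetry)."""
    e01 = a[0] * b[1] - a[1] * b[0]
    e02 = a[0] * b[2] - a[2] * b[0]
    e12 = a[1] * b[2] - a[2] * b[1]
    return (e01, e02, e12)

# The lines x=3 and y=4 as homogeneous equations:
# 'x - 3 = 0' -> e1 - 3e0, and 'y - 4 = 0' -> e2 - 4e0.
line_x3 = (-3.0, 1.0, 0.0)
line_y4 = (-4.0, 0.0, 1.0)

e01, e02, e12 = wedge(line_x3, line_y4)
# Expect 4e01 - 3e02 + e12, i.e. 3e20 + 4e01 + e12 (since e20 = -e02).
print(e01, e02, e12)
```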
So I'd argue it's a lot more intuitive; don't factor out the time it took you to find the linear algebra approach intuitive.
Thanks for the explanations. Just want to elaborate on this point:
>> Also, is it possible to collapse a series of such transforms into a single versor, like you can with matrices, without going into dual quaternion stuff? In generic game development, translation, rotation, and non-uniform scaling are all extremely basic things that cannot be handwaved away or called "too big".
>Versors combine just like matrices (using just the ordinary product). (doing this for translations/rotations _are_ the dual quaternions, but you don't have to (and imho shouldn't) call them that.) Non-uniform scaling along your scene graph (as opposed to at the beginning (object space) or at the end (view space)) is usually frowned upon in professional game development. (it makes it impossible to correct matrices using Gram-Schmidt, and adds a lot of complexity to things like tracing hit rays etc.)
Though frowned upon, it's important to retain the ability to do non-uniform scales, as that's part of tuning things in-engine fast. It can be corrected later, but without it, quick tuning of object scales or quick scene mockups becomes very cumbersome. I work in games in a professional capacity and find this usage very common and almost indispensable.
Algebraically, we can always use the Hodge dual and its inverse to move between k-vectors and (n-k)-vectors.
Geometrically we can always say two points define a line or two lines define a point.
Group Theory - here we can't swap. Two reflections make a rotation but two rotations do not make a reflection. So reflections are 'naturally' grade 1.
If you want everything to fit intuitively together, you want reflections to be grade 1, and by consequence hyperplanes (lines in 2D, planes in 3D, etc) to be grade 1 (i.e. vectors).
Doing it the other way around is possible but will be more verbose and less intuitive. (Also, with 'reflections' as natural grade-1 elements, the move from Euclidean to Conformal becomes trivial : simply reflect in (hyper)spheres instead. In this space it is also easier to see that the two approaches are not equivalent : it is natural to say that (in 2D) the 'meet' of two circles is a point pair, but not that the 'join' of two points is a point pair. (why would it not be a line, just like in PGA?))
Imho, one should learn to appreciate both halves of the picture, as for each specific problem one of them might be more natural. (for the popular Euclidean group and its associated geometry, this view we're not used to may very well be the more intuitive one.)
Isn't this just a consequence of phrasing things in terms of the wedge product instead of its 'pullback' via the Hodge? Because personally, I think the most natural representation of a reflection is the mirror plane, not its normal vector...
The grade 1 element is not the normal vector of the plane, it is the plane itself (as a homogeneous linear form, it includes the distance from the origin as an extra coefficient, which the normal vector does not).
Linear equations form a linear space: you can add them and multiply them by scalars (creating things called line pencils, plane bundles, etc. - classic projective geometry). Hence you call the grade-1 elements of the graded version of such a linear space 'vectors'. (just like you can make vector spaces out of functions or all sorts of other objects).
So the reflection formula from GA in general, which reflects an arbitrary element X w.r.t. a grade-1 element a :
-aXa
is simply to be read as reflecting the object 'X' in the plane (3D) or line (2D) or sphere (3D CGA) or circle (2D CGA) 'a'.
Such a reflection should modify all other reflections, except for itself (where it should only flip orientation) :
-aaa = -a
It is easy to verify that this holds for general Euclidean planes written as homogeneous linear equations in their grade-1 element form.
And from that everything else follows. (composition of reflections gives you rotations/translations (leaving points invariant in 2D), etc).
Planes being grade 1 elements in no way implies characterizing them by a normal vector.
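The identities above are easy to check mechanically. Here's a tiny (illustrative, not production) geometric product for R_{2,0,1}, used to verify that -aaa = -a for the line x=3 and that two reflections compose into a rotation; the data layout is my own assumption.

```python
# A tiny geometric product for R(2,0,1) (2D PGA).
# Basis blades are tuples of sorted indices; e0*e0 = 0, e1*e1 = e2*e2 = 1.
METRIC = {0: 0, 1: 1, 2: 1}

def blade_mul(A, B):
    """Multiply two basis blades; return (sign, blade), sign 0 if it vanishes."""
    idx = list(A) + list(B)
    sign = 1
    # Bubble sort, flipping sign on each swap (anticommutation).
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    # Contract equal adjacent pairs using the metric.
    out, k = [], 0
    while k < len(idx):
        if k + 1 < len(idx) and idx[k] == idx[k + 1]:
            sign *= METRIC[idx[k]]
            k += 2
        else:
            out.append(idx[k])
            k += 1
    return sign, tuple(out)

def gp(u, v):
    """Geometric product of multivectors given as {blade: coefficient} dicts."""
    res = {}
    for A, x in u.items():
        for B, y in v.items():
            s, C = blade_mul(A, B)
            if s:
                res[C] = res.get(C, 0) + s * x * y
    return {k: c for k, c in res.items() if c != 0}

# a = the line x = 3, written as e1 - 3e0.
a = {(1,): 1, (0,): -3}
aa = gp(a, a)      # a line times itself: expect the scalar 1
aaa = gp(aa, a)    # so aaa = a, hence -aaa = -a
print(aa, aaa)

# Two reflections compose into a rotation: e1 * e2 = e12.
e1, e2 = {(1,): 1}, {(2,): 1}
print(gp(e1, e2))
```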
Another poster has already linked [1] and [2]. That's what I was getting at as well: a bias towards the wedge/'join' and against the 'anti-wedge'/'meet' creeps in via the geometric product, when the situation should be symmetric due to Hodge duality. This leads to constructions I find rather unnatural.
A bias towards 'vectors must be points' creeps in as an unwritten legacy axiom, leading to a broken correspondence with group theory (which again cannot be turned around: discrete symmetries compose into continuous ones, but not the other way around). A further restriction to the unfortunately very symmetric 3D Euclidean space hides many of the problems with that approach. (for example, only in this space is the even subalgebra shared, and bivectors self-dual).
Consider for example R_(2,0,1) in both scenarios.
Using Geometric Product as Group Composition
e1 = reflection w.r.t. x=0 axis
e2 = reflection w.r.t. y=0 axis
e12 = reflection w.r.t. y=0 axis followed by reflection w.r.t. x=0 axis
= 180° rotation around the origin.
Using Anti-Geometric Product as Group Composition
e1 = not a group element, because anti-norm is zero.
e2 = not a group element, because anti-norm is zero.
e12 = not a group element, because anti-norm is zero.
So to get the same behavior you need the following elements when using the Anti-Geometric product as composition
e02 = reflection w.r.t. x=0 axis
e01 = reflection w.r.t. y=0 axis
e0 = reflection w.r.t. y=0 axis followed by reflection w.r.t. x=0 axis
= 180° rotation around the origin.
So choosing the geometric anti-product as group composition operator is possible, but imho breaks the readability of your transformations completely.
All I can say is that I hope that those linked articles do not give people an excuse to not consider the model in which vectors, grade-1 elements, are reflections. This is the true half of the picture that is new and being missed.
Despite the costs of using a “too big” symmetry group (I’d say overparameterization), I bet it’s hard to beat the performance of 4x4 matrix multiplication, since the computation is so uniform, vectorizable, and perfectly sized for e.g. SIMD. Even if there are fewer math operations with another representation.
It depends. For composing multiple transforms, composing versors aka quaternions/dual quaternions is cheaper than a full 4x4 multiply. For a single application, a 4x4 will be cheaper.
Quaternions are used in animation for blending and composing transforms for a reason. You don't have to trust me, but this comes from maybe ten years and counting of graphics and animation work.
> If the quaternion is used to rotate several points (>1) then it is much more efficient to first convert it to a 3x3 Matrix. Comparison of the operation cost for n transformations:
> - Quaternion2: 30n
> - Via a Matrix3: 24 + 15n
So really it depends on what you're doing (admittedly here it is 3x3 not 4x4)
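A hedged sketch of that trade-off (pure Python; all helper names are mine, not from any engine): rotating a point with the quaternion sandwich q v q* versus converting once to a 3x3 matrix gives the same result, and the matrix route amortizes its one-time conversion cost over many points.

```python
# Quaternion sandwich vs. one-time conversion to a 3x3 matrix.
import math

def q_mul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def q_rotate(q, v):
    """v' = q * (0, v) * conj(q), for unit q."""
    w, x, y, z = q
    conj = (w, -x, -y, -z)
    return q_mul(q_mul(q, (0.0,) + tuple(v)), conj)[1:]

def q_to_matrix(q):
    """Standard unit-quaternion -> 3x3 rotation matrix conversion."""
    w, x, y, z = q
    return [[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]]

def mat_apply(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

# 90-degree rotation about z, applied to (1, 0, 0).
h = math.pi / 4
q = (math.cos(h), 0.0, 0.0, math.sin(h))
v = (1.0, 0.0, 0.0)

a = q_rotate(q, v)                   # 30 ops per point (roughly)
b = mat_apply(q_to_matrix(q), v)     # conversion once, then 15 ops per point
print(a, b)
```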
Like enkimute said, your options re translation are exactly the same with matrices and with geometric algebra: either carry a vector offset with you, or move to projective space (homogeneous coordinates).
I'm not sure what you mean. Let Ax mean rotating x by A (which is a rotation matrix, quaternion, rotor, whatever). Then, the composition of Ax + b and Cx + d is A(Cx + d) + b = (AC)x + (Ad + b), which is again of the same rotation-plus-offset form.
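To make the rotation-plus-offset composition concrete, here's a minimal sketch (Python; unit complex numbers standing in for 2D rotors, names are mine):

```python
# A transform is the pair (A, b), acting as x -> A*x + b,
# with A a unit complex number (a 2D rotor) and b a complex offset.
import cmath, math

def compose(outer, inner):
    """(A, b) o (C, d): apply inner first, then outer -> (A*C, A*d + b)."""
    A, b = outer
    C, d = inner
    return (A * C, A * d + b)

def apply(t, x):
    A, b = t
    return A * x + b

rot90 = (cmath.exp(1j * math.pi / 2), 0 + 0j)   # rotate 90 degrees
shift = (1 + 0j, 2 + 0j)                        # translate by (2, 0)

t = compose(rot90, shift)      # translate, then rotate
x = 1 + 0j
print(apply(t, x), apply(rot90, apply(shift, x)))  # the same point
```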
Not with just a rotor (quaternion) and an offset, or with motors (dual quaternions). If you want nonuniform scaling, rotation and translation, then matrices are probably the nicest way to do that.
From a brief skim, it looks like this introduction has the same problem as every other introduction to geometric algebra that I've seen: a complete lack of practical examples.
Where is there an introduction that shows you how to use geometric algebra to write better code to perform calculations? Or maybe to do word problems, like when we teach kids algebra or calculus?
Even in a supposedly practical book like Foundations of Game Engine Development, Volume 1, there are code examples throughout the book, except for the chapter about geometric algebra.
I share your pain, as someone who messes around with game engine dev as a hobby and has also read through that book. He sure made geometric algebra sound like it _should_ be useful, but man is it clear as mud to me.
Of course in general I'm not surprised; for instance I'm not surprised that this paper doesn't have many practical examples. It's higher level math, of course, and for people in that space, showing a small example where you use the geometric product to solve an abstract linear algebra problem is about as close to 'practical' as you are going to get.
But I found it much more aggravating in that book. It's specifically a practical book after all, meant to help lay foundations for renderer and physics engine dev -- but as you said, that section is completely devoid of any real examples of how you could integrate it to make Problem A or B easier, or how it simplifies your transform code, or literally anything. The complete lack of code examples or any such real-world applications makes the chapter feel a bit out of place, maybe even tacked on or gratuitous. (Which is a shame because he kind of builds up to that chapter, dropping hints here and there about how it will fundamentally shift our understanding.)
From my math undergrad, I feel like generally there are two feelings on this kind of thing:
- No practical application is needed, since studying math is an end in itself
- If there is a practical application, it should be fairly obvious and the author should not have to stoop to the level of trying to enumerate them
I'm not saying I agree, but I think there is part of the math community that doesn't care to enumerate practical applications. Expecting this kind of thing from this area of research may be an uphill battle.
Sure, but geometric algebra is specifically marketed as a unifying framework that makes geometric problems easier to reason about. It's not something you learn to solve previously unsolvable problems. So why is there a distinct lack of down-to-earth problem solving?
I think one would have to have a background in modern algebra and physics to see where the "unification" is useful. Then it becomes a lot more obvious, for example Maxwell's equations can be compressed down to one equation with this type of algebra.
In any case, check section 10 for another application for physics.
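For reference, the compression mentioned above is Hestenes' spacetime-algebra form of Maxwell's equations (sketched here in natural units; conventions vary by author):

```latex
% Spacetime algebra (Hestenes): with the field bivector F = E + IB and
% the spacetime current J, all four Maxwell equations collapse into
\[
  \nabla F = J
\]
% Separating the equation by grade recovers the classical set:
% Gauss's law, Ampere's law, Faraday's law and \nabla \cdot B = 0.
```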
For those who happen to like both geometric algebra and general relativity, my former student Joey Schindler wrote this quite beautiful paper extending geometric algebra to calculus on manifolds. https://arxiv.org/pdf/1911.07145.pdf
This gets directly to my main question each time I see geometric algebra show up: how does it fit in with the "normal" notation of differential geometry? What assumptions does it make for a metric, what is the equivalent of parallel transport, Lie brackets, how does it represent gradients (and other things that are naturally 1-forms), etc., etc.?
All the treatments I've seen jump into manipulation without really going into the axioms used. That paper seems to do a lot better at fitting it together, so I'll certainly read it, thanks.
There are good ideas that gain traction super quickly. GA is not that. Possibly because there's already an ok solution (dot/cross-products), the benefits of switching are pretty incremental (or negative in terms of net effort for people who use dot/cross products a few times a year).
Category Theory is an example of a unifying theory that caught on. It took time, but my brain was able to assimilate to it quite quickly. With GA none of it sticks for me - I have other tools that suffice (I work on games and have reasonably regular 3D exposure, but the traditional tools of calculus textbooks mostly suffice, and if I need fancy concepts they tend to come from differential geometry).
I remember another big war being the push to get gauge integrals into calculus textbooks (as an alternative/augmentation to Riemann integrals that vastly increases the scope of what can be integrated). I assume that went nowhere, but it was a valiant struggle.
My breakthrough with the cross product was coming to understand it as a weird but practical tool rather than representing some fundamentally beautiful geometrical concept. It might be one of the most well known weird mathematical tools. The cross product of two vectors has no real meaning to me intrinsically (I don't think it has or deserves a platonic identity) but it's useful for talking about orientations/picking right-angles/planarity.
I can imagine at some point educators might become convinced about the utility of GA and things flipping, but it feels far from that tipping point still.
From your description, you have not experienced [what I consider] a breakthrough with the cross product. In 3D there is a one-to-one mapping between vectors and oriented areas (feel free to think of it as a plane along with scalar denoting an area of the plane). When I learned about vector calculus, this one-to-one mapping was used liberally and implicitly, even though the vector a×b behaves differently from the vector a and the vector b.
Even if you take the magnitude of a×b to get a scalar |a×b|, that scalar does not behave the same as e.g. the scalar a ⋅ b. The SI system cannot distinguish between a scalar torque Newton-meters and scalar work Newton-meters, though they are different.
Geometric Algebra makes the difference explicit. For me, it's not about giving a meaning to a×b after the fact, but rather to ensure it has meaning by construction.
I think it's valid to assume the operation a×b and then figure out what it means (or doesn't mean, or call it a weird tool). But Geometric Algebra gives one way to think about it more clearly. It's not about fundamental beauty for me. It's about having a consistent and clear system to operate within.
The cross product is just an infinitesimal rotation: if you rotate b around the axis a, then the derivative of the new vector is a×b. So the cross product is the product in the Lie algebra of rotations (that's why you have the Jacobi identity). I find this gives much more insight than labeling it a bivector.
Edit: this view also generalizes. You can ask what other representations of the rotation Lie algebra exist and find e.g. su(2), which then leads you to spin bundles.
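That claim is easy to check numerically. A pure-Python sketch (Rodrigues' rotation formula; names are illustrative): rotate b about the axis a and take a finite-difference derivative at t = 0.

```python
# Numerical check: the velocity of b rotating about the axis a is a x b.
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def rotate(axis, angle, v):
    """Rodrigues' rotation formula: rotate v about a unit axis by angle."""
    c, s = math.cos(angle), math.sin(angle)
    kv = cross(axis, v)
    kdv = sum(axis[i] * v[i] for i in range(3))
    return tuple(v[i]*c + kv[i]*s + axis[i]*kdv*(1 - c) for i in range(3))

a = (0.0, 0.0, 2.0)                    # axis; its magnitude is the angular speed
b = (1.0, 1.0, 0.0)
mag = math.sqrt(sum(x*x for x in a))
unit = tuple(x / mag for x in a)

# Central finite difference of t -> R(t*|a|) b at t = 0.
h = 1e-6
plus = rotate(unit, mag*h, b)
minus = rotate(unit, -mag*h, b)
deriv = tuple((p - m) / (2*h) for p, m in zip(plus, minus))

print(deriv, cross(a, b))   # approximately equal
```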
This is not very clear to me. Can you be more precise?
If we're talking about the Lie algebra of space rotations, then the elements of that are infinitesimal rotations. Those elements form a vector space, and there's also the Lie bracket, which is the cross product as you say.
I don't see the sense in saying "rotate b about a" when both a and b are to represent infinitesimal rotations. If we're taking b to be a position in space, and a to be an infinitesimal rotation, then your explanation makes sense (though in a backwards way from how I think of it. a x b is the vector you rotate a about in order to get the derivative of the tip to be b). How do you reconcile that with the Lie Algebra?
More importantly, what if your vectors don't model rotations to begin with? Then you still are left with giving a meaning to the cross product. How can you clarify anything by taking your vectors to be infinitesimal rotations, whose cross product gives a third infinitesimal rotation, which carries information about the non-commutativity of the [non-infinitesimal] rotations?
It is one thing that the cross product does, but I don't see how it explains, for example, what the cross product has to do with calculating volumes of parallelepipeds, or torques.
Geometric Algebra is certainly not the only theory to make this distinction. Most treatments of tensor algebra will cover Hodge duality and emphasize the importance of ensuring all quantities have well-behaved "types". Category theory helps to organize everything a bit more neatly, too.
I think there are two main reasons why we don't see more of the applications of Geometric Algebra.
One is that most people who would be using it are already satisfied with Differential Geometry and the coordinate-free formulations of physical laws we have there already. Having done some work in computational GR, I haven't seen anyone mention it and none of the texts I have read do. I think for people advanced enough in their field they are already past any pain points that this solves, so they don't learn it and don't teach it to their students.
Two I think that the people who this is most useful for, students just starting out in physics, aren't really in a position to go out and learn a new representation for doing their problems in. Especially when they are struggling enough with the basic concepts and their instructor is highly unlikely to be familiar with it.
Because of those two things and the cycles they mutually perpetuate, without a huge PR campaign or some well known teacher using it at a big name school for an introductory class I don't think we will ever see this become popular
While not specifically related to Geometric Algebra, much of even simple vector algebra itself is usually taught and presented in a confusing manner, is what I think. Elementary questions like whether a vector has a start and end point, has it got an orientation along its own axis, is it possible to determine if two given vectors are in the same frame of reference, etc are simply not obvious from most treatments of vector algebra I have seen.
I do see a pressing need for a full treatment of the topic without notational or conceptual ambiguities. At least mathematics owes the other sciences that much, is what I feel.
I’m someone who never really internalised nor had use for any arithmetic, let alone mathematics, at all in life. And yet I see mathematics like this and I’m blown away at the apparent beauty of it.
I look at equations I just _want_ to understand and navigate in my mind. But without knowing the path to get there I’m stuck at zero.
Take the Riemann hypothesis for example. I wouldn’t mind understanding all the notation with examples and exercises for each part of the notation so I could bring it back together myself.
That has about 42 pages of prose describing the problem in layman terms. I'd think someone who tackled high school math should be able to follow it. Dr. Devlin also has a good Coursera course on an introduction to mathematics.
Honestly your best bet would be to just go to individual university courses if you don't want a full degree. Alternatively just follow along public courses. A lot of the "nonsense scribbles" often only make sense if there is an actual human talking about them.
Riemann Sums != Riemann Hypothesis. That said, there is a decent book for the layman on RH: "Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics"