> For some reason I can't quite put my finger on, some experienced game programmers are reluctant to use atan2() and prefer the dot product.
It's because atan2, along with all the other inverse trigonometric functions, is incredibly slow. On a GPU, sin and cos may be a couple of cycles, but asin/acos/atan/atan2 may be upwards of 30 or 40.
Oftentimes, angles are best avoided for this reason. Normalized vectors end up faster and simpler in many cases.
Relevant anecdote: I recently wrote some code to compute normals in 2D and thought I would be clever by encoding the normal as an angle to save space. It turned out that the atan2 call at the end was so slow that it outweighed the cost of everything else I was doing several times over. Lesson learned…
Yeah, there are multiple reasons. Perf is important: inverse trig functions are generally much slower on both the CPU and the GPU [1].
But also - most of the time gamedev happens in 3D space, not in 2D space (even if it gets rendered down to 2D). So it's more natural to do a dot product of two 3D vectors than to mess around with trig functions that work on scalar angles. Same with rotations: it's easier and better to use quaternions in 3D space than to rotate along each axis separately.
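A minimal example of the difference, for a view-cone test (a sketch; the Vec3 type and function names are just illustrative):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    float length(Vec3 v)      { return std::sqrt(dot(v, v)); }

    // Angle-based version: needs a slow acos just to compare angles.
    bool inConeAngles(Vec3 forward, Vec3 toTarget, float halfAngle) {
        float ang = std::acos(dot(forward, toTarget) /
                              (length(forward) * length(toTarget)));
        return ang < halfAngle;
    }

    // Vector-based version: compare cosines instead of angles, no inverse trig.
    bool inConeDot(Vec3 forward, Vec3 toTarget, float cosHalfAngle) {
        return dot(forward, toTarget) >
               cosHalfAngle * length(forward) * length(toTarget);
    }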
I've more often encountered the need to integrate a quaternion, e.g. from inertial measurements or a rigid body physics model, but this is also a textbook lookup away.
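For reference, the textbook first-order step looks roughly like this (a sketch; types and names are illustrative):

    #include <cmath>

    struct Quat { float w, x, y, z; };

    // Hamilton product of two quaternions.
    Quat mul(Quat a, Quat b) {
        return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
                 a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
                 a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
                 a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
    }

    // Integrate a unit orientation quaternion from an angular velocity
    // (rad/s, world frame) over a small timestep: dq/dt = 0.5 * omega * q.
    Quat integrate(Quat q, float wx, float wy, float wz, float dt) {
        Quat omega = { 0.0f, wx, wy, wz };   // angular velocity as a pure quaternion
        Quat dq = mul(omega, q);
        q.w += 0.5f * dt * dq.w;
        q.x += 0.5f * dt * dq.x;
        q.y += 0.5f * dt * dq.y;
        q.z += 0.5f * dt * dq.z;
        float n = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
        q.w /= n; q.x /= n; q.y /= n; q.z /= n;  // renormalize the drift away
        return q;
    }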
> 1/x, sin(x), cos(x), log2(x), exp2(x), 1/sqrt(x) - 0 or close to 0, as long as they are limited to 1/9 of all total ops (can go up to 1/5 for Maxwell).
This is what surprised me to learn recently, that some transcendentals are free, as long as you don't use too many of them.
I also think the dot product and other vector operations are more naturally the thing you're actually trying to do in the end. We generally start with a vector (say a movement command to take our unit to a certain location) and want to end up with a vector (acceleration to apply), so going through angles as the intermediate step makes little sense logically. This is true even in 2D and without speed constraints. Moreover, angles often leave you with a tricky case to handle where you are going "over 360 degrees" and there is an unnatural discontinuity.
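Concretely, with angles you end up writing fix-up code like this around the seam (a sketch in radians), whereas vectors never hit the discontinuity at all:

    #include <cmath>

    // Shortest signed difference between two headings, in radians.
    float angleDelta(float a, float b) {
        const float PI = 3.14159265358979f;
        float d = std::fmod(b - a, 2.0f * PI);  // d is in (-2*PI, 2*PI)
        if (d >  PI) d -= 2.0f * PI;            // fold across the seam
        if (d < -PI) d += 2.0f * PI;
        return d;                               // now in [-PI, PI]
    }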
I'm halfway through writing a blog post on how transcendental functions are computed in glibc, and more importantly how to extend the methods to compute things like
sin(x) / x
(1 - cos(x)) / x^2
directly without incurring the numerical problems arising from the division.
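The basic shape of the trick for sin(x) / x looks like this (a rough sketch; the threshold and number of series terms here are just illustrative):

    #include <cmath>

    // sinc(x) = sin(x)/x, without the division misbehaving near zero.
    double sinc(double x) {
        if (std::fabs(x) < 1e-4) {
            // Taylor series of sinc: 1 - x^2/6 + x^4/120 - ...
            double x2 = x * x;
            return 1.0 - x2 / 6.0 + x2 * x2 / 120.0;
        }
        return std::sin(x) / x;  // safe away from zero
    }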
One thing to understand is that the stdlib math functions are accurate to machine precision, i.e. the closest representable value to the actual answer.
For 32-bit floats, this is achievable using a 5th- or 6th-order polynomial approximation. However, there was one surprise I was previously unaware of.
Obtaining the correct precision becomes difficult when the result is close to zero, since any absolute error is divided by the magnitude of the result. Silly example: if
sin(0.000001) ~= 0.1
then the absolute error is small (~0.1), but the result is still off by five orders of magnitude.
Sometimes this is not a problem. When x is small, sin(x) can be computed by simply returning x. This rule is valid until x^3/6 > precision ~= 1e-38, i.e. x ~= 4e-13. So exact precision is obtainable by simply returning x if x < 1e-13.
But now consider:
x = pi/2; cos(x) = 0
The logical way to compute this is as
-sin(x - pi/2)
But this creates another problem, since x - pi/2 ~= 0, a catastrophic cancellation occurs. Now recall that glibc is required to be accurate to machine precision, which goes down to 1e-38. But x is order 1 and only represents 7 digits, down to 1e-6. So glibc is forced to scour lookup tables to recover the missing digits of pi.
This makes computing cos(x > pi/4) about twice as expensive as cos(x < pi/4), but there is a certain irony since it is very unlikely x is known down to 7+ digits, so those CPU cycles are wasted. How many times have you seen x = sqrt(a*a + b*b + c*c); ?
On my 4 year old laptop:
time ./a.out 0.5 4.12s
time ./a.out 1.5 7.31s
On my raspberry pi:
time ./a.out 0.5 1m15.829s
time ./a.out 1.5 1m47.522s
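For anyone wondering how the pi/2 cancellation above is usually tackled: the first line of defense is a Cody-Waite-style hi/lo split of the constant, before falling back to big lookup tables. A sketch in doubles, with fdlibm-style constants (treat them as illustrative):

    // pi/2 split across two doubles: PIO2_HI + PIO2_LO ~= pi/2 to ~34 digits.
    static const double PIO2_HI = 1.57079632679489655800e+00;
    static const double PIO2_LO = 6.12323399573676603587e-17;

    double reduced(double x) {
        // For x near pi/2, (x - PIO2_HI) is exact; subtracting PIO2_LO then
        // restores the digits of pi that don't fit in a single double.
        return (x - PIO2_HI) - PIO2_LO;
    }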
> Sometimes this is not a problem. When x is small, sin(x) can be computed by simply returning x. This rule is valid until x^3/6 > precision ~= 1e-38, i.e. x ~= 4e-13. So exact precision is obtainable by simply returning x if x < 1e-13.
"This rule" seems a lot like the "skinny triangle" rule. I recently ran across it on Wikipedia after going through astrophysics articles. Specifically, to my understanding, the parsec could be calculated without using a trig function thanks to the skinny triangle rule. (To a certain level of error, of course. And, the parsec is now a defined value so the original trig-derived definition is invalid.)
Do you have advice for which fast, good-enough approximations one can use instead in the situations where the precision is not needed?
In many use-cases we would pre-multiply the value we throw into sin and cos with two times Pi anyway, so couldn't one skip the circle constant altogether and write a function like sin_one(t) and cos_one(t), where each call "implicitly multiplies" t by two times Pi? (perhaps, if constructed well, the function could even be more precise, since it avoids the rounding errors, however small, of multiplying by the pi constant)
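A sketch of what that could look like (names and structure illustrative; a full implementation would also exploit quadrant symmetry to dodge the 2*pi constant entirely, this version only makes the wrap-around exact):

    #include <cmath>

    // sin_one(t) ~= sin(2*pi*t), with range reduction done on t itself.
    double sin_one(double t) {
        t -= std::floor(t);                      // exact reduction to [0, 1)
        return std::sin(6.283185307179586 * t);  // constant introduced only now
    }

The point is that for large t, t -= floor(t) is exact, whereas reducing 2*pi*t after the multiply loses the argument against a rounded version of pi.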
I don't have advice on the specific `sin(x) / x`/`cos(x) / x^2`, but the Sleef vectorization library [0,1] supports operations with varying precision. (0.506, 1.0, and 3.5 ULP)
If you're working in a case where you can afford slightly lower accuracy, the 3.5 ULP method can be a very significant speedup, especially since you're vectorizing at the same time. I got a speedup of dozens of times when testing just the 1.0 ULP bound while performing sin/cos on a large vector. (I don't remember the precise number and should just re-run the tests.)
To generically dispatch this library, in case there's any interest, I wrapped it in some template metaprogramming in [2], so that the widest vectorization available is selected at compile-time when selecting a given operation.
In most cases a Chebyshev approximation will just work. There is advice out there which says to use Clenshaw's summation formula to sum the Chebyshev polynomials; however, I've found expanding out the polynomial coefficients to get something like a5 x^5 + a4 x^4 + a3 x^3 to be more accurate in practice.
Scaling the polynomial, as you suggest, would simply require re-evaluating the coefficients. For example, a5 (2*pi*x)^5 would become 32 a5 pi^5 x^5.
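For concreteness, the expanded-and-Horner-evaluated form looks like this (a sketch using plain Taylor coefficients as stand-ins; a real Chebyshev/minimax fit would use slightly different values):

    // Odd polynomial for sin on roughly [-pi/2, pi/2], Horner evaluation:
    // x + a3*x^3 + a5*x^5 + a7*x^7.
    float sin_poly(float x) {
        const float a3 = -1.0f / 6.0f;
        const float a5 =  1.0f / 120.0f;
        const float a7 = -1.0f / 5040.0f;
        float x2 = x * x;
        return x * (1.0f + x2 * (a3 + x2 * (a5 + x2 * a7)));
    }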
In the cos case where regions of interest approach zero, divide by another polynomial with the same zeros to create limits. cos(x)/(pi^2/4 - x^2) -> 1/pi as x -> pi/2. Then approximate that function, and evaluate approx(x) * (pi^2/4 - x^2), so you are multiplying the approximated value by something close to zero. This way you get precision in a way similar to sin(x) = x when x small.
How old is this article? I ask because the examples use Allegro and DJGPP, and reference the 1991 games Micro Machines and F-Zero. And also the giveaway sentence "Floats are so slooooooooow! Why don't you use fixed-point numbers?"
I learned games programming as a teenager in that era, and I present my fixed-point 3D code from 1996, which does sin and cos through the medium of 512-entry 32-bit integer lookup tables:
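The heart of that approach looks something like this (a reconstruction of the technique, not the original 1996 listing):

    #include <cmath>
    #include <cstdint>

    // 512-entry sine table in 16.16 fixed point; angles are 0..511
    // "binary degrees" so wrapping is a simple mask.
    static int32_t sin_table[512];

    void init_sin_table() {
        for (int i = 0; i < 512; ++i)
            sin_table[i] = (int32_t)(std::sin(i * 2.0 * 3.14159265358979 / 512.0)
                                     * 65536.0);  // scale to 16.16
    }

    int32_t fx_sin(int angle) { return sin_table[angle & 511]; }
    int32_t fx_cos(int angle) { return sin_table[(angle + 128) & 511]; } // quarter-turn shift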
Allegro is also the library currently used by Factorio, although the dev team is porting it to a custom game engine to try to get more performance.
The follow-on article [0] references this article as appearing in "issue 5" of Pixelate, so it seems you found the original source. Issue 4 [1] covered June 2001, probably published in July, and so it seems that issue 5 was probably published in August 2001.
Original reply:
And from the presence (even in that 2002 version) of the 'allegro.cc' domain, we can narrow the range of dates from the other end, too: The .cc domain was introduced in October 1997, and Allegro.cc claims visitors since January 1999.
The article was specifically written for Pixelate (issue 5), so it was around that time. I only started hosting the articles myself after Pixelate became defunct.
Ah, the race-car example is bringing back memories!
In the early days of Macromedia Flash, I wanted to build a GTA I clone. I didn't know about trigonometric functions yet, so I had a line of height 1, width 0 that I rotated around one of its ends at a fixed angular velocity while the user pressed the left and right arrow keys.
Then I read the height and width of the rotated object and added it to the x and y coordinates of the car, respectively.
For some reason, trigonometry is taught pretty late in Swiss schools (often only during the Matura, when students are 16-20 years old). I never understood this, since understanding trig can be really useful in a lot of everyday situations.
Not to mention necessary to solve a lot of intermediate steps in Further (intermediate, advanced and research) Mathematics.
Maybe it's possible to learn about sines and cosines from Euler's identity up and forget about triangles altogether. This would enable you to do calculus, statistics, etc. But there's some value to teaching the theory of triangles as a beginner's introduction to pure mathematics (and hence the beauty in mathematics).
---
What follows has to do with triangles, but is mostly offtopic.
One of the clearest memories of my childhood, somewhere between ages 8 and 10, is a German-made (but dubbed into my language) TV educational show that had men in togas walking around Classical Greek ruins and dramatizing some scenes -- Diogenes and Alexander and so on. But what has stuck in my mind and still burns red hot was Pythagoras and his mates, crouched and drawing lines in the sand. After Pythagoras explains his hypotenuse theorem, someone interjects:
"But this holds for all triangles?"
"All triangles. All triangles that have ever been drawn -- and all triangles that will never be drawn."
A sufficiently powerful brain scanning device could probably find this line somewhere inside my skull. It got me in big trouble: I'm not particularly intelligent, let alone disciplined, but I'm slogging through a Masters in mathematics, maybe dreaming about a PhD in physics because... I don't know. It's like when ghetto kids see some violent scene right in front of them and 20 years later they're criminals. It sticks with you.
You rely on the results of trigonometry in the thousands of engineered objects you encounter every day. Almost anything having to do with physics will have a fair amount of trig underlying it, and a lot of calculus relies on trig. So if you want to understand a lot of what's going on, you'll need trig.
It's like asking "what everyday situations is addition useful in".
I think that’s like saying one “uses” advanced physics every day because they use a cell phone. The OP question was probably asking more about what practical uses are there for normal people to actually use trig, the way you use arithmetic to calculate a tip at a restaurant.
To that I’d say: finding the height of something only knowing the length of the base and an angle. Or triangulating your position, surveying your property to make a garden, etc.
This seems like an exaggeration, at least from the point of view of calculus in the classroom, where the prevalence of trigonometric functions reflects only their convenience as examples. Did I misunderstand? Perhaps more apposite is differential equations, where trigonometric functions (or exponentials more generally) are ubiquitous.
If you want to work in Engineering knowledge of trig is a must have.
For instance, calculating the deflection of a structural beam as you load it with weight (a discipline known as statics) depends heavily on trig. Anything involving rotation and angular momentum (such as gears and axles) requires knowledge of trig.
Other than that, triangulation - computing the distance between points - is a useful practical application.
I was trying to do some calculus the other day, and ended up reading about the history of logarithms. As you may know, logarithms were valuable in the pre-mechanical era because they allowed people to reduce multiplication to addition and a couple of table lookups, which is much faster to do - a huge boon if you're doing navigational calculations on a rapidly-moving ship, or calculating vast astronomical ephemerides.
What I hadn't realised is that before logarithms were developed, people used to do it with sines and cosines, exploiting some identities from spherical geometry, and the tables they already had for navigation:
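For the curious, the workhorse identity (prosthaphaeresis) is the product-to-sum formula

    \cos A \,\cos B = \tfrac{1}{2}\bigl[\cos(A-B) + \cos(A+B)\bigr]

so a single multiplication becomes: two table lookups to find A and B, an addition and a subtraction of angles, two more lookups, one addition, and a halving.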
I WISH this is how we learned math in school. Knowing how stuff was used / why it was invented would have gone a very long way in making everything much more grounded and interesting.
I once got a two week survey course in the history of mathematics. I can confirm it was riveting the whole way through, seeing the evolution of the toolset laid out.
Hey, nice to see my venerable tutorial get some renewed attention. I got a lot of positive feedback on it over the years and that's why I kept it around, even though it's a bit outdated.
I'd like to do an update or write a follow up article. What would you be interested in seeing? More examples? Different concepts? Should I port the examples to a different framework?
Hmm it's been my opinion that most of the time that trigonometry is used, one should work with vectors instead. This goes doubly for inverse trig. If you have trig then inverse trig, you can almost always turn it into a geometrically meaningful vector operation and avoid the difficulty of handling the ambiguous range of inverse trig functions properly. Remember your sohcahtoa for doing this.
I'd love to find a math cheat sheet poster with comparative mnemonics for the office wall. There are a ton of them. I imagine they vary depending on the pedigree of the teacher.
The example code uses Allegro (http://liballeg.org/) which was one of my _favorite_ libraries when I was learning to make games as a teenager in the early 2000s.
I had come from a QBasic background and was trying to learn C++. QBasic was wonderful because it put everything a budding game developer could want at their fingertips—full screen graphics and text modes, drawing primitives, sprites, rudimentary sound and music using the PC speaker, etc.
In 1999 I got my first taste of C++ with MSVC++ 6.0, but soon lost interest. I had to limit myself to text-based command-line programs, or attempt to learn the Windows APIs, which seemed way over my head at that point. What I really wanted was C++ but with QBasic-like APIs.
Probably around 2002-2003 I discovered Dev-C++ and the Allegro library. It was exactly what I had been looking for and more! It reinvigorated my interest in programming and helped me learn tricky concepts like pointers and memory management. I probably wouldn't be an engineer today if it weren't for Allegro.
Sin and cos were the first mathematical concepts I truly understood through programming rather than through high-school mathematics. The traditional method of teaching them through triangles feels very unintuitive.
Agreed re: triangle method. At the very least, the triangle should be explored in the context of a unit circle. Otherwise, most students struggle with negative sin / cos / tan values ("how do the quadrants work again?"), the connection to Cartesian coordinates, etc. It's a shame that we present so much of mathematics as arcane magic requiring incomprehensible mnemonics to understand - especially in trigonometry, where there are much more intuitive / visual representations of the basic ideas.
(Source: I volunteer with the Toronto Public Library to help high school students with homework, mainly math.)
When using sin and cos in graphical programming, I wish code completion for them came up in the context of a triangle and unit circle so I wouldn’t keep referring back to Wikipedia.
If, like me, you have the graph of the functions burned into your memory, then the anti-symmetry of sin makes its geometric interpretation easy to remember.
Sin and Cos correspond to the rejection and projection of two unit vectors. When they rotate past each other, the projection still has the same direction, but the rejection changes sign, so it must correspond to the anti-symmetric sin, not the symmetric cos.
Not who you responded to, but as he stated, the best method is the unit circle.
Take a circle of radius 1 around the origin. Draw a line with angle x through the origin. The point where this line intersects the circle has vertical position sin x and horizontal position cos x. Moreover, if you measure the angle in radians, x is how much of the unit circle your angle covers.
This also immediately makes the identity (sin x)^2 + (cos x)^2 = 1 obvious.
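If you want to see it numerically rather than take it on faith, a throwaway loop does (a sketch):

    #include <cmath>
    #include <cstdio>

    int main() {
        for (int i = 0; i < 8; ++i) {
            double x = i * 2.0 * 3.14159265358979 / 8.0;  // angle in radians
            // (cos x, sin x) walks counter-clockwise around the unit circle,
            // and cos^2 + sin^2 is always the squared radius, 1.
            std::printf("(%+.3f, %+.3f)  sum of squares = %.6f\n",
                        std::cos(x), std::sin(x),
                        std::cos(x)*std::cos(x) + std::sin(x)*std::sin(x));
        }
    }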
For me it was associating (sin(r), cos(r)) with a direction (on the unit circle), instead of with a triangle. After that, the relation with a triangle where the longest side has unit length is obvious.
I was taught the triangle method in high school and never got a good grasp of it. In college, we used the unit circle method. It is so much more intuitive. Memorizing the unit circle and all the values of sin and cos is so much easier because you just visualize the circle.
Yeah. I remember trying out sin and cos on a calculator with random numbers, getting back seemingly random results and wondering what exactly they meant. Then, some years later, studying them on computer by running them in a loop—and realizing that they let you draw circles!
Hmm, I wouldn't be so sure... Because it is by the way of geometry that people normally find explanations of mathematical concepts that are the most intuitive to them. (To be fair, some prefer dealing with symbolic algebra instead.)
I remember my amazement watching our CS teacher in highschool expand the sine function into its Taylor series.
I just couldn't wrap my head around the idea that you could simply calculate this using basic arithmetic operations.
One thing I learned from this experience was that these functions are expensive to calculate and it's actually pretty easy to write your own, faster but less precise implementation.
CORDIC is cheap and fundamentally simpler than Taylor series since it works on the essential principle of unit-circle rotations that the trigonometric functions derive from. It isn't just a close-enough approximation.
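A rough sketch of rotation-mode CORDIC (the table is built with library calls here purely for illustration; real implementations bake the constants in, and inputs outside roughly +/-1.74 rad need range reduction first):

    #include <cmath>

    // Drive the residual angle to zero with shift-and-add micro-rotations.
    // 24 iterations gives roughly 24 bits of result.
    void cordic_sincos(double theta, double* s, double* c) {
        static double atans[24];
        static double gain = 1.0;
        static bool init = false;
        if (!init) {
            for (int i = 0; i < 24; ++i) {
                atans[i] = std::atan(std::ldexp(1.0, -i));  // atan(2^-i)
                gain *= std::cos(atans[i]);                 // accumulated scale
            }
            init = true;
        }
        double x = 1.0, y = 0.0;  // start on the positive x-axis
        for (int i = 0; i < 24; ++i) {
            double d = (theta >= 0) ? 1.0 : -1.0;
            double xn = x - d * std::ldexp(y, -i);  // in hardware these are
            double yn = y + d * std::ldexp(x, -i);  // just shifts and adds
            x = xn; y = yn;
            theta -= d * atans[i];
        }
        *c = x * gain;  // undo the accumulated micro-rotation gain
        *s = y * gain;
    }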
> Floats are so slooooooooow! Why don't you use fixed-point numbers? On newer computers, there is not much speed difference, but on older computers the speed gain of using fixed numbers is significant.
This is now even more true than when this article was written, enough that I've been surprised lately and feel like I need to unlearn things I thought I knew about floats and transcendentals. They only cost a handful of clocks now in the worst case, with dependent instructions before and after. Depending on what else is going on, I think you can sometimes get sin & cos for the same cost as 1 add instruction; as long as you have a lot of other independent math nearby and/or memory access. Sin & cos & sqrt are crazy fast on GPUs, when using ShaderToy, for example.
Logarithm too. I had written a fractal program in python, and one optimization for adding up a lot of logarithms was to multiply several values together first, then take the log of the batch. When I converted it to a shadertoy, I found that I could strip out the crufty batching code, converting hundreds of multiplies into logarithms with no noticeable change in performance.
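The batching idea, for anyone curious, is just sum(log v_i) = log(prod v_i) with a periodic flush (a sketch):

    #include <cmath>

    // Sum of logs via batched multiplies: flush the running product to a
    // single log() call before it overflows or underflows.
    double sum_of_logs(const double* v, int n) {
        double total = 0.0, batch = 1.0;
        for (int i = 0; i < n; ++i) {
            batch *= v[i];
            if (batch > 1e300 || batch < 1e-300) {
                total += std::log(batch);
                batch = 1.0;
            }
        }
        return total + std::log(batch);
    }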
I recently wrote a routine to do fast IKs[1] for a new robot arm and let me say, I am filled with profound love for atan2. It saved me from having to explicitly deal with so many corner cases.
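For a 2-link planar arm, the closed-form solution leans on atan2 twice; a sketch (names and conventions are illustrative):

    #include <cmath>

    // Closed-form IK for a 2-link planar arm: link lengths l1, l2,
    // target (x, y) in the arm's plane, elbow-up/down configuration.
    // Returns false if the target is out of reach.
    bool ik2link(double l1, double l2, double x, double y, bool elbowUp,
                 double* q1, double* q2) {
        double c2 = (x*x + y*y - l1*l1 - l2*l2) / (2.0 * l1 * l2);
        if (c2 < -1.0 || c2 > 1.0) return false;  // unreachable target
        double s2 = std::sqrt(1.0 - c2 * c2);
        if (!elbowUp) s2 = -s2;
        *q2 = std::atan2(s2, c2);                 // elbow angle, correct quadrant
        *q1 = std::atan2(y, x) - std::atan2(l2 * s2, l1 + l2 * c2);
        return true;
    }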
I remember the exact time and place I first understood sin, cos, tan and SOH CAH TOA after months of being confused. The guy next to me at school, Shaun Marshall, just explained it to me in a sentence and bam, I got it, and I've been using it in various forms of programming for about 20 years since then. Thanks Shaun, wherever you are now.
Slightly unrelated to this: I've always wanted to play with simple demoscene graphics that make use of sine waves (think scrollers) - what would be the simplest programming language to give this a shot?
I'd also recommend processing. But if you wanted to try your hand at GLSL you could also play around with shadertoy [1], though the learning curve is likely to be much steeper. It has genuine demoscene heritage, coming from a talented demoscener named 'iq' [2].
Processing.org is pretty much the go-to visual programming environment, I think, but there's lots; openframeworks.cc is another popular one that I've used and liked.
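If you'd rather stay in plain C++ first, the core idea is tiny; here's a throwaway terminal sketch of a sine wave (amplitude, frequency, and sizes are all arbitrary):

    #include <cmath>
    #include <cstdio>

    int main() {
        for (int row = 0; row < 40; ++row) {
            // Column oscillates around 30 with amplitude 25.
            int col = (int)(30.0 + 25.0 * std::sin(row * 0.3));
            std::printf("%*c\n", col + 1, '*');  // pad '*' out to that column
        }
    }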