Hacker News | fbanon's comments

Still alive and kicking. You just need to know where to look.


It is painful to even just skim this thread, seeing basically every comment brimming with frustration from being stuck in a paradigm that's straightforward to leave behind. So much wasted human potential.

It's a solved problem - torrent your damn entertainment. Movies are just basic files sitting in a directory. Files that can be rewatched whenever you'd like. If you are traveling, copy to your laptop. If you move and haven't quite set up your entertainment center or Internet connection, watch it on a computer. If your friend is interested in something, copy it to their USB drive. No fucking nonsense of some third party capriciously disrupting your life precisely when you're trying to relax.

Any business trying to sell me some productized solution needs to beat torrenting for ease of use. So far none of them have even attempted, because they all end up warping the user experience to appease Hollywood's delusion of control. Just say no.


TPB has added really annoying “on-click” BS lately, super shady. Any alternatives you might have heard of? Not for me, for a friend of course…


YIFY provides movies of decent quality at reasonable file sizes.

For movies that YIFY doesn't have, 1337x is a good site.


qbittorrent + jackett. Click on however many public trackers you want to search in Jackett, then point qBittorrent's search plugin at your Jackett instance.


Tell your friend about 1337x.to


QBitTorrent has a search feature built in.


I just use NoScript.


Yeah, but that's a real problem for those of us who don't watch a lot of movies. Many people aren't capable of figuring out "where to look" at all anyway. I am, to some degree, but it's just not worth spending that much of my time to hunt them down on the rare occasions I really want to watch something.

Also, I'll just note that we never got the 21st century we were promised, which was this - any movie ever made: https://www.youtube.com/watch?v=xAxtxPAUcwQ The fundamental problem is that the people who own movie IP rights are truly evil.


>the people who own movie IP rights are truly evil.

That is wildly overstated. The worst they can do to you is not let you watch their entertaining movie. Possibly even after you've paid for it (the subject of the OP) - and in that case the culprit isn't even the IP owner, it's the distributor! And honestly it sounds like the problem with digital ownership is simple fraud that is a) covered by existing law and b) too expensive for anyone to litigate. Maybe a class action could do it.

Here's the really interesting part - you're railing against artificial scarcity. Someone has a good that they could give away, and they aren't, and you're calling them evil for that. But I ask you, in all honesty: how else do you make money from movies? If you can't make money from it, how will you convince investors to fund your next movie? (Now substitute "album", "book", or "software" for "movie" and ask the same question.)

That's not to say that IP owners can't be "evil". George Lucas believed it was his right to keep changing Star Wars over time, and it's impossible to find a legit copy of Star Wars that is the original theatrical release. That's some 1984-level memory hole bullshit and although the stakes are low, it's evil. Disney is arguably quite evil for a variety of reasons, e.g. its unholy influence over Congress and its unhealthy consolidation of huge chunks of the American movie market. But neither of them is evil for using artificial scarcity to profit from their work, because that's the only way to profit from data goods.

(Professional open source tries to square the circle by giving away the data goods but charging for (actually scarce) knowledge. It's a good model but cannot apply to entertainment goods, since viewers don't need scarce knowledge to enjoy a movie.)


The stakes are not low. Denying access to, or altering, content in this way is equivalent to vandalising the cultural commons (and yes, even if it's not in the public domain - it's still cultural commons). People should be more upset about this.


On the scale of evil things in the world, modifying ~10 minutes of a popular fantasy movie does not rank highly. The implication is scary, but the act itself is profoundly unimportant.


Is it relatively as important as the looming threat of climate change? No.

Is it, in absolute terms, still a very important concern worth addressing at some point? Sure.

We should start addressing it now. We can work on more than one problem as a species, and moreover, we should. The big problems never stop coming, so focusing 100% of our energies on that means that the smaller problems won’t get solved.


PS6 will be written in Rust.


And PS7 will be fought with sticks and stones


I have zero idea why my brain also went to this quote.


So it's never coming out?


Case in point: don't choose a hobby that can be taken away from you on the whims of some dickhead Google exec.


>But math never decreed that sine and cosine have to take radian arguments!

Ummm, actually it did. The Taylor series of sine and cosine are simplest when they take radians. Euler's formula (e^(ix) = cos x + i sin x) is simplest when working in radians.

Of course you can work in other units, but you'll need to insert the appropriate scaling factors all over the place.

"Turns" don't generalize to higher dimensions either. With radians you can calculate arc length on a circle by multiplying by the radius. This extends naturally to higher dimensions: a solid angle measured in steradians lets you calculate surface area on a sphere by multiplying by the radius squared. How do you do the same with "turns" on a sphere? You can't in any meaningful way.


>> Ummm, actually it did. The Taylor series of sine and cosine are simplest when they take radians. Euler's formula (e^(ix) = cos x + i sin x) is simplest when working in radians.

That's nice, but as the article points out most implementations of trig functions on computers don't use things like Taylor series.

Another terrific use of turns is in calculating angle differences: take the difference and keep just the fractional part of the result. No bother with wraparound at some arbitrary 2*pi value; since it wraps at integer values, we simply discard the integer part. This can even come for free when using fixed-point math.
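A minimal sketch of that wrap-free angle difference in Python (the function name and the output range are my choices):

```python
def turn_diff(a, b):
    """Signed angular difference a - b in turns, wrapped to [-0.5, 0.5)."""
    d = (a - b) % 1.0              # keep only the fractional part
    return d - 1.0 if d >= 0.5 else d
```

For example, `turn_diff(0.9, 0.1)` comes out close to -0.2, the short way around the circle, with no explicit comparison against a full turn.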


That's an obfuscation in the blog post. If you read further down in the code that is mentioned, the actual computation of sin is done by a polynomial expansion in x (radians), not y (turns). The purpose of y is mainly to handle the case where x is more than pi and, if so, to find the corresponding angle in [0, pi/4).


You can if you want make a polynomial in turns. The CPU isn’t going to care one way or the other.

Implementations that are accurate in terms of turns, even for values close to half a turn, can be useful for avoiding numerical issues that sometimes pop up because π is not exactly representable as a floating-point number. These functions usually have names like sinpi, cospi, etc. It would be nice if they were provided more often in standard libraries.
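For illustration, a sinpi sketch in Python (not a standard-library function; the point is that the range reduction happens on the half-turn argument, where it is exact, and π only enters at the very last step):

```python
import math

def sinpi(x):
    """sin(pi * x), with exact argument reduction on x (x in half-turns)."""
    x = math.fmod(x, 2.0)          # exact for floats: sin(pi*x) has period 2
    if x > 1.0:
        x -= 2.0                   # exact (Sterbenz)
    elif x < -1.0:
        x += 2.0
    if x > 0.5:
        x = 1.0 - x                # sin(pi - t) = sin(t), still exact
    elif x < -0.5:
        x = -1.0 - x
    return math.sin(math.pi * x)   # the pi rounding error enters only here
```

This is why sinpi(1.0) is exactly 0.0 while math.sin(math.pi) is a tiny nonzero number.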


Based on the article, CUDA has a sinpi instruction (or whatever they call them in CUDA-land). Does anyone know -- is sinpi commonly provided in the CPU assembly extension ecosystem (avx & friends)? Light googling showed me some APIs that had implementations, but I didn't dig in enough to see if they are directly implemented in assembly (this seems like the sort of info a wizard here would know about, and probably whether these types of instructions tend to be well-implemented...).


Right, radians are the "natural" units of angle, others generally just make a circle into some integral number of units for convenience, but you always have to go back to radians to actually do calculation.

In the next installment, maybe he'll propose that turns can be limiting because diving up a circle requires the use of fractions, and suggest instead of 1 turn per circle, we make a number that's easily divisible into many integer factors. Maybe 216, or I don't know, 360?


The point of the original post is that depending on your field (e.g. game engines), all the calculations you need may be easier in the unit of convenience (e.g. the sine of a turn is easier to calculate than the sine of a radian), and if that's the case you should stick with the unit of convenience through all the layers and forget about converting to radians in your code.

And using a fraction of a turn is also a very good option, much better than radians in many cases, especially if you choose a power-of-two fraction (e.g. 1/256): in that case all the modular arithmetic needed for angles comes for free as simple integer overflow, and lookup tables become simple array accesses.
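A toy sketch of that "binary angle" idea with 256 units per turn (the names and the table size are arbitrary; in fixed-point hardware the `& 0xFF` masking happens implicitly via overflow):

```python
import math

# 256 angle units per turn; index i corresponds to i/256 of a turn.
LUT = [math.sin(2.0 * math.pi * i / 256) for i in range(256)]

def bsin(a):
    """Sine of a binary angle: table lookup, wraparound is just masking."""
    return LUT[a & 0xFF]

def add_angles(a, b):
    """Angle addition with free modular wraparound."""
    return (a + b) & 0xFF
```

No comparisons against 2*pi anywhere: `add_angles(200, 100)` silently wraps to 44.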


If you are making something like a game engine using computer hardware from the past 30 years, you should avoid angle measures to the extent possible.

It is computationally much cheaper and more robust (and easier to reason about) to use vector algebra throughout. Then you have no transcendental functions, just basic arithmetic and the occasional square root. You need the dot product and the wedge product (or, combined, the geometric product), and derived concepts like vector projection and rejection.

If you need to store a rotation, you can use a unit-magnitude complex number z = x + iy, where x = cos θ, y = sin θ, without ever needing to calculate the quantity θ directly. If you need to compress it down to one parameter for whatever reason, use the stereographic projection s = y / (1 + x) = (1 – x) / y. Reverse that by x = (1 – s²) / (1 + s²), y = 2s / (1 + s²). [If starting from angle measure for whatever reason s = tan ½θ, sometimes called the "half-tangent".]
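A quick numeric check of that projection and its inverse (function names are mine; the projection is undefined at x = -1, a half turn):

```python
import math

def compress(x, y):
    """Stereographic projection s = y / (1 + x) of the unit complex x + iy."""
    return y / (1.0 + x)

def decompress(s):
    """Recover (x, y) on the unit circle from the projection parameter s."""
    d = 1.0 + s * s
    return (1.0 - s * s) / d, 2.0 * s / d

# Round trip at an arbitrary test angle of 1 radian.
x, y = math.cos(1.0), math.sin(1.0)
s = compress(x, y)
x2, y2 = decompress(s)
```

As the parent notes, s is the half-tangent: for this test angle, s agrees with tan(1/2).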

The angle measure is the logarithm of the rotation, θi = log z. In some contexts logarithms can be very convenient, but it’s not the simplest or most fundamental representation.

With units of "radians" angle measure is the logarithm of base exp(i) [related to the natural logarithm], and with units of "turns" it is the logarithm of base 1 (sort of).


I mean that’s all well and good until you have an object in your game and you’re like “I’d like this object to be leaning at 45 degrees, oops I mean 0.70710678118 + i*0.70710678118”.


At a high level you should be expressing something like one of "turn this by the angle between vector (1,0) and vector (1, 1)"; "point this in the direction of vector (1, 1)"; or "turn this by the square root of the rotation i" (i = a quarter turn).

If you use angle measures (of whatever units), when you say "rotate by an eighth of a turn" you are instead going to end up with something internally like: multiply some vector by the matrix

[cos ¼π, –sin ¼π ; sin ¼π, cos ¼π]

which is ultimately the same arithmetic, except you had to compute more intermediate transcendental functions to get there.

If you just have to do this a few times, angle measures (in degrees or whatever) are a convenient human interface because most people are very familiar with it from high school. You can have your code ingest the occasional angle measure and turn it into a vector-relevant internal representation immediately.

P.S. If you write 1/√2 in your code compilers are smart enough to turn that into a floating point number at compile time. :-)


In most game engines, constructing the rotation matrix is trivial. Something like Quaternion.Euler(0, 45, 0). The ultimate position / rotation of any given object in a game is usually a compound transform computed via matrix multiplication anyway, e.g. a model view projection matrix. I'm not sure it's the best way, but that's just how most game engines work.


If your game engine is using quaternions as a canonical internal representation for rotations, it is already following my advice from above.

(Game engine developers are smart people and have lots of practical experience with the benefits of avoiding angle measures, as do developers of computer vision, computer graphics, robotics, physical simulations, aerospace flight control, GIS, etc. etc. tools.)


Yea, I see. I originally interpreted your comment as being about game developers, but you were actually talking about game engine developers. In which case, we agree :)


Yes and no.

The Taylor expansion works out like

   sin θ = θ − θ³/6 + θ⁵/120 − θ⁷/5040 + ⋯
if θ is in radians. This is ideal for small θ, but if you want to cover, say, 0 < θ < 2π you are more likely to use something like

https://en.wikipedia.org/wiki/Chebyshev_polynomials

which are optimized across the range. You could rewrite these just as easily to work in degrees as radians.
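For instance, a rough sketch with numpy's Chebyshev fit (degree and sample count are arbitrary choices, and a least-squares fit is only a stand-in for a true minimax polynomial):

```python
import numpy as np

# Fit a degree-17 Chebyshev polynomial to sin over a full period and
# measure the worst-case error across the whole range.
xs = np.linspace(0.0, 2.0 * np.pi, 2001)
cheb = np.polynomial.Chebyshev.fit(xs, np.sin(xs), deg=17)
max_err = float(np.max(np.abs(cheb(xs) - np.sin(xs))))
```

Unlike a Taylor polynomial truncated at the same degree, the error here is spread evenly over the interval instead of blowing up near 2π. The same fit works just as well if xs is in degrees or turns.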

One of the best ways to calculate sin and cos is CORDIC,

https://en.wikipedia.org/wiki/CORDIC

which is really based on turns, half-turns, quarter-turns and so forth.


CORDIC is not based on radians or turns; it is based on decomposing the angle into a sum of:

phi_n = atan(2^-n)

and then using an abbreviated sum formula where computing cos(theta + phi_n) depends only on sums and bitshifts.
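A float sketch of rotation-mode CORDIC to make that concrete (in real hardware the multiplications by 2**-n are bit shifts and the atan values come from a small precomputed table; the iteration count here is arbitrary):

```python
import math

def cordic_cos_sin(angle, iters=40):
    """Approximate (cos, sin) of angle (radians, |angle| < ~1.74) via CORDIC."""
    # Gain correction: each pseudo-rotation stretches the vector slightly.
    K = 1.0
    for n in range(iters):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * n))
    x, y, z = 1.0, 0.0, angle
    p = 1.0                                # p = 2**-n, a shift in hardware
    for n in range(iters):
        d = 1.0 if z >= 0 else -1.0        # rotate toward zero residual angle
        x, y = x - d * y * p, y + d * x * p
        z -= d * math.atan(p)              # subtract phi_n = atan(2**-n)
        p /= 2.0
    return x * K, y * K

c, s = cordic_cos_sin(0.5)
```

Each iteration fixes roughly one more bit of the angle, which is exactly the "1 bit per iteration" convergence criticized downthread.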

The small-angle approximations sin(x) ≈ x and cos(x) ≈ 1-x^2/2 are the real killer feature of radians, though, because when you can deal with the loss of accuracy you get to avoid using any loops whatsoever. They're also fundamental to understanding simple physical systems like a pendulum.
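A quick numeric check of those small-angle error bounds, radians assumed (the bounds come from the next Taylor term, which is the standard estimate):

```python
import math

x = 0.01                                  # a small angle, in radians
# |sin x - x| is bounded by the next term, x**3/6 ~ 1.7e-7.
assert abs(math.sin(x) - x) < x ** 3 / 6
# |cos x - (1 - x**2/2)| is bounded by x**4/24 ~ 4.2e-10.
assert abs(math.cos(x) - (1.0 - x * x / 2.0)) < x ** 4 / 24
```

In degrees the same approximations would need an extra factor of pi/180, which is why they are a radian-only convenience.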


"One of the best ways to calculate sin and cos is CORDIC": this is extremely false. CORDIC converges at 1 bit per iteration, while polynomials (Chebyshev or minimax) converge exponentially faster.


You are of course right about the speed, when a fast hardware multiplier is available for the computation of the polynomials.

On the other hand with CORDIC it is extremely easy to reach any desired precision, and in cheap hardware it does not require multipliers.

So CORDIC may be considered as "one of the best ways" depending on how "best" is defined.

Even when developing a polynomial approximation for the fast evaluation of a trigonometric function, it may be useful to have an alternative CORDIC evaluation on hand, in order to check that the accuracy of the polynomial approximation is as expected, because for CORDIC it is easier to be certain that it computes what is intended.


> you always have to go back to radians to actually do calculation.

The article actually argues the opposite: that the common implementations of sine and cosine start by converting their radian based arguments to turns or halfturns by dividing by pi.


That's just because the power series would take ages to converge for large arguments, so you take advantage of periodicity. But the implementation in a floating point world is a different thing than the definition in an infinite series world.

For example, e^x can be implemented by handling the integer and fractional parts separately, for similar reasons. But no one really cares about the functions e^floor(y) and e^(y-floor(y)). They are only useful as part of an implementation trick.


That's really not the only thing going on. Yes, it allows you to take advantage of periodicity. But many common function approximations work best (i.e. not requiring any transform of the argument) over the interval [-1, 1].


This is not about the common "implementation" of sine and cosine, it's about the common argument. In his use case, he tends to want to calculate turns and half-turns most often. He might be able to refactor his functions to optimize for this, but it's not exactly something I would expect to be a good idea for library code; people do want to calculate other angles.


sin(x) ~~ x only in radians, so honestly that's reason enough.

Once in a while we get programmers wanting to disrupt mathematical notation for whatever reason... Worst I've seen so far was one arguing that equations should be written with long variable names (like in programming) instead of single letters and Greek letters. Using turns because it's a little easier in specific programming cases is just as short-sighted, I'd say, it doesn't "scale out" to the myriad of other applications of angles.


Those perfect radians use 2*pi, aka tau, though. That's a different math notation issue, where mathematicians have chosen the wrong option (imho), and a case for disrupting that part of math notation to make radians easier to teach: 1/4 of a circle could be tau/4 radians, 1/8 could be tau/8, etc., instead of confusing halved factors with radians expressed as multiples of pi.

Regarding long variable names: I'd rather have long variable names than a mathematician using some Greek symbol in formulas without saying what it means (and the meaning can differ depending on their background). But I have no issues with single-letter variables if they're specified properly.


As far as I know, the whole tau disruption wasn't proposed by programmers, so I think we're safe on that.

And proposing to write equations in books and articles with long variable names... Well, Algebra was invented for a reason.


Just out of curiosity, where did tau come from? I never heard of it used for 2pi, and frankly, it seems like a poor choice because in engineering it is one of the most common symbols used (time constant tau).


It apparently was chosen because it's the starting sound of "turn": Hartl chose tau to represent 2pi because it nicely ties in with the Greek word “tornos,” meaning “turn,” and “looks like a pi with one leg instead of two.”

https://blogs.scientificamerican.com/observations/the-tao-of...

There was an earlier effort that used a new "two pi" symbol consisting of a "π" with an extra leg in the middle: https://www.math.utah.edu/~palais/pi.pdf.


Funny coincidence: π with an extra leg is the Cyrillic cursive letter for the sound t.


https://tauday.com/ is a good entrance to this particular rabbit-hole.


Doesn’t Tau (the letter) look like half of Pi? Isn’t this a lost cause already?


I think it's a won cause already.


Look up the Tau Manifesto: it’s all explained there.


That is a completely different matter. The definition of sin/cos in radians doesn't change if you prefer to use 2 * pi or tau - it's still x - x^3/3! + [...]. sin(pi/2) = sin (tau/4) = 1.


> Worst I've seen so far was one arguing that equations should be written with long variable names (like in programming) instead of single letters and Greek letters.

That could never work. If anything the words comprising mathematical texts should be defined once and thereafter truncated to their first letter to reduce cognitive burden and facilitate greater comprehension.

c = "could"; d = "don't"; f = "for"; g1 = "go"; g2 = "great"; i = "it"; i2 = "i"; m = "me"; s = "see"; w = "works"; w2 = "what"; w3 = "wrong"

i w g2 f m; i2 d s w2 c g1 w3.


Reference Error: s is not defined.


Thanks for the correction. I will be sure to credit you in the acknowledgements.


That is the way to do the math, but not the way to write the code.

That said, I would like for my compiler to combine any multiplications involved down to one factor for input to the fastest sin/cos operations the machine has. And, to treat resulting multipliers close enough to 1, 1/2, and 1/4 as exact, and then skip the multiplication entirely.

But the second part is a hard thing to ask of a compiler.


Yeah, seems to me that languages should allow way more semantic expression than most do today.

I wish I had done CS; those kinds of compiler optimizations sound so fun. I'd love to work on that.


Good news, optimization is engineering, not CS. CS is all about what a program would eventually do, if you were ever to run it. Once you run it, you have moved to the domain of technicians. Engineering is about making it run better.


Compiler optimization is very much CS.


Only in practice. And, mostly implemented by engineers.


What really bothers me is that mathematicians seemingly never distinguish between doing and presenting mathematics.

You can do your own scribbles with single letters, so do I, it works fine.

But when you present maths in a scientific article, maths book, Wikipedia article or similar, your convenience as a writer should be secondary. Your task is to present information to someone who does not already know the subject. Presenting an equation as six different Greek letters mashed together means that the equation itself conveys almost no information. You need a wall of text to make sense of it anyway.


Uh oh. I see you haven’t yet encountered the Einstein notation for tensors :-)


Or just defining the result of division by zero as zero "for safety": https://www.hillelwayne.com/post/divide-by-zero/

It boggles the mind, truly!


Are you claiming the author is incorrect that x/0 = 0 is mathematically sound?


Depends how you define “soundness”, but the idea of extending a function beyond its domain of definition with an arbitrary value that doesn't make it continuous is arguably a curious one.

From an algebra perspective (the one given in the blog post) it may be fine, but from a calculus perspective it's really not.

The lack of continuity really hurts when you add floating-point shenanigans into the mix; just a fun example:

When you have 1/0 = 0 but 1/(0.3 - 0.2 - 0.1) = 36028797018963970. Oopsie, that must be the biggest floating point approximation ever made.
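To reproduce that in a Python session (the exact residue depends on double rounding, and it happens to come out negative):

```python
# 0.3, 0.2, and 0.1 are not exactly representable as doubles, so the
# "zero" here is actually a residue of about -2.8e-17.
d = 0.3 - 0.2 - 0.1
huge = 1.0 / d       # magnitude ~3.6e16, where a "1/0 = 0" rule would give 0
```

So an input that is mathematically zero but numerically a hair off jumps from 0 straight to an astronomically large reciprocal, which is the discontinuity complaint in a nutshell.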


But for 1/x you have that issue anyway. If x is on the negative side of the asymptote but a numerical error yields a positive x, you'll still end up with a massive difference.


I don't know about "mathematically sound" but I would rather retain the convention that any number divided by itself equals 1.


That's only for small angles though (stems from Taylor's expansion). With other units, you have a conversion factor, but it remains true enough at small angles.


What's wrong with long variable names?


Did you ever need to do involved mathematical manipulations using pen and paper? How would you judge the readability of the following expressions:

  zero_point equals negative prefactor divided_by two plus_or_minus square_root_of( square_of(prefactor divided_by two) minus absolute_term )

  zero_point = -prefactor/2 ± √((prefactor/2)² - absolute_term)

  x = -p/2 ± √((p/2)² - q)


I might be strange but the second seems far more readable than the third to me. The first is of course nonsense.


In my opinion, it puts too much emphasis on the variables compared to the operators and numbers and makes the expression as a whole harder to parse at a glance as I have to actually read the names.


Yeah, when doing it by hand, I surely would shorten it. But when doing math on the computer with the help of autocomplete, why not? Then again, I don't really know whether that exists in pure math; I only do math in the context of programming.

And for pedagogic purposes, I would at times like more meaningful names.


It's definitely a lot harder to read and make sense of an equation that is sprawled out. In some domains, I would contend that using greek letters in code would increase readability, especially for those familiar with the underlying formula, and especially if the code is not edited frequently (e.g. implementing a scientific formula which won't change).

A good compromise might be to put the equation in the comments in symbol-heavy form, and use the spelled out names in code.


Try to solve the Schrodinger Equation for even an infinite well using long variable names.

I'm not talking about using it in code, I'm talking about someone arguing that books and articles should do it as well.


If you go watch math lectures, there's a bunch of "x means Puppy Constant" or, "let's substitute in k for the Real component", or "let's signify <CONCEPT> by collecting these terms into a variable". My argument wouldn't be to replace ALL the variables with meaningful names, just the ones with a lot of meaning that a reader might not understand. It'd also be great if constants, variables, and functions all got naming conventions. Lowercase letters are variables, all caps for constants, etc. It saves a little bit on writing to shorten the variable names, but if the goal of math is to share and spread knowledge within the community or without, better naming and less memorization would both help. You can also rename things for the working-out and use friendlier names for the final equations; just tell people how you're renaming them and everyone will follow along, and the programmers will stop trying to sell you on readable code.

Most importantly the flat dismissal and horror that many express when someone brings up adjusting the symbolic traditions of Maths should be investigated. Engage with why you feel so strongly that anything other than rigid adherence to tradition is sacrilege. Based on what I've heard, in order to be a great Mathematician, you need to hold onto tradition lightly and think outside the box. Rigid adherence to tradition doesn't sound like that to me.


> Engage with why you feel so strongly that anything other than rigid adherence to tradition is sacrilege

Who’s saying that? Inventing good notation is a big part of mathematics (and that also frequently gets criticized on HN because it may introduce ambiguities)

Also, there’s nothing wrong with texts that target an audience with a certain level of understanding.

It’s not as if adding, for example, “By Hermitian matrix we mean a complex square matrix that is equal to its own conjugate transpose” will make a paper much easier to understand, just as adding a comment “this is where the program starts running” doesn’t help much in understanding your average C program, or adding a definition of “monarchy” to a history paper.

In the end, any scientific paper has to be read critically, and that means making a serious effort in understanding it. A history paper, for example, may claim that Foo wrote “bar” but implied “baz”. A critical reader will have read thousands of pages, and (especially if they disagree with the claim) then think about that for a while, and may even walk to their bookshelf or the library to consult other sources before continuing reading.


Again, try to solve the Schrodinger Equation for even an infinite well using long variable names.


Got a reference for what that looks like with current notation? The internet is basically just showing the starting equation and ending equation and skipping all the intermediaries.


You can use whatever notation you want for your own work, but documenting with, at least, formal variable definitions would be a significant boon for math literacy.


Nothing, but their use in mathematical equations will certainly conflict with the implicit multiplication in equations (i.e. `abc` in a formula means `a * b * c`, not a variable abc).


This is already a problem: "sin" is the sine function, not the product s·i·n.


You can use a different font.


LOL


> > But math never decreed that sine and cosine have to take radian arguments!

> Ummm, actually it did.

No, it didn't. Some specific uses looking better with radians does not mean you have to use radians always.

When I first learned sine and cosine, we used degrees, and that worked fine. Later we switched to radians, but there's no reason why you shouldn't use turns, and the article gives a very good argument why in some cases you definitely should.


>Some specific uses looking better with radians does not mean you have to use radians always.

It's not just some specific use cases, it's the majority of cases if you look across all of math and science. Switching to turns would be stupid, especially once you start doing differentiation and integration. The fact that we use radians almost across the board isn't some accident.


The simplicity of the Taylor series of sine and cosine is irrelevant; there are no important applications for those series.

There is only one consequence of those series that matters in practice, which is that when angles are expressed in radians, for very small angles the angle, its sine and its tangent are approximately equal.

While this relationship between small angles, sines and tangents looks like an argument for radians, in practice it isn't. There are no precise methods for measuring an angle in radians. All angle measurements are done using a unit that is an integer divisor of a right angle, and the angles in radians are then computed by multiplying by a number proportional to the reciprocal of pi.

So the rule about the approximate equality of angles, sines and tangents is at best a mnemonic rule, because to apply it one must convert the measured angles into radians, so no arithmetic operations are saved.

"Turns" generalize perfectly to higher dimensions.

To the 3 important units for the plane angle, i.e. right angle, cycle and radian, there are 3 corresponding units for the solid angle, i.e. the right trihedron (i.e. an octant of a sphere), the sphere and the steradian.

The ratio between the right trihedron and the steradian is the same as between the right angle and the radian, i.e. (Pi / 2).

The ratio between the sphere and the right trihedron is 2^3, while that between cycle and right angle is 2^2. In N dimensions the ratio between the corresponding angle units becomes 2^N.

Moreover, while in 2 dimensions there are a few cases when the radian is useful, in 3 dimensions the steradian is really useless. Its use in photometry causes a lot of multiplications or divisions by Pi that have no useful effect.

There is only one significant advantage of the radian, and it is the same as for using the neper as a logarithmic unit: with logarithms measured in nepers, the exponential is its own derivative and primitive, and as a consequence the trigonometric functions with arguments measured in radians have similarly simple relationships with their derivatives.

Everywhere else that the radian is convenient, it is a consequence of the invariance of the exponential function under differentiation, when the neper and radian units are used.

This invariance is very convenient in the symbolic manipulation of differential equations, but it does not translate into simpler computations when numeric methods are used.

So the use of the radian can simplify a lot many pen and paper symbolic transformations, but it is rarely, if ever, beneficial in numeric algorithms.


> The simplicity of the Taylor series of sine and cosine is irrelevant, there are no important applications for those series.

The addition theorems for trigonometric functions can easily be shown by the multiplication theorem for Taylor series (and adding two Taylor series). This proof would be more convoluted if the Taylor series were not so easy.

Also, because of the simplicity of their Taylor series, one immediately sees that sin and cos are solutions of the ODE y'' = -y.

Another application of the Taylor series is that by their mere existence, sin and cos (as real functions) have a holomorphic extension.


The proof of any property of the trigonometric functions is trivial when the sine and the cosine are defined as the odd and even parts of the exponential function of an imaginary argument, and the proof uses the properties of exponentiation.

Any proof that uses the expansion in the Taylor series is serious overkill.

Moreover, those proofs become even a little simpler when the right angle is used as the angle unit, instead of the radian.

In this case, the sine and the cosine can be defined as the odd and even parts of the function i ^ x.


Those are useful symbolically, but not computationally.


> there are no important applications for those series.

Excuse me? Have you done any computation in Physics? Have a look at the pendulum equation, for a start...


Only in school exercises can you solve a differential equation by expanding a sine function into a Taylor series.

In practical physics computations, the solution of differential equations requires numerical methods that do not use the Taylor series of specific functions, even if the theory used for developing the algorithms may use the Taylor series development of arbitrary functions.

For accurate prediction, the simple pendulum equation also requires such numerical methods in practice; these do not rely on the small-angle approximation that, for didactic purposes, enables the use of the Taylor series of the trigonometric functions.


> Only in school exercises can you solve a differential equation by expanding a sine function into a Taylor series.

> In practical physics computations, the solution of differential equations requires numerical methods that do not use the Taylor series of specific functions, even if the theory used for developing the algorithms may use the Taylor series development of arbitrary functions.

I'm sorry, but you have no idea what you're talking about. Series expansion is one of the most widely used techniques in Physics. Obviously some equations require full-blown numerical methods to be solved, but one can do a whole lot with analytical techniques by doing series expansions and using perturbation theory.

Saying that this is only used "in school exercises" shows that you're completely out of touch with reality.


You have replied to something that I have not said.

I have said that the Taylor series of arbitrary functions have various uses, but there is no benefit in knowing which are the specific Taylor expansions of the trigonometric functions, with the exception of knowing that the first term of the sine and tangent expansions when the argument is in radians is just X.

Solving physics problems using the expansion of an unknown function in the Taylor series has nothing to do with knowing which is the Taylor series of the sine function.


> with the exception of knowing that the first term of the sine and tangent expansions when the argument is in radians is just X.

There are more terms in the expansion that you can use; that's the whole point of using an expansion...

> Solving physics problems using the expansion of an unknown function in the Taylor series has nothing to do with knowing which is the Taylor series of the sine function.

I hope you're aware that the sine function appears quite often in Physics problems.


> there are no important applications for those series.

I cannot believe I just read this.


When have you ever used the Taylor series of sine and cosine for anything (outside school)?

When you approximate functions by polynomials, including the trigonometric functions, the Taylor series are never used, because they are inefficient (too much computation for a given error). Other kinds of polynomials are used for function approximations.

The Taylor series are a tool used in some symbolic computations, e.g. for symbolic differentiation or symbolic integration, but even in that case it is extremely unlikely for the Taylor series of the trigonometric functions ever to be used. What may be used are the derivative formulas for trigonometric functions, in order to expand an input function into its Taylor series.

The Taylor series of arbitrary functions (more precisely, the first few terms) may be used in the conception of various numeric algorithms, but here there are also no opportunities to need the Taylor series of specific functions, like the trigonometric functions.

The Taylor series obviously have uses, but the specific Taylor series for the trigonometric functions do not have practical applications, even if they are interesting in mathematical theory.
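To put a number on the "too much computation for a given error" claim, here is a quick comparison (assuming NumPy; a least-squares Chebyshev fit is not quite the minimax polynomial, but it is far closer to it than the Taylor polynomial of the same degree):

```python
import numpy as np

# Degree-5 Taylor polynomial of sin about 0: x - x^3/6 + x^5/120
x = np.linspace(0, np.pi / 4, 10001)
taylor = x - x**3 / 6 + x**5 / 120
taylor_err = np.max(np.abs(np.sin(x) - taylor))

# Degree-5 Chebyshev least-squares fit on the same interval
cheb = np.polynomial.Chebyshev.fit(x, np.sin(x), deg=5)
cheb_err = np.max(np.abs(np.sin(x) - cheb(x)))

print(taylor_err)   # Taylor's error concentrates at the interval edge
print(cheb_err)     # same degree, much smaller worst-case error
assert cheb_err < taylor_err / 10
```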


> When you approximate functions by polynomials, including the trigonometric functions, the Taylor series are never used, because they are inefficient (too much computation for a given error). Other kinds of polynomials are used for function approximations.

Can you point me to some implementation of sin that’s not actually using Taylor expansion in some form? Because most that I am aware of do in fact use Taylor series (others are just table lookup). See glibc for example:

https://github.com/bminor/glibc/blob/release/2.34/master/sys...

And here is musl

https://git.musl-libc.org/cgit/musl/tree/src/math/__sin.c

(The constants are easily checked to be -1/3!, 1/5!, etc.)

This might have something to do with Taylor's theorem. You know, that the Taylor polynomial of order n is the only polynomial of order n that satisfies |f(x)-T(x)|/(x-a)^n -> 0 as x -> a. In other words, the Taylor polynomial of order n is the unique polynomial approximation to f around a to order n. This means you cannot get any better than Taylor close to the origin of the expansion, which leads implementers to focus on argument reduction rather than on selecting different polynomials.


If any of those libraries uses the Taylor expansion for approximation, that is a big mistake, because the approximation error becomes large at the upper end of the argument interval, even if it is small close to zero.

What is much more likely is that if you will carefully compare the polynomial coefficients with those of the Taylor series, you will see that the last decimals are different and the difference from the Taylor series increases towards the coefficients corresponding to higher degrees.

Towards zero, any approximation polynomial begins to resemble the Taylor series in the low-degree terms, because the high-degree terms become negligible and the limit of the Taylor series and of the approximation polynomial is the same.

So when looking at the polynomial coefficients, they should resemble those of the Taylor series in the low-degree coefficients, but an accurate coefficient computation should demonstrate that the polynomials are different.
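For what it's worth, the very first coefficient in the fdlibm-derived sine kernel (which musl uses) already shows this; comparing it against the exact Taylor coefficient -1/3!:

```python
from fractions import Fraction

# S1 as it appears in fdlibm/musl's __sin kernel (copied from the source)
S1 = -1.66666666666666324348e-01

taylor = float(Fraction(-1, 6))   # nearest double to the exact -1/3!

print(S1)        # close to, but not exactly, -1/6
print(taylor)
assert S1 != taylor   # they differ in the last few digits, as predicted
```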


I did some testing, and you are correct that they are slightly different:

  >>> print("{:.30f}".format(sin(pi/4, 0, 0, *standard_coeff)))
  0.707106781186567889818661569734
  >>> print("{:.30f}".format(sin(pi/4, 0, 0, *musl_coeff)))
  0.707106781186547461715008466854
The difference at the very edge of the interval occurs at the 14th digit of decimal expansion, and it's at the edge of accuracy of double, at 16th digit: after ...6547, the exact value starts with ...6547_5244, instead of 4617. I wouldn't exactly call it a big mistake, as the difference would not be relevant in almost all practical uses, but that would be a mistake nevertheless, and I'm sure someone would be bitten by this. Thanks, I learned something new today!


> When have you ever used the Taylor series of sine and cosine for anything (outside school) ?

I've used them a few times, mostly in the embedded space, and mostly in conjunction with lookup tables and/or Newton's method, but yes I've absolutely used them outside school (years ago, I forget the exact details).

- implementing my own trig functions for embedded applications where I wanted fine control over the computation-vs-precision tradeoff

- implementing my own functions for hypercomplex numbers (quaternions, duals, dual quaternions, and friends).

- automatic differentiation

Does the Taylor series form survive to the final application? Usually not, usually it gets optimized to something else, but "start with Taylor series and get back to basics to get a slow but accurate function" has gotten me out of several pickles. And the final form usually has some chunks of the Taylor series.


I agree that using the Taylor series can be easier, especially during development, mainly because convenient tools for generating approximation polynomials or other kinds of approximating functions are not widespread.

However, the performance when using Taylor series is guaranteed to be worse than when using optimal approximation polynomials, according to appropriate criteria.

Still I cannot see when you would want to use the Taylor series of the trigonometric functions, even if for less usual functions it could be handy.

There are plenty of open-source libraries with good approximations of the trigonometric functions, so there is no need to develop one's own.

In the case of a very weak embedded CPU there is the alternative to use CORDIC for the trigonometric functions, instead of polynomial approximations. CORDIC can be very accurate, even if on CPUs with fast multipliers it is slower than polynomial approximation.


> So the use of the radian can simplify a lot many pen and paper symbolic transformations, but it is rarely, if ever, beneficial in numeric algorithms.

If only computers could do a bit of symbolic algebraic manipulation before issuing the machine code.

Wait, isn't that what optimizing compilers can do? That requires optimization across library calls and thus a form of inlining, which doesn't seem far-fetched for a math library call. Or maybe some optimizations can't be done due to floating-point error propagation (which could be relaxed)?


A simple example: suppose we want to compute cos(x)-1 near x=0, with high accuracy, in single-precision fp. How to do this? It's very easy: google "taylor series for cosine", lop off the first term (1), and you're done.
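The same effect is easy to see even in double precision: the naive cos(x)-1 cancels catastrophically near zero, while the series with the leading 1 dropped keeps full accuracy.

```python
import math

x = 1e-8

# Naive: cos(x) rounds to exactly 1.0, and the subtraction loses everything
naive = math.cos(x) - 1.0

# Taylor series with the leading 1 lopped off: -x^2/2 + x^4/24 - ...
series = -x * x / 2 + x**4 / 24

print(naive)    # 0.0
print(series)   # ~ -5e-17, the correct answer
assert naive == 0.0
assert series < 0.0
```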


I already learnt in school to do trigonometric calculations using radians or turns depending on the situation. It was part of the general math curriculum in Bavaria. As far as I am aware both are mathematically sound and there is no reason to religiously use one of them over the other. Let your use-case or input parameters decide. The examples given in the article definitely make no sense in radians.


"I already learnt in school to calculate trigonometry using radians or turns depending on the situation. It was part of the general math curriculum in Bavaria. "

Out of interest, when did you go to school in Bavaria and in which grade did you learn about turns? I was in school in Bavaria a long time ago and I don't remember learning about turns there. Could very well be that I forgot or our teacher forgot to teach it.


That should've been about 15 years ago. I don't remember the grade, but based on the subject probably 8th or 9th? I thought it was in the textbook but possibly our teacher just added it himself.


The shocking thing with some of these articles is that somehow the author asked "why do people use radians" and ended up with an answer of "it was an arbitrary decision and the world would be better off not using it".

I feel a bit of humility would have helped the author; perhaps they would have considered the possibility that they didn't think about the problem deeply enough, rather than hastily writing a blog post about it.

It speaks to the hubris and the superficiality of thinking for some authors.


The shocking thing about some of these comments is somehow they didn't consider that the original author spoke in a specific context.

Casey didn't say the world would be better with turns instead of radians. He said that game engine code would be better with turns instead of radians. Be more charitable.


The writer doesn't seem to realise that the radian is not an arbitrary unit but a dimensionless one, defined so that 1 rad is actually just 1.

Reading the submission and the comments here, I'm under the impression that trigonometry is not extensively taught in middle schools and high schools in the USA. While I'm slightly envious that you might not have had to suffer through expanding powers of cosine and sine, that would explain the lack of familiarity with the radian I see here. Am I wrong?


> Am I wrong?

Yes. Trigonometry is extensively taught in the US. People forget this stuff if they don’t use it.

Ask some 30 year old chef in whatever country you fantasize teaches properly to compare and contrast turns vs radians and you’ll get similar responses.


I'm a 50 yo programmer. I have a CS degree. I don't even remember my college calculus much less my high school trig. I just haven't had cause to use it in my career, not as a sysadmin, not as a programmer. My son is taking calc 3 and I knew I happened to have my calc 3 notes from the mid-90s, so I pulled them out of the filing cabinet and my very carefully taken notes, my proofs, my hand drawn graphs, it was all gibberish to me. That was stuff I knew like the back of my hand when I graduated but it quickly faded away.


By far the most annoying myth I face when trying to discuss the pros and cons of various education techniques is the pervasive idea that everybody is a magical knowledge sponge and will go to their grave still remembering how to integrate by parts and every detail about some particular battle they covered in seventh grade, and therefore, if we slightly tweak a curriculum plan to drop something that was included on theirs we'll be stealing that knowledge from all the 70 year olds who will eventually have been on that plan.

Where this idea comes from I have no idea. Personally looking around in school itself it was plainly obvious this was all going in one ear and out the other for the majority of students even at the time. The better students retained it long enough to spew it out on the test but that was already above average performance. That doesn't mean there isn't still a certain amount of value in that in terms of what that knowledge may do to their brain during the brief period of time it is lodged in there. (I think there's a lot of value in just learning the "shape" of all this stuff, and perhaps having some index of what might be valuable to know.) But the idea that we can spend 15 minutes and a one-page homework assignment on something and expect that to last 60+ years is just nonsensical.

I mean, honestly, anyone over the age of 22 or so ought to be able to notice a distinctly sub-100% retention rate simply by looking inside themselves.

Yes, to a first approximation everyone with a normal education in the US has been present while some sort of trig was discussed. Not all of them, but still quite a lot of them, were present for the Taylor expansion discussion. The vast bulk of them have had it decay by 25, and there simply isn't anything to be done about that if you're talking about humans and not some homo educationous who mythically retains all knowledge they were exposed to even for 30 seconds, just as the mythical homo economicus perfectly rationally conducts all their economic business at all times. Perhaps they're actually the same species.


I remember being amused by this same observation when my own country decided to reduce mandatory education from k+12 to k+10 (cutting two years of high-school). They immediately began re-arranging the curriculum in high-school, for example to move organic chemistry from 11th grade to 10th grade, on the basis that it's important for students who only finish the mandatory 10th grade to know some organic chemistry as well, instead of the old curriculum which would have only taught them inorganic chemistry after 10th grade (this has the bonus of making the chemistry curriculum inorganic I -> organic I -> inorganic II -> organic II, for maximum confusion).

To me, even though I was barely out of high-school at the time, this was obviously absurd - expecting especially someone who wants to drop out of high-school early to retain any notion of organic chemistry taught in a school year, that they couldn't learn on the job if it was really required, seems so obviously nonsense that I couldn't help but laugh. Especially since the same thing was done to basically every other subject as well, with the same intentions.

One note: in my country, the curriculum is completely centralized; there is some small amount of choice, but it amounts to, at most, 1-2 classes per semester; everything else is fixed.


Think it's part of the equality/blank-slate myth that everyone is the same and has the same potential and natural abilities.


Actually, it's almost entirely the opposite—the idea that students are a "sponge" that can soak up knowledge perfectly is then taken directly to mean that some students are better at soaking up / retaining knowledge than others, and that the "smart" kids who do the best on the tests are the ones who are going to retain the knowledge the best. And then the ones that were the best knowledge-sponges will eventually go on to become the next generation of teachers, since they know the most information. Whereas for most kids it's completely the opposite—they memorize the information in their short-term memory without understanding the fundamentals, they do great on the tests, and then they forget all of it immediately. But they stand out from their peers as better students, because they're able to play the "game" of school better and optimize for being a knowledge-sponge that will absorb as much information as possible and forget it as quickly as possible.


>>> simply isn't anything to be done about that if you're talking about humans and not some homo educationous who mythically retains all knowledge they were exposed to even for 30 seconds, just as the mythical homo economicus perfectly rationally conducts all their economic business at all times. Perhaps they're actually the same species.

Yeah they belong to the genus homo mythicus


I just wanted to say I deeply appreciate the eloquence of this comment. Thank you


On a related note, it bothers me that there’s so much urgency to teach younger kids more and more advanced math. I use more and higher math on a day-to-day basis than practically anyone I know, but it’s very rarely even calculus, and even then it’s typically just discrete integrals or derivatives.

There’s just an absolute ton of math being taught that’s going completely to waste, and it’s at the expense of the humanities.


My biggest “Screw everything” moment about math was the first lecture of my numerical methods class in college when the professor said: “All that calculus you’ve been learning your whole lives? It’s useless. Carefully curated set of a few dozen problems that are doable by hand. Here’s how it’s really done for anything remotely practical”

And then we learned a bunch of algorithms that spit out approximate answers to almost anything. And a bunch of ways to verify that the algorithm doesn’t have a bug and spat out an approximately correct answer. It was amazing.

But the most long-term useful math class (beyond arithmetic and percentages) has been the semester on probabilities and the semester on stats. I don’t remember the formulae anymore, but it gave me a great “feel” for thinking about the real world. We should be teaching that earlier.


When I took 400 level Real Analysis: “All that calculus you’ve been learning your whole life? It’s a lie. Those epsilon delta proofs? They were fake - none of you were smart enough to challenge us on ‘limits’. And now we’re gonna do it all again only this time it’s really gonna be rigorous.”


Is there any somewhat simple explanation of what are the limitations of the epsilon-delta definition of limits that make it non-rigorous? I've been trying to find some information about your comment, but have so far come up empty.


I'm shaky on this - it's been thirty years - but I believe the Calc I epsilon delta proofs relied on the notion of open and closed intervals on the real line, which we all intuitively understood.

The upper level Real Analysis made us bring some rigor as to what an interval on the real line actually meant going from raw points and sets to topological spaces to metric spaces, then compactness, continuity, etc. all with fun and crazy counterexamples.


100% agreement on teaching stats as the "pinnacle" of high school mathematics. Those are what directly rule the lives of the average non-engineer.


I think much of math 'education' is constructed as a filter to identify a small handful of math prodigies. The general population suffering anxiety and youth lost in the filter is seen as an acceptable sacrifice for the greater good of finding the math prodigies so those can be given a real math education.


Yes, this is a very good point. In my experience from, uh, several decades ago, it also felt like a lot of math educators watched (and showed in class...) Stand and Deliver way too many times and the only message they took away was "we should teach everyone calculus!"


I doubt the students would actually learn humanities in the extra time allotted if it's not used for math. I remember a distinct refusal to internalize, especially in my male peers, during "English" classes.


Forget humanities. The hours a week after school that highschool students spend on calculus homework would probably be better spent socializing with their friends. They'll never be young again, wasting the time of a teenager with unproductive busywork is a horrible thing to do.


I'm the opposite: I'm 15 years into my career of applied research, which for me is like an extension of university. I tend to lean on Mathematica to do my calculus though. I think the high school curriculum was optimized to expose a lot of people to things they won't need on the off chance that a few will end up as researchers of some sort. It would be more efficient to identify such people earlier and split them off. I think historically that was the idea but there has been an egalitarian push to broaden the pool.


I think the point of high school is to make kids' brains do work, and what you are learning is secondary.

People love to hate on their school curriculum and all the useless knowledge they had to acquire but I'm positive it makes you a smarter person overall, and the body of high school knowledge makes learning more specialized knowledge easier (even if that's baking bread or whatever)

(People also love to talk about how little they remember from school, yes the brain is a muscle and you stopped working out, congratulations.)


I'm 27, educated in the UK, all I remember about trigonometry is SOHCAHTOA.


USA here, same acro. I still start off solving by writing it out and drawing slashes through O/H A/H O/A for reference.

Came to use trig functions quite frequently while playing video games, and that was a big surprise to me. Not to assume you've played it, but I've recently discovered that Stormworks is a programmer's game - you can write microcontroller code in Lua for your vehicle designs. And, wow, does it ever use my trig knowledge everywhere.

Realized the transponder beeps can be triangulated: a tick being 1/60th of a second gives a distance-estimate resolution of 5-10 km. And that's when cos and sin came back to be useful, because you can do intersection of circles and figure out where to do a sea rescue more precisely. So video games, trig. Who would've thought?


The Dutch version was SOSCASTOA, with a picture of a ship called the Castoa sending out an SOS because it was sinking. That picture really helped.

And I even remember what it means:

SOS: sine = opposing side divided by diagonal (schuine) side

CAS: cosine = adjacent divided by diagonal

TOA: tan = opposing divided by adjacent.

I don't think I've ever used it for anything practical, but I can still reproduce it after all this time (I'm 47 now).


> I don't think I've ever used it for anything practical.

Just the other day I wanted to compute viewing angle and did it by hand even though there are plenty of calculators like this out there:

http://www.hometheaterengineering.com/viewingdistancecalcula...

I’d say I use it for something practical/random like that a few times a year?

Another example was placing some ceiling speakers whose tweeters had a 15° angle so that they were pointed directly at a seating position below. How far did I need to place them in front of the seating position from directly overhead?

I would guess in any sort of construction you're using it fairly often.


The one that I still use is the 3-4-5 rule to ensure a right angle. Still use that one to chalk off sporting fields of play.


I use that one all the time, but that's Pythagoras, not sin and cos.


Yeah, I know it's not the same type of math, but it's one of the few things that I still use today. To be honest, I can't think of one time in my professional career that I have needed to calculate the area under a curve to solve a life problem. Geometry has been the most used branch of math past basic arithmetic, oh, and algebra. It amazes me the number of people that don't realize how many times in a day they have solved for X.


It was by no means uncommon when I was taught in the US but I somehow missed it, instead just internalizing the various relationships directly, and was briefly confused when classmates started talking about SOHCAHTOA working together in college math courses.

What I remember from trig is to draw a unit circle. Most of the rest falls out of that.


I’m handy outside of work and use sohcahtoa often enough to remember it. Triangles are everywhere and sometimes you need to compute angles and lengths of sides.

Statistics is also useful and applicable to everyday life, but I didn’t learn that till college as best I can recall.

I don’t regret having spent time learning calc, or physics or chemistry or biology for that matter. If you asked me to come up with a curriculum I’d have a really hard time prioritizing. Maybe the one thing I’d like to see kids learn better is how to be self-directed learners. I’m still fairly surprised at the number of colleagues I have who seem unable to problem solve and figure something the fuck out. Even knowing when and how to ask for help.


I'm a 50+ year old American, of British descent... I never managed to remember the 'American' mnemonic, but my dad taught me one they used to use in England around WWII: Percy has a bald head, poor boy

Perpendicular / Hypotenuse = Sine

Base / Hypotenuse = Cosine

Perpendicular / Base = Tangent

edit: try to fix the HN god awful formatting


Ha I also remember the mnemonic but I don’t have a clue how to use it.


it's to remember the ratios for trigonometric functions on a right triangle:

Sine: Opposite over Hypotenuse

Cosine: Adjacent over Hypotenuse

Tangent: Opposite over Adjacent
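In code form, the three ratios check out directly (using the 3-4-5 triangle that comes up elsewhere in this thread):

```python
import math

# Right triangle with legs 3 and 4, hypotenuse 5
opp, adj, hyp = 3.0, 4.0, 5.0
angle = math.atan2(opp, adj)   # angle opposite the side of length 3

assert math.isclose(math.sin(angle), opp / hyp)   # SOH
assert math.isclose(math.cos(angle), adj / hyp)   # CAH
assert math.isclose(math.tan(angle), opp / adj)   # TOA
```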


Otto had a heap of apples.


There's lots of stuff I knew well and then forgot, but can re-learn quickly. For example, nearly all of calculus (useful when dealing with machine learning). Other bits I've retained and never forgotten, such as everything I've seen involving matrices. There are even things which I had conveniently completely "forgotten" but later emerged as suppressed latent memories- for example, set theory. I was so unhappy with the lead-up to Russell's paradox that I actively suppressed thinking about sets, groups, rings, and fields for several decades.

There are even other bits that I was shown, never incorporated into my brain at all, but later recognized as truly important (Taylor series expansions, the central limit theorem, the prime number theorem, etc).


Informally, big-O and limits have a similar smell to them, calc might have helped get some wheels turning in your head for that.

I do recall taking a "probability in CS" as an electrical engineering student -- it was pretty mind-blowing to me the extent to which the CS students did not like to talk about any continuous math. It makes sense, though, these are different specialties after all.


Honestly, the typical developer needs a solid understanding of algebra, but not much beyond that. Though any time I get into game dev stuff I start ripping my hair out over quaternions.


Thankfully you didn't publish a blog post claiming you discovered a new way to represent coordinates, and then assert that programmers should switch.


I'd argue it's not so much taught in the US as it is tested. The common core standards say [0] that students should:

> Understand radian measure of an angle as the length of the arc on the unit circle subtended by the angle.

> Explain how the unit circle in the coordinate plane enables the extension of trigonometric functions to all real numbers, interpreted as radian measures of angles traversed counterclockwise around the unit circle.

and so on. However, in practice, this means that students need to be able to answer "C" when presented with the question:

> One radian is:

> A) Another word for degree.

> B) Half the diameter.

> C) The angle subtended on a unit circle by an arc of length 1.

> D) Equal to the square root of 2.

A surprising number of students can get through without ever really comprehending what a radian is. They might just choose the longest answer (which works way too often), identify trick answers and obviously wrong answers, and eventually guess the teacher's password from a lineup by association of the word salad of "radians" and "subtended."

They might not even have a clue what the word "subtended" means, but they know it's got something to do with "radian" and that's enough. It is more important for the school that the students answer (C) than that they understand what a radian is.

[0] https://web.archive.org/web/20220112000314/http://www.corest...


No, you’re missing the point. I went to school in the 80s and learned this stuff without multiple choice and fully understood it all of the way through undergrad where I took up through calc 3 and differential equations. Then I spent nearly 30 years as a SWE not using it and forgot nearly all of the details within maybe 15 years.

This happens with very basic things like human languages. Bilingual people can forget an entire secondary language if they don’t use it for a decade+.


It's taught extensively in the US, but what's never done is showing how terrible many of those identities or integrals are in degrees.

Derivative of sin(x) is cos(x). Many people probably think this works for degrees, but it's actually some abomination like pi cos(pi x/180)/180.

Of course, turns are very reasonable units sometimes for sure.


> Derivative of sin(x) is cos(x). Many people probably think this works for degrees, but it's actually some abomination like pi cos(pi x/180)/180.

That's what it would be if you are using sin in degrees and cos in radians. But if you are using degrees for both then the derivative of sin(x) is pi/180 cos(x).
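A central-difference check confirms the corrected form (sin_deg and cos_deg here are just throwaway helpers for this snippet):

```python
import math

def sin_deg(x):
    return math.sin(math.radians(x))

def cos_deg(x):
    return math.cos(math.radians(x))

x, h = 30.0, 1e-6

# Numerical estimate of d/dx sin_deg at x
numeric = (sin_deg(x + h) - sin_deg(x - h)) / (2 * h)

# Claimed closed form: (pi/180) * cos_deg(x)
analytic = math.pi / 180 * cos_deg(x)

assert abs(numeric - analytic) < 1e-9
```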


Yup, I knew I'd mess it up. :D


Like most here, I've learned and forgotton lots of trig and calculus.

However, I still remember that "eureka!" moment of realizing that radians were special, that the small angle approximation of sin(x) = x, and many related math rules, work only when x is expressed in radians. I guess that's a credit to my math teacher, who basically led the class in deriving mathematical formulas rather than just presenting them to us.

I think the article is still valid and interesting, as "turns" in some use cases might improve performance and accuracy. But radians aren't at all "arbitrary" - if we ever encounter technologically advanced aliens, they certainly won't use degrees, but they will understand radians.


Sure, but are people here chefs? I would expect most programmers in my country to remember this, but most chefs to have forgotten it.


I would only really expect programmers who work with angles regularly (those working in 3D) to remember it. Even then, you’re likely just smashing quaternions together anyway.


Effectively yes. A tiny subset of programmers need trig.


> The writer doesn't seem to realise that the radian is not an arbitrary unit but a dimensionless one, defined so that 1 rad is actually just 1.

First, I would be cautious about suspecting someone of Casey Muratori's calibre of not having considered something just because he didn't directly address it.

Second, the choice of unit is kind of arbitrary, even if the unit itself is not. Radians are nice because the length of a 1-radian arc is the same as the length of the radius. But turns are also nice because angles expressed in turns are congruent modulo 1 instead of modulo 2π.

Third, he talks in the context of video games. Such games are built from code that has to be read by humans and executed by the CPU. And that's the main point of his article: in this context, expressing angles in (half) turns reduces the amount of code you have to write and read, reduces the number of multiplications and divisions the CPU has to make, and makes some common operations exact where they were previously approximated.

Do we even care at this point whether the definition of radians is arbitrary or not? I love the elegance of radians, but for game engine code I'm willing to accept they're just the wrong unit for the job.
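To make the "congruent modulo 1" point concrete, here is a sketch of why exact wrapping matters for range reduction (sin_turns is a hypothetical helper, not from the article):

```python
import math

def sin_turns(t):
    # Range reduction in turns is exact: t % 1.0 introduces no rounding error
    return math.sin(2 * math.pi * (t % 1.0))

big = 12345678901.5            # many whole turns plus exactly half a turn

# The fractional part of the angle survives exactly in turns...
assert big % 1.0 == 0.5
# ...so the result is as accurate as sin itself near pi
print(sin_turns(big))          # ~1.2e-16

# Converting the huge angle to radians first buries the half-turn
# in the rounding error of the multiplication
print(math.sin(2 * math.pi * big))
assert abs(sin_turns(big)) < 1e-12
```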


> radian is not an arbitrary unit but a dimensionless one

This is by convention, but has been and is still being debated because calling it dimensionless causes some problems. https://en.wikipedia.org/wiki/Radian#Dimensional_analysis

Furthermore, the whole reason to treat radians as dimensionless, the problem, is with angles, not with radians specifically. Degrees are also considered dimensionless. So, a turn could be treated as dimensionless too, with a conversion constant to radians & degrees, just like between degrees and radians.

Of course, the declared dimensionlessness of angles like radians isn't something generally discussed in pre-college trig courses; that's a subtle subject that matters more in physics. In my high school trig, we all understood radians to be a unit of angle and never pondered whether angles had dimension.

Also a subtle point: dimensionless doesn't mean unitless. It's another separate convention to drop the units when working with radians.


> "I’m under the impression that trigonometry is not extensively taught in middle schools and high schools in the USA. While I’m slightly envious you might not have to suffer developing powers of cosine and sine but that would explain the lack of familiarity with radian I see here. Am I wrong?"

It varies by school, but overall I think this prediction is incorrect. Trigonometry was an important subject in high school — for all of the math, physics, and possibly chemistry courses — and then if you take calculus in university, it's very, very important to learn trigonometry well (or you'll really struggle as a student).

So, even on the off-chance that trigonometry is not taught in high school (which I predict is rare), a first-year student taking calculus in university must learn it on their own time. Good calculus textbooks (e.g. Thomas Calculus) even account for this, having fairly comprehensive textbook sections on what you need to know about trigonometry to succeed in the calculus course.

Most students who took math to pre-calculus or calculus (or physics and possibly chemistry) should therefore have a good exposure to the definition of the radian.


I was a bad student through 8th grade, but managed to get selected for a STEM magnet school. I was supposed to enter 9th grade with geometry, then algebra II, trig, and calc over the 4 years. But when they discovered I'd never passed algebra, they put me in algebra, which meant I would have finished with trig. Due to a crazy 3.5 years, I never got a high school math education. Calculus makes my eyes glaze over, and all I know about triangles is sohcahtoa.

Every couple of years I try to get some higher math education, but nothing makes sense. It's one of the reasons I [think] I suck at programming - I should note that another reason is that I first learned BASIC, then QBasic, then Fortran, and then C never made sense to me. At least I can putter around with Python and R.

However, I can do "basic" math things in my head that generally everyone else has to dig out a calculator app for: percentages, fractions, moving decimals, "making change". Since I suck at higher math, I'm only able to help my kids with basic math, and I try to ensure that they know it fairly well.


"[..] and then C never made sense to me"

I had my fair share of higher math, but C never really made sense to me either. It's not us, it's C that's to blame.


C puts the C in Cthulhu.


So which programming language makes sense to you? Dare I say… Rust?


Yes, Rust indeed does, and a long time before that it was Pascal. I really love Pascal's syntax; it makes a lot of sense when you approach it with a math background.

- '=' is for equality only

- assignment is ':=' which is the next best symbol you can find in math for that purpose

- numeric data types are 'integer' and 'real', no single/double nonsense

- 'functions' are for returning values, 'procedures' for side effects

- Function and procedure definitions can be nested. I can't tell you what shock it was for me to discover that's not a thing in C.

- There is a native 'set' type

- It has product types (records) and sum types (variants).

- Range types! Love 'em! You need a number between 0 and 360? You can easily express that in Pascal's type system.

- Array indexing is your choice. Start at 0? Start at 1? Start at 100? It's up to you.

- To switch between call-by-value and call-by-reference, all you have to do is change your function/procedure signature. No changes at the call sites or inside the function/procedure body. Another bummer for me when I learned C.

Pascal wasn't perfect but I really wish modern languages had syntax based on Wirth's languages instead of being based on BCPL, B and C.


It sounds like you just don’t like C — a perfectly reasonable position — not that you don’t understand it.


It's time-consuming, but there are great resources to learn high school math to a very high level (likely much more effectively in many cases, than actually taking a high school math course, due to thoughtful exercises and more control over the pace of learning).

I learned a lot from the Art of Problem Solving book series because they're highly focused on the reader solving problems to learn, versus giving explanations. Even if you don't finish all of it, you can strengthen any problem areas.

For a less comprehensive but still great introduction to precalculus, Simmons' Precalculus in a Nutshell is excellent (with, from memory, a particularly good section on trigonometry). Then you can read a book like Thomas Calculus, which has a great introduction to trigonometry in the first review chapter.

I would even say that you would be better off working through the books above than if you had taken the high school classes; the best math students probably took the same approach (working through books instead of focusing just on the class material). The main obstacle is time, because it's hard to find time when you have work and children to take care of.


Calculus made a lot of it “click” for me but I somehow got through high school and college without ever doing or understanding trigonometry.


>I’m under the impression that trigonometry is not extensively taught in middle schools and high schools in the USA

Basic trig is taught in middle school, but exclusively using degrees. Advanced trig is optional in high school if you take the "hard math" track.


> Advanced trig is optional in high school if you take the "hard math" track.

This depends on the state. NYS absolutely requires everyone to learn "advanced trig" in high school.


I think I would have put the quotes around "learn."

I don't think every high school student can master advanced trig.


Wow. You people went to crap schools. We got the derivation of modern trig functions with Maclaurin/Taylor series in 9th grade (though yeah... that was the "hard core math track".) And a year of proofs and derivations in 11th grade. Quaternions and their application in physics were 12th grade.


>> The writer don’t seem to realise that radian is not an arbitrary unit but a dimensionless one which is defined so that 1rad is actually just 1.

It's been a while, but I used to have an argument that rad should be a unit. This even plays well in physics where it allows torque to not have the same units as a joule.


I don't see how radians come into the discussion of torque and energy, both of which are N*m in SI.

That discussion has to do with the failure of SI to notate the directions of vectors. When it's torque, the N and the m are at a right angle. When it's work, they are both in the same direction.


>> That discussion has to do with the failure of SI to notate the directions of vectors.

Well if radian is a unit then torque become Nm/r and is no longer Nm like energy. Then when multiplied by an angle in radians you get energy. It was *something like that*.



Trig is generally called per-calculus in US high schools. It is not a required course, but it is one of the courses everyone on the college track is expected to take.

Though most people haven't used any of that since college and so don't know it very well anymore. I smelled BS when I read the blog, but couldn't put my finger on why - the comment you replied to explained what I knew was the case but couldn't remember.


    s/per-calculus/pre-calculus/g


The percalculus ion is just calculus in its highest oxidation state.


> I’m under the impression that trigonometry is not extensively taught in middle schools and high schools in the USA

Education quality and quantity vary greatly across the country. Many schools don't require trig at all or lump it in with other classes. I memorized SOH CAH TOA and brute forced a CLEP test (the state of MN is required to allow you to test out of classes and to write a test if one doesn't exist; usually AP and CLEP tests are accepted, and they don't count for/against your GPA).

It's also culturally accepted to "be bad at math," with undertones of defeat and that it's the world doing that to you and not something you can change (maybe the blame lies elsewhere like with how math is taught as a sequence of dependencies and bombing one course makes the rest substantially more difficult). I don't know how many people scrape by a D in trig and subsequently forget it all, but I'd wager it's a lot.


No, like many others you have been confused by the inability of those who vote on modifications to the International System of Units to decide what kind of units the units for plane angle and solid angle should be: base units or derived units.

A base measurement unit is a unit that is chosen arbitrarily.

A derived measurement unit is one that is determined from the base units by using some relationship between the physical quantity that is measured and the physical quantities for which base units have been chosen.

While there are constraints for the possible choices, the division of the units into base units and derived units is a matter of convention.

Whenever there are relationships between physical quantities where so-called universal constants appear, you can decide that the universal constant must be equal to one and that it shall no longer be written, in which case some base unit becomes a derived unit by using that relationship.

The reverse is also possible, by adding a constant to a relationship, you can then modify its value from 1 to an arbitrary value, which will cause a derived unit to become a base unit for which you can choose whatever unit you like, e.g. a foot or a gallon, adjusting correspondingly the constant from the relationship.

There are 3 mathematical quantities that appear frequently in physics, logarithms, plane angles and solid angles (corresponding to the 1-dimensional space, 2-dimensional space and 3-dimensional space). All 3 enter in a large number of relationships between physical quantities, exactly like any physical quantity.

For each of these 3 quantities it is possible to choose a completely arbitrary measurement unit. Like for any other quantities, the value of a logarithm, plane angle or solid angle will be a multiple of the chosen base unit.

For logarithms, the 3 main choices for a measurement unit are the Neper (corresponding to hyperbolic, a.k.a. natural, logarithms), the octave (corresponding to binary logarithms) and the decade (corresponding to decimal logarithms).

Like for any physical quantities, converting between logarithms expressed in different measurement units, e.g. between natural logarithms and binary logarithms is done by a multiplication or division with the ratio between their measurement units.
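That conversion-by-ratio is a one-liner to check (a quick sketch; the value 1000 is arbitrary):

```python
import math

value = 1000.0
nats = math.log(value)           # logarithm "in Nepers" (natural log)

# Converting a logarithm between units is division by the ratio of units:
octaves = nats / math.log(2)     # binary logarithm, ~9.9658
decades = nats / math.log(10)    # decimal logarithm, ~3.0
print(octaves, decades)
```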

The same happens for the plane angle and the solid angle, for which arbitrary base units can be chosen.

What has confused the physicists is that while for physical quantities like the length, choosing a base unit was done by choosing a physical object, e.g. a platinum ruler, and declaring its length as the unit, for the 3 mathematical quantities the choice of a unit is made by a convention unrelated to a physical artifact.

Nevertheless, the choices of base units for these 3 quantities have the same consequences as the choices of any other base quantities for the values of any other quantities.

Whenever you change the value of a measurement unit you obtain a new system of units and all the values of the quantities expressed in the old system of units must be converted to be correct in the new system of units.

The fact that the plane angle is not usually written in the dimensional equations of the physical quantities in the International System of Units, because of the wrong claim that it is an "adimensional" quantity, is extremely unfortunate.

(To say that the plane angle is adimensional because it is a ratio between arc length and radius length is a serious logical error. You can equally well define the plane angle to be the ratio between the arc length and the length of the arc corresponding to a right angle, which results in a different plane angle unit. In reality the value of a plane angle expressed in radians is the ratio between the measured angle and the unit angle. The radian unit angle is defined as an angle where the corresponding arc length equals the radius length. In general, the values of any physical quantity are adimensional, because they are the ratio between 2 quantities of the same kind, the measured quantity and its unit of measurement. The physical quantities themselves and their units are dimensional.)

In reality, the correct dimensional equations for a very large number of physical quantities, much larger than expected at first glance, contain the plane angle. If the unit for the plane angle is changed, then a lot of kinds of physical quantity values must be converted.

To add to the confusion, in practice several base units of the 3 mathematical quantities are used simultaneously, so the International System of Units as actually used is not coherent. E.g. the frequency and the angular velocity are measured in both Hertz and radian per second, the rate of an exponential decay can be expressed using the decay constant (corresponding to Nepers) or by the half-life (corresponding to octaves), and so on.


Thanks for writing that. While I don't automatically believe it all, I think it's important to see what's arbitrary and what's natural in our units. I've struggled with Hz vs rad/s before, and I think I resolved it by including the cycle as a quantity, so Hz = cycle/s and rad/s = 1/s. You don't seem to agree and I'm not confident of my decision, but it's now part of a big technical debt :P

A clear sign of how wrong people can be about the naturalness of units is Avogadro's constant which was recently demoted from a measured value to an exact arbitrary value. Chemists often believe that N_A, moles, atomic mass units, etc. are all somehow important or fundamental and don't realize that it's all based on a needlessly complicated constant with an (until 2019) needlessly complicated definition that could have just been a simple power of 10 if history had gone differently. Luckily the people defining SI have finally moved away from the old two independent mass units to just the kg that can now be exactly converted to atomic mass units by definition.


> in the USA

While it might be something you are now realizing, the US is not a single entity in many ways. Rather, it's some 50 states that form a country. Each state has its own laws and ways of doing things. While there are many similar ways of doing things, none are exactly the same. On top of that, even within a state you'll have different school systems with different policies.

And we aren't even going to discuss going to American schools in Europe.


Use it or lose it. Most people have no reason to need knowledge of trigonometry, so even if they’re taught it they quickly forget it.

I never really learned trigonometry until I started doing game programming in my spare time, when suddenly that knowledge and linear algebra became necessary to understand. The only way I learned it was by needing to know it.

In fact, I regularly forget knowledge I don’t need to know. The stuff I do need to know remains fresh in my mind.


any angle unit is dimensionless, radian is no exception


But they're not equal to 1, for example a degree is 0.01745...


1° is actually 0.0174... radians. 1 radian is 57.295...°. The choice of unit to specify an angle is arbitrary.


But the angle is an adimensional unit (it's the ratio of two distances, one along the circumference and one along the radius) so 1 rad = 1. Therefore 1 degree is 0.0174... radians but it is also just 0.0174.


No, you're describing one particular way to measure angles. Radians express such a ratio, but degrees don't. 1° is not a ratio between distance along a circumference and radius, it's a ratio between amount rotated and complete revolution. 1° actually stands for 1/360 (of a revolution).

Which is why it's important to add the unit after the measurement. If someone tells you an angle measures 1, can you tell whether it's 1/360 of a revolution or the angle that would be formed by traveling along a circumference a distance equal to the radius of the circle?


Angles in the SI are a ratio of two lengths (and solid angles are a ratio of two areas), so degrees are also a ratio of two lengths. 1 degree is a ratio of pi/180 = 0.01745, which happens to be 1/360th of a revolution; you have to write down the unit to indicate the multiplicative factor. But writing down radians is just for clarity.
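The conversions being discussed are straightforward to verify (a quick Python sketch):

```python
import math

DEG = math.pi / 180                 # one degree, as a pure number (radians)
print(DEG)                          # 0.017453292519943295
print(math.degrees(1.0))            # one radian expressed in degrees: ~57.2958
print(30 * DEG, math.radians(30))   # the same conversion, done both ways
```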


I don't understand the debate, radians & degrees are just 2 proportional units, just like meter & kilometer


Correct, radians are a "fake" unit made up to understand better formulas (the same way we use types in programming languages)


While it is a fake unit, it was made to make the math easy. You could call the origin of everything the place where I'm standing, but good luck calculating a path for the Mars rovers to travel if I happen to walk to the bathroom.


I drive a mars rover and this cracked me up. Understanding reference frames is indeed a big part of the job. We do have to deal with "site frame updates" based on rover observations of the sun -- important but annoying. I will bring your person-centered frame suggestion to the team :-)


Speaking of reference frames, I deal with quite a few for Earth-bound things, and the primary ones we use are ECEF (Earth-Centered, Earth-Fixed) and ECI (Earth-Centered, Inertial), which then we will often move to a relative local frame for whatever object matters.

Is the equivalent set available for Martian Nav (MCMF/MCI, I guess), or do you have different/specialized/etc. frames based on something unique to Mars.


Yes... there are similar frames for Mars and other bodies. A good intro: https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/Tutorials/pd...

For the rover, we're pretty much always dealing in local coordinate systems based on reference frames defined using the rover's observations of the sun and alignment of local imagery with orbital imagery. The two frames used most frequently are called RNAV (centered on the rover) and SITE (centered on where we last did a sun observation). But then there is a tree of frame transformations for knowing the location and orientation of each part of the rover with a lot of named frames (especially important for operating the robotic arm, which I also do).


I don't understand your comment. What I meant is that we can use the numeric values of radians without ever writing the radians unit, it is indeed dimensionless (it is length / length = 1, no unit)


I agree with you. People confuse their edge case with Math as a whole.

To sum it up: "Computer Science has nothing to do with Math!" ;)


Well it's not exactly surprising, the US is fundamentally built on arbitrary baseless measurement units so getting out of that mindset is probably difficult.

A unit that could be inherently defined by math itself and not a farmer looking at their hands and feet? Preposterous!


> A unit that could be inherently defined by math itself and not a farmer looking at their hands and feet?

Where do you think base ten comes from?


Base 2, 10, 3, 8, hex, radians stay radians.


Radians aren't a unit, and degrees have nothing to do with peoples' hands or feet.

Imagine using an angle measurement for physical things such that angles with rational measurements stack to a full circle. Preposterous.


> Of course you can work in other units, but you'll need to insert the appropriate scaling factors all over the place.

You probably take out more scaling factors than you introduce.

> Euler's formula (e^ix = cosx + isinx) is the simplest when working with radians.

Euler’s still simple:

e^(2 i pi y) = cos y + i sin y

Or if you start noticing c = e^(2 pi) showing up all over the place:

c^iy = cos y + i sin y

> How do you do the same with "turns" on a sphere?… You can't in any meaningful way.

Why not do the same thing? One steradian is 1/(4 pi) of a sphere’s solid angle. What if one “steturn” or whatever just covered a full solid angle? And similarly for higher dimensions?

Neither definition seems more natural to me, especially being used to all the factors of 2 and pi that pop over all over the place in the status quo.
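The turn-based form of Euler's formula above is easy to verify numerically (a sketch; `cos_turns` and `sin_turns` are made-up helper names for trig functions taking their argument in turns):

```python
import cmath
import math

def cos_turns(y):
    """Cosine whose argument is measured in turns."""
    return math.cos(2 * math.pi * y)

def sin_turns(y):
    """Sine whose argument is measured in turns."""
    return math.sin(2 * math.pi * y)

y = 0.125  # one eighth of a turn (45 degrees)
lhs = cmath.exp(2j * math.pi * y)
rhs = complex(cos_turns(y), sin_turns(y))
print(lhs, rhs)  # both ~ 0.7071 + 0.7071j
```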


I do not see how higher dimensions invalidate the concept. Steradians are replaced by a scaled unitless number, which I will call sterturns, that goes from 0 to 1.


The sine and cosine that are defined with Taylor series are not the same sine and cosine that are defined for right triangles.

The former are R->R functions, while the latter are defined on angles (angle is unfortunately not an SI physical dimension yet, but I expect that to change soon), and they don't care about the measurement unit.

I have no idea what you mean by radians generalizing for higher dimensions, but not turns.


In 3D you can measure solid angles using steradians.

I guess that turns interpreted as parts of whole circles generalize to parts of whole spheres, and you should divide by 4pi instead of 2pi???
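That division works out as one would hope (a sketch; `sphere_fraction` is a made-up helper name):

```python
import math

def sphere_fraction(steradians):
    """Solid angle as a fraction of a whole sphere ("sterturns").

    A full sphere subtends 4*pi steradians, so the fraction is sr / (4*pi).
    """
    return steradians / (4 * math.pi)

print(sphere_fraction(4 * math.pi))  # 1.0  (whole sphere)
print(sphere_fraction(2 * math.pi))  # 0.5  (hemisphere)
```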


sine and cosine are functions from ℝ->[-1,1]. They don't take in a value which has a unit, or even a dimension, they take in a real number.

sin(x) is precisely the unique function f(x) such that f''(x) = -f(x) with f(0) = 0 and f'(0) = 1. Similar to how exp(x) is the unique function g(x) such that g'(x) = g(x) with g(0) = 1.

Sine does not operate on 'angles measured in radians'. It operates on real numbers. It is zero whenever the real number passed in is a multiple of pi. It happens to have applications in relating angles to distances in circles and triangles, and in order to use sine in that context it is useful to introduce the concept of a 'radian' as a specific, constructed angle of a particular size, such that when you express an angle in terms of multiples of a radian, you can just use the sine function to generate useful values.
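The series characterization can be sketched numerically: a truncated Maclaurin series for sine operates on plain real numbers, with no angle unit in sight (15 terms is an arbitrary truncation):

```python
import math

def sin_series(x, terms=15):
    """sin via its Maclaurin series: sum over k of (-1)^k x^(2k+1) / (2k+1)!"""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

print(sin_series(1.0))       # ~0.8414709848, matches math.sin(1.0)
print(sin_series(math.pi))   # ~0: the series vanishes at multiples of pi
```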


I was taught that Euler's formula defined complex exponentiation?

If we used turns for cos and sin we could redefine what e^ix means so it works without radians. From the other answer I guess this is completely wrong...

(I do understand it is nuts to redefine; I'm just interested as a theoretical thought)

Now, how is Euler's formula deduced? How did we figure out what e^ix means?


One way to understand where the formula comes from is the power series of e^x, remembering that that function is (or can be) defined as the function whose derivative is itself. Sin and cos are functions whose second derivatives are -sin and -cos respectively. If you plug ix into the power series for e^x, the complex exponential comes right out.

There are a couple other "paths" to this result, and the choice we have is by far the most elegant.
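That derivation is easy to check numerically (a sketch; `exp_i_series` is a made-up helper, and 30 terms is an arbitrary truncation):

```python
import math

def exp_i_series(x, terms=30):
    """Truncated power series of e^(ix): sum of (i*x)**n / n!"""
    return sum((1j * x) ** n / math.factorial(n) for n in range(terms))

x = 0.7
print(exp_i_series(x))                    # ~0.7648 + 0.6442j
print(complex(math.cos(x), math.sin(x)))  # real part is cos x, imaginary is sin x
```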


That's not really ray-tracing. That's just regular GPU rendering with physically-based materials and some ray-tracing effects for extra chrome.


Any non-trivial project would quickly reach RLIMIT_NOFILE as inotify is non-recursive, so you'd need to open a new fd for each monitored file.


This guy is very smart!


The fedora-tipping is strong in this one.


Step 1: nearly bankrupt your country

Step 2: sell out your country to wealthy foreigners to "stimulate the economy"

Step 3: act surprised that you can't afford living in your country, that got sold out to wealthy foreigners


Step 4: say "oops, we can't afford to pay you the stuff we promised, but you should have known that, because you are smart investors and you checked our balance sheet and cash flow first" and then default on the investments. I wonder why these places are so desperate not to do that.


Because the international class that sets policy - and is heavily represented on this site - would lose money if they did so. Better to focus on idpol like ingroup/outgroup blame.


In this case, the 'wealthy foreigners' are large corporations (and foreign governments). Not the occasional gringo that decides to bridge the language barrier and buy property there.


Lol! You're pretty unhinged.


I was well downvoted.

Could you explain to me why I'm unhinged? I didn't feel like I had posted anything controversial or whatever.


PinePhone couldn't even create a phone that can make voice calls properly like a Nokia 3310, yet you're envisioning them beating Apple (!) with rolling out some futuristic technologies.


>PinePhone couldn't even create a phone that can make voice calls properly like a Nokia 3310, yet you're envisioning them beating Apple (!) with rolling out some futuristic technologies.

I believe I understand why you feel I am unhinged.

I said, "Sure sounds like iphone will get there first, pine could beat them to it."

Obviously I said iphone will win, I optimistically or enthusiastically said pine could put in the work and get there first.

You feel I am unhinged because of optimism?


Yeah the other poster is rude. Unhinged when you didn’t claim anything insane doesn’t compute.

For me. My love for webOS is so great it’s all I want


having unrealistic expectations may be considered unhinged by some. we are still a bit far from having 3d scanners that fit in your pocket.


>having unrealistic expectations may be considered unhinged by some.

I feel like I wasn't being unrealistic at all.

Solid-state batteries exist today and can be bought. Obviously it's early in commercialization, but they do exist. It's still unclear how safe they are.

An 88MP camera is less than what I can go buy from Costco right now.

3D scanners totally fit in your pocket. I don't mean photogrammetry either. The newer iPhones have lidar that can scan. Newer Androids have depth sensors or TOF sensors.

https://www.youtube.com/watch?v=r26OhSxBUXM

https://www.youtube.com/watch?v=_3UXeJWmEn8


interesting, i had no idea. thanks!

