For a while now, I've been trying to improve my understanding of working with logarithms. Technically I knew their purpose, but I wasn't comfortable going past reciting their definition. This bugged me, especially since we use them so much in CS. This post collects what I've learned, both for my future self and for anyone else who feels the same way. Hope you like it :D
Logs are linearized exponents, so the laws of exponents become the log laws: log(xy) = log x + log y. Logs are also a way to scale, which is what they were originally used for: doing large multiplications without electronic assistants. We log-scale graph axes/plots all the time today. They also describe the shape of some processes, hence they come up in compsci, e.g. in tree algorithms. If you have a logarithmic process and the input grows by a factor of a million, the actual work only increases by about 6 (in base 10), since log10 of a million is 6. So log10 is also roughly the number of zeros in a number.
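A throwaway sketch of that scaling claim (plain Python, numbers chosen just for illustration):

```python
import math

# Work for a logarithmic algorithm barely moves when the input explodes:
# a million-fold larger input only adds log10(1e6) = 6 units of work.
small, large = 1_000, 1_000_000_000
extra_work = math.log10(large) - math.log10(small)
print(extra_work)
```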
There's a book called Calculus Reordered by Bressoud that describes Napier's logarithms and the natural log ln(x) too.
This article is great for getting a practitioner to understand how to calculate a logarithm. For non-math friends, I still reach for this[0] as a great way to get an intuition for the utility of logarithms.
While I read this, I kind of assumed it was written by a math-enthusiastic 9th grader, and it felt wonderful! Write down your new-found math skills; that will make you remember them even better in the future, and it also trains your writing skills!
Then I learned that he'd been out of school for quite a few years and is a programmer now. That makes me pretty sad. Is it really such a novel insight to learn about the relationship between exponentiation, roots, and logarithms that you need to write a blog post about it? Isn't this basic school math that everybody doing anything even remotely related to math should know by heart? (Programming definitely counts.) How do you even do things like basic finance, interest rates, or inflation without grokking this?
It makes me worried not just about the future of society in general, but about our industry in particular. I'd feel uncomfortable working at a place where any of my coworkers would feel compelled to write a blog post about the relationship between exponentiation and logarithm that explains that exponentiation is not commutative.
You might want to give this more thought -- on reading the piece, my take-away was, "this guy is a half-step away from inventing the slide rule".
IMO, what the slide rule does is replace the memorization of squares and roots, or of the logarithms of the digits. I've never used a slide rule professionally, but I've noodled around with a few (I inherited a fancy one from my grandfather), and it really forces you to think about scaling and precision in a way that's different from both calculator and pencil-and-paper arithmetic. The idea of computing the mantissa and then using logical rules to infer the order of magnitude is central. I think that's the through-line of the article too, or at least it seemed so to me.
Yes, I see what you mean. To me, doing those sorts of calculations by hand is just really fun. I agree with you about the importance of applying logical rules in this way. Thanks for this suggestion, maybe I'm going to add a paragraph about the slide rule as an afterthought later on :D
I think it's fun to know how to compute log2(x) or log10(x) using a 4 function calculator.
Log 2 is a matter of dividing by two (and adding 1 to the log2 value) until X is in the range [1,2) (or multiplying by 2 and subtracting 1 if X is below 1). Then, to handle the fraction: square the number; if it's 2 or above, divide it by two and append a 1 to the binary fraction; if not, append a 0... and continue.
Log 10 is similar... divide by 10 until you're in the [1,10) range, counting as you go. Then take X to the 10th power (square, square, times X, then square), count the number of divisions by 10 needed to get the number back into the [1,10) range, add that count as the next digit after the decimal point, and repeat.
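Both procedures can be sketched directly (Python standing in for the four-function calculator; only the basic operations touch the working value, and the function names are mine):

```python
def log2_manual(x, bits=20):
    """Base-2 log via repeated halving, then repeated squaring."""
    int_part = 0
    while x >= 2:          # divide by 2, adding 1 to the log
        x /= 2
        int_part += 1
    while x < 1:           # or multiply by 2, subtracting 1
        x *= 2
        int_part -= 1
    frac, place = 0.0, 0.5
    for _ in range(bits):  # square; landing at/above 2 means the next bit is 1
        x *= x
        if x >= 2:
            x /= 2
            frac += place
        place /= 2
    return int_part + frac


def log10_manual(x, digits=8):
    """Base-10 log: normalize to [1,10), then pull decimal digits via x^10."""
    int_part = 0
    while x >= 10:
        x /= 10
        int_part += 1
    while x < 1:
        x *= 10
        int_part -= 1
    frac, place = 0.0, 0.1
    for _ in range(digits):
        x = ((x * x) ** 2 * x) ** 2   # x^10: square, square, times x, square
        d = 0
        while x >= 10:                # each division by 10 is one digit's worth
            x /= 10
            d += 1
        frac += d * place
        place /= 10
    return int_part + frac
```

With 20 bits / 8 digits of work, log2_manual(10) comes out near 3.3219 and log10_manual(2) near 0.30103.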
Why not start with a simple graphical explanation on the 2D plane? E.g. feed this to the chatbot: "Logarithmic and exponential functions have distinct characteristics, and their behavior around the slope of a line defined as y=mx+b can be analyzed as follows..."
In computer science, I read log as levels: n log n is n times levels(n, 2), where levels is the number of times n can be repeatedly halved until the result is less than 2. levels reflects the recursion of a problem into 2 smaller subproblems.
levels(n, 10) is approximately num_digits(n).
knowing that levels(10, 2) is approximately 3.32, I know that to represent 1000 (3 digits) needs 3 * 3.32, approximately 10 bits. 1 million (6 digits) needs 20 bits. 1 billion needs 30 bits. bits are just binary digits.
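A minimal sketch of that reading (the levels function here is my own naming, not a library call):

```python
def levels(n, b):
    """How many times n can be divided by b before dropping below b."""
    count = 0
    while n >= b:
        n /= b
        count += 1
    return count


# levels(n, 10) roughly counts digits; levels(n, 2) roughly counts bits:
print(levels(1_000_000, 10))   # one less than the digit count of 1 million
print(levels(1_000_000, 2))    # close to 6 * 3.32, i.e. about 20 bits
```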
One of the most common usages of logarithms that I encounter is counting digits. In that case I usually just write dig instead of log.
Another important case where the logarithm shows up: multiplication on (0,inf) and addition on (-inf,inf) have the same structure, and the logarithm creates an isomorphism between them. Usually it makes multiplication easier/more familiar. Vi Hart has an entertaining video about it [0] where she explains how she likes to smell numbers, and visualizes the exponential function on a line with values written over it, instead of on a 2D graph.
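That structure-preserving map is easy to spot-check numerically (a throwaway sketch, values picked at random):

```python
import math
import random

# log maps multiplication on (0, inf) to addition on (-inf, inf):
# log(a * b) == log(a) + log(b), and exp is the inverse map back.
random.seed(0)
for _ in range(1000):
    a = random.uniform(0.001, 1000.0)
    b = random.uniform(0.001, 1000.0)
    assert math.isclose(math.log(a * b), math.log(a) + math.log(b),
                        abs_tol=1e-9)
    assert math.isclose(math.exp(math.log(a) + math.log(b)), a * b)
```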
There are a lot of other cases where a logarithm may show up. I usually treat them separately. For example, if a logarithm shows up as the integral of 1/x, then chances are I'm better off thinking of it as the integral of 1/x; I don't gain anything by thinking of it as the number of digits. That doesn't happen often.
Linear values are "displacement metrics", exponents are "scale metrics".
It thus follows that power functions become "linear displacements" when log-log-transformed.
Logarithm should really be called "scale-metric displacement-ifier" or "linearizer-for-quantities-that-have-an-absolute-zero" or something along those lines
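A quick numeric check of the log-log claim (hypothetical power law, constants chosen arbitrarily):

```python
import math

# A power law y = c * x**k becomes a straight line with slope k after
# taking logs of both axes: log y = log c + k * log x.
c, k = 2.5, 3.0
xs = [1.0, 2.0, 4.0, 8.0, 16.0]
pts = [(math.log(x), math.log(c * x**k)) for x in xs]

# The slope between consecutive transformed points is constant (= k):
slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(pts, pts[1:])]
print(slopes)
```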
That square root estimation is new to me, and it's freakily accurate. The error bounces around but is generally decreasing, and it's always an underestimate. Here are the first six local maxima of the errors.