Hacker News
An intuition for logarithms (thasso.xyz)
118 points by D4ckard on Sept 2, 2023 | 37 comments



For a while now, I've been trying to improve my understanding of working with logarithms. Technically, I knew their purpose, but I wasn't comfortable with them beyond reciting their definition. This bugged me, especially since we use them so much in CS. This post sort of collects what I've learned, both for my future self and for anyone else who feels the same way. Hope you like it :D


I can recommend using them for mental maths[1] as a way to improve understanding.

[1]: https://two-wrongs.com/learning-some-logarithms.html


Hahah this post of yours was the whole reason I got into this topic!


I did not realise you were the author of TFA!


Logs are linearized exponents, so all the laws of exponents become the log laws: log(xy) is log x + log y. Logs are also a way to scale, which is what they were originally used for: doing large multiplications without electronic assistance. We log-scale graph axes and plots all the time today. They also describe the shape of some processes, hence they come up in compsci, e.g. in tree algorithms. If you have a logarithmic (base-10) process and the input grows by a factor of a million, the actual work only grows by about 6 units. So log10 is also roughly the number of zeros in a number.
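
For reference, the "linearized exponents" correspondence written out (standard identities, in LaTeX, not from TFA):

    \log_b(xy)  = \log_b x + \log_b y   % mirrors b^{m+n} = b^m b^n
    \log_b(x/y) = \log_b x - \log_b y   % mirrors b^{m-n} = b^m / b^n
    \log_b(x^k) = k \log_b x            % mirrors (b^m)^k = b^{mk}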

There's a book called Calculus Reordered by Bressoud that describes Napier's logs and the natural log ln(x) too.


This article is great for getting a practitioner to understand how to calculate a logarithm. For non-math friends, I still reach for this[0] as a great way to get an intuition for the utility of logarithms.

[0] - https://betterexplained.com/articles/demystifying-the-natura...


Reminder that in TeX you want to escape functions so they aren't rendered in italics as a product of separate variables, i.e. "\log" not "log"


Thank you for pointing out this mistake. Fixed it.


And `\operatorname{name}` if `name` isn't already a defined operator macro.

Or `\DeclareMathOperator{\name}{value}` to declare a new operator.


While I read this, I kind of assumed that this was written by a math-enthusiastic 9th grader and it felt wonderful! Write down your new-found math skills, that will make you remember them even better for the future and also trains writing skills!

Then I learned that he'd been out of school for quite a few years and is a programmer now. That makes me pretty sad. Is it really such a novel insight to learn about the relationship between exponentiation, roots, and logarithms that you need to write a blog post? Isn't this still basic school math that everybody doing anything even remotely related to math should know by heart? (Programming definitely counts.) How do you even do things like basic finance, interest rates, or inflation without grokking this?

It makes me worried not just about the future of society in general, but about our industry in particular. I'd feel uncomfortable working at a place where any of my coworkers would feel compelled to write a blog post about the relationship between exponentiation and logarithm that explains that exponentiation is not commutative.



Curiously, in Russian the name for a "slide rule" is in fact a "logarithmic rule".


That sounds much more sophisticated than the Swedish word: counting stick.


Tried to create a slide rule UI/object/thing for the web. My knowledge of the web is limited; I got as far as this:

https://github.com/EngineersNeedArt/SlideRule


Yeah, I think they’re pretty cool, too. I didn’t include them to avoid going too much into history …


You might want to give this more thought -- on reading the piece, my take-away was, "this guy is a half-step away from inventing the slide rule".

IMO, what the slide rule does is replace the memorization of squares and roots, or of logarithms of digits. I've never used a slide rule professionally, but I've noodled around with a few (I inherited a fancy one from my grandfather), and it really forces you to think about scaling and precision in a way that's different from both calculator and pencil-and-paper arithmetic. The idea of computing the mantissa, then using logical rules to infer the order of magnitude, is central. I think that's the through-line of the article, too, or at least it seemed so to me.


Yes, I see what you mean. To me, doing those sorts of calculations by hand is just really fun. I agree with you about the importance of applying logical rules in this way. Thanks for this suggestion, maybe I'm going to add a paragraph about the slide rule as an afterthought later on :D


I think it's fun to know how to compute log2(x) or log10(x) using a 4-function calculator.

Log2 is a matter of dividing by two (and adding 1 to the log2 value) until X is in the range [1,2) (or multiplying by 2 and subtracting 1 if X starts below 1). Then, to handle the fraction: square the number; if it's 2 or above, divide it by two and append a 1 to the binary fraction; if not, append a 0... and continue.

Log10 is similar... divide by 10 (adding 1 each time) until you're in the [1,10) range... then take X to the 10th power (square, square, multiply by X, then square), count the number of divisions by 10 needed to bring the result back into [1,10), append that count as the next digit after the decimal point, and repeat.
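
A minimal Python sketch of the base-2 procedure above (the function name and the `digits` parameter are illustrative, not from the comment; log10 works the same way with x^10 and decimal digits):

    import math  # only used to sanity-check the result

    def log2_4func(x, digits=16):
        # Uses only multiply/divide/add/subtract, like a 4-function calculator.
        assert x > 0
        log = 0
        # Normalize x into [1, 2), tracking the integer part of the log.
        while x >= 2:
            x /= 2
            log += 1
        while x < 1:
            x *= 2
            log -= 1
        # Squaring x doubles its log, exposing one binary digit at a time.
        place = 0.5
        for _ in range(digits):
            x *= x
            if x >= 2:
                x /= 2
                log += place
            place /= 2
        return log

    print(log2_4func(10), math.log2(10))  # ~3.3219 vs 3.32193...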


Also, if you know how to compute the log10 of a value, you can multiply it by 3.3 to get the log2 value! Or by 2.3 to get log e.


You can in general convert any log base a to log base b by multiplying it by a constant factor of log base b of a.
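
That constant factor is exactly where the 3.3 and 2.3 above come from. The change-of-base identity, in LaTeX (standard, not from the thread):

    \log_b x = \log_a x \cdot \log_b a = \frac{\log_a x}{\log_a b}

With a = 10 and b = 2 the factor is \log_2 10 \approx 3.32; with b = e it is \ln 10 \approx 2.30.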


Why not start with a simple graphical explanation on the 2D plane? E.g. feed this to the chatbot: "Logarithmic and exponential functions have distinct characteristics, and their behavior around the slope of a line defined as y=mx+b can be analyzed as follows..."

https://people.richland.edu/james/lecture/m116/logs/log2.gif

Graphical views of functions are more intuitive, especially if you're thinking continuously.


What does the line have to do with anything?


The easiest explanation I can conjure for logarithms is: roughly, the number of digits needed to write a number in a given base.
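
A quick Python check of that reading (with the standard off-by-one caveat: log10 counts digits minus one):

    import math

    # A positive integer n has floor(log10(n)) + 1 decimal digits.
    for n in (7, 42, 1000, 987654):
        print(n, len(str(n)), math.floor(math.log10(n)) + 1)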


It's just the inverse of the exponential function, that's all


In computer science, I read log as "levels". n log n is n times levels(n, 2). levels is the number of times n can be repeatedly halved until the result is less than 2. levels corresponds to the recursion depth when a problem is split into 2 smaller subproblems.

levels(n, 10) is approximately num_digits(n).

Knowing that levels(10, 2) is approximately 3.32, I know that representing 1000 (3 digits) needs 3 * 3.32, approximately 10 bits. 1 million (6 digits) needs 20 bits. 1 billion needs 30 bits. Bits are just binary digits.
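
A small Python sketch of this reading (`levels` is the parent's name for it, not a standard function):

    def levels(n, base):
        # How many times can n be divided by `base` before it drops below it?
        # For n >= 1 this equals floor(log_base(n)).
        count = 0
        while n >= base:
            n /= base
            count += 1
        return count

    print(levels(1_000_000, 2))   # 19, close to log2(1e6) ~ 19.93
    print(levels(1_000_000, 10))  # 6, one less than the 7-digit count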


One of the most common uses of logarithms that I encounter is digits. In that case I usually just write dig instead of log.

Another important case where the logarithm shows up: multiplication on (0,inf) and addition on (-inf,inf) have the same structure, and the logarithm creates an isomorphism between them. Usually it makes multiplication easier/more familiar. Vi Hart has an entertaining video about it [0] where she explains how she likes to smell numbers, and visualizes the exponential function on a line with values written over it, instead of as a 2D graph.

There are a lot of other cases where the logarithm may show up. I usually treat them separately. For example, if a logarithm shows up as the integral of 1/x, then chances are I'm better off thinking of it as the integral of 1/x, and I don't gain anything by thinking of it as the number of digits. That doesn't happen often.

[0]: https://youtu.be/N-7tcTIrers
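
Written out, the isomorphism above (a standard fact, in LaTeX): log turns products into sums, and its inverse exp turns sums back into products.

    \log \colon \big( (0,\infty), \cdot \big) \to (\mathbb{R}, +), \qquad
    \log(xy) = \log x + \log y, \qquad
    e^{a+b} = e^a e^b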


Linear values are "displacement metrics", exponents are "scale metrics".

It thus follows that power functions become "linear displacements" when log-log-transformed.

The logarithm should really be called a "scale-metric displacement-ifier" or "linearizer-for-quantities-that-have-an-absolute-zero" or something along those lines.
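
Concretely, that's the standard power-law identity (not from the comment): on log-log axes a power function is a straight line with slope k.

    y = c\,x^k \quad\Longrightarrow\quad \log y = \log c + k \log x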


Logarithms really should be called something like that…and they are! It’s Greek for “ratio number”.


That square root estimation is new to me and freaky accurate. The error bounces around but is generally decreasing. It's also always an underestimate. Here are the first six local maxima of the error:

    2: 0.0809
    6: 0.0495
    12: 0.0355
    20: 0.0277
    30: 0.0227
    42: 0.0192
And there's an obvious pattern there. Interesting stuff.
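
The maxima sit at 2, 6, 12, 20, 30, 42 = n(n+1). Those errors match estimating sqrt(x) by linear interpolation between consecutive perfect squares; here's a Python sketch reproducing the table under that assumption (TFA's exact formula isn't quoted in this thread):

    import math

    # Assumed estimate: for n^2 <= x <= (n+1)^2,
    #   sqrt(x) ~ n + (x - n^2) / (2n + 1),
    # a chord under the concave sqrt curve, hence always an underestimate.
    # The error peaks near the midpoint x = n(n + 1).
    for n in range(1, 7):
        x = n * (n + 1)
        estimate = n + (x - n * n) / (2 * n + 1)
        print(x, round(math.sqrt(x) - estimate, 4))
    # Prints the table above: 0.0809, 0.0495, 0.0355, 0.0277, 0.0227, 0.0192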


It comes from the first two terms of Newton's expansion of (1+x)^(1/2), the generalized binomial series. https://en.m.wikipedia.org/wiki/Binomial_series
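
Spelled out near a known square a^2 (first two terms, in LaTeX):

    \sqrt{a^2 + b} = a \left( 1 + \frac{b}{a^2} \right)^{1/2}
                  \approx a \left( 1 + \frac{b}{2a^2} \right)
                  = a + \frac{b}{2a}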


Thanks!


A potential heuristic for the optimal number of management layers in any company is the base-10 log of the total number of employees.

Assuming 1 manager oversees 10 people (then subtract 1 to drop the lowest, non-manager level of engineers).

Google 190,000 => ~4 levels of managers (5 - 1 = 4)

Apple 164,000 => ~4

etc.
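
A throwaway Python version of the heuristic (the rounding is my reading of the "~" above):

    import math

    # layers ~ round(log10(employees)) - 1, dropping the bottom
    # non-manager level, assuming each manager oversees ~10 people.
    for company, n in [("Google", 190_000), ("Apple", 164_000)]:
        print(company, round(math.log10(n)) - 1)  # both print 4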


The most underrated insight about logarithms is that for almost all practical applications, log2(n) is smaller than 32.

The difference between an O(1) and an O(log n) algorithm is usually not that big!


Depends on constant factors. The difference between an hour and a day is substantial.


The major problem with logarithms is that there's no verb for "to take the log of" analogous to "exponentiate".

Logarize?



"take the log of" is shorter than "exponentiate"




