Hacker News
An animated introduction to the Fourier Transform [video] (youtube.com)
287 points by e0m on Jan 28, 2018 | 31 comments



3blue1brown’s videos are excellent. They build intuition in a calm and friendly way with an appropriate amount of useful animation. This is how we make mathematics accessible.

I’m currently considering moving back into academia, and there are a lot of topics in my field that I know students often struggle with that would be greatly helped by some simple animations. Fortunately I’m pretty competent with Blender, and I relish the idea of developing something worthwhile.


Do you have any links showing how to do that with Blender?


Anything in particular? I was thinking of things like viscosity and stress analysis for fluid mechanics. It would be easy enough to animate a stress tensor and show how each term behaves when prodded. Similarly, the basic concepts behind laminar boundary layers would be equally straightforward.

Mimicking 3b1b’s style would be trickier since he uses a lot of 2D plots. Of course, you can run Python directly from Blender, so you never know.


He has an entire series on "the essence of linear algebra". I'm a PhD student in a technical field, and I rewatch that series at least once a year. It's brilliantly accessible, clear, and visually explained. I recommend the series to anyone who asks me about anything to do with matrix operations.



He touches on it, but I’d love to see an intuitive explanation of why the response of each frequency to the input function is linearly independent, i.e. the fact that the Fourier transform of a sum is equal to the sum of the Fourier transforms. This is “why it works”; it’s what makes the frequency space an orthonormal basis, but it’s never been intuitively obvious to me. Otherwise, there would be more than one way of decomposing a function into a superposition. E.g., what would be useful is to give an example of a set of functions which are not linearly independent.


It actually follows from the 'centre of mass' explanation. If you take the centre of mass, as described, of f + g, you get the centre of mass of f plus the centre of mass of g. One way to explain this is to just say the + can be moved out of the integral.

Alternatively, consider the centre of mass only over the horizontal axis. Now say we only look at the 'contribution' of f + g at time t (ignoring the issues of that contribution being infinitesimal). That contribution is (f(t) + g(t)) * sin(theta), where theta is the angle of our point. Clearly this equals f(t) * sin(theta) + g(t) * sin(theta). These are the separate contributions of f(t) and g(t). The same argument holds for the centre of mass over the vertical axis (replacing sin with cos).

If we were to make the alternative explanation formal, we get back to the + being able to move outside the integral. Note that our decomposition into the horizontal and vertical part is an alternative way to do the Fourier transform without complex numbers. The vertical part here is essentially the imaginary part of the Fourier transform.
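The + moving outside the integral is easy to check numerically. Here is a quick sketch (my own illustration, not from the thread) using the discrete transform on random signals:

```python
import numpy as np

# Linearity of the Fourier transform, checked numerically with the DFT:
# the transform of a sum equals the sum of the transforms, because the
# "+" moves outside the integral (here, outside the sum).
rng = np.random.default_rng(0)
f = rng.standard_normal(256)
g = rng.standard_normal(256)

F = np.fft.fft(f)
G = np.fft.fft(g)
FG = np.fft.fft(f + g)

assert np.allclose(FG, F + G)
```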


Take your wrapping function from t1 to t2, at a winding frequency that doesn't match the signal frequency ƒ, then take the limit as t1/t2 goes to -∞/+∞. As your window gets longer, more cycles of the oscillation "cancel out", moving the centre of mass towards 0+0i. This means the peak around ƒ narrows and rises. At ∞, the peak at ƒ becomes infinitely narrow and infinitely high (a Dirac delta*).

This is also why peaks on an FFT have finite width (a sinc shape, for a rectangular window), and get sharper as the FFT window is increased.

* For cosine, technically there is a peak at -ƒ too. This is because a real cosine signal is ambiguous as to whether it is "moving forward or backwards in time", hence it has a peak at +/-ƒ. A complex exponential (a helix through time) has chirality due to its real and imaginary components, so it has a single peak at ƒ. And if you take a +ƒ (left-handed) and a -ƒ (right-handed) helix and add them, the complex part cancels out, leaving only a real "up and down" wave.
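The peak-narrowing effect is easy to demonstrate. A sketch (my own, with arbitrary example parameters): take a pure tone, window it for 1 s and for 4 s, zero-pad both to the same FFT size, and count the bins in the main lobe above half power. The longer window gives a proportionally narrower peak.

```python
import numpy as np

fs, f0 = 128.0, 5.0   # sample rate and tone frequency (Hz), arbitrary choices
nfft = 4096           # zero-pad both windows to the same FFT size for comparison

def mainlobe_bins(seconds):
    """Count FFT bins above half power around the peak of a windowed tone."""
    n = int(fs * seconds)
    t = np.arange(n) / fs
    spec = np.abs(np.fft.rfft(np.cos(2 * np.pi * f0 * t), nfft))
    return int(np.sum(spec > 0.5 * spec.max()))

short, long_ = mainlobe_bins(1.0), mainlobe_bins(4.0)
assert long_ < short  # a 4x longer window gives a markedly narrower peak
```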


The orthogonality essentially follows from (1) nonzero integer-frequency complex sinusoids have an average value of zero over [0,2π], and (2) if you multiply two distinct integer-frequency complex sinusoids, you get another integer-frequency complex sinusoid. I'm not sure that this is any more intuitive.
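Both facts, and the orthogonality they imply, can be checked numerically. A sketch (my own illustration, with arbitrarily chosen frequencies):

```python
import numpy as np

# Sample one full period [0, 2*pi) finely enough that sums approximate integrals.
t = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
dt = t[1] - t[0]

def sinusoid(k):
    """Integer-frequency complex sinusoid e^{ikt}."""
    return np.exp(1j * k * t)

# (1) a nonzero integer frequency averages to zero over a full period
assert abs(np.sum(sinusoid(3)) * dt) < 1e-9

# (2) the product of two sinusoids is the sinusoid at the summed frequency
assert np.allclose(sinusoid(2) * sinusoid(5), sinusoid(7))

# Hence distinct frequencies are orthogonal: the inner product uses the
# conjugate, giving a nonzero-frequency sinusoid that averages to zero.
inner = np.sum(sinusoid(2) * np.conj(sinusoid(5))) * dt
assert abs(inner) < 1e-9
```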

> what would be useful is to give an example of a set of functions which are not linearly independent.

See [1] and [2] for example, which (I believe) have applications in compressed sensing and dictionary learning.

[1]: https://en.wikipedia.org/wiki/Frame_(linear_algebra)

[2]: https://en.wikipedia.org/wiki/Overcompleteness


> The orthogonality essentially follows from (1) nonzero integer-frequency complex sinusoids have an average value of zero over [0,2π], and (2) if you multiply two distinct integer-frequency complex sinusoids, you get another integer-frequency complex sinusoid. I'm not sure that this is any more intuitive.

I think these kinds of explanations are hilariously pointless. And I don't mean to disparage, because you're just trying to answer the OP's question, but all you've done is restate the proof in English: what you've just said is that the inner product of distinct basis functions is 0. Well, yes, of course; that's the definition of orthogonal.


Here's how I think about it.

You can play the individual notes of a chord on one piano or several, but they still come together to produce the same chorus of frequencies.

The Fourier series representation of a waveform is itself a sum, since trigonometric functions are waveforms as well. Thus, combining waveforms and then decomposing should work out, because a sum of two sums retains the properties of addition.


BetterExplained (Kalid Azad) has a good written article that covers the Fourier transform in a similar manner to the 3Blue1Brown video: https://betterexplained.com/articles/an-interactive-guide-to...

I have an article explaining step by step how to implement code for the discrete version of the Fourier transform: https://www.nayuki.io/page/how-to-implement-the-discrete-fou...
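For readers who want the shape of the discrete version without clicking through: here is a minimal O(n²) DFT straight from the definition (a generic sketch, not the code from the linked article):

```python
import cmath

def dft(x):
    """Discrete Fourier transform from the definition:
    X[k] = sum_n x[n] * e^{-2*pi*i*k*n/N}."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n))
            for k in range(n)]

# A pure tone at frequency 1 puts all its energy in bin 1:
spectrum = dft([cmath.exp(2j * cmath.pi * t / 8) for t in range(8)])
assert abs(spectrum[1] - 8) < 1e-9
```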


I’ll just leave this here

http://tomlr.free.fr/Math%E9matiques/Math%20Complete/Analysi...

Mathematics of the discrete Fourier Transform by Julius O. Smith. (O stands for Orange I hope)



I really wish this stuff existed when I was learning about FFTs - this video describes the theory far better and in far less time than my broken-english college professors ever could.


Sound waves don't add up linearly. However, linearity is a good enough idealization for many uses.

Fourier analysis is also approachable from the discrete setting of finite vectors instead of functions, where the Fourier transform is just an orthogonal (orthonormal, when sanely normalized) linear map, i.e. it acts by matrix multiplication and is represented by that matrix.

This, appropriately extended to the continuous setting, leads to the Fourier transform on functions, and also gives intuition for why the Fourier transform uses integrals.


This one is related and (I think) quite good:

https://www.youtube.com/watch?v=r18Gi8lSkfM


I think it's much easier and more direct to visualize the time-domain as superposition of helical components and the transform as an exploration of what happens when you twist the "cylinder" with varying "intensities". You avoid the vague center-of-mass spike depicted here and start from the get-go with the terms of the transform.


> I think it's much easier and more direct to visualize the time-domain as superposition of helical components and the transform as an exploration of what happens when you twist the "cylinder" with varying "intensities".

That doesn't sound very clear at all to me.

> You avoid the vague center-of-mass spike depicted here and start from the get-go with the terms of the transform.

The center-of-mass spike is the result of summing across all the different complex points/vectors; this is stated very clearly by the FT formula: an integral over points on a circle (e^{-2πiξt}) weighted by the signal strength (f(t)). Seems very explicit to me.
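The center-of-mass picture from the video is straightforward to reproduce numerically. A sketch (my own illustration, with an arbitrary 3 Hz example tone): wrap the signal around the circle at various winding frequencies; the magnitude of the center of mass spikes when the winding frequency matches the signal frequency.

```python
import numpy as np

# One second of a 3 Hz cosine, finely sampled.
t = np.linspace(0, 1, 2048, endpoint=False)
f = np.cos(2 * np.pi * 3 * t)

def center_of_mass(w):
    """Mean of f(t) * e^{-2*pi*i*w*t}: the wrapped signal's center of mass."""
    return np.mean(f * np.exp(-2j * np.pi * w * t))

masses = {w: abs(center_of_mass(w)) for w in range(1, 7)}
assert max(masses, key=masses.get) == 3  # the spike sits at 3 Hz
```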


Perhaps you can explain what you mean by “exploration of what happens” and “terms of the transformation”? That’s pretty vague as a description of a visualization.

Maybe you’re talking about visualizing a discrete Fourier transform?


Yep, it's vague, sorry, I'll need to try my hand at a video.


The source code 3Blue1Brown uses for animations is open source.


In fMRI data, we refer to the frequency space of volumetric image data as k-space.

I would like a general term for the frequency space of a signal that avoids the word `frequency`. This is because `frequency` is also used when describing histograms in general image processing, and is in general an overloaded term.

Any established words or phrases in the literature? Any tips?


K-Space is sufficiently general, because "k" is defined as the wavenumber (2pi over wavelength). That's simply related to frequency in nearly every case unless your medium is interstellar hydrogen or shockwaves in air or something. If you need to talk about frequency specifically you could talk about the period (inversely proportional) or the angular frequency (factor of 2pi).
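The relations mentioned here are just fixed conversions. A quick sketch (my own helper names, purely illustrative):

```python
import math

def wavenumber(wavelength):
    """k = 2*pi / wavelength (rad per unit length)."""
    return 2 * math.pi / wavelength

def angular_frequency(f):
    """omega = 2*pi*f (rad/s); the period is T = 1/f."""
    return 2 * math.pi * f

# e.g. a 2 m wavelength gives k = pi rad/m
assert abs(wavenumber(2.0) - math.pi) < 1e-12
```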


This Fourier transform simulation example from Shadertoy is good: https://www.shadertoy.com/view/ltKSWD



The link kills my MacBook Pro. Makes the system unresponsive.


How do you animate something like this?


He wrote his own tools in Python to achieve this (repo: https://github.com/3b1b/manim).

FAQ: http://www.3blue1brown.com/about/


Thanks!




