Researchers thought this was a bug (Borwein integrals) [video] (youtube.com)
110 points by andersource on Nov 16, 2022 | 18 comments



For those who'd like to read a quick description of what this video is about:

It shows a sequence of integrals following a very simple pattern. The first seven integrals in the sequence all evaluate to pi. The eighth integral inexplicably evaluates to pi - 0.0000000000462... and from that point on the pattern deviates from pi.
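Concretely, these are the Borwein integrals from the title (writing sinc(x) = sin(x)/x, with all integrals over the whole real line):

    ∫ sinc(x) dx                                       = pi
    ∫ sinc(x) sinc(x/3) dx                             = pi
    ...
    ∫ sinc(x) sinc(x/3) ... sinc(x/13) dx              = pi
    ∫ sinc(x) sinc(x/3) ... sinc(x/13) sinc(x/15) dx   = pi - 0.0000000000462...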

The video goes on to explain how such a seemingly perfect pattern can suddenly break by relating this sequence of integrands to a simpler one where it's easier to see what's happening.


One-line summary: look at the Fourier transform of the integrand, fhat(omega); then the integral is just fhat(0). The pattern is to multiply the integrand by sinc(x/k) for increasing k, and under the Fourier transform this becomes (up to constants) k·rect(k·omega), a rect of width 2/k. Since multiplication in the time domain is convolution in the Fourier domain, convolving the rect with narrower and narrower rects keeps eroding its edges, until eventually the erosion reaches the center and fhat(0) dips.
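For anyone who wants to poke at that erosion numerically, here's a rough numpy sketch of the Fourier-side picture (my own toy, so take the last digits with a grain of salt): start from the indicator of [-1, 1] (the FT of sinc up to constants, so that pi times the value at 0 is the integral) and repeatedly apply moving averages of half-width 1/3, 1/5, ..., i.e. convolve with the normalized rects. The value at 0 sits at 1 through the 1/13 window and drops by roughly 1e-11 once the 1/15 window is applied; the grid is too coarse to nail the exact deficit, but the break in the pattern is visible.

    import numpy as np

    # frequency grid; the support never grows past |omega| ~ 2.02, so [-2.5, 2.5] is enough
    N = 50001
    omega = np.linspace(-2.5, 2.5, N)
    dx = omega[1] - omega[0]
    center = N // 2                                  # index of omega = 0

    f = (np.abs(omega) <= 1).astype(float)           # FT of sinc(x), up to constants: rect of half-width 1

    for n in range(3, 17, 2):                        # multiplying by sinc(x/n) <-> averaging over half-width 1/n
        m = int(round(1 / (n * dx)))                 # window half-width in samples
        kernel = np.ones(2 * m + 1) / (2 * m + 1)    # normalized rect = moving average
        f = np.convolve(f, kernel, mode='same')
        print(f"after the 1/{n} window: value at omega = 0 is {f[center]:.13f}")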


I understood electronic differentiators and integrators long before we did calculus in high school and I've always had more of a practical than mathematical leaning with that sort of stuff.

I guess what you're saying here is that if I pass a square wave through a lowpass filter - reducing the amplitude of the harmonics and rounding off the corners - then the peak amplitude will stay pretty much constant until I pull the cutoff of the filter down sufficiently close to the fundamental that it starts getting attenuated too.

Makes sense I guess.


> pass a square wave through a lowpass filter

I suppose: in the original problem we want to see what a sinc multiplied by a sinc looks like (in the time domain), or a rect convolved with a rect (analyzing in the Fourier domain). Also, the width of the rects we're convolving with shrinks each time.

As you mentioned, to see what a rect convolved with a rect looks like, I think you can treat convolution with a rect as a lowpass filter (not to be confused with convolving with a sinc, which gives an "ideal" lowpass filter), and this gives intuition for why the erosion occurs.

What's not clear a priori to me is that the erosion will indeed actually reach the center. I think this depends on how fast the rect you're convolving with is shrinking; if it shrank faster than {1, 1/3, 1/5, ...} then I don't think it would. I guess the easiest way to see this visually is to use the sliding method Grant showed, where the width of the y=1 plateau after convolution is the overlapping width of the two rects. Thus we get 1, 1 - 1/3, 1 - 1/3 - 1/5, ..., which eventually drops below 0.
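For concreteness, here's that running plateau half-width as exact fractions (just the arithmetic described above, nothing more); it first goes negative at the 1/15 term, which is exactly where the integral stops being pi:

    from fractions import Fraction

    half_width = Fraction(1)
    for n in range(3, 17, 2):                  # subtract the half-widths 1/3, 1/5, ..., 1/15
        half_width -= Fraction(1, n)
        print(f"after 1/{n}: remaining half-width = {float(half_width):+.4f}")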


Don't you mean convolution in the time domain is multiplication in the "Fourier" domain?


Both are true; the situation is pretty much symmetric, since the Fourier transform is almost the same as the inverse Fourier transform, except for a sign change in the exponent and a constant factor.
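With one common normalization convention:

    fhat(omega) = ∫ f(x) e^{-i omega x} dx
    f(x) = (1/2pi) ∫ fhat(omega) e^{+i omega x} d omega

so the two directions differ only by the sign in the exponent and the 1/2pi factor, which is why the multiplication/convolution duality works both ways.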


I’m shocked how succinctly you were able to summarize the ideas here. Bravo. Any chance you have a blog?


I'd like to know what function this converges to:

rect(x) • rect(x/2) • rect(x/4) • ...

Where • is the convolution operator.

Unlike the series in the video, 1 + 1/2 + 1/4 + ... converges. So this function has compact support, and the value at 0 does not dip.

I expect it to be a https://en.m.wikipedia.org/wiki/Bump_function


Do you mean `rect(x) • 2 rect(2x) • 4 rect(4x) • ...`, both so that the limiting function is a Dirac delta and the convolutions remain area preserving? Otherwise, since the functions you're convolving with keep getting wider, wouldn't the result also keep getting wider, so the support would no longer be finite?

It might be possible to get a closed-form solution via an approach like [1]. (Out of curiosity, for a rect repeatedly convolved with itself, the intermediate results seem to be known as B-splines: https://www.chebfun.org/examples/approx/BSplineConv.html; a quick numerical demo is below.)

[1] https://math.stackexchange.com/questions/1254392/the-maximum...
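Here's that quick demo (my own sketch, just to illustrate the B-spline fact): repeatedly convolving a unit box with itself gives the triangle, then the quadratic and cubic B-splines, with the support widening by 1 each time and the peak dropping toward the familiar bell shape.

    import numpy as np

    dx = 1e-3
    x = np.linspace(-3, 3, 6001)
    box = (np.abs(x) <= 0.5).astype(float)     # unit box: width 1, height 1
    f = box.copy()

    for k in range(2, 5):                      # convolve 2, 3, 4 boxes together
        f = np.convolve(f, box, mode='same') * dx
        width = np.sum(f > 1e-9) * dx
        # exact peaks are 1, 3/4, 2/3 for the triangle, quadratic and cubic B-splines
        print(f"{k} boxes: support width ~ {width:.2f}, peak ~ {f.max():.3f}")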


Yes that is what I meant, thanks!


I guess it's a Gaussian, by the central limit theorem.


No, it can't be. This function has compact support, the Gaussian clearly does not.

You get a Gaussian by repeated convolution of the same function (and normalizing the width).

The equivalent question to what I asked is: what's the pdf of X1 + X2/2 + X3/4 + ..., where the Xi are iid unit uniforms?
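Out of curiosity, here's a quick Monte Carlo sketch of that density (my own toy; I used uniforms centered on [-1/2, 1/2] so the picture is symmetric about 0). The histogram vanishes outside [-1, 1] and tapers smoothly at the ends, consistent with the bump-function guess upthread:

    import numpy as np

    rng = np.random.default_rng(0)
    n_terms, n_samples = 40, 1_000_000
    # X_i iid uniform on [-1/2, 1/2]; then X_1 + X_2/2 + X_3/4 + ... lands in [-1, 1]
    total = sum(rng.uniform(-0.5, 0.5, n_samples) / 2**k for k in range(n_terms))

    hist, edges = np.histogram(total, bins=44, range=(-1.1, 1.1), density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    for c, h in zip(centers[::2], hist[::2]):
        print(f"{c:+.2f}  {'#' * int(round(40 * h))}")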


I’m waiting for the convolution video he promises in the video and the comments.



I love this.

I sometimes regret not studying pure math in college, and going down the software engineer (ahem, code monkey) route. There's so much mathematical beauty out there to be discovered and admired.

But I guess money's better this way.


I went the applied maths route, which had a lot of pure math, yet I'm a code monkey anyway. There is beauty in maths, but while I was able to graduate, I'm too dumb to advance the field, and I'm ok with that.


I love all the videos on this channel. The partial differential equations tour is incredible too.


Grant Sanderson redeems the entire Internet.



