Not sure if this way of comparing programming languages (i.e. do A in language X and do B in language Y => Y is not slower than X) makes enough sense to draw conclusions from. If you can use clever algorithms, in the majority of cases you'd do so in any language, and in the majority of cases C would lead to more performant code than Python. Then again, whether this matters in the scope of the actual application is another question.
I suspect his comment meant to imply that a level of skill exists for which a programmer could build an FFT in python, a plain DFT in C, but not a proper FFT in C. That programmer would benefit from using python.
Theoretically this is true, but in practice, those who need to write very efficient code competitively rarely use naive algorithms. (Pure C/C++ also isn't enough nowadays, though: the CPU isn't competitive with the GPU for a lot of algorithms, so CUDA/OpenCL needs to be used as well in most cases.)
I think the replies to this post are getting confused between the FFT, which computes the DFT in O(n log n), and the "naive Fourier transform", which is O(n^2).
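To make the distinction concrete, here's a sketch of both in plain Python (the function names and the radix-2 restriction are mine, not anything from the thread): the naive transform does n work for each of n outputs, while the Cooley-Tukey recursion halves the problem at each level.

```python
import cmath

def naive_dft(x):
    # Direct evaluation of X[j] = sum_k x[k] * exp(-2*pi*i*j*k/n): O(n^2).
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def fft(x):
    # Radix-2 Cooley-Tukey: O(n log n); len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

# Both compute the same transform; only the operation count differs.
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(naive_dft(x), fft(x)))
```

Library FFTs do much more than this (mixed radices, cache-aware layouts), which is part of the point being argued above.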
My experience with numerics in regular Python is that they're generally 50-500x slower than the equivalent in C/C++; this just pushes back the point at which the asymptotics take over.
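For anyone who wants to measure that gap on their own machine, here's a rough microbenchmark sketch (the exact ratio depends heavily on hardware and workload; a dot product stands in for "numerics", and compiled NumPy stands in for C):

```python
import time
import numpy as np

n = 100_000
a, b = np.random.rand(n), np.random.rand(n)
al, bl = a.tolist(), b.tolist()

t0 = time.perf_counter()
r_py = sum(x * y for x, y in zip(al, bl))  # interpreted: one bytecode loop pass per element
t_py = time.perf_counter() - t0

t0 = time.perf_counter()
r_np = float(a @ b)                        # delegated to compiled C/BLAS code
t_np = time.perf_counter() - t0

print(f"pure Python: {t_py * 1e3:.2f} ms, NumPy: {t_np * 1e3:.3f} ms")
# Same result either way, up to float rounding.
assert abs(r_py - r_np) <= 1e-9 * abs(r_np)
```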
My point is that a good algorithm, a good strategy can give you a several orders of magnitude speedup, more than the speed differences among languages. If a higher level language makes it easier and faster to develop good algorithms, then you should use that. After that, if you have time or really need it, you can re-implement it in hand-coded assembly, or even in hardware. But you generally don't have the time and don't really need it.
Because the FFT in python is almost certainly implemented in C, and probably more cleverly done than a naive FFT that I/you/someone would whip up as part of a C program.
Is this a joke? Something written with the same algorithm in C will be faster than Python. Why not use someone else's non-naive FFT impl in C as well, then?
I think the argument isn't that Python code outperforms C code, it's that code written by mediocre Python programmers often outperforms code written by mediocre C programmers. C code is fast enough that mediocre programmers get used to letting the language bail them out. Python programmers know that their language is slow and that they have to work around it.
I've encountered this several times in my own career. A co-worker who writes in C will be implementing a process in parallel with my Python implementation. A week later, my O(N) Python code is outperforming my colleague's O(N^3) C code, since I chose a more complex algorithm which is trickier to get right. The C programmer then re-implements my method in C, which would completely trounce my own code, except I've spent that time leaning on BLAS and LAPACK, speeding up my operations again. The C programmer then starts using fast libraries instead of her own code, again beating my old source, only to find that I've now pushed a good chunk of the processing onto the GPU.
Eventually, I will run out of tricks. The final draft of the C code will trounce the final draft of my Python code. However, during most of the creation process, my Python usually outperforms their C. Also, a truly talented C programmer would write my colleague's final draft as her first draft, negating every advantage that I had in the process. However, that's not a situation that I'm likely to run into, because places hiring truly talented programmers aren't likely to be hiring me.
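The "leaning on BLAS and LAPACK" step is mostly delegation: NumPy's `@` hands matrix multiplication to a compiled BLAS kernel, so the same arithmetic as a Python triple loop runs orders of magnitude faster. A minimal sketch of what's being compared (sizes arbitrary, `matmul_loops` is my own illustration):

```python
import numpy as np

def matmul_loops(a, b):
    # Naive O(n^3) matrix multiply, executed entirely in Python bytecode.
    n, p, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for k in range(p):
                s += a[i][k] * b[k][j]
            out[i][j] = s
    return out

a = np.random.rand(48, 48)
b = np.random.rand(48, 48)
# Same arithmetic, but `@` dispatches to an optimized BLAS gemm routine.
assert np.allclose(matmul_loops(a.tolist(), b.tolist()), a @ b)
```

The asymptotic complexity is identical; the constant factor is what the library buys you, which is exactly the dynamic described above.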
Makes literally no sense. The C developer would simply use the same C/Fortran library the Python implementation is based on. You are creating a false dichotomy for the sake of it.
The C developer should use the same C/Fortran library that the Python implementation is based on. A good C developer would use that library. In my experience, mediocre C developers will not use that library and will instead implement their own naive version.