What is modern Fortran good at? It seems like Fortran's niche has been scientific computing, specifically high performance linear algebra. (I wonder if that makes Fortran good for neural networks.)
In college, I remember attending a great talk by a visiting Stanford compiler prof. who talked about being able to produce faster C++ code by transpiling to Fortran first than by optimizing the C++ or IR directly. Not sure if that's still true, but at the time it seemed Fortran was a more restrictive language which permitted stronger optimization.
> Not sure if that's still true, but at the time it seemed Fortran was a more restrictive language which permitted stronger optimization.
Fortran forbids aliasing of procedure arguments, so the compiler can assume arrays passed to a subroutine don't overlap. In C (and via compiler extensions in C++) you have to manually use `restrict` to give the same information to the compiler, which is a trickier process than in Fortran. Rust inherits this benefit from Fortran via its borrow rules, but last I checked they had all related optimizations disabled (and most numerical-computing-friendly features in Rust tend to be on the back burner).
That was due to a bug in LLVM when noalias was used in conjunction with LLVM’s stack unwinding facilities. It’s since been re-enabled now that the upstream bug has been fixed: https://github.com/rust-lang/rust/pull/50744
> most numerical computing friendly features in Rust tend to be on the back burner
Yes and no, we’ve been working on underlying stuff that will be needed in order to ship those features, and while a roadmap for next year has not been decided, most believe it will be a major component.
Definitely still scientific computing ("formula translator" after all). In particular it's still frequently used in the supercomputing/HPC space for large parallel numerical simulation codes.
The issue with optimizability in C is mostly solved by the C99 `restrict` keyword, but Fortran compilers are still very good for numerical work.
In practical terms: what kind of performance difference would one see between optimized C(++) and optimized Fortran when running exactly the same algorithm?
According to the Julia benchmarks (and I assume Julia folks are relatively unbiased re: C vs Fortran), it depends on the algorithm [1], but less than an order of magnitude in either direction. I'm kinda surprised Fortran doesn't do better there actually -- but at this level it probably also depends on compiler and architecture.
That's why we still use it at my work (CFD). It's easier for a scientist who is not a software engineer to produce fast matrix/linear algebra code in Fortran than in C++.
This has been my experience. Although I know other languages, when I go to write a solver for CFD, radiation transport, heat transfer, etc., I normally go with Open MPI and Fortran. For me, Fortran is almost like writing pseudocode.
Fortran produces efficient out-of-the-box code for dealing with multidimensional arrays. That said, most hand-written fortran doesn't compete with an optimized BLAS or tensor library using CPU-tuned intrinsics. For GPUs, hand-optimized libraries in CUDA rule and those are generally called from C++. Hence why you see deep learning generally in C++ (wrapped in python).
I would love to see a specific example problem (I assume it would be a numerical / linear algebra problem) that is well suited for Fortran. I want to get to the bottom of where the speed / accuracy comes from with a Fortran compiler.