I think the argument you're making is compelling and interesting, but my two concerns with this are: 1) how does it affect compile time? and 2) how easy is it to make major structural changes to an algorithm?
I haven't tried Rust, but my worry is that the extensive compile-time checks would make quick refactors difficult. When I work on numerical algorithms, I often want to try many different approaches to the same problem until I hit on something with the right "performance envelope". And usually memory safety just isn't that hard... the data structures aren't that complicated...
Basically, I worry the extra labor involved in making Rust code compile would hurt prototyping velocity.
On the other hand, what you're saying about compiling everything together at once, proving more about what is being compiled, enabling a broader set of performance optimizations to take place... That is potentially very compelling and worth exploring if the gains are big. Do you have any idea how big? :)
This is also a bit reminiscent of the compile time issues with Eigen... If I have to recompile my dense QR decomposition (which never changes) every time I compile my code because it's inlined in C++ (or "blobbed together" in Rust), then I waste that compile time every single time I rebuild... Is that worth it for a 30% speedup? Maybe... Maybe not... Really depends on what the code is for.
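To make that concern concrete, here's a toy Rust sketch (not Eigen, not a real QR; names are made up): a generic kernel gets monomorphized into every downstream crate that calls it, so the caller re-pays its compile time on each rebuild, much like an inlined header-only C++ template, whereas a concrete non-generic wrapper is compiled once in its own crate and only linked.

```rust
// Generic kernel: machine code is generated (monomorphized) inside each
// crate that instantiates it, so rebuilding the caller recompiles this too.
pub fn axpy<T>(alpha: T, x: &[T], y: &mut [T])
where
    T: Copy + std::ops::Mul<Output = T> + std::ops::Add<Output = T>,
{
    for (yi, &xi) in y.iter_mut().zip(x) {
        *yi = alpha * xi + *yi;
    }
}

// Concrete wrapper: compiled once in the crate that defines it; callers only
// link against it, at the cost of a cross-crate inlining barrier unless LTO
// is enabled.
pub fn axpy_f64(alpha: f64, x: &[f64], y: &mut [f64]) {
    axpy(alpha, x, y)
}
```

Whether the speedup survives that boundary then depends on LTO, which is exactly the same compile-time trade-off again.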
If the code is split into sufficiently small crates, compile times aren't a big deal for iteration. There's also the faster development build, and I'd expect most of the time to be spent running the benchmark and checking perf to look at processor usage, dwarfing any time needed for compilation.
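Roughly what I mean (a hypothetical layout, crate names made up): keep the stable numerics in their own workspace member so cargo caches it, and only the crate you're actually editing gets rebuilt; use the dev build while iterating and the release build when measuring the performance envelope.

```
workspace/
├── Cargo.toml      # [workspace] members = ["qr", "prototype"]
├── qr/             # dense QR etc.; rarely changes, compiled once and cached
└── prototype/      # the algorithm being iterated on; depends on qr

cargo build              # fast unoptimized dev build while iterating
cargo build --release    # optimized build when benchmarking
```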