
Nice result! Arnoldi is a beautiful algorithm, and this is a good application of it.

What are you using this for and why are you working on it?

I admit I'm not personally convinced of the value of Rust in numerics, but that's just me, I guess...


Hi there, thanks! I started doing this for a university exam and got carried away a bit.

Regarding Rust for numerical linear algebra, I kinda agree with you. I think that, theoretically, it's a great language for writing low-level "high-performance mathematics." That's why I chose it in the first place.

The real wall is that the past four decades of research in this area have primarily been conducted in C and Fortran, making it challenging for other languages to catch up without relying heavily on BLAS/LAPACK and similar libraries.

I'm starting to notice that more people are trying to move to Rust for this stuff, so it's worth keeping an eye on libraries like the one I used, faer.


Nice. I'd be curious to see if this has already been done in the literature. It is a very nice and useful result, but it's also kind of an obvious one---so I have to assume people who work on computing matrix functions are aware of it... (This is not to take anything away from the hard work you've done! You may just appreciate having a reference to any existing work that is already out there.)

Of course, what you're doing depends on the matrix being Hermitian, which reduces the upper Hessenberg matrix in the Arnoldi iteration to tridiagonal form. Trying to do a similar streaming computation on a general matrix is going to run into problems.
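To make that concrete, here's a rough sketch of a single step (my own illustrative Rust, not from your post; matvec is a stand-in for however you apply the operator): for Hermitian A, Arnoldi's full Gram-Schmidt loop collapses to the Lanczos three-term recurrence, so only the two most recent basis vectors need to stay live:

    // One Lanczos step for Hermitian (here: real symmetric) A. Unlike
    // Arnoldi, which orthogonalises w against the entire stored basis,
    // only v_prev and v are needed.
    fn lanczos_step(
        matvec: &dyn Fn(&[f64]) -> Vec<f64>, // w = A v
        v_prev: &[f64],                      // v_{j-1}
        v: &[f64],                           // v_j
        beta: f64,                           // beta_j
    ) -> (Vec<f64>, f64, f64) {
        let mut w = matvec(v);
        let alpha: f64 = w.iter().zip(v).map(|(a, b)| a * b).sum(); // alpha_j = v_j^T A v_j
        for ((wi, vi), pi) in w.iter_mut().zip(v).zip(v_prev) {
            *wi -= alpha * vi + beta * pi;   // w -= alpha_j v_j + beta_j v_{j-1}
        }
        let beta_next = w.iter().map(|x| x * x).sum::<f64>().sqrt(); // beta_{j+1} = ||w||
        if beta_next > 0.0 {
            for wi in w.iter_mut() { *wi /= beta_next; }             // v_{j+1} = w / beta_{j+1}
        }
        (w, alpha, beta_next)
    }

With a general matrix you'd have to keep every previous basis vector around to orthogonalise against, which is exactly what breaks the streaming property.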

That said... one area of numerical linear algebra research which is very active is randomized numerical linear algebra. There is a paper by Nakatsukasa and Tropp ("Fast and accurate randomized algorithms for linear systems and eigenvalue problems") which presents some randomized algorithms, including a "randomized GMRES" which IIRC is compatible with streaming. You might find it interesting trying to adapt the machinery this algorithm is built on to the problem you're working on.

As for Rust, having done a lot of this research myself... there is no problem relying on BLAS or LAPACK, and I'm not sure this could be called a "wall". There are also many alternative libraries actively being worked on. BLIS, FLAME, and MAGMA are examples that come to mind... but there are so many more. Obviously Eigen is also available in C++. So, I'm not sure this alone justifies using Rust... Of course, use it if you like it. :)


Sorry for the late answer.

The blog post is a simplification of the actual work; you can check out the full report here [1], where I also reference the literature about this algorithm.

On the cache effects: I haven't seen this "engineering" argument made explicitly in the literature either. There are other approaches to the basis storage problem, like the compression technique in [2]. Funny enough, the authors gave a seminar at my university literally this afternoon about exactly that.

I'm also unfamiliar with randomised algorithms for numerical linear algebra beyond the basics. I'll dig into that, thanks!

On the BLAS point, let me clarify what I meant by "wall": when you call BLAS from Rust, you're essentially making a black-box call to pre-compiled Fortran or C code. The compiler loses visibility into what happens across that boundary. You can't inline, can't specialise for your specific matrix shapes or usage patterns, can't let the compiler reason about memory layout across the whole computation. You get the performance of BLAS, sure, but you lose the ability to optimise the full pipeline.
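To illustrate (a sketch; ddot_ is the standard Fortran BLAS dot-product symbol, and you'd need to link against a BLAS implementation for it to resolve):

    // Raw Fortran BLAS binding: rustc sees only an opaque symbol. It can't
    // inline across the call, specialise it for a known n, or prove
    // anything about how x and y alias. (Fortran BLAS passes everything
    // by pointer.)
    extern "C" {
        fn ddot_(n: *const i32, x: *const f64, incx: *const i32,
                 y: *const f64, incy: *const i32) -> f64;
    }

    // The same operation written in Rust is fully visible to the
    // optimiser: it can be inlined, unrolled, and auto-vectorised at
    // every call site.
    fn dot(x: &[f64], y: &[f64]) -> f64 {
        x.iter().zip(y).map(|(a, b)| a * b).sum()
    }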

Also, Rust's compilation model flattens everything into one optimisation unit: your code, dependencies, all compiled together from source. The compiler sees the full call graph and can inline, specialise generics, and vectorise across what would be library boundaries in C/C++. The borrow checker also proves at compile time that operations like our pointer swaps are safe and that no aliasing occurs, which enables more aggressive optimisations; the compiler can reorder operations and keep values in registers because it has proof about memory access patterns. With BLAS, you're calling into opaque binaries where none of this analysis is possible.
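As a minimal sketch of the pointer-swap point (illustrative names, not the actual code from the repo):

    // Advance the two-vector Lanczos window: v_prev <- v, v <- v_next.
    // The two &mut references are proven disjoint at compile time, so
    // the optimiser can reorder loads/stores around the swap. mem::swap
    // exchanges the Vec headers (pointers), so no elements are copied.
    fn advance(v_prev: &mut Vec<f64>, v: &mut Vec<f64>, v_next: Vec<f64>) {
        std::mem::swap(v_prev, v);
        *v = v_next;
    }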

My point is that if the core computation just calls out to pre-compiled C or Fortran, you lose much of what makes Rust interesting for numerical work in the first place. That's why I hope to see more efforts directed towards expanding the Rust ecosystem in this area in the future :)

[1] https://github.com/lukefleed/two-pass-lanczos/raw/master/tex...

[2] https://arxiv.org/abs/2403.04390


Thanks for clarifying.

I think the argument you're making is compelling and interesting, but my two concerns with this are: 1) how does it affect compile time? and 2) how easy is it to make major structural changes to an algorithm?

I haven't tried Rust, but my worry is that the extensive compile-time checks would make quick refactors difficult. When I work on numerical algorithms, I often want to try many different approaches to the same problem until I hit on something with the right "performance envelope". And usually memory safety just isn't that hard... the data structures aren't that complicated...

Basically, I worry the extra labor involved in making Rust code work would affect prototyping velocity.

On the other hand, what you're saying about compiling everything together at once, proving more about what is being compiled, enabling a broader set of performance optimizations to take place... That is potentially very compelling and worth exploring if the gains are big. Do you have any idea how big? :)

This is also a bit reminiscent of the compile time issues with Eigen... If I have to recompile my dense QR decomposition (which never changes) every time I compile my code because it's inlined in C++ (or "blobbed together" in Rust), then I waste that compile time every single time I rebuild... Is that worth it for a 30% speedup? Maybe... Maybe not... Really depends on what the code is for.


If the code is split into sufficiently small crates, compile times are not that big of a deal for iteration. There is also a faster development build profile, and I would think most of the time will be spent running the benchmark and checking perf to see processor usage, dwarfing any time needed for compilation.
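E.g. a workspace split like this (layout is illustrative) means touching the benchmark harness never recompiles the kernels:

    # Cargo.toml at the workspace root
    [workspace]
    members = ["kernels", "lanczos", "bench"]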

The advantage of having stuff in C and Fortran is that it can easily be used from other languages. I would also argue that your algorithm written in C would be far more readable.

Have you looked into Julia at all? IMO it's a pretty great mix of performance but with a lot fewer restrictions than what Rust ends up with.

BLAS/LAPACK don't do any block-level optimizations. Heck, they don't even let you define a fixed block sparsity pattern. Do the math yourself: write down all 16 sparsity patterns for a 2x2 block matrix and try to find the inverse or LU decomposition on paper.

https://lukefleed.xyz/posts/cache-friendly-low-memory-lanczo...

I mean, just look at the saddle point problem you mentioned in that section. It's a block matrix with highly specific properties, and there is no BLAS call for that. Things get even worse once you have parameterized matrices and want to operate on a sequence of products where some matrices change and some don't. Some parts can be factorized offline.
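Concretely, for the saddle point matrix the structure you exploit by hand is the standard block LU via the Schur complement (assuming A is invertible), and no single BLAS/LAPACK call expresses it:

    % Block LU of the saddle-point matrix; the Schur complement is
    % S = -B A^{-1} B^T. Assumes A invertible.
    \begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
    =
    \begin{pmatrix} I & 0 \\ B A^{-1} & I \end{pmatrix}
    \begin{pmatrix} A & B^{T} \\ 0 & -B A^{-1} B^{T} \end{pmatrix}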
