As someone who has been playing around with (and enjoying) Mojo, I have my doubts about how useful Mojo will end up being for your average scientist. You can't get performant code out of Mojo if you're not willing to learn some deeper programming concepts like SIMD or tiling.
I don't have the exact quote on hand, but in the Mojo Discord, Chris Lattner explicitly said he wants no "compiler magic" in Mojo. With that idea in mind, Mojo makes it a lot easier to do optimizations like SIMD vectorization by hand, but you will still have to do them manually. My guess is that many scientists who don't like programming would find it annoying to hand-write those kinds of optimizations. If you want a language that gives you nice, performant code on your first attempt, Julia is always a decent option.
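To illustrate the gap (a Python sketch, not Mojo code): the experience most scientists are used to is that the vectorization lives inside a library like NumPy, so whole-array code is fast on the first attempt, with no hand-optimization step.

```python
import numpy as np

# Scalar Python loop: the interpreter touches one element at a time.
def saxpy_loop(a, x, y):
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

# The same computation as whole-array operations; the optimized inner
# loop lives in NumPy's C implementation, not in user code.
def saxpy_numpy(a, x, y):
    return a * x + y

x = np.arange(4, dtype=np.float64)  # [0. 1. 2. 3.]
y = np.ones(4)
print(saxpy_numpy(2.0, x, y))  # → [1. 3. 5. 7.]
```

Mojo's pitch is roughly the opposite default: the fast path is explicit in your code (SIMD widths, tiling), which is more control than many scientists want.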
Julia has to bootstrap an ecosystem. If Mojo can borrow all of the successful Python libraries, that is worth a lot.
Still an enormous uphill battle, but slightly more tractable. Regardless, it is a rough place to be: for a staggering number of uses, Python is fast enough. The organizations that absolutely require top-tier performance already have the ability to use FFI. Instagram runs on Django, and I believe Python is still used for YouTube.
OTOH, Rust has shown that there is a reasonably sized market for performant, safe systems programming languages. I suspect it is being held back from larger adoption by its complexity. I see an opportunity for Mojo there, with a simpler model and the ability to gradually opt in to the more complex but high-performance parts of the language.
Mojo has a good chance to target Python programmers who would have gone for Go or Java for better performance, and C++ programmers who need something like Rust, but simpler.
That wouldn't be my guess at what "no compiler magic" implies.
Compiler magic means the implementation doing things that the application can't. It reflects limitations or constraints on the target language. If the language is expressive enough you can do everything through library code.
This would mean you're able to do SIMD vectorisation by hand, but you're also able to run a compile-time transform that vectorises your code without needing to bind that transform into the implementation of the compiler.
Thus the non-programmer scientist can use libraries written by someone more on the boundary that do autovec etc, without needing to wait for the core mojo implementation to do it.
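A loose Python analogy for that "transform as library code" idea (Mojo's real version would happen at compile time and could emit actual SIMD, so this is only a sketch of the shape): `np.vectorize` is ordinary library code that lifts a scalar function to work elementwise over arrays, with no compiler support involved.

```python
import numpy as np

# np.vectorize is a library-level transform: it takes a scalar function
# and returns one that broadcasts over arrays. No compiler changes are
# needed -- the "magic" is just code someone wrote. (It is a convenience
# wrapper, not fast SIMD; a compile-time transform could generate the
# fast version, which is the point of the argument above.)
@np.vectorize
def clamp(v, lo, hi):
    return lo if v < lo else hi if v > hi else v

result = clamp(np.array([-2.0, 0.5, 9.0]), 0.0, 1.0)
print(result)  # elementwise clamp to [0, 1]
```

The scientist just imports `clamp`-style helpers; the boundary-dwelling programmer writes the transform once.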
Won't the right abstractions like NumPy make it possible for researchers to obtain generally performant code, and lower the bar to write specialized, optimized code without having to drop all the way down to CUDA?
I think I would agree with you. In my opinion, that already exists and is decently mature. CuPy [0] for Python and CUDA.jl [1] for Julia are both excellent ways to interface with the GPU that don't require you to get into the nitty gritty of CUDA. Both do their best to keep you at the array-level abstraction until you actually need to start writing kernels yourself, and even then, it's pretty simple. They took a complete GPU novice like me and let me write pretty performant kernels without ever having to touch raw CUDA.
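For the array-level abstraction in question: CuPy deliberately mirrors NumPy's API, so a common pattern is to write code against an `xp` module parameter and pass in either `numpy` (CPU) or `cupy` (GPU). A sketch, run here against NumPy since no GPU is assumed:

```python
import numpy as np

# Device-agnostic array code: "xp" can be numpy or cupy because the two
# share the same array API for operations like this. On a CUDA machine
# you would call rms(data, xp=cupy) with a cupy array instead.
def rms(x, xp=np):
    return xp.sqrt(xp.mean(x * x))

data = np.array([3.0, 4.0])
print(rms(data))  # sqrt((9 + 16) / 2) = sqrt(12.5)
```

You only drop below this level (to an elementwise or raw kernel) when the whole-array operations can't express what you need.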
These are the docs for some of Mojo's higher order functions that implement vectorization, parallelization, tiling, loop switching, etc. https://docs.modular.com/mojo/stdlib/algorithm/functional
I do think they are a good idea and relatively easy to use; I'm just not convinced that the non-programmer scientist will like them.