Our desire is to provide a general test-running and job-running framework, not to build the world's next test runner. We think nextest is great, and we were inspired by some of the things they did.
We've designed Maelstrom to be usable as a library, so you can build your own test runner or job runner. We've been in contact with Rain, the primary developer of nextest, regarding how we can make it so that nextest can use Maelstrom. We'd love nothing more than to have nextest be Maelstrom-ized (Maelstrom-ified?).
We definitely have a little bit of work to do, but we plan to make big steps with the API for the next release. Currently, the client library doesn't give per-test updates until the test finishes. This means that you don't know how long a test is taking to run until it's completed (though we do provide a timeout feature). This is fine for our currently limited UI, but is probably insufficient for nextest.
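To make the gap concrete, here's a rough sketch of the kind of streaming interface we have in mind. Every name here (JobSpec, JobUpdate, run_job_streaming, etc.) is hypothetical, for illustration only, and not the current maelstrom-client API:

    use std::time::Duration;

    // Hypothetical types -- NOT the real maelstrom-client API.
    pub struct JobSpec;   // what to run: binary, args, environment, etc.
    pub struct JobResult; // exit status, captured stdout/stderr

    // Per-job progress events, so a UI (or nextest) could show
    // "running for 3.2s" before a test actually finishes.
    pub enum JobUpdate {
        Started,                // the test process has been spawned
        StillRunning(Duration), // periodic heartbeat with elapsed time
        Completed(JobResult),   // the final outcome
    }

    pub trait JobRunner {
        // Today's model: block until the job finishes, return only the result.
        fn run_job(&mut self, spec: JobSpec) -> JobResult;

        // Streaming model: invoke the callback as the job progresses.
        fn run_job_streaming(
            &mut self,
            spec: JobSpec,
            on_update: &mut dyn FnMut(JobUpdate),
        );
    }
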
Maelstrom in standalone mode running on a single machine is usually a bit slower than nextest. Maelstrom and nextest are similar in that they both run each test in its own process, and they both do a good job of running enough test processes in parallel to keep the machine busy. Maelstrom has to do a little more work each time it starts a new process to set up the namespaces, so it's always going to be a bit slower than nextest, but not by much.
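For a sense of where that extra work comes from, here's a minimal sketch of launching one test in fresh namespaces using the nix crate. The exact flags are illustrative, and Maelstrom's real setup does considerably more (ID mapping, mount setup, and so on):

    use std::os::unix::process::CommandExt; // for pre_exec
    use std::process::{Command, ExitStatus};

    use nix::sched::{unshare, CloneFlags};

    // Spawn one test binary in its own user, mount, and network
    // namespaces. The unshare() call is extra kernel work that a
    // plain fork+exec test runner doesn't pay per test.
    fn run_test_in_namespaces(test_bin: &str) -> std::io::Result<ExitStatus> {
        let mut cmd = Command::new(test_bin);
        unsafe {
            // pre_exec runs in the child after fork, before exec.
            cmd.pre_exec(|| {
                unshare(
                    CloneFlags::CLONE_NEWUSER
                        | CloneFlags::CLONE_NEWNS
                        | CloneFlags::CLONE_NEWNET,
                )
                .map_err(|e| std::io::Error::from_raw_os_error(e as i32))?;
                Ok(())
            });
        }
        cmd.status()
    }
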
One thing that Maelstrom does that I don't think nextest does is use Longest Processing Time First (LPT) scheduling (https://en.wikipedia.org/wiki/Longest-processing-time-first_...). When test runtimes vary a lot within a project, LPT can yield big wins and more predictable runtimes. Maelstrom itself has some pretty long-running integration tests, and once we added LPT, running Maelstrom's tests on Maelstrom is usually faster than running them on nextest. But again, we're not talking about huge differences in single-machine cases.
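As a sketch of the idea (not Maelstrom's actual scheduler code): sort tests by estimated duration, longest first, then greedily hand each one to the least-loaded slot. The estimates would come from timing data recorded on a previous run:

    use std::cmp::Reverse;
    use std::collections::BinaryHeap;

    // Greedy LPT: assign the longest remaining job to the least-loaded
    // worker. `durations` are estimated runtimes from a prior run.
    // Returns, for each worker, the list of job durations it was given.
    fn lpt_assign(mut durations: Vec<u64>, workers: usize) -> Vec<Vec<u64>> {
        assert!(workers > 0);
        durations.sort_unstable_by(|a, b| b.cmp(a)); // longest first

        // Min-heap keyed by each worker's current total load.
        let mut heap: BinaryHeap<Reverse<(u64, usize)>> =
            (0..workers).map(|w| Reverse((0, w))).collect();
        let mut assignment = vec![Vec::new(); workers];

        for d in durations {
            let Reverse((load, w)) = heap.pop().unwrap();
            assignment[w].push(d);
            heap.push(Reverse((load + d, w)));
        }
        assignment
    }

The payoff is that a long-running test gets started early instead of being picked up last and extending the run all by itself.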
I think cargo test is usually slower than both Maelstrom and nextest for the reasons described in the nextest documentation: cargo test doesn't always keep enough test threads running to keep the machine busy. However, if you have a lot of really small tests all in a single crate, then cargo test can and does outperform both Maelstrom and nextest. The clap project (https://github.com/clap-rs/clap) is a good example of this.
I think Maelstrom implements most of the same performance tricks that nextest does. However, nextest obviously has a lot more features and integrations than Maelstrom.