Hey everyone, James from Silogy here.
We’re excited to open-source our test runner, Smelt. Smelt is a simple and extensible test runner optimized for chip development workflows. Smelt enables developers to:
* Programmatically define numerous test variants
* Execute these tests in parallel
* Easily analyze test results
As chip designs get more complex, the state space that needs to be explored in design verification is exploding. In chip development, it's common to run thousands of tests, each with multiple hyperparameters that result in even more variation. Smelt offers a straightforward approach to generating test variants and extracting valuable insights from your test runs. Smelt integrates seamlessly with most popular simulators and other chip design tools.
Key features:
* Procedural test generation: Programmatically generate tests with Python
* Automatic rerun on failure: Describe the computation required to re-run failing tests
* Analysis APIs: Capture all of the data needed to track and reproduce tests
* Extensible: Define your tests with a simple Python interface
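To make "procedural test generation" concrete, here is a minimal sketch of generating test variants from a parameter sweep in plain Python. The `TestCase` class and the simulator command line are hypothetical illustrations, not Smelt's actual API:

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class TestCase:
    # Hypothetical container; Smelt's real interface may differ.
    name: str
    command: str

def generate_variants():
    """Expand a 3x2 parameter sweep into individual test cases."""
    cases = []
    for width, depth in product([8, 16, 32], ["shallow", "deep"]):
        name = f"fifo_test_w{width}_{depth}"
        # Example simulator invocation with plusargs; adjust for your tool.
        cmd = f"vsim -c tb_fifo +WIDTH={width} +MODE={depth}"
        cases.append(TestCase(name=name, command=cmd))
    return cases

variants = generate_variants()
print(len(variants))  # 6 variants from the 3x2 sweep
```

Because the variants are ordinary Python objects, the same loop can scale to thousands of combinations or be filtered before dispatch.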
Yves (https://github.com/silogy-io/yves) is a suite of directed performance tests that we brought up with Smelt – check it out if you’d like to see Smelt in action.
Repo: https://github.com/silogy-io/smelt
We built Smelt to streamline the testing process for chip developers. We're eager to hear your feedback and see how it performs in your projects!
That said, it seems like Smelt is far too early in development to be practically usable at this point. Some table-stakes features I didn't see:
- Test weighting. A way to specify relative repeat counts per test is essential when using constrained-random tests, as we do in the ASIC world.
- Tagging, so a test can belong to multiple groups, combined with a way to intersect and union groups when specifying which tests to run.
- Control over the random seed for an entire test run. I was glad to see some support for test seeds. However, when invoking Smelt to run multiple tests, it would be nice to reproducibly seed the generation of each individual test seed. Maybe this is outside the scope of this project?
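The three asks above compose naturally; here is a rough sketch of how they might fit together in plain Python. Everything here (the tag schema, `derive_seed`, the weight field) is a hypothetical illustration of the commenter's requests, not an existing Smelt feature:

```python
import hashlib

# Hypothetical test registry: each test carries tags and a relative weight
# (repeat count) for constrained-random runs.
tests = {
    "axi_random":   {"tags": {"axi", "random"},   "weight": 3},
    "axi_directed": {"tags": {"axi", "directed"}, "weight": 1},
    "pcie_random":  {"tags": {"pcie", "random"},  "weight": 2},
}

def select(tests, include, exclude=frozenset()):
    """Pick tests whose tags contain all of `include` and none of `exclude`."""
    return [name for name, t in tests.items()
            if include <= t["tags"] and not (exclude & t["tags"])]

def derive_seed(run_seed, test_name, repeat_idx):
    """Deterministically derive a per-repeat seed from one run-level seed."""
    digest = hashlib.sha256(f"{run_seed}:{test_name}:{repeat_idx}".encode())
    return int.from_bytes(digest.digest()[:4], "big")

# Run-level seed makes every individual test seed reproducible.
selected = select(tests, include={"random"})
run = [(name, derive_seed(2024, name, i))
       for name in selected
       for i in range(tests[name]["weight"])]
```

Deriving seeds by hashing the run seed with the test name and repeat index, rather than drawing them from a shared RNG, means adding or removing one test doesn't perturb the seeds of the others.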
Great things to see:
- Procedural test generation is a key feature.
- Extensible command invocation
- SLURM support is on the roadmap, which is also important for groups that use SLURM.