This is looking very nice, can't wait to try it out. But I feel it's a missed opportunity not to provide some simple examples [1], as well as a benchmark to quickly estimate the number of FLOPs one can expect. The main reason to use a library like this over PyTorch or autograd/SciPy is speed, after all.
[1] The test cases can be seen as examples, obviously, but they are not written in an easily accessible form IMHO.
EDIT: Looking more closely, it seems you need to provide an explicit gradient, no autodiff included, which makes it a complete nonstarter for me.
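To make the explicit-gradient complaint concrete, here's a hedged sketch (not the library under discussion; all names are illustrative). It contrasts handing an optimizer a hand-derived gradient with generating one automatically via a minimal forward-mode autodiff built on dual numbers. The point is that with autodiff you only write the objective once.

```python
class Dual:
    """Minimal forward-mode autodiff value: val + eps * dot (illustrative)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__


def grad(f):
    """Autodiff gradient: no hand-derived formula required."""
    return lambda x: f(Dual(x, 1.0)).dot


def f(x):
    # objective f(x) = (x - 3)^2; works on floats and on Duals
    return (x - 3.0) * (x - 3.0)


def f_grad_explicit(x):
    # the hand-derived gradient the library apparently requires:
    # d/dx (x - 3)^2 = 2 (x - 3)
    return 2.0 * (x - 3.0)


def descend(df, x=0.0, lr=0.1, steps=100):
    """Plain gradient-descent loop standing in for the optimizer."""
    for _ in range(steps):
        x -= lr * df(x)
    return x


# Both routes converge to the minimum at x = 3,
# but only one required differentiating by hand.
x_explicit = descend(f_grad_explicit)
x_autodiff = descend(grad(f))
```

For scalar objectives the two are interchangeable, but for anything with many parameters maintaining `f_grad_explicit` by hand quickly becomes the bottleneck, which is presumably why autodiff support matters here.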