Tom from Tensil here - happy to answer questions!
We developed Tensil to bring custom ML accelerators to people who don't have the resources of companies like Google, Facebook and Tesla. Currently, we're focused on supporting convolutional neural network inference on edge FPGA (field programmable gate array) platforms, but we aim to support all model architectures on a wide variety of fabrics for both training and inference.
Tensil is different from other ML accelerators in that it is open source and really easy to use. For example, you can generate a custom accelerator with one command:
$ tensil rtl --arch <my_architecture>
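Here <my_architecture> is an architecture definition, a small JSON file describing the accelerator you want: the data type, the systolic array size, on-chip memory depths, and so on. Roughly, it looks something like this (field names and values here are just a sketch; the docs have the exact format):

{
    "data_type": "FP16BP8",
    "array_size": 8,
    "dram0_depth": 1048576,
    "dram1_depth": 1048576,
    "local_depth": 8192,
    "accumulator_depth": 2048,
    "simd_registers_depth": 1,
    "stride0_depth": 8,
    "stride1_depth": 8
}

Changing these numbers and re-running the command gives you a different accelerator, sized to your FPGA's resources.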
You can compile your ML model targeting that accelerator like so:
$ tensil compile --arch <my_architecture> --model <my_model>
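For example, with an architecture file and a ResNet-style ONNX model, that might look like (file names here are illustrative):

$ tensil compile --arch pynqz1.tarch --model resnet20v2_cifar.onnx   # illustrative file names

The output is a small set of artifacts, a .tmodel manifest plus the program and weight data it references, which is what gets loaded onto the board in the next step. Depending on the model format you may need an extra flag or two (e.g. to name the graph's output nodes); the docs walk through this.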
Running your model on the FPGA is then as simple as a couple of Python calls:
tcu.load_model(<compiled_model>)
outputs = tcu.run(inputs)
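Those calls come from our Python driver. To put them in context, here's a minimal sketch of a full run on a PYNQ-based board; treat the module paths, file names and tensor names as illustrative (the docs have the exact setup), while load_model and run are the calls shown above:

import numpy as np
from pynq import Overlay                    # load the FPGA bitstream
from tcu_pynq.driver import Driver          # Tensil compute unit (TCU) driver
from tcu_pynq.architecture import pynqz1    # architecture matching the generated RTL

overlay = Overlay('/home/xilinx/tensil_pynqz1.bit')   # illustrative bitstream path
tcu = Driver(pynqz1, overlay.axi_dma_0)               # attach the driver to the board's DMA

tcu.load_model('/home/xilinx/resnet20v2_cifar_onnx_pynqz1.tmodel')   # output of the compile step

img = np.zeros((32, 32, 3), dtype=np.float32)   # stand-in input image
inputs = {'x:0': img}                           # input name and layout come from the compiled model
outputs = tcu.run(inputs)                       # dict of output tensors, keyed by output name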
The accelerator generator is written in Chisel, and we built our own parameterizable compiler to target it. The link in the post takes you to the documentation, and here's a link to the GitHub repository:
https://github.com/tensil-ai/tensil/