> I think that the ML community's decision to write core operations in C or C++ and provide Python wrappers is the way to go about it if you want the flexibility of scripting.
Isn’t Google’s “Swift for TensorFlow” move in direct opposition to this statement? I think you are right that it gets you 95% of the way, but that the final 5% of performance and portability simply will not be there.
It's not just 5%. If you have any API that takes a higher-order function, for example a differential equation solver or an optimization package, then even if your code is compiled and the user's input is compiled, you still have to go through Python in the middle, and that context switch can be the most costly part of an optimized code, making Numba+SciPy about 10x slower than it should be. So yeah, Python + compiled code is not a viable solution in all cases, and in fact not for much of scientific computing (though it is fine for data science, where workloads tend not to take function inputs).
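The per-call overhead is visible even inside CPython itself: the built-in sort runs entirely in C when comparing floats directly, but slows down once it has to call back into a Python function per element. This stdlib-only sketch is just an analogy for the compiled-code/Python boundary described above, not a Numba+SciPy benchmark; the `slow_key` name and the repetition counts are illustrative.

```python
import random
import timeit

random.seed(0)
data = [random.random() for _ in range(100_000)]

def slow_key(x):
    # A trivial Python-level key: the work is negligible,
    # but every call crosses the C-to-Python boundary.
    return x

# Pure C path: comparisons happen entirely inside the C sort.
t_c = timeit.timeit(lambda: sorted(data), number=20)

# Mixed path: the C sort must call back into Python once per element.
t_py = timeit.timeit(lambda: sorted(data, key=slow_key), number=20)

print(f"no callback: {t_c:.3f}s, with Python callback: {t_py:.3f}s")
# The callback version does the same comparisons but is noticeably
# slower; the difference is largely boundary-crossing cost.
```

Scale the callback up to a tight inner loop (an ODE right-hand side evaluated millions of times) and that per-call tax dominates.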
There can be a significant performance hit from using Python. I generally prefer developing in pure C++, but being able to test and prototype in Python is great. pybind11 makes the interoperability easy and efficient.
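For quick prototyping against existing native code without writing a binding layer at all, the stdlib `ctypes` module is a lighter-weight cousin of pybind11. A minimal sketch, assuming a POSIX system where `find_library` can locate the C math library:

```python
import ctypes
import ctypes.util

# Locate and load libm (assumes a POSIX system; on Windows the math
# routines live in the MSVC runtime instead). Falling back to None
# loads the process's global symbol table.
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path or None)

# ctypes defaults to int arguments/returns, so declare the real
# signature of `double sqrt(double)` before calling it.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # calls the compiled C implementation directly
```

pybind11 earns its keep once you need classes, overloads, or zero-copy NumPy buffers; for poking at a single C function from a test script, ctypes avoids the compile step entirely.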
https://github.com/tensorflow/swift/blob/master/docs/DesignO...