> The solution to Python’s GIL bottleneck is not some trick, it is to stop using Python for data-path code.

At least for the PyTorch parts of it, using the PyTorch JIT works well. When you run PyTorch code through Python, the intermediate results are created as Python objects (GIL and all), whereas when you run it through TorchScript, the intermediates exist only as C++ PyTorch tensors, all without the GIL. We have a short note about this in our PyTorch book, in the section on what improvements to expect from the PyTorch JIT, and it seems quite relevant in practice.
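As a rough sketch of the difference (the function and tensor shapes here are made up for illustration), scripting a function moves its whole body into the TorchScript runtime:

    import torch

    @torch.jit.script
    def fused_gate(x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # Inside a scripted function, these intermediates live as C++
        # tensors in the TorchScript interpreter; no Python objects are
        # created for them along the way.
        z = torch.sigmoid(x + h)
        return z * torch.tanh(h)

    out = fused_gate(torch.randn(8, 32), torch.randn(8, 32))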




The JIT is hands down the best feature of PyTorch, especially compared to TensorFlow's somewhat neglected suite of native inference tools. Just recently I was trying to get a TensorFlow 2 model to work nicely in C++. Basically, the external API for TensorFlow is the C API, but it does not have proper support for `SavedModel` yet. Linking against the C++ library is a pain, and neither of them can do eager execution at all if your model was trained from Python code :(

PyTorch will happily let you export your model, even with Python code in it, and run it in C++ :)
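A minimal sketch of that export path (the model and file name are placeholders, not from the thread): script the module in Python and save it, and the resulting file can be loaded from C++ with libtorch's torch::jit::load.

    import torch
    import torch.nn as nn

    class Classifier(nn.Module):  # placeholder model
        def __init__(self) -> None:
            super().__init__()
            self.fc = nn.Linear(32, 4)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Ordinary Python control flow is captured by torch.jit.script
            # and serialized along with the weights.
            if x.dim() == 1:
                x = x.unsqueeze(0)
            return self.fc(x)

    scripted = torch.jit.script(Classifier())
    scripted.save("classifier.pt")  # C++ side: torch::jit::load("classifier.pt")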


The underappreciated part (in my view/experience) is that it also gets rid of a lot of GIL contention when used from Python, because the code inside the JITed section doesn't touch Python anymore.

In multithreaded setups, this typically matters more than the Python overhead itself (which comes in at about 10% for the LLTM example from the PyTorch C++ extension tutorial, and would be less for convnets).
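One way to see this effect, as a sketch (the thread count and sizes are arbitrary): drive the same scripted function from several Python threads. Because the scripted body runs in the TorchScript interpreter rather than the Python interpreter, the threads don't serialize on the GIL between ops the way a Python-level loop would.

    import threading
    import torch

    @torch.jit.script
    def step(x: torch.Tensor) -> torch.Tensor:
        # The whole loop executes inside the TorchScript interpreter,
        # so the GIL is not reacquired between the individual ops.
        for _ in range(100):
            x = torch.tanh(x @ x)
        return x

    threads = [
        threading.Thread(target=step, args=(torch.randn(128, 128),))
        for _ in range(4)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()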



