"Automatically" isn't the best word, as TensorFlow won't make use of multiple GPUs unless you explicitly tell it to (at this time). That said, there are a number of benefits to using TensorFlow (including the ability to use multiple GPUs, if not automatically :) )
- Several common gradient optimization algorithms (Momentum, AdaGrad, AdaDelta, Adam, etc.) are implemented already, which makes it a bit faster to get your training logic in place
- Going along with the above, there is more in the TensorFlow API focused specifically on training models, as opposed to being purely a math engine. Some might consider the extra functionality "bloat", but I think it serves a good purpose
- The aforementioned multi-GPU functionality is nice, once you get used to it. It's good for either training multiple versions of a model in parallel or doing data-parallel updates of parameters
- There are tools for compiling your trained models into static C++ binaries for mobile devices
- The TensorFlow ecosystem is quite nice: TensorBoard for visualizing training, the topology of your model, and various statistics (most recently, visualizing projections of embeddings). TensorFlow Serving for deploying trained models. TF Slim for a more Keras-like, layer-by-layer approach to model building. Several pre-trained models to jump-start your own work.
- No compile times. There is a "no optimizations" option in Theano to skip compilation, but many people's experience with Theano involves waiting on compilation before they can iterate on their code.
- I think the community is pretty swell too :) The Google team does a good job of responding to and working with folks who open issues or PRs
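To make the first point concrete, here's a minimal pure-Python sketch of what a built-in optimizer like Momentum is doing for you under the hood. This is illustrative only, not TensorFlow API; the function name and hyperparameter values are my own, and in TensorFlow you'd just construct the optimizer and call minimize on your loss instead of writing this yourself.

```python
def momentum_step(x, v, grad, lr=0.1, mu=0.9):
    """One classical momentum update: the velocity v accumulates a
    decaying sum of past gradients, smoothing the descent direction."""
    v = mu * v - lr * grad       # blend previous velocity with new gradient
    return x + v, v              # move the parameter along the velocity

# Minimize f(x) = x^2 (gradient is 2x), starting far from the minimum at 0.
x, v = 5.0, 0.0
for _ in range(300):
    x, v = momentum_step(x, v, grad=2.0 * x)
# x has now converged very close to the minimum at 0
```

Having these update rules (and their stateful slot variables) prewritten and tested is what saves you time: you pick an optimizer, hand it your loss, and move on.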
Generally, I'd say TensorFlow is really good when you want to minimize the amount of time between researching, training, and deploying your model.
The readme has a general overview of how you'll approach using it. Note that you'll want to optimize for inference (remove unnecessary operations from the graph) [0] and freeze your graph (convert Variables into constant tensors) [1] in order to drop in your own model in place of the pretrained Inception model that's used as an example.
Edit: line formatting