Do 1080 Tis have fp16 support? Seems like a waste if the model can be trained in fp16 and you're using full 32-bit (rough sketch of what fp16 in TF looks like below).
Similarly, you should probably try a bunch of other frameworks (Caffe2, CNTK, MXNet), as they might be better at handling this non-standard configuration.
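For reference, a minimal sketch of what an fp16 graph looks like in TF (TF 1.x-style API; the shapes here are made up). Worth noting that consumer Pascal cards reportedly run fp16 math at a small fraction of the fp32 rate, so fp16 storage doesn't automatically mean an fp16 speedup on a 1080 Ti:

    import numpy as np
    import tensorflow as tf

    # Toy fp16 matmul: store and compute in half precision end to end.
    x = tf.placeholder(tf.float16, shape=[None, 1024])
    w = tf.Variable(tf.truncated_normal([1024, 10], stddev=0.1, dtype=tf.float16))
    logits = tf.matmul(x, w)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        out = sess.run(logits, feed_dict={x: np.random.randn(8, 1024).astype(np.float16)})
        print(out.dtype)  # float16 if the half-precision kernel actually ran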
Yep, 4x int8 (44 TOPS; back-of-envelope below) on the 1080 Ti. Is the framework support there for inference at 4x int8 speed on a 1080 Ti? How about training? I thought you need fp16 minimum for training. I've seen some research into lower-precision training (XNOR), but I'm unsure how mature it is.
Being able to use 44 TOPS for training on a single 1080 Ti would be pretty awesome.
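For what it's worth, the 44 TOPS figure is just the fp32 rate times four via the dp4a instruction. Rough back-of-envelope, with the core count and clock being my assumptions rather than anything from this thread:

    # Back-of-envelope for the int8 TOPS figure on a 1080 Ti (assumed specs).
    cuda_cores = 3584        # GP102 as shipped on the 1080 Ti
    clock_ghz = 1.58         # typical boost clock; base clock gives a lower number
    fp32_tflops = cuda_cores * 2 * clock_ghz / 1000.0  # FMA counts as 2 ops
    int8_tops = 4 * fp32_tflops                        # dp4a: four int8 MACs per lane
    print("%.1f fp32 TFLOPS -> %.1f int8 TOPS" % (fp32_tflops, int8_tops))
    # ~11.3 TFLOPS -> ~45 TOPS, in the same ballpark as the 44 quoted above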
AFAIK, there's still a bit of a performance gap between just using TF and using the specialized gemmlowp library on Android, but that part's getting cleaned up.
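The 8-bit scheme gemmlowp (and TF's quantized ops) uses is essentially an affine map between floats and uint8. A toy version of the math just to illustrate the idea, not the library's actual API:

    import numpy as np

    def quantize(x, qmin=0, qmax=255):
        # Affine/asymmetric quantization: real = scale * (quantized - zero_point)
        scale = (x.max() - x.min()) / (qmax - qmin)
        zero_point = int(round(qmin - x.min() / scale))
        q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
        return q, scale, zero_point

    def dequantize(q, scale, zero_point):
        return scale * (q.astype(np.float32) - zero_point)

    x = np.random.randn(4, 4).astype(np.float32)
    q, s, z = quantize(x)
    print(np.abs(x - dequantize(q, s, z)).max())  # small quantization error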
Haven't seen many generalized results on training at lower precision.
I'm not sure - I believe it depends on getting cuDNN 6 working, and from this issue I can't quite tell whether it works or not (it's probably not officially supported yet): https://github.com/tensorflow/tensorflow/issues/8828
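A quick way to probe a given build is to run a tiny fp16 convolution and see whether it executes (hypothetical check, TF 1.x API; if the fp16 cuDNN path isn't wired up you'd expect an error here):

    import tensorflow as tf

    # Tiny fp16 convolution: probes whether the half-precision conv path works at all.
    x = tf.random_normal([1, 32, 32, 3], dtype=tf.float16)
    k = tf.random_normal([3, 3, 3, 8], dtype=tf.float16)
    y = tf.nn.conv2d(x, k, strides=[1, 1, 1, 1], padding="SAME")

    with tf.Session() as sess:
        print(sess.run(y).shape)  # raises if the fp16 kernel is missing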