
Yep, 4x int8 (44 TOPS) on the 1080 Ti. Is the framework support there for inference at 4x speed with int8 on a 1080 Ti? How about training? I thought you need fp16 minimum for training. I've seen some research into lower-precision training (XNOR-Net), but I'm unsure how mature it is.

Being able to use 44 TOPS for training on a single 1080 Ti would be pretty awesome.
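(For reference, the ~44 TOPS figure falls out of the card's FP32 rating; a back-of-the-envelope check, assuming the stock 1080 Ti specs and the 4x int8 throughput of Pascal's DP4A instruction:)

    # Back-of-the-envelope GTX 1080 Ti peak throughput (stock boost clock)
    cuda_cores = 3584
    boost_clock_ghz = 1.582
    fp32_tflops = cuda_cores * boost_clock_ghz * 2 / 1000  # 2 FLOPs per FMA -> ~11.3 TFLOPS
    int8_tops = fp32_tflops * 4                            # DP4A: 4x int8 throughput -> ~45 TOPS
    print(fp32_tflops, int8_tops)                          # ~11.34, ~45.4 -- in line with the ~44 quoted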




Yes - here's a doc about doing quantized inference in TensorFlow, for example: https://www.tensorflow.org/performance/quantization
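If it helps, here's a minimal sketch of running a graph after it's been through the 8-bit quantization pipeline that doc describes (TF 1.x; the .pb file and the "input:0"/"softmax:0" tensor names are hypothetical placeholders, assumed to come out of the graph transform tool's quantize_weights/quantize_nodes transforms):

    import numpy as np
    import tensorflow as tf

    # Load a GraphDef that was quantized offline by the graph transform tool.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("quantized_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")

    # Substitute your model's actual input/output tensor names here.
    with tf.Session(graph=graph) as sess:
        image = np.random.rand(1, 299, 299, 3).astype(np.float32)  # dummy input
        probs = sess.run("softmax:0", feed_dict={"input:0": image})
        print(probs.shape)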

AFAIK, there's still a bit of a performance gap between just using TF and using the specialized gemmlowp library on Android, but that part's getting cleaned up.

Haven't seen many generalized results on training with lower precision.


Does that work with Pascal's CUDA 8 INT8 support out of the box?


I'm not sure - I believe it depends on getting cuDNN 6 working, and from this bug I can't quite tell if it works or not (but it's probably not officially supported yet): https://github.com/tensorflow/tensorflow/issues/8828
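For what it's worth, a quick way to sanity-check what your installed build sees (TF 1.x; this tells you whether the wheel was compiled against CUDA and which devices are visible, though not the cuDNN version):

    import tensorflow as tf
    from tensorflow.python.client import device_lib

    print(tf.__version__)
    print(tf.test.is_built_with_cuda())                        # True if compiled against CUDA
    print([d.name for d in device_lib.list_local_devices()])   # should list the 1080 Ti as a GPU device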



