Hacker News

"Reality", in our case, is based on conversations with a few dozen customers who use our products, as well as the actively publishing researchers we know personally. The majority of sampled customers are training convnets of some kind, the most common being YOLO/SSD and ResNet.



Most people I know are just using Google Cloud. It's directly integrated with TensorFlow, and way more scalable.

I can run 10 GPUs on my model training runs and finish in an hour, when they used to take at least two or three days, and it took absolutely no work on my end. It's been wonderful for my productivity. The price doesn't really matter either: compared to how much people are being paid at these sorts of companies, the boost in efficiency is far more important.


Does Google Cloud run anything other than TensorFlow well? Specifically, I'm wondering about PyTorch.


GCP is pretty tightly integrated with and optimized for TensorFlow. That's why scaling up the GPU count wasn't a hassle, for example, since I was already using the TF Estimator framework.
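To illustrate the point about scaling being painless: with the Estimator API, attaching a distribution strategy to the RunConfig is roughly all it takes to replicate training across local GPUs. This is a minimal sketch, not the commenter's actual setup; the one-layer model_fn and its hyperparameters are purely illustrative.

```python
import tensorflow as tf

# MirroredStrategy replicates the model across all local GPUs
# (it falls back to CPU when none are available).
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)

def model_fn(features, labels, mode):
    # Illustrative one-layer regression model.
    logits = tf.keras.layers.Dense(1)(features)
    loss = tf.reduce_mean(tf.square(logits - labels))
    optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.1)
    train_op = optimizer.minimize(
        loss, global_step=tf.compat.v1.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)
# estimator.train(input_fn=...)  # each batch is split across the replicas
```

The model code itself doesn't change as you add GPUs; the strategy handles replication and gradient aggregation. (Note that tf.estimator has since been deprecated in favor of Keras plus tf.distribute.)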

I'm pretty sure that for the TensorFlow 2.0 update, they're rethinking things toward a PyTorch style with more dynamic computational graphs.
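That shift shows up as eager execution: ops run immediately and gradients are recorded dynamically, PyTorch-style, instead of being compiled into a static graph first. A generic sketch, not tied to anything specific in the thread:

```python
import tensorflow as tf

# Eager execution (the default in TF 2.x): ops evaluate as they
# are called, and GradientTape records them for differentiation,
# rather than building a static graph and running it in a session.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x  # runs right away; the tape records the op
grad = tape.gradient(y, x)
print(float(grad))  # dy/dx = 2x = 6.0
```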


Do the people you talk with know the specs?

Do they know whether the tensor cores in these processors are only good for inference, or whether they're comparable to the pricier models in terms of floating-point precision?



