They can't segment the market on FP64 compute, since ML uses FP32 or even FP16. They tried crippling FP16 performance, but frameworks switched to doing the math on the FP32 units and downconverting the results to FP16 afterward. They can't kill FP32 performance, since that's what gaming uses. They tried killing virtualization, they tried differentiating on clustering support; they've tried every reasonable technical measure.
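A minimal sketch of that workaround, using numpy on the CPU as a stand-in for the GPU units (the array names and sizes here are made up for illustration): keep FP16 only as the storage format, and do the arithmetic in FP32.

    import numpy as np

    # Inputs held in FP16 to halve memory traffic (sizes are arbitrary).
    x16 = np.random.rand(1 << 20).astype(np.float16)
    y16 = np.random.rand(1 << 20).astype(np.float16)

    # Upconvert, do the math at FP32 (the full-rate units on GeForce),
    # then downconvert the result back to FP16 for storage.
    r32 = x16.astype(np.float32) * 2.0 + y16.astype(np.float32)
    r16 = r32.astype(np.float16)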

This seems consumer-hostile because the entire thing is consumer-hostile: they want ML researchers to pay more for graphics cards because those customers have more money to spend, not because NVIDIA can offer them superior performance. (They can offer performance superior to GeForce, just not superior performance/$ or performance/W.)

You can't just look at that: Teslas have higher memory bandwidth and more memory, which can help utilize the GPUs better by giving faster access to more data.

If you're in the specific range where your problem doesn't fit in a GeForce and does fit in a Tesla, then it can be great.

For the very large number of problems that fit in both or fit in neither, the advantage from that extra memory bandwidth is a lot smaller than the price difference.
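A back-of-the-envelope version of that argument (all prices and specs below are assumed round numbers for illustration, not quoted from either commenter):

    # Assumed round numbers only -- check current specs and prices.
    geforce = {"price_usd": 1200, "mem_gb": 11, "bw_gb_s": 616}
    tesla   = {"price_usd": 9000, "mem_gb": 32, "bw_gb_s": 900}

    def fits(card, working_set_gb):
        """Does the problem's working set fit in the card's memory?"""
        return working_set_gb <= card["mem_gb"]

    working_set_gb = 8  # hypothetical model + activations footprint

    if fits(geforce, working_set_gb) and fits(tesla, working_set_gb):
        # Bandwidth-bound workload that fits in both: compare perf/$ directly.
        bw_ratio = tesla["bw_gb_s"] / geforce["bw_gb_s"]         # ~1.5x faster
        price_ratio = tesla["price_usd"] / geforce["price_usd"]  # ~7.5x pricier
        print(f"Tesla: {bw_ratio:.1f}x bandwidth at {price_ratio:.1f}x the price")
        print(f"GeForce perf/$ advantage: {price_ratio / bw_ratio:.1f}x")

With these made-up numbers, the Tesla's ~1.5x bandwidth costs ~7.5x as much, so unless the extra memory is what makes the problem feasible at all, the GeForce wins handily on perf/$.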