
I recently bought a T4 to go with my epyc 7402 and 512GB ram for fun and this looks like a great use case. Thanks!



What's the advantage of purchasing a T4 instead of a 3090 or 4090?


Possibly the price. On secondary markets like eBay, I've occasionally seen T4 cards for $500-600. Also, the form factor: the T4s are comparatively much smaller/shorter than a 3090/4090, so they'd be an easier fit in a server case.


A lot of 2U cases won't fit a consumer GPU. Furthermore, Tesla-equivalents are usually either significantly cheaper than their consumer counterpart (for last-gen and older GPUs) or similar in price with far more RAM.

I bought a bunch of Tesla P40s at a really low price compared to what 1080tis are going for.


I bought a couple dozen 24GB K80s for like $40 each.


I wanted to so bad, but for my project they just didn't work.

Doesn't mean I won't buy them anyway soon... gah, your message isn't helping me!


In my case I had two systems with 10x 2080 Tis in them that were being used for ML stuff. But the memory limits were annoying, and when Ethereum mining really hit full swing I was able to sell the cards for a great price (even though 3xxx cards were out). I expected to replace them with faster modern cards with more memory at the same price later, but we really haven't gotten there yet: GPU prices are still super inflated, and all except the most absurd cards are still memory-starved (for ML). In the interim it turned out to be really cheap to get K80s.


For sure. Consumer GPU prices have fallen dramatically but they're still competitive enough; meanwhile last-gen (or later) no-output GPUs with passive cooling and single 8-pin CPU-type connectors are insanely cheap. P40s are readily available for $190, less if you lowball eBay or Facebook sellers at volume discounts.

You can even find some weird retired Cirrascale servers on eBay that provide 8 high-speed PCIe lanes through risers with Tesla-specific connectors on a motherboard with tons of RAM (and a terrible CPU) and multiple PSUs for fractions of their release cost.

It's a great time to be buying 3-5 year old ML equipment for small businesses and hobbyists. I wonder if the prices will ever go up? Not that I'm interested in speculating, but it's a small slice of the market that I'm participating in... though maybe in a year it won't be. I imagine a lot of small businesses will bring ML/DL stuff internal for at least development/testing.


Old enterprise hardware has been a pretty good value for a long time. The target audience for the hardware doesn't tend to buy surplus/used, and Joe Average doesn't want a 5kW-consuming rackmount windtunnel.


You have forced air and don't want an integrated fan in your card.


Power consumption. A Tesla T4 with 16GB RAM will consume a mere 70W. An RTX 3090 will need at least 300W, and the Titan models go up to 450W.


You can set the power limit of the 3090 as low as 100W. It will slow it down a lot, but it'll probably still be decently faster than a T4.
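For anyone who hasn't done this: on Linux the cap is set through nvidia-smi. A rough sketch (the exact supported range depends on your card, VBIOS, and driver, so query it first; the 100 value here is just the figure from the comment above):

```shell
# Show the min/max/default power limits the driver will accept for GPU 0
nvidia-smi -i 0 -q -d POWER

# Enable persistence mode so the setting survives the driver unloading (needs root)
sudo nvidia-smi -pm 1

# Cap GPU 0 at 100 W; this does not survive a reboot, so rerun it from a
# startup script or systemd unit if you want it permanent
sudo nvidia-smi -i 0 -pl 100
```

The limit is enforced by the driver's power management, so the card just clocks down to stay under the cap rather than failing under load.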


Every answer given is training AI, crazy to think about



