I've found most larger companies are more concerned about opex than capex. Large companies aren't going to have much of an issue there.

My response was more for the folks the OP mentioned:

> There is almost no universe where a couple of guys in their garage are getting access to 1000+ H100s with a capital cost in the multiple millions.

I'm pointing out that this isn't true. I was the founding engineer at Rad AI; we had four people when we started, and we managed to build LLMs that are in production today. If you've had a CT, MRI, or X-ray in the last year, there's a real chance your results were reviewed by the Rad AI models.

My point is simply that people are really overestimating the amount of hardware actually needed, as well as the costs to use that hardware. There absolutely is space for people to jump in and build LLM companies right now, and they don't need to build a datacenter or raise nine figures of funding to do it.




> I've found most larger companies are more concerned about opex than capex.

Another absolute. I try not to be so focused on single points of input like that.

From what I can tell, sitting on the other side of the wall (as a GPU provider), there are metric tons of demand from all sides.



