Hacker News

Thanks.

>By that point you know it’s going to work, it’s just a matter of how well and whether you could’ve done nominally better with different tuning.

This can't be true in all cases, right? I'd assume that many results which look promising at lower compute turn out unimpressive once scaled up. I'm very curious what the trials-to-success rate of publishable results is once big compute is thrown into the mix.




It’s indeed a very high trials-to-success ratio. Again though, there are enough papers preceding this one that you could have good confidence in the effort. Another thing that helps is that orgs like OpenAI run their own servers rather than renting EC2 instances.

You also don’t just launch that many jobs and then ignore them. You monitor them to make sure nothing is going terribly wrong.

But yeah, there’s also the fact that if you’re Google, throwing $2M worth of compute at something becomes worth it for its own reasons (e.g. StarCraft).



