Hacker News



I've seen Petals mentioned several times before and I don't think it's the same thing. Correct me if I'm wrong, but it seems Petals is for running distributed inference and fine-tuning of an existing model. What the above poster and I really want to see is distributed training of a new model across a network.

Much like I was able to choose to donate CPU cycles to a wide variety of BOINC-based projects, I want to be able to donate GPU cycles to anyone with a crazy idea for a new ML model - text, image, finance, audio, etc.
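The core idea behind that kind of volunteer training is plain data-parallel SGD: each donor node computes a gradient on its own local data shard, and a coordinator averages the gradients before applying one shared update. The sketch below is purely illustrative (a local simulation of "volunteers" fitting a toy linear model, not the actual Petals/BOINC protocol or any real networking):

```python
# Toy sketch of volunteer-style data-parallel training (illustrative only):
# each "volunteer" computes a gradient on its local shard, and a coordinator
# averages the gradients before applying a shared update.
import random

random.seed(0)

# Ground-truth model y = 3x + 1, split across volunteer shards.
def make_shard(n):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    return [(x, 3.0 * x + 1.0) for x in xs]

shards = [make_shard(50) for _ in range(4)]  # 4 simulated volunteers

def local_gradient(params, shard):
    """Gradient of mean-squared error for y ~ w*x + b on one shard."""
    w, b = params
    gw = gb = 0.0
    for x, y in shard:
        err = (w * x + b) - y
        gw += 2 * err * x / len(shard)
        gb += 2 * err / len(shard)
    return gw, gb

params = (0.0, 0.0)
lr = 0.5
for step in range(200):
    # In a real system each volunteer computes this in parallel on its own GPU.
    grads = [local_gradient(params, s) for s in shards]
    gw = sum(g[0] for g in grads) / len(grads)  # coordinator averages gradients
    gb = sum(g[1] for g in grads) / len(grads)
    params = (params[0] - lr * gw, params[1] - lr * gb)

print(round(params[0], 2), round(params[1], 2))  # converges near w=3, b=1
```

The hard parts a real network adds on top of this (stragglers, untrusted gradients, fault tolerance, compressing updates over slow links) are exactly what makes volunteer training harder than volunteer inference.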



