
NeRF is a good example of a network that has no convolutions yet requires a ton of iterations to train. This paper is particularly relevant to wide networks, which are important because CPU memory is currently much cheaper than GPU memory (even for FANG researchers!).
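For anyone who hasn't looked at the architecture: the core of NeRF really is just an MLP. Here's a minimal PyTorch sketch (my own simplification, not the authors' code; positional encoding, the skip connection, and the view-direction branch are all omitted, and the widths are just illustrative):

    import torch
    import torch.nn as nn

    class TinyNeRF(nn.Module):
        # A stripped-down NeRF-style MLP: nothing but fully connected
        # layers and ReLUs -- no convolutions anywhere.
        def __init__(self, in_dim=3, width=256, depth=8):
            super().__init__()
            layers, dim = [], in_dim
            for _ in range(depth):
                layers += [nn.Linear(dim, width), nn.ReLU()]
                dim = width
            self.body = nn.Sequential(*layers)
            self.head = nn.Linear(width, 4)  # RGB + density

        def forward(self, xyz):
            # xyz: (N, 3) batch of sampled 3D points
            return self.head(self.body(xyz))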



Interesting, I didn't know that NeRF was simply a feedforward network.

I hope this research group can make more headway on training on CPUs, but I'd also (perhaps naively) like to see less hyperbolic titles. This paper isn't just particularly relevant to wide networks; it's only relevant to wide networks.


I think you mean "fully connected" rather than "feed-forward" when drawing a distinction with "convolutional" -- a plain CNN is also a feed-forward network, since "feed-forward" only rules out recurrence.
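For example, both of these are feed-forward (no cycles), but only the first is fully connected (toy PyTorch sketch, shapes chosen arbitrarily for illustration):

    import torch.nn as nn

    # Fully connected feed-forward network
    fully_connected = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 128), nn.ReLU(),
        nn.Linear(128, 10),
    )

    # Convolutional network -- still feed-forward
    convolutional = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 28 * 28, 10),
    )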



