Hacker News

Dismissing more interesting architectures and ignoring the different approaches to fitting make this seem very incomplete. I've long thought that neural networks are principally useful when the architecture can be chosen to exploit structural features of a problem, so it hardly seems surprising that feed-forward neural networks with a given architecture have an alternate representation as polynomial regression. The focus on multicollinearity is also odd here: I usually take it as a given that neural networks provide an over-complete basis that only works because of various kinds of regularization (and because optimization techniques like gradient descent with back-propagation aren't really concerned with globally optimal parameter estimates, but rather with producing good fitted values/predictions).
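To make the "alternate representation" point concrete, here's a minimal sketch (all names and sizes are illustrative, not from the article): a one-hidden-layer feed-forward net with a quadratic activation a(z) = z^2 on a scalar input is, by direct algebraic expansion, exactly a polynomial in that input.

```python
import numpy as np

# Hypothetical tiny net: scalar input x, 3 hidden units with
# quadratic activation a(z) = z^2, linear output. Weights are
# arbitrary; the equivalence holds for any values.
rng = np.random.default_rng(0)
w1 = rng.normal(size=3)   # input -> hidden weights
b1 = rng.normal(size=3)   # hidden biases
w2 = rng.normal(size=3)   # hidden -> output weights

def net(x):
    h = (w1 * x + b1) ** 2          # quadratic activation
    return h @ w2                   # linear readout

# Expand (w1*x + b1)^2 = w1^2 x^2 + 2 w1 b1 x + b1^2 term by term:
c2 = (w1 ** 2) @ w2                 # coefficient of x^2
c1 = (2 * w1 * b1) @ w2             # coefficient of x
c0 = (b1 ** 2) @ w2                 # constant term

def poly(x):
    return c2 * x**2 + c1 * x + c0

xs = np.linspace(-2.0, 2.0, 7)
assert np.allclose([net(x) for x in xs], poly(xs))
```

With smooth non-polynomial activations the correspondence is only approximate (via a series expansion), and in higher dimensions or deeper nets the number of polynomial terms grows combinatorially, which is presumably why the architecture-specific structure matters more than the bare equivalence.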


