
It wouldn't really have fewer assumptions. In fact, it probably would have more. We just wouldn't know what they are. Classical image analysis is still interesting and valuable because you can construct an algorithm based on desired properties without having to have a large, labelled training set beforehand, and because it's computationally much less expensive.
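To make "construct an algorithm based on desired properties" concrete, here is a minimal sketch (numpy/scipy, the function name is mine): a Sobel edge detector, written directly from the property we want (respond to local intensity gradients), with no training data involved.

    import numpy as np
    from scipy.signal import convolve2d

    # The kernels *are* the assumptions, stated explicitly:
    # respond to horizontal / vertical intensity gradients.
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])
    ky = kx.T

    def edge_magnitude(img):
        # img: 2-D grayscale array
        gx = convolve2d(img, kx, mode="same", boundary="symm")
        gy = convolve2d(img, ky, mode="same", boundary="symm")
        return np.hypot(gx, gy)

Every assumption sits in plain sight in those nine kernel entries, which is exactly the inspectability (and cheapness) the comment is pointing at.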



> It wouldn't really have fewer assumptions. In fact, it probably would have more.

It depends on how you look at it. A deep learning approach is supposedly more generic, so I suppose the assumptions would be dynamic rather than fixed.


Assumptions are NOT dynamic. Once you have chosen a "loss" function, or whatever fancy name you want to call the objective function, you've already made a choice. There are never dynamic assumptions (a classic example: choosing an L2 loss in pixel space essentially assumes a Gaussian likelihood, which is in principle kind of goofy, but hey, it works). Although, as alluded to earlier, it is hard to understand the space induced by the architectural assumptions (and the many other moving parts).
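To spell out the L2/Gaussian example, this is the standard derivation (f_theta is the model, sigma held fixed), in LaTeX:

    \arg\max_\theta \prod_i \mathcal{N}\left(y_i \mid f_\theta(x_i), \sigma^2\right)
      = \arg\min_\theta \sum_i \frac{\left(y_i - f_\theta(x_i)\right)^2}{2\sigma^2} + \mathrm{const}
      = \arg\min_\theta \sum_i \left(y_i - f_\theta(x_i)\right)^2

So minimizing squared error in pixel space is maximum likelihood under a fixed-variance Gaussian noise model, whether or not you intended to assume one.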

I like to think of it this way: effectively, deep learning provides priors learned from data for a downstream task, whereas the manual way encodes expert knowledge without the learning part.


There are plenty of fixed assumptions within deep learning. Off the top of my head: (1) the loss function, (2) pooling layers (which hard-code invariances).
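For (2), a toy numpy sketch (the helper name is mine) of what "hard-coded invariance" means: a one-pixel shift that stays inside the pooling window is invisible after the pooling layer, by construction rather than by anything learned.

    import numpy as np

    def max_pool_1d(x, k=2):
        # non-overlapping max pooling with window size k
        return x.reshape(-1, k).max(axis=1)

    a = np.array([5, 0, 7, 0])   # spikes at positions 0 and 2
    b = np.array([0, 5, 0, 7])   # same spikes shifted right by one
    print(max_pool_1d(a))        # [5 7]
    print(max_pool_1d(b))        # [5 7]  -> identical output: the shift is invisible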





