
Typos fixed. "All" was hyperbole of course, but I think it definitely does not look good for the majority of the work done on features. SIFT was recently outperformed by PN-Net, for example.



SIFT is also quite old. It's amazing a single technique has retained so much value. Isn't it curious that modern convnets use convolution? On top of that, they do convolutions at multiple scales (pooling). Starting to sound very familiar...
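A minimal sketch of the parallel being drawn here (pure NumPy, function names are my own): SIFT's detector builds a Gaussian scale space and takes differences of adjacent scales, which structurally resembles a convnet applying filters and operating across scales.

```python
import numpy as np

def gaussian_kernel(sigma, radius=3):
    # 1-D Gaussian kernel, normalized to sum to 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def convolve1d(row, kernel):
    return np.convolve(row, kernel, mode="same")

def blur(img, sigma):
    # Separable 2-D Gaussian blur: convolve rows, then columns
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(convolve1d, 1, img, k)
    return np.apply_along_axis(convolve1d, 0, out, k)

def dog_pyramid(img, sigmas=(1.0, 1.6, 2.56)):
    # Difference-of-Gaussians across scales, as in SIFT's keypoint detector
    blurred = [blur(img, s) for s in sigmas]
    return [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]

img = np.random.rand(32, 32)
dogs = dog_pyramid(img)
print(len(dogs), dogs[0].shape)  # 2 (32, 32)
```

The difference from a convnet is that here the filters (Gaussians) are hand-designed, whereas a convnet learns its filters from data.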


Actually the neural net approaches are older than SIFT.

Neural nets learn the distribution, and even the causal factors, in the data. To me it seems that this distribution is often just too complex to be robustly captured by something that doesn't learn. Learning causal factors critically depends on learning along the depth of a network of latent variables, which is a particularly opaque process, but this is what MLPs seem to do quite canonically (a convnet being just a restricted special case of an MLP). I mean, discerning causal factors is pretty much canonically the act of accumulating evidence with priors (weighted summation), then deciding whether the evidence is sufficient and signaling how strong it is (non-linearity).
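The "weighted summation plus non-linearity" framing above can be sketched as a single unit (a hypothetical minimal example, names my own): evidence is accumulated as a dot product with the unit's weights, and a non-linearity decides whether and how strongly to signal.

```python
import numpy as np

def unit(x, w, b):
    # Accumulate weighted evidence, then threshold with a ReLU:
    # the unit signals only when evidence exceeds the (learned) bias
    evidence = np.dot(w, x) + b
    return max(0.0, evidence)

x = np.array([0.2, 0.9, 0.1])    # observed features
w = np.array([1.5, 2.0, -3.0])   # learned weights ("priors" on each feature)
print(unit(x, w, b=-1.0))        # 0.2*1.5 + 0.9*2.0 + 0.1*(-3.0) - 1.0 ≈ 0.8
```

Stacking many such units in depth is what lets an MLP compose simple evidence checks into representations of more abstract factors.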


Some of the approaches are, some aren't. SIFT itself builds upon knowledge that is much older than it. Either way, it doesn't matter. The OP was arguing that the many person-years of effort put into SIFT were a complete waste. I am saying that this is very shortsighted, as non-machine-learning vision techniques have heavily influenced how we approach and think about vision problems even when using ML.



