
Here's a relevant link: http://www.researchgate.net/post/Is_deep_learning_with_decis...

Two examples:

- using layers of random forests (trained successively rather than end-to-end). Random forests are commonly used for feature engineering in a stack of learners (see the sketch after this list).

- unsupervised deep learning with modular-hierarchical matrix factorization, over matrices of mutual information of the variables in the previous layers (something I've personally worked on; I'd be happy to share more details if you're interested).
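
To make the first example concrete, here's a minimal sketch of the layered-RF idea in Python with scikit-learn. This is my own toy illustration of successively trained layers, not a reference implementation; the dataset, layer count, and parameters are arbitrary:

    # A minimal sketch of layered random forests (stacking-style),
    # trained one layer at a time rather than end-to-end. Each layer
    # sees the original features plus the previous layer's out-of-fold
    # class probabilities.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_predict, train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    n_layers = 3
    layers = []
    train_feats, test_feats = X_train, X_test
    for i in range(n_layers):
        rf = RandomForestClassifier(n_estimators=100, random_state=0)
        rf.fit(train_feats, y_train)
        layers.append(rf)
        if i < n_layers - 1:
            # Out-of-fold predictions keep each layer from leaking its
            # own training labels into the next layer's features.
            oof = cross_val_predict(rf, train_feats, y_train,
                                    cv=5, method="predict_proba")
            train_feats = np.hstack([X_train, oof])
            test_feats = np.hstack([X_test, rf.predict_proba(test_feats)])

    print("layered RF accuracy:", layers[-1].score(test_feats, y_test))

Only the last layer's output is used at test time; the earlier layers act as learned nonlinear feature transforms.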



Thanks!

Are these methods mainstream? In particular, how well does the layered RF do compared to a regular random forest?


Not particularly. The appeal of neural networks here is the nonlinear transforms you can apply to the data, and there are definitely interesting things to try along these lines. Gradient boosted trees and other attempts to augment random forests are pretty mainstream, though.
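
For reference, a gradient boosted trees baseline takes only a few lines in scikit-learn (a toy sketch; the data and parameters are just illustrative):

    # Gradient boosted trees: the "mainstream" tree ensemble mentioned
    # above, fitted on a toy binary classification dataset.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    gbt = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                     random_state=0)
    gbt.fit(X_train, y_train)
    print("GBT accuracy:", gbt.score(X_test, y_test))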


Nit: gradient boosting isn't an 'augmentation' of random forests - if anything, it's the other way round. AdaBoost is from 1995, the GBM paper was 1999, and Breiman's random forest paper in 2001 explicitly couches it as an enhancement to AdaBoost.


Good point! Terrible wording on my part.



