
> One big problem with the conclusion is that intuitions from low dimensional spaces often don’t carry over to high dimensional spaces. e.g. common example is how the volume of the unit hypersphere intersected with hypercube ratio goes to zero. One funny thing I saw once was something like “the real curse of dimensionality is how different optimization in high dimensional spaces is compared to low dimensional spaces”.
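For concreteness, that hypersphere-vs-hypercube ratio is easy to check numerically. A quick Monte Carlo sketch (the dimensions and sample count below are arbitrary, not from the comment): sample points uniformly in the cube [-0.5, 0.5]^d and count how many land inside the inscribed ball of radius 0.5.

    # Monte Carlo sketch: fraction of the cube [-0.5, 0.5]^d that lies
    # inside the inscribed ball of radius 0.5. It collapses toward zero
    # as d grows. (Sample counts and dimensions are arbitrary choices.)
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples = 100_000

    for d in (2, 5, 10, 20):
        points = rng.uniform(-0.5, 0.5, size=(n_samples, d))
        inside = np.linalg.norm(points, axis=1) <= 0.5
        print(f"d={d:>2}: fraction of cube inside inscribed ball = {inside.mean():.5f}")

For d=2 the fraction is pi/4 (about 0.785); by d=20 it is effectively zero.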

How so? Usually in pure mathematics (the analytic areas), everything done in R^1 and R^2 generalizes to R^N, and much of the intuition carries over. So how does it fail here, in the context of machine learning?

Note: I know nothing about ML :(




All the mechanics scale like you’d expect. You have bigger matrices containing more observations with more parameters per observation. Keep the dimensions aligned properly and everything works out just like it would with smaller vectors.
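As an illustration (a toy sketch with made-up shapes, not any particular model): an ordinary least-squares fit is the same computation whether each observation has 3 parameters or 300, as long as the shapes stay aligned.

    # Toy sketch (illustrative shapes only): least squares is the same
    # computation at small and large d, provided dimensions line up.
    import numpy as np

    rng = np.random.default_rng(1)
    for n, d in ((100, 3), (5000, 300)):
        X = rng.normal(size=(n, d))            # n observations, d parameters each
        w_true = rng.normal(size=d)
        y = X @ w_true + 0.01 * rng.normal(size=n)   # (n, d) @ (d,) -> (n,)
        w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(f"n={n}, d={d}: max coefficient error {np.abs(w_hat - w_true).max():.4f}")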

The discrepancies appear when you start dealing with probabilistic stuff. Suddenly everything is conditioned on everything else, and the complexity skyrockets. Example: it's easy to find people of about average height and weight, but the more physical characteristics you measure (increasing the dimensionality), the more likely you are to find something statistically unusual, e.g. very long fingers. There's very interesting Air Force research on this [0].
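A rough simulation of that effect, assuming for simplicity that traits are independent standard normals (the real anthropometric data in [0] are correlated, so this is only a sketch): the chance of being within one standard deviation on every measured trait shrinks roughly like 0.68^d.

    # Sketch of the "nobody is average in every dimension" effect.
    # Simplifying assumption: traits are independent standard normals.
    import numpy as np

    rng = np.random.default_rng(2)
    n_people = 100_000

    for d in (1, 2, 5, 10, 20):
        traits = rng.normal(size=(n_people, d))
        all_typical = (np.abs(traits) <= 1.0).all(axis=1)   # within 1 SD on every trait
        print(f"{d:>2} traits measured: {all_typical.mean():.3%} are 'average' on all of them")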

WRT optimization, higher dimensions can help avoid the local-minima problem. When you have thousands of possible directions to go, it's unlikely that none of them will yield a better answer. (A rough sketch of why follows the link below.)

[0] https://www.thestar.com/news/insight/2016/01/16/when-us-air-...
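A crude sketch of that optimization point, using random symmetric matrices as stand-in "Hessians" (a toy model, not any particular loss surface): for a critical point to be a true local minimum, the curvature must be positive in every direction, and that becomes vanishingly unlikely as the dimension grows, so most critical points are saddles with at least one escape direction.

    # Toy model: how often does a random symmetric "Hessian" have all
    # positive eigenvalues, i.e. how often is a random critical point a
    # true local minimum rather than a saddle?
    import numpy as np

    rng = np.random.default_rng(3)
    n_trials = 2000

    for d in (2, 5, 10, 20):
        minima = 0
        for _ in range(n_trials):
            A = rng.normal(size=(d, d))
            H = (A + A.T) / 2                      # random symmetric matrix
            if np.linalg.eigvalsh(H).min() > 0:    # positive curvature in every direction?
                minima += 1
        print(f"d={d:>2}: {minima / n_trials:.3%} of random critical points are local minima")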



