Why is it that genetic algorithms never seem to be mentioned any more? Are they sub-standard, or just at a "higher level" than what's typically talked about, i.e. you must implement them yourself?
Genetic algorithms are not really an off-the-shelf black box that you can just plug your data into and get results. They take a domain expert to use efficiently, and even then they aren't guaranteed to perform that well. The area that I've encountered where they are most effective is in approximation heuristics for NP-hard problems where you slowly assemble a solution from smaller pieces.
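For anyone curious what that looks like in practice, here's a minimal sketch of a GA on a toy 0/1 knapsack instance (an NP-hard problem); the data, parameters, and function names are invented for illustration and aren't from the thread:

    import random

    # Toy 0/1 knapsack instance (made-up values/weights/capacity).
    values   = [10, 5, 15, 7, 6, 18, 3]
    weights  = [ 2, 3,  5, 7, 1,  4, 1]
    capacity = 10

    def fitness(bits):
        # Total value of the selected items, or 0 if the knapsack is overfull.
        w = sum(wi for wi, b in zip(weights, bits) if b)
        v = sum(vi for vi, b in zip(values, bits) if b)
        return v if w <= capacity else 0

    def tournament(pop, k=3):
        # Selection: keep the fittest of k randomly chosen individuals.
        return max(random.sample(pop, k), key=fitness)

    def crossover(a, b):
        # One-point crossover of two parents.
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(bits, rate=0.05):
        # Flip each bit with a small probability.
        return [1 - b if random.random() < rate else b for b in bits]

    pop = [[random.randint(0, 1) for _ in values] for _ in range(50)]
    for _ in range(200):  # generations
        pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in pop]

    best = max(pop, key=fitness)
    print(best, fitness(best))

Population size, tournament size, and mutation rate are exactly the knobs people mean when they talk about tuning a GA.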
+1. I'd also add that genetic algorithms are for optimization, and can't really be compared with most of the algorithms in that chart. They'd belong at a sub-level where different optimization techniques for finding model weights are compared, for each type of approach (classification, clustering, etc.).
Most (all?) of the algorithms on the chart iteratively optimize an objective. However, most of the objectives are convex or otherwise admit an optimization strategy that performs better than a genetic algorithm.
I believe you are repeating what I said (?). All of the algorithms have different methods of arriving at an objective function and leveraging its results. Yet most share the same problem in terms of optimizing it, and yes, most choose other routes.
> They are very prone to getting stuck in local minima.
That's quite a generalization. A GA's tendency to get stuck in local minima can be mitigated by adjusting population size, selection method/size, and mutation rate -- i.e. increasing the randomness of the search.
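As a concrete (made-up) illustration of that, one common trick is to make the mutation rate adaptive, so the search gets more random when progress stalls; the names and thresholds here are arbitrary:

    def adapt_mutation_rate(rate, best_history, floor=0.01, ceil=0.5):
        # If the best fitness hasn't improved over the last few generations,
        # double the mutation rate (up to a ceiling) to push the population
        # out of a local optimum; otherwise decay it back toward the floor.
        stalled = len(best_history) >= 5 and best_history[-1] <= best_history[-5]
        return min(rate * 2, ceil) if stalled else max(rate * 0.9, floor)

The point is just that diversity is a parameter you control, not a fixed property of GAs.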
This is not a good generalization. I've usually only seen this issue with optimization problems when:
1) You haven't played with the parameters
2) The implementation is not correct (usually the case with genetic algorithms, since they require a reasonable amount of domain expertise compared with, say, gradient descent)
What evidence is informing your opinion that genetic algorithms are a bad search algorithm? What makes you say that they are very prone to getting stuck in local minima? Do you think they suffer from local minima more than, say, gradient descent?
I find it odd that Ordinary Least Squares is missing from the map, even though it's probably more popular than all the other methods in that entire map combined.
OLS is a special case of ElasticNet, Lasso, and ridge regression with the regularization parameters set to zero. (The latter two are also special cases of ElasticNet with one of the two regularization parameters set to zero.) In the presence of many predictors or multicollinearity among the predictors, OLS tends to overfit the data and regularized models usually provide better predictions, although OLS still has its place in exploratory data analysis.
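A quick sketch of that relationship in scikit-learn (the data here is made up; Ridge with alpha=0 is used because Lasso/ElasticNet with alpha=0 emit a warning telling you to use LinearRegression instead):

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

    ols   = LinearRegression().fit(X, y)
    ridge = Ridge(alpha=0.0).fit(X, y)  # alpha=0 disables the L2 penalty

    print(ols.coef_)
    print(ridge.coef_)  # matches the OLS coefficients up to numerical precision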
To add to simonster's comment [1]: confusingly, OLS is also morally equivalent to what the map calls "SGD regressor" with a squared loss function[2]. It is also nearly equivalent, with lots of caveats and many details aside, to SVR with a linear kernel and practically no regularization.
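To make the first equivalence concrete, a rough, self-contained sketch (toy data, arbitrary parameter values): SGDRegressor with a squared loss and no penalty minimizes the same least-squares objective as OLS, just by stochastic gradient descent instead of a direct solve.

    import numpy as np
    from sklearn.linear_model import LinearRegression, SGDRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

    ols = LinearRegression().fit(X, y)
    sgd = SGDRegressor(
        loss="squared_error",  # called "squared_loss" in older scikit-learn releases
        penalty=None,          # no regularization ("none" as a string in older releases)
        max_iter=10_000,
        tol=1e-6,
    ).fit(X, y)

    print(ols.coef_)
    print(sgd.coef_)  # close to the OLS coefficients, up to SGD noise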
So yeah, it is confusing. There is a lot of overlap between several disciplines and it's still an emerging field.
Yeah, the nomenclature is not very rigorous and there is some overlap depending on how you look at it, but, roughly and without being pedantic, the closest in that map would be SGD with a logistic loss function[1].
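In scikit-learn terms that's SGDClassifier with a logistic loss, which fits the same model family as LogisticRegression, just optimized with SGD. A rough sketch on toy data (parameters are arbitrary):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression, SGDClassifier

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)

    logreg = LogisticRegression().fit(X, y)
    sgd = SGDClassifier(loss="log_loss", max_iter=1000).fit(X, y)  # "log" in older releases

    print(logreg.score(X, y), sgd.score(X, y))  # similar training accuracy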