Why is it that genetic algorithms never seem to be mentioned any more? Are they sub-standard, or just at a "higher level" than is typically talked about, i.e. you must implement them yourself?
Genetic algorithms are not really an off-the-shelf black box that you can just plug your data into and get results. They take a domain expert to use efficiently, and even then they aren't guaranteed to perform that well. The area that I've encountered where they are most effective is in approximation heuristics for NP-hard problems where you slowly assemble a solution from smaller pieces.
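To make "assemble a solution from smaller pieces" concrete, here's a minimal sketch of a GA on a toy 0/1 knapsack instance (NP-hard); the item weights, values, and all parameters are made up for illustration, not taken from anything discussed above:

```python
# Toy GA sketch: each gene is a bit toggling one knapsack item.
import random

random.seed(0)
WEIGHTS = [12, 7, 11, 8, 9, 6, 5, 14, 3, 10]    # hypothetical item weights
VALUES  = [24, 13, 23, 15, 16, 7, 5, 30, 4, 18]  # hypothetical item values
CAPACITY = 40

def fitness(bits):
    """Total value of the selected items, or 0 if capacity is exceeded."""
    w = sum(wi for wi, b in zip(WEIGHTS, bits) if b)
    v = sum(vi for vi, b in zip(VALUES, bits) if b)
    return v if w <= CAPACITY else 0

def tournament(pop, k=3):
    """Selection: keep the fittest of k randomly sampled individuals."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """Single-point crossover: splice two parents together."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(bits, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - b if random.random() < rate else b for b in bits]

def run_ga(pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in WEIGHTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = run_ga()
print(best, fitness(best))
```

Even in a toy like this, the fitness function and encoding are where the domain expertise goes; the GA machinery itself is the easy part.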
+1 I'd also add that genetic algorithms are for optimization, and can't really be compared with most of the algorithms in that chart. They'd sit at a sub-level where different optimization techniques for finding model weights are compared, within each type of approach (classification, clustering, etc.).
Most (all?) of the algorithms on the chart iteratively optimize an objective. However, most of the objectives are convex or otherwise admit an optimization strategy that performs better than a genetic algorithm.
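A toy sketch of that point (the data and step size here are invented, not from the chart): for a convex objective like ordinary least squares, plain gradient descent walks straight to the global minimum, which a GA has no real way to beat.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # hypothetical design matrix
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)
lr = 0.05
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    w -= lr * grad                         # convexity: this converges to the global minimum

print(w)  # ends up close to true_w
```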
I believe you are repeating what I said (?). All of the algorithms have different methods of arriving at an objective function and leveraging its results. Yet most share the same problem in terms of optimizing it, and yes, most choose other routes.
> They are very prone to getting stuck in local minima.
That's quite a generalization. A GA's tendency to get stuck in local minima can be mitigated by adjusting population size, selection method/size and mutation rate -- i.e. increasing the randomness of the search.
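Here's a rough sketch of where those knobs sit in a simple real-valued GA; the multimodal Rastrigin test function and all parameter values are just illustrative assumptions, not anything claimed above:

```python
import math
import random

random.seed(1)

def rastrigin(x):
    """1-D Rastrigin: many local minima, global minimum at x = 0."""
    return 10 + x * x - 10 * math.cos(2 * math.pi * x)

def evolve(pop_size=60, generations=200, mutation_rate=0.2,
           mutation_scale=0.5, tournament_k=3):
    pop = [random.uniform(-5.12, 5.12) for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Selection: a smaller tournament_k keeps more diversity in the population.
            a = min(random.sample(pop, tournament_k), key=rastrigin)
            b = min(random.sample(pop, tournament_k), key=rastrigin)
            # Crossover: blend the two parents.
            child = random.uniform(min(a, b), max(a, b))
            # Mutation: a higher rate/scale makes it easier to hop out of a
            # local minimum, at the cost of slower convergence.
            if random.random() < mutation_rate:
                child += random.gauss(0, mutation_scale)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=rastrigin)

best = evolve()
print(best, rastrigin(best))
```

Crank mutation_rate/mutation_scale up or pop_size down and you can watch the trade-off between exploration and getting stuck.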
This is not a good generalization. I've usually only seen this issue with optimization problems when:
1) You haven't played with the parameters
2) The implementation is not correct (usually the case with genetic algos, since they require a reasonable amount of domain expertise vs., say, gradient descent)
What evidence is informing your opinion that genetic algorithms are a bad search algorithm? What makes you say that they are very prone to getting stuck in local minima? Do you think they suffer from local minima more than, say, gradient descent?