My impression of genetic algorithms is that they work in the context of biology (you can actually see it happen in real time with viruses & bacteria), not computer science. I have yet to see an instance where a real problem in computer science was best solved using genetic algorithms.
The Genetic Algorithm ("genetic algorithms" is actually a misnomer) has little to do with biology beyond its original inspiration. It is simply one of a number of sample-based methods for doing global stochastic optimization.
There's an old saying about the Genetic Algorithm: that it's the "third best way to do anything". Stochastic optimization methods (or "metaheuristics") in general are knowledge-poor methods, essentially last-ditch techniques where you don't have any other known way to solve your problem and you don't want to jump off the cliff into random or brute-force search. They rely on a central heuristic: that similar candidate solutions will likely have similar performance (the "smoothness" criterion).
Here's the thing. There is a huge and growing number of crunchy problems in this category. If you're trying to find a good tic-tac-toe solution, you can almost certainly do better than a stochastic optimization method (just use state-space search, say). But what if you're trying to find the set of behaviors for a two-thousand-agent multiagent simulation model which produces statistics most closely resembling known historical data? Or what if you're trying to find the best parameters for optimizing an aircraft engine whose space is filled with local optima? It's ugly problems like these, for which there is no principled solution method, where the Genetic Algorithm and its ilk reign supreme. You might say that you never see problems in computer science which are "best solved using genetic algorithms". My answer is: your daily problems are too simple to need them.
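To make the mechanics concrete, here's a minimal generational GA sketch on a toy problem of my own choosing (OneMax: maximize the count of 1-bits in a bitstring). The problem, parameters, and operator choices (tournament selection, one-point crossover, bit-flip mutation) are just illustrative assumptions, not anything from the thread:

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 40, 60

def fitness(genome):
    return sum(genome)  # count of 1-bits; higher is better

def tournament(pop, k=3):
    # Select the fittest of k randomly sampled individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # One-point crossover: splice a prefix of one parent onto a suffix of the other.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=1 / GENOME_LEN):
    # Flip each bit independently with small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]
best = max(pop, key=fitness)
print(fitness(best))  # climbs close to GENOME_LEN
```

Note that nothing in the loop knows anything about bits or OneMax beyond the fitness function; that knowledge-poverty is exactly why the same skeleton transfers to the ugly problems described above.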
Book plug: you might enjoy my free online text on the subject, called Essentials of Metaheuristics. You can also get it in paperback. http://cs.gmu.edu/~sean/book/metaheuristics/
I've read through a large portion of the PDF version of your book quite a few times. It is very well written and helped me understand a number of different techniques very well.
I definitely recommend Sean's book as a starting point if you are interested in this field.
(I don't know Sean, and don't get anything for this recommendation other than the happy glow of helping a well deserved author get more recognition)
it's basically a search technique for problems where the search space is too large to search exhaustively, so you only visit "promising" portions of it. this happens a lot in computer science, and it happens even more in real life ;)
The thing about GAs is they're probably one of the easier-to-understand tools in Machine Learning; even someone with almost no math background can successfully implement a GA. However, knowing when to use them well does require understanding the rest of ML. Essentially, if your optimization problem is convex, or "convex enough" (as is the case when minimizing the cost function for linear/logistic regression, SVMs, neural networks and more), GAs aren't the best solution. But when you have a really ugly optimization problem on your hands, they are a really good last resort.
There are plenty of applications; however, you won't see them everywhere, because they'll never beat optimization techniques like gradient descent, hill climbing, backpropagation etc. in cases where those techniques work.
So while GAs are very easy to understand, finding an ideal use case for them takes a bit of knowledge. Looking to solve problems with GAs is harder than having a hard problem and realizing GAs may be helpful.
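To illustrate the trade-off, here's a hypothetical sketch of the simpler alternative named above, a stochastic hill climber. On a smooth, unimodal objective like this one it walks straight uphill and beats a GA handily; on a rugged landscape it stalls at the first local optimum it finds. The objective and parameters are invented for the example:

```python
import random

def hill_climb(f, x, step=0.1, iters=1000):
    # Repeatedly perturb x and keep the perturbation only if it improves f.
    best_val = f(x)
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        val = f(candidate)
        if val > best_val:  # accept strict improvements only
            x, best_val = candidate, val
    return x, best_val

random.seed(1)
# Unimodal objective with a single peak at x = 2: the climber finds it.
x, v = hill_climb(lambda x: -(x - 2) ** 2, x=random.uniform(-10, 10))
print(x)  # close to 2
```

Swap in a multimodal objective (say, one with many ridges) and the same loop gets trapped, which is the point at which GA-style population methods start earning their keep.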
I did not say that they did not work. I said that they were not the best solution. Other algorithms will usually do the trick at a fraction of the cost :-)
If you firmly believe so, could you please post what you think would be better solutions to some of those problems listed on the wiki? Namely electronic circuit design and plant floor layouts.
Evolutionary algorithms are very capable metaheuristics with striking exploratory abilities, in the sense that given a large search space, a well-performing evolutionary algorithm can navigate it a lot faster than traditional metaheuristics. However, the problem of getting stuck in local optima is still common.
They definitely are a staple algorithm type in the field of optimization, especially combinatorial. Look it up.