
I believe the effectiveness of search algorithms goes something like this:

Monte Carlo < Hill climbing < Simulated annealing < Genetic algorithm < Differential evolution




No. It's completely context dependent.

A hill-climbing algorithm will eat the lunch of every other algorithm listed here if you have a clean convex optimization problem.
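A minimal sketch of why, assuming a toy quadratic bowl as the convex objective (nothing here is from the thread): plain coordinate-wise hill climbing walks straight to the global minimum, because a convex function has no other minima to get stuck in.

import random

def f(x, y):
    # Convex objective: a quadratic bowl with its minimum at (3, -2).
    return (x - 3) ** 2 + (y + 2) ** 2

def hill_climb(x, y, step=1.0, shrink=0.5, tol=1e-8):
    best = f(x, y)
    while step > tol:
        improved = False
        # Try the four axis-aligned neighbours at the current step size.
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            cand = f(x + dx, y + dy)
            if cand < best:
                x, y, best = x + dx, y + dy, cand
                improved = True
        if not improved:
            step *= shrink  # no better neighbour: tighten the step
    return x, y, best

print(hill_climb(random.uniform(-10, 10), random.uniform(-10, 10)))
# Lands at (3, -2) from any starting point, with no annealing schedule
# or population to tune.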


Where would you find such a problem in the real world?


Absolutely everywhere; it's a common subproblem in many, many algorithms. For example, the basin hopping algorithm (a sister of SA) solves one at every step, as in the sketch below.
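Hedged illustration of that inner step, using SciPy's basinhopping on a made-up multimodal test function (the objective, starting point, and settings are all assumptions for the example, not anything from the thread): every random hop is followed by a local minimization, which is exactly the kind of problem a hill climber or quasi-Newton method handles easily.

import numpy as np
from scipy.optimize import basinhopping

def rastrigin(x):
    # Lots of local minima globally, but smooth and bowl-shaped locally,
    # so the inner local minimizer has an easy job on each hop.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

result = basinhopping(
    rastrigin,
    x0=np.array([3.7, -2.1]),
    niter=200,
    stepsize=0.9,
    minimizer_kwargs={"method": "L-BFGS-B"},  # the local solve done every step
    seed=0,
)
print(result.x, result.fun)  # should end up at (or very near) the origin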

A specific example is in image deblurring via Tikhonov regularization, which involves minimizing a function that is provably convex for many physically realistic types of blurring.
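And a hedged 1-D sketch of that deblurring setup (the blur kernel, test signal, noise level and regularization weight are all illustrative assumptions): the Tikhonov objective ||Ax - b||^2 + lam*||x||^2 is convex, so plain gradient descent, the downhill twin of hill climbing, reaches the same answer as the closed-form solution.

import numpy as np

n = 200
rng = np.random.default_rng(0)

# Ground-truth signal: a couple of boxcar bumps.
x_true = np.zeros(n)
x_true[40:70] = 1.0
x_true[120:160] = 0.5

# A: Gaussian blur as a dense convolution matrix, rows normalized to sum to 1.
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

b = A @ x_true + 0.01 * rng.standard_normal(n)  # blurred + noisy observation

lam = 1e-2  # regularization weight (hand-picked for the sketch)

# Gradient descent on the convex objective; gradient direction is
# A^T (A x - b) + lam * x.
x = np.zeros(n)
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)  # safe step size
for _ in range(5000):
    x -= step * (A.T @ (A @ x - b) + lam * x)

# Sanity check against the closed-form solution (A^T A + lam I)^(-1) A^T b.
x_closed = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
print(np.max(np.abs(x - x_closed)))  # should be tiny: same convex minimum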


For my Master's thesis, I couldn't get satisfactory results with a genetic algorithm, even after spending quite some time tweaking its parameters away from the defaults. But with the default parameters of the simulated annealing code, I got some excellent results. For a few cases, I managed to get my model to fit the experiment almost perfectly; the only spots it didn't fit were the parts where the experimental apparatus wasn't well characterized.

So, it's highly problem dependent.
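If it helps anyone reproduce that kind of comparison, here's a hedged sketch using SciPy's off-the-shelf global optimizers on a synthetic fit. The toy model, the fake "experimental" data, the bounds, and the choice of dual_annealing / differential_evolution as stand-ins for the SA and GA codes are all assumptions for illustration, not the thesis setup.

import numpy as np
from scipy.optimize import dual_annealing, differential_evolution

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 200)
true_params = (2.5, 0.7, 1.3)  # amplitude, decay rate, frequency

def model(params, t):
    a, k, w = params
    return a * np.exp(-k * t) * np.cos(w * t)

# Synthetic "experiment": the true model plus a little noise.
data = model(true_params, t) + 0.05 * rng.standard_normal(t.size)

def misfit(params):
    # Sum of squared residuals between model and "experiment".
    return np.sum((model(params, t) - data) ** 2)

bounds = [(0.1, 10.0), (0.01, 5.0), (0.1, 5.0)]

sa = dual_annealing(misfit, bounds, seed=1)          # annealing-style
de = differential_evolution(misfit, bounds, seed=1)  # evolution-style

print("SA:", sa.x, sa.fun)
print("DE:", de.x, de.fun)
# Which one wins, and how sensitive each is to its default settings,
# depends entirely on the shape of the misfit landscape, hence
# "highly problem dependent".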



