Actually, this topic has moved beyond philosophy: there is a beautiful and simple mathematical formulation of Occam's Razor in terms of the universal prior of Solomonoff induction.
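Sketched compactly (with U a prefix universal Turing machine and |p| the length in bits of program p), Solomonoff's universal prior weights a string x by all programs that output something beginning with x:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}
```

Shorter programs contribute exponentially more mass, so simpler explanations dominate the prediction — which is exactly Occam's Razor made quantitative.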
In a practical setting, complex theories or models with many explanatory variables tend to overfit the data. In machine learning, regularization, choosing a proper prior, and Minimum Description Length approaches to building networks are all instances of penalizing complexity for better predictive power. In decision trees, pruning is another example. In genetic programming, picking the smaller and/or faster of two similarly performing programs substantially reduces the tendency to overfit.
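A minimal sketch of the regularization point, under an assumed toy setup: fit a degree-9 polynomial to 10 noisy samples of a line, once with no penalty and once with a small L2 (ridge) penalty on the coefficients. The penalty is the complexity tax; it shrinks the wild high-order terms an exact fit would need.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(0, 0.1, size=x.shape)  # noisy samples of y = 2x

X = np.vander(x, 10)  # degree-9 polynomial features: plenty of room to overfit

def ridge_fit(X, y, lam):
    # Minimize ||Xw - y||^2 + lam * ||w||^2 via the normal equations;
    # lam > 0 penalizes large (complex) coefficient vectors.
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

w_overfit = ridge_fit(X, y, lam=0.0)   # interpolates the noise
w_reg     = ridge_fit(X, y, lam=1e-3)  # penalized, much tamer

print("max |coef|, unregularized:", np.abs(w_overfit).max())
print("max |coef|, regularized:  ", np.abs(w_reg).max())
```

With lam = 0 the fit passes through every noisy point using enormous opposing coefficients; the tiny penalty collapses them by orders of magnitude, trading a little training error for far better behavior off the training points.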
For more, see http://en.wikipedia.org/wiki/Inductive_inference and http://www.scholarpedia.org/article/Algorithmic_probability.
Here is another, similar concept: an incomputable strategy with optimal predictive power, predicated on picking the simplest explanation: http://hans.math.upenn.edu/~ted/203S10/References/peculiar.p...