Not just discontinuous, but high in Kolmogorov complexity (effectively meaning that the value of the objective function is random and bears no real relation to the input arguments), so it's no surprise you can't do better than random!
Honestly, there's no justification for using NFL theorems to explain why we can't optimize well on real-world tasks.
Edit: And such high-Kolmogorov-complexity functions constitute most possible objective functions; i.e., there are exponentially more of them than objective functions without high Kolmogorov complexity. And all real-world objective functions have comparatively low Kolmogorov complexity.
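To make the counting concrete, here's a minimal sketch (the values of n and k are arbitrary, chosen just for illustration): the number of functions grows doubly exponentially, while the number of short descriptions grows only singly exponentially, so the fraction of compressible functions vanishes.

```python
# Counting argument (an illustration, not a proof): there are 2**(2**n)
# distinct boolean functions on n input bits, but fewer than 2**k binary
# programs shorter than k bits, so almost no function has a short description.
n = 5
num_functions = 2 ** (2 ** n)        # 2**32 possible truth tables
k = 16                               # budget: descriptions under 16 bits
num_short_programs = 2 ** k - 1      # at most 2**16 - 1 such programs
print(num_short_programs / num_functions)  # ~1.5e-5: compressible functions are rare
```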
Yeah. You have a function, so basically a long array of numbers, and you want to find the maximum. If the data in the array has some structure, like it's sampled from a sine wave or something, you can use a strategy that exploits that structure to find the maximum. Gradient descent, some binary-search-style bracketing, something.
But if the array is filled with random numbers, looking at some array elements gives absolutely no hint about what might be in an element you haven't looked at yet. So there's no strategy for finding the maximum that's more efficient than linear or random search.
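A quick sketch of the contrast (ternary search here is just standing in for "any strategy that exploits structure"; the specific function and array size are made up for illustration):

```python
import random

def ternary_search_max(f, lo, hi, iters=60):
    # Exploits structure (unimodality): shrinks the interval by a third
    # each step, so only ~2*iters evaluations are needed, no matter how
    # finely you'd otherwise have to grid-search the interval.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1  # maximum must lie in [m1, hi]
        else:
            hi = m2  # maximum must lie in [lo, m2]
    return (lo + hi) / 2

print(ternary_search_max(lambda x: -(x - 3.7) ** 2, 0.0, 10.0))  # ~3.7

# With random noise, nothing you've seen constrains what you haven't seen:
# every strategy degenerates into checking all the elements.
noise = [random.random() for _ in range(10_000)]
best_index = max(range(len(noise)), key=noise.__getitem__)
```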
And the space of all possible functions mostly consists of discontinuous functions that are, for all practical purposes, just samples of random noise.
This is all NFL theorems say. I really don't understand how they got to be such a big deal.