This is a classic example of how the physics community has been failing for the last 30 years.
First of all, there's an extreme focus on papers that have come out in the last 1.5 years, so that a lot of very interesting older work is invisible.
Secondly, physicists don't look outside the discipline, despite the fact that we often use inferior techniques. Back in the 1990s, Mark Newman and I were both working at Cornell, and both of us were aware that the techniques physicists were using to evaluate power law distributions were bogus. Well, I was a timid grad student, and Mark, despite being one of the best physicists of his generation, with half of an excellent textbook already written and a stellar research record, was a postdoc who spent most of his two years in absolute anguish about how he was going to find his next job.
Mark wrote a paper about this ten years later, after physicists had published thousands of bogus papers using bogus statistics. It's a tragedy that neither Mark, I, nor some other young turk wrote it earlier -- and it wouldn't have been hard to do at all, because it would mainly be a review of what was already in the statistics literature.
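To give a flavor of the statistical issue (a minimal sketch in Python with synthetic data; the exponent, cutoff, and binning below are made up for illustration): the common practice was to fit a straight line to a log-log histogram by least squares, whereas the maximum-likelihood estimator that was already standard in the statistics literature is essentially a one-liner.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic Pareto-distributed data with known exponent alpha = 2.5, x_min = 1.0
    alpha_true, x_min = 2.5, 1.0
    x = x_min * (1.0 - rng.random(10_000)) ** (-1.0 / (alpha_true - 1.0))

    # The "bogus" approach: least-squares line through a log-log histogram.
    # Noisy, biased, and it comes with no principled error bar.
    dens, edges = np.histogram(x, bins=np.logspace(0, 3, 40), density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    mask = dens > 0
    slope, _ = np.polyfit(np.log(centers[mask]), np.log(dens[mask]), 1)
    alpha_fit = -slope

    # Maximum-likelihood estimator for the continuous case:
    #   alpha_hat = 1 + n / sum(ln(x_i / x_min))
    tail = x[x >= x_min]
    alpha_mle = 1.0 + tail.size / np.log(tail / x_min).sum()

    print(alpha_fit, alpha_mle)  # the MLE lands close to 2.5; the histogram fit usually doesn't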
> Secondly, physicists don't look outside the discipline, (...)
Pardon? Phong shading is a simplified model of the results of a physical process -- light reflecting off certain kinds of surfaces (metallic ones, IIRC). It's imprecise, it has its limitations, etc. Contrast that with the very tiny, precisely measured acceleration of the Pioneer spacecraft. It is not very common to get good results from applying a coarse tool to a fine problem.
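For the sake of illustration, the reflection model that Phong shading evaluates is just a short empirical formula, nothing like a full physical treatment (a minimal sketch; the coefficients and vectors here are made up):

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def phong(normal, to_light, to_viewer, kd=0.7, ks=0.3, shininess=32, ambient=0.05):
        """Classic Phong reflection model: ambient + diffuse + specular.
        Light colour and material colour are folded into the scalar
        coefficients to keep the sketch short."""
        n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
        diffuse = kd * max(np.dot(n, l), 0.0)
        r = 2.0 * np.dot(n, l) * n - l            # mirror reflection of the light direction
        specular = ks * max(np.dot(r, v), 0.0) ** shininess
        return ambient + diffuse + specular

    print(phong(np.array([0.0, 0.0, 1.0]),        # surface normal
                np.array([0.0, 1.0, 1.0]),        # direction towards the light
                np.array([0.0, 0.0, 1.0])))       # direction towards the viewer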
A big part of physics is the art of approximation. It's almost never possible to have a perfectly 'exact' description of a situation.
For instance, in introductory physics we have students work a number of problems involving objects falling under the influence of gravity. Air resistance is rarely considered, and if it were to be considered, approximations of some sort would be required, since there's no complete theory of turbulence.
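To make that concrete (a minimal sketch; the mass, drag coefficient, and drop height below are purely illustrative), even the "with air resistance" version is itself an approximation -- a quadratic drag law integrated numerically:

    import numpy as np

    # Illustrative numbers roughly corresponding to a small sphere, not any real object
    m, g = 0.145, 9.81                              # mass [kg], gravity [m/s^2]
    c = 0.5 * 1.2 * 0.47 * np.pi * 0.0366**2        # (1/2) * rho * C_d * A

    def fall_time_with_drag(height, dt=1e-4):
        """Crude Euler integration of m*dv/dt = m*g - c*v**2 for a drop from rest."""
        v, y, t = 0.0, height, 0.0
        while y > 0.0:
            v += (g - (c / m) * v * v) * dt
            y -= v * dt
            t += dt
        return t

    h = 100.0
    t_vacuum = np.sqrt(2.0 * h / g)     # the textbook answer with air resistance ignored
    print(t_vacuum, fall_time_with_drag(h))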
Everything that involves a complex computer simulation, say molecular dynamics or fluid flow, involves approximations.
Even the simplest calculations involving elementary particles are intellectually justified by the renormalization concept, which assumes that at some very small length scale the laws of physics as we know them break down -- but we know that QED, QCD and such are very good approximations.
The first virtue of an approximation is that it captures the qualitative character of a problem; after that, it's a matter of adding an increasing number of decimal places of quantitative accuracy.
More sophisticated models of radiation transport exist (these are critical to the development of H-bombs, etc.) and an obvious follow-up to this paper would be to use a better radiation transport code to validate the result.
> The first virtue of an approximation is that it captures the qualitative character of a problem; after that, it's a matter of adding an increasing number of decimal places of quantitative accuracy.
Unless the approximate model diverges from the actual process in extreme cases, such as at very high or very low values.
One thing that comes to my mind, if only slightly related: 'reciprocity failure' [1], an effect in photography where the usual model relating shutter speed (exposure time), film sensitivity, and scene brightness diverges from reality at extreme values.
In such cases, increasing precision of the model isn't just a numerical task (variable precision, iteration count, etc.).
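A rough way to see the divergence (a minimal sketch; the Schwarzschild-style exponent below is purely illustrative and not calibrated to any real film): the naive model says exposure is simply intensity times time, while a corrected model bends away from that at long exposures.

    def effective_exposure(intensity, time_s, p=0.9):
        """Naive reciprocity says exposure = intensity * time.
        A Schwarzschild-type correction, exposure ~ intensity * time**p with p < 1,
        captures how very long exposures 'buy' less density than the naive model
        predicts.  The value of p here is made up for illustration."""
        return intensity * time_s ** p

    for t in (0.01, 1.0, 100.0):
        print(t, 1.0 * t, effective_exposure(1.0, t))   # naive vs corrected, unit intensity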
> It is not very common to get good results from applying a coarse tool to a fine problem.
From the blog post, that isn't really what happened here -- the problem was accounting for reflections, and a technique was needed to estimate such effects. It doesn't matter that the anomalous acceleration was small compared to the total acceleration. It was of the same order as these reflection effects, and so only a "coarse" accounting of them was needed.
To nitpick, the discipline of physics covers everything - or it damn well ought to :) While it seems obvious after reading the article that solving the rendering equation is indeed a promising way to tackle the Pioneer anomaly, I'm sure that I wouldn't have come up with the idea.
M. E. J. Newman, "Power laws, Pareto distributions and Zipf's law," Contemporary Physics 46, 323–351 (2005). http://arxiv.org/abs/cond-mat/0412004
Agreed that physics would hugely benefit from exposure to modern techniques in computer science, "software carpentry," and statistics. How can that be encouraged, though? It's not in the curricula, and those are ossified and hard to shift. Individual researchers forging out on their own can't move the field much themselves. Summer schools, maybe, which could create ad-hoc communities carrying skills back to their home institutions?
As to the temporal bias in literature awareness: it's sort of unavoidable in any research area of decent size. The scope of the literature is simply too enormous. Certain key papers get codified as canon and others are lost in the fog. The situation isn't helped by the journal publishers, whose various paywalls prevent an effective bibliometric universe that could be used as a discovery engine for old but relevant papers.
"Mark wrote a paper about this ten years later, after physicists had published thousands of bogus papers using bogus statistics."
Agreed, but to be fair a lot of non-physicists were also doing this. For example in their excellent review article http://arxiv.org/abs/0706.1062 Mark Newman, Aaron Clauset, and Cosma Shalizi show that a claimed power law in bytes received per http request was wrong. I'm not going to defend the sloppiness of physicists, but I would note that other fields don't fare as well in this either.
And I wish you or Mark had written that paper earlier too!
There's also the example of how one medical researcher apparently rediscovered the trapezoidal rule, and managed to get a lot of citations for his discovery.
One of the comments on the original paper (the pubmed link that teraflop cites) did point out that it was "just" the trapezoidal rule, FWIW, so the only people one may want to find fault with were those who stopped at his paper and decided to cite it, rather than dig further.
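For reference, the "rediscovery" amounts to estimating the area under a sampled curve (a minimal sketch; the time points and values are hypothetical):

    import numpy as np

    def trapezoid_auc(t, y):
        """Area under a sampled curve by the trapezoidal rule:
        sum over intervals of (y_i + y_{i+1}) / 2 * (t_{i+1} - t_i)."""
        t, y = np.asarray(t, float), np.asarray(y, float)
        return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(t)))

    # Hypothetical measurements, e.g. a concentration sampled over time
    t = [0, 30, 60, 90, 120]          # minutes
    y = [5.0, 8.2, 9.1, 7.4, 6.0]     # arbitrary units

    print(trapezoid_auc(t, y))        # matches np.trapz(y, t)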
I think your view of 'the physics community' is skewed by your own experiences and does not describe the actual situation in 'the physics community' very well. My experiences, perhaps because I was at a 'university of technology', are entirely different.
Agreed - they're still teaching undergrads Fortran '77 in 2011, as there's this weird conception in the physics community that no language since F77 could possibly even be arithmetically correct.