No One Peer-Reviews Scientific Software (chicagoboyz.net)
54 points by cwan on Nov 30, 2009 | 41 comments


I work in a fairly large scientific collaboration. Our data sets are on the scale of a few hundred terabytes per year. So, there's a lot of software to convert from the bits that stream out of the detectors to something that end users (grad students) can analyze.

We have a software group devoted to maintaining the core code. Our development code gets a nightly autobuild, and libraries are released every couple of months.

The code is based on the ROOT package (about one million lines of code), which is maintained by folks at CERN and is well established within the community and elsewhere. Our libraries probably come to about a million more lines of code, maintained scrupulously by our collaboration.

Now, neither of these sets of code actually does any analysis; they just make the data usable. In my analysis, there are probably ~10k lines of code to do the analysis and make the figures. In total, there are probably 500k or more lines of analysis code that are not officially maintained.
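For anyone wondering what that analysis code actually looks like: mostly loops over events in ROOT trees, applying cuts and filling histograms. A minimal sketch in the PyROOT style, where the file, tree, branch, and cut names are all invented for illustration:

    # Minimal sketch of ROOT-based analysis code (all names are hypothetical).
    import ROOT

    f = ROOT.TFile.Open("run2009_skim.root")   # hypothetical input file
    tree = f.Get("events")                     # hypothetical TTree name

    # Book a histogram: 100 bins of reconstructed energy from 0 to 10 GeV.
    h = ROOT.TH1F("h_energy", "Reconstructed energy;E [GeV];Events",
                  100, 0.0, 10.0)

    for event in tree:                         # PyROOT iterates TTree entries
        if event.quality_flag == 1:            # hypothetical selection cut
            h.Fill(event.energy)               # hypothetical branch

    c = ROOT.TCanvas("c")
    h.Draw()
    c.SaveAs("energy.pdf")

Multiply that by every cut, correction, systematic, and figure in a paper and you get to ~10k lines very quickly.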

That doesn't include all of the code loaded onto FPGAs and custom chips reading out the detectors or the code in the trigger system. Nor does it include code written into various simulators used to determine "expected" response of the detectors.

So for peer review to look at code at the level suggested, people would have to look over literally millions of lines of code. Even making it run would require setting up an environment, which takes between a few days and a week.

Instead, when I go to publish something, I lay out both my method and how I verified that it works. Reviewers then check that my method is sound and that my verification looks right. They have to trust that I implemented the method as I said I did.
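To make "how I verified that it works" concrete: one common check is a closure test, where you run the estimator on simulated events with a known truth value and confirm that it recovers that value. A toy sketch, with the Gaussian model and all the numbers invented for illustration:

    # Toy closure test: run the estimator on simulated data with known truth
    # and check that the result is consistent with the input.
    import random
    import statistics

    TRUE_MEAN = 5.0      # "truth" injected into the toy simulation
    N_EVENTS = 10_000

    def simulate():
        # Stand-in for the detector simulation: Gaussian smearing around truth.
        return [random.gauss(TRUE_MEAN, 1.0) for _ in range(N_EVENTS)]

    def analyze(data):
        # Stand-in for the real analysis chain: estimate the mean and its error.
        mean = statistics.fmean(data)
        err = statistics.stdev(data) / len(data) ** 0.5
        return mean, err

    mean, err = analyze(simulate())
    pull = (mean - TRUE_MEAN) / err
    print(f"estimate = {mean:.3f} +/- {err:.3f}  (pull = {pull:.2f})")
    assert abs(pull) < 5, "analysis fails to close on simulated truth"

The real version runs the full chain on full simulation, but the logic a reviewer checks is the same.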

In my collaboration, there is actually an internal review process that verifies that the code runs, and for which a much longer private note must be written explaining all of the details of the analysis. However, there is still a high level of trust that everything was implemented as stated. This review does not qualify as "peer" review because it's conducted by people who will be listed as authors.

While I agree that more review is better than less, I hope this comment illustrates why software peer review is not a reasonable expectation.


The code I wrote for my simulations is not that big, but it's certainly big enough to be complicated and buggy. And it's GPL and available on my website.

And while I certainly don't expect any of my referees to go over the code line by line, one of the rationales I had for GPLing the code was to make it possible for someone to replicate my study. One of the basic tenets of publishing research is that articles should contain all the information needed to reproduce the results. This is not possible for simulation papers unless the source code is public (or maybe a binary is provided, but that has many pitfalls by itself).

Results generated with proprietary codes could just as well be hocus-pocus in my mind. It's not clear how they add to the field unless others can reproduce and build on them. And with reasonably complicated codes, it's not realistic to think that someone will implement their own version (though that is of course the absolute best thing).

So while I don't necessarily think scientific software should be peer reviewed, I'm very hesitant about results obtained with proprietary software.


What's your view on the mountains of scientific code that depend on proprietary math kernels and in some cases proprietary compiler platforms?


As you may guess, I'm unhappy about it. In astronomy, almost everyone uses IDL, which is a proprietary, closed-source language. (It also sucks, but that's beside the point.) I think this is bad, because it locks practically the entire research field into a closed platform. (Though GDL is working to be a replacement. I don't know how conformant it is, though.)

Still, I think this is less of a problem than the high-level scientific code. Most of the low-level functionality is pretty simple and (hopefully) much better tested than most scientific codes.


I'd just throw in that if someone uses your code to try to reproduce what you've done, they may actually just be reproducing a bug -- or in the sinister case, reproducing a faked result. So a more general point in this whole discussion would be that making the code available isn't always as important as making the mathematics or other theory behind the code available, and "reproduction" becomes "reimplementation", or some other corroborating element of the process as described by timr a few comments down (http://news.ycombinator.com/item?id=968456).

(Of course for large software endeavors that isn't practical either. So mattheww's point about trust unfortunately resonates.)

As for the proprietary issue, I think sometimes it might actually help reproducibility: if you can just drop a few lines of script into Matlab and examine them, it's potentially a much smaller task with a lot more battle-worn, common-ground code factored out than examining or re-implementing a full library of open code.
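For instance (in Python/NumPy rather than Matlab, just to keep these sketches in one language), an entire analysis can be a few reviewable lines, with the heavy lifting done by widely exercised library code:

    # A whole toy "analysis" in a few reviewable lines, leaning on a
    # battle-tested library routine (numpy.polyfit) rather than a
    # hand-rolled fitter.
    import numpy as np

    x = np.linspace(0, 10, 50)
    y = 3.0 * x + 2.0 + np.random.normal(0, 0.5, size=x.size)  # toy data

    slope, intercept = np.polyfit(x, y, deg=1)   # least-squares line fit
    print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")

Reviewing that is mostly a matter of checking that a line fit is the right tool, not auditing a fitting library.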


The parent hopefully reflects the workings of large, established scientific organizations.

I did only a bit of work for a very small shop, and where I worked, coding standards were completely non-existent.

I think that using Matlab is a good way to destroy the ability of anyone else to review one's work, because of Matlab's nature as a closed, ad-hoc environment.

Some scientists could learn a lot from software best practices.


I did my Ph.D. in a field requiring complex computer simulations. This article, like most climate "skeptic" critiques, is a logical fallacy writ large. A kernel of truth, surrounded by massive layers of exaggeration and misinterpretation of fact.

First, the notion that anyone could "peer-review" software for correctness is obviously absurd. We know this is an impossible standard. We accept that software is a (probably buggy) model, and (in my experience) reviewers are therefore extremely skeptical of results from computer models -- to the point that they're actually biased against publication. And since reviewers know that they can't verify correctness, they tend to look for theoretical papers that very closely reproduce experimental results.

Second, this type of critique is pushing hard on a straw man. The implicit assumption is that each paper is a stand-alone morsel of truth, and that the peer-review process somehow guarantees a paper's claims. This is not the case, and no scientist believes it to be true. It doesn't matter that peer review fails sometimes, because the system doesn't depend on it being flawless.

Science is about process, not people. Peer review is an important part of that process, but it's openly acknowledged that review has flaws. Thus, ask a scientist about any particular paper, and while s/he may be enthusiastic about the result or the theory, you'll almost never hear a scientist take the paper for granted as fact. More often than not, discussion of an interesting paper will center around designing other tests that can independently reproduce the paper's result. Ultimately, you have a system where a lot of genuinely smart, skeptical people are attacking ideas from all sides until either a consensus emerges, or the idea is destroyed.

The "skeptics" have it wrong, not because they're incorrect about human nature, but because they're attacking a fictional vision of science that exists only in their minds. They imagine a world revolving around a theory that is supported by one piece of data, extrapolated by buggy computer models, and presented by corrupt individuals who (for some unknown reason) have an incentive to doctor the results in the same way. In reality, science is massively redundant and competitive, computer models are treated as ancillary data (at best), and individual corruption is exposed by the redundancy of the system. If the "skeptics" put as much effort into understanding the science they critique as they do trying to find "smoking guns" for individual researchers, they would understand this dynamic.


Then I think there is a certain disconnect between how scientists understand science and how it is presented to the public at large. It certainly doesn't help in this regard that many scientists involved in Environmental Science are also passionate political activists. Maybe it is understood by scientists that peer review has flaws, but when they appear on morning talk shows the term "peer review" is wielded as a holy sword.

Regardless of the state of the current process, how could review become anything but better from increased openness?

>"And since reviewers know that they can't verify correctness, they tend to look for theoretical papers that very closely reproduce experimental results."

This seems to be part of the problem. "We'll use this data series for the 60 years that it agrees with everybody else, then throw it out for the 40 years where it doesn't, then not show anybody the 40 years we threw out".

But then again, I'm no climate scientist. Although, one would think that no harm would be done by releasing the data, even if there were legitimate statistical reasons for truncating it.

>"Ultimately, you have a system where a lot of genuinely smart, skeptical people are attacking ideas from all sides until either a consensus emerges, or the idea is destroyed."

If you're a smart, skeptical person attacking a paper that shows global warming to be severe, Michael Mann will get you fired. Or ignore your ideas. Mann just published a paper that appears to use an inverted data series after McIntyre already correctly pointed out that it was inverted, out of pride I suppose. Fine science, that.


It certainly doesn't help in this regard that many scientists involved in Environmental Science are also passionate political activists.

Absolutely.

I think we're going to have to have some kind of new certification system for research that affects public policy. Scientists simply cannot be activists -- the conflict of interest is too great. If you find a meteor is going to strike the earth next year, you'd better be spending your time on orbital calculations and mass estimates, not on the Today show. Leave that to the politicians. Mind your knitting.

In the code arena I'll be charitable and say that mistakes were made at CRU. Public-funded science should run from a public wiki or source control system where the scientists are the only authors but the public at large are readers. Yes, I know that will drive some scientists mad compared to the current secretive system but it's time to let some daylight in, folks. You'll never, ever win an argument with an honest skeptic if you're keeping secrets. As for the dishonest ones? They're not your job. Mind your knitting.


"Public-funded science should run from a public wiki or source control system where the scientists are the only authors but the public at large are readers."

The IPCC report is free and public, for exactly this reason. It's quite plain from their arguments that few "skeptics" ever actually look at it.

"Yes, I know that will drive some scientists mad compared to the current secretive system but it's time to let some daylight in, folks. You'll never, ever win an argument with an honest skeptic if you're keeping secrets."

Riiight. Because the politicians and lobbyists who advocate for the "skeptics" are paragons of honesty and openness. Those dirty scientists could learn a thing or two.

(I have to say...your comment is the most brazen example of black-is-white, up-is-down spin I've ever seen on HN. Bravo.)


The IPCC report is a political document. It uses results that support its position and ignores everything else. The entire book "The Deniers" is a collection of complaints by scientists that the IPCC twisted their results to support its agenda. Ironically, nearly all of the scientists still "believe" in global warming, they just think THEIR work was misused.


> Because the politicians and lobbyists who advocate for the "skeptics" are paragons of honesty and openness.

You got it backwards: nobody expects politicians and lobbyists to be paragons of honesty and openness!

That is why when scientists start acting as politicians and lobbyists, they lose their credibility, and that is the issue he was pointing out.


EDIT: I see some comments that are already talking about huge data volumes.

To those guys: simply take a look at Google, or Amazon, or the credit-card processors. Lots of places deal with huge datasets in a peer-reviewed, transparent manner. Of course, use common sense here.


In what meaningful sense do Google or Amazon have peer-reviewed and transparent data sets?

Please be specific.


I'll pick on Google, working from memory.

1) Google is notorious for peer-reviewing code.

2) IIRC, Google also has some functional/provable code.

In general, large, critical datasets are routinely processed by businesses that use separate testing groups, code reviews, and tiger teams to validate each other's work. None of this is new information (at least to me).

Of course, transparency doesn't extend beyond the organization's walls in many instances. If you thought I was saying that it did, then I miscommunicated.


However, in many instances, transparency does extend well beyond an organization's boundaries into the public domain.

Many scientific and business code bases are open source. Since the advent of portable languages and VM-based back-ends, the practice of open-sourcing software has increased, not decreased. Many algorithms themselves are published and peer-reviewed: you can buy Cormen's Introduction to Algorithms, Knuth's books, or Sedgewick's books, or browse the Stony Brook Algorithm Repository. Scientific software in particular has been drawn to open source. From the beginning, scientific software efforts focused on increasing accuracy and minimizing error; start from Anthony Ralston's books and Richard Hamming's book, for instance.

Peer review is available and performed at numerous levels. The original criticism that "no one" does peer review is overdrawn, misleading, and, I would say, unfounded.


Yes, you did miscommunicate; even allowing for the miscommunication you're making an extremely poorly-considered analogy.

The kind of peer review that matters for purposes of scientific integrity is review by outsiders; e.g., a paper is "peer-reviewed" when experts in the field not involved in writing it or conducting its research give it a going-over and see if it appears to hold up.

The kind of transparency that matters for purposes of scientific integrity is making data available as-is to outsiders, so that they can meaningfully replicate your results ab initio (or, perhaps, not!).

Neither Google nor Amazon conducts meaningful amounts of peer review in the scientific sense, nor are they transparent in the scientific sense (nor should they be; the last thing I want is any old anyone seeing the raw data backing someone else's gmail account or search history).

So you're making a useless assertion in the context of the issue at hand: neither Google nor Amazon does much "peer review" (to my knowledge Microsoft in fact does, to some limited extent, with shared source and by hiring 3rd-party auditors for some important code chunks); neither is "transparent".

In this thread you'll find multiple people speaking from positions of actual experience working on large-scale endeavors of scientific computation who've commented at length upon their internal practices.

I'm not going to repeat what they've typed, but if you read those comments you will see that at least the people commenting here were in fact engaging in what you're calling "peer review" and "transparency" in their development practices. Their reports match my experience of scientific computation with large annual budgets, though I won't claim any authority for my anecdotal experiences.

I will close out by posting a protip here; you might benefit from it but it's not specifically aimed at you.

If a blog with a name like "chicagoboyz" makes a bold, sweeping, and somewhat shocking assertion about an entire area of human endeavor -- one in which it has no plausible claim to expertise (it's an econ blog, not a large-scale computation blog, and no claim of direct experience was made that I saw) -- and you find yourself nodding your head and thinking "yeah, that sounds plausible", proceed to do the following:

- slow down, step away from the computer, and count to 20 backwards in Greek

- ask yourself: do I have any concrete knowledge, at all, about the area in which this claim is being made? Any involvement in a project in that area, or business involving that area, etc.? (In this case: do you have any direct experience with large-scale computational efforts in science? do you know anyone who's been involved in such a project? anything beyond the flamebait du jour?)

- if you do have any concrete knowledge: great, you have at least some nonzero evidence base against which your initial "yeah, that's right" feeling may or may not be substantiated. Think carefully about what you already know and see if the "yeah, that's right" feeling holds up.

- if you don't have any concrete knowledge: you've given yourself an awesome opportunity for self-discovery and personal growth. Clearly there's something that makes you want to uncritically believe this specific sweeping claim about an area about which you literally know nothing concrete. We generally consider people who believe sweeping claims without evidence to be suckers, and we've found an area where your preexisting biases leave you a sucker, and therefore at the mercy of others. You might still have the right intuition about the sweeping claim, but at least take the opportunity to de-suckerify yourself on this front before drawing your conclusion.


"Then I think there is a certain disconnect between how scientists understand science and how it is presented to the public at large."

There's clearly a disconnect between how science is performed, and how it is perceived. It's not clear that this is the fault of scientists. It's also very clear that certain political groups intentionally cultivate this misunderstanding, so that they can convince the public that an entire field of science is wrong.

"It certainly doesn't help in this regard that many scientists involved in Environmental Science are also passionate political activists."

I have unpleasant news for you, then: the "skeptical" side is composed almost exclusively of politicians and economists.

"This seems to be part of the problem. 'We'll use this data series for the 60 years that it agrees with everybody else, then throw it out for the 40 years where it doesn't, then not show anybody the 40 years we threw out'."

That's not actually what they did, and the fact that you don't understand the nuances of the subject goes back to what I was saying about spending more time looking for "smoking guns" than actually understanding the science. The tree-ring data wasn't omitted to make models fit better (that paper wasn't even about a model) -- it was omitted because it was unreliable data.

If you ask me, the tree-ring data should never have been included in that paper at all. And if it weren't, as far as I can tell, it wouldn't have affected the results -- except that there would be an obvious (but largely irrelevant) gap in modern temperature data -- and the "skeptics" would no doubt be attacking some other absurd corner of the climate change research.


>"It's not clear that this is the fault of scientists. It's also very clear that certain political groups intentionally cultivate this misunderstanding, so that they can convince the public that an entire field of science is wrong."

It sounds like you're excusing bad behavior by saying that other people sometimes behave badly. There is no "certain political group" that is over-representing the value of "peer review" to the public, unless we are talking about environmentalists:

http://www.reuters.com/article/latestCrisis/idUSGEE5AP1Y5

No "certain political group" forced the scientists involved in this scandal to act in unethical and secretive manners. Just because politicians act in unethical and dishonest ways doesn't mean that scientists are excused for sinking to their level.


Nope. I'm not excusing bad behavior. I think those scientists were being immature and irresponsible. But their bad behavior doesn't change my belief that the field of climate change research is valid.

Truth be told, the evidence I've seen doesn't even change my confidence level in their paper -- because unlike "skeptics", I never felt that any one source of historical temperature data was terribly likely to be precise in the first place. Fortunately for climate science, there are many other independent sources of historical temperature data, and all show the same trend. I believe it's extremely unlikely that they're all wrong. Certainly not in the same way.

(In case you were wondering, that's a core difference between skepticism, and "skepticism". A skeptic can be convinced that he's wrong. A "skeptic" wants only to convince others.)


Regardless of the state of the current process, how could review become anything but better from increased openness?

Increased openness is definitely a direction scientific computing needs to move toward. The trap is conflating that with "peer review", which, for all its merit, is often strongly tied to red tape and frustration. It's impossible to peer review software not because it's impossible to verify techniques, but because without some level of trust somewhere it becomes impossible to publish, to communicate.

There's definitely a startup in there. There are tons of opportunities to kill off scientific journals as they are.

The challenge is that you can't kill peer review at the same time. Peer review is still, at the end of the day, critical. It's not a holy sword of truth; it's more like rinsing your plates in the sink before you put them in the dishwasher.


First, the notion that anyone could "peer-review" software for correctness is obviously absurd.

See the post above: http://news.ycombinator.com/item?id=968727

It's hard to believe that the parent is rated so highly. Seriously, as science uses more and more software and computers, science is going to have to move closer to the reliability standards of the software industry, as well as to scientific standards of reliability. Is that not obvious?

Uh... also, the parent distorts the linked article, which readily admits peer review isn't a panacea ("Worse, the concept of 'peer review' is increasingly being treated in the popular discourse as synonymous with 'the findings were reproduced and proven beyond a shadow of a doubt.'"). Did the parent even read the original article???


standards of reliability of the software industry

Standards of reliability? Are you talking about the same software industry I'm familiar with?


Uh, the software industry may have a long way to go compared to where one might want it to be. But it has come a long way compared to one person writing batch scripts or (the example from the article) one person writing spreadsheets.


It's also worth noting that in places where reliability is worth the massive decrease in productivity (e.g. robots on Mars, avionics, etc.), tools like formal verification and multiple implementations for verification of behavior are used.
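A minimal sketch of the multiple-implementations idea: compute the same quantity two independent ways and flag any disagreement beyond floating-point tolerance. (The sample variance here is just an invented stand-in for whatever is actually being computed.)

    # N-version sketch: two independent implementations of the sample variance,
    # cross-checked against each other. Disagreement flags a bug in one of them.
    import math

    def variance_two_pass(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    def variance_welford(xs):
        # Welford's online algorithm: an independently derived formulation.
        n, mean, m2 = 0, 0.0, 0.0
        for x in xs:
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)
        return m2 / (n - 1)

    data = [1.1, 2.3, 0.7, 5.2, 3.3]
    a, b = variance_two_pass(data), variance_welford(data)
    assert math.isclose(a, b, rel_tol=1e-12), f"disagree: {a} vs {b}"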


Those are situations where scientists hire professionals to do the programming, so that supports joe_the_user's point.


Members of all fields of science are calling for open data and source code, so I read a political message in your opposition (and focus on climatology).

As for correctness, the article doesn't use that CS term and I haven't seen anybody ask for that impossible standard (in the strict CS sense.)

Find flaws earlier. That's the goal.


so I read a political message in your opposition (and focus on climatology)

Well, here's the very first sentence of the article he's commenting on: "Recent revelations that the peer review system in climatology might have been compromised by the biases of corrupt reviewers miss a much bigger problem."

If focusing on climatology is evidence of a "political message", then I think you've fingered the wrong suspect.


Again, the calls for open access come from scientific fields broadly (where it already isn't happening voluntarily). If you're going to refute that on HN, where the audience isn't pitchfork-wielding, science-averse troglodytes, then you're probably making a narrow argument targeting a specific opposition, i.e. political.


I don't understand. The person you replied to didn't object to open access. He objected to this particular article, which (as it happens) says other things besides "open access would be a good thing".

The article suggests that part of the peer review process should be an examination of the software used, which is about as practical as saying that an experimental paper's reviewers should visit the lab and examine all the apparatus. (Actually, considerably less practical, as anyone who's ever done any sort of review on any nontrivial piece of software knows.) This is not the same as saying that there should be "open access".

The article opens by citing an alleged integrity problem in climatology and saying that lack of access to the relevant software is an even bigger problem. This is not the same as saying that there should be "open access".

The article has a very clear political agenda, as you can note from its beginning (climate-specific stuff, which you yourself have suggested taking as diagnostic for political agenda) and its end (approving links to Eric "famous computer scientist" Raymond and TCS Daily). timr is (1) pointing out the political agenda, for which you accuse him of being political, and (2) describing some specific problems he has with the article, which you characterize as "refuting calls for open access" even though his complaints do not at all take that form.

(I think it would be wonderful if every scientific publication were accompanied by every byte of code and data used in the work it describes, or at least the nearest thing to that that's legally and commercially feasible. But it wouldn't cure the problems that the article professes to find in the peer review process, and the only difference it would make to arguments over climate science is that the no-anthropogenic-global-warming people would find something else to complain about.)


timr:

Second, this type of critique is pushing hard on a straw man. The implicit assumption is that each paper is a stand-alone morsel of truth, and that the peer-review process somehow guarantees a paper's claims. This is not the case, and no scientist believes it to be true...The "skeptics" have it wrong, not because they're incorrect about human nature, but because they're attacking a fictional vision of science that exists only in their minds.

The article:

The concept of “peer review” is increasingly being treated in the popular discourse as synonymous with “the findings were reproduced and proven beyond a shadow of a doubt.”

This is never what peer review was intended to accomplish. Peer review functions largely to catch trivial mistakes and to filter out the loons. It does not confirm or refute a paper’s findings. Indeed, many scientific frauds have passed easily through peer review because the scammers knew what information the reviewers needed to see.


Two myths my non-science friends understand to be true.

a) A peer reviewed paper is very close to fact.

b) Scientists are essentially free from backbiting, tribalism and empire-building. They dispassionately rely on provable data.

Modify slightly to ignore any scientist funded by the wrong people.


You may also want to add,

c) Data obtained with public money is considered non-proprietary.

I have spoken with several people who read about climate scientists seeming to hide, or being reluctant to share, information collected with public money, and who interpret this as the scientists being evasive or untruthful. They are always surprised when I tell them that project scientists, particularly on large space and earth science projects, usually get proprietary access to the data for some number of years as payment for their involvement in support of the project.


This reminds me of one case where a person published a phylogeny, a reader felt that it was flawed, and so they went over the code together. They discovered a major bug which, when fixed, completely invalidated the results. The paper was retracted. (I can't source this, because it was a conversation I had with a postdoc and I don't remember the details)

In general, though, the fact that code isn't reviewed isn't the biggest problem with peer review. The problem is that it's simply very difficult to spot the real problems if you're just reading a paper, even for lab experiments: just as you can't see the code, you can't watch them perform the experiments. With my own work, reviewers never actually pick up on the actual problems with the research (which I know only too well).


This is an excellent find. Thanks.

"How did we let this problem develop? I think it was simply a matter of creeping normalcy."

Probably so. I know from working with various industries, such as the military radio guys, that gradually their jobs became all about software. Twenty years ago you'd have one software guy and 30 engineers; now you have 30 software guys and 1 engineer. It was very late in the day when folks sat down and said "Hey! We're really a software organization now, and this stuff is really important, so we'd better start tightening things up." I imagine some types of academic research are in that same place now.

It just creeps in over many years.

Open Source Science is looking better and better.


*sigh* The general public will be heartbroken when they find out that scientists are in fact regular human beings. Science has become a surrogate for their faith.


It's unusual to peer review software in computer science research as well. There is no requirement that you release the source or even the binary of your shiny new system, especially in time for review. I've recently reviewed a paper that had a reference to an open-source project associated with the paper on a well-known site for hosting such things, but the project was content-free.


I think this article presents a brief, well written summary of an important issue from someone who clearly knows what he's talking about and I would recommend reading it.


Regarding the claim that "NO ONE" peer reviews (as the title of the article has it) and the "creeping normalcy" that was apparently implied: I can think of numerous counter-examples in the software industry to what is stated about lack of review. For instance:

1. A company in the banking industry subjects every step of the software development process to code review, plus security code review, and Q/A testing, every time it releases code for production.

So... Shannon Rose wasn't talking about the banking industry or business code review. The author was talking about scientific systems. Ok.

2. Let's take a bioinformatic knowledge-base company. The code used to search and extract information from the knowledge base, though proprietary, is subject to code review and Q/A testing. In addition, the knowledge base itself is vigorously Q/A tested, peer reviewed, and signed off before release.

But Shannon Rose wasn't talking about bioinformatics? Really? Ok....

3. Let's take a civil engineering company, particularly one that has been in the nuclear engineering industry (such as it has been). Structural engineering code used in nuclear power plants is constantly verified by running known engineering cases against the code base and rigorously checking the results, footnoting and explaining every detailed difference down to the nth decimal place.

But Shannon Rose wasn't talking about actual critical engineering code ... But wait. There's more.

The entire engineering software code base was open source, as are many scientific software projects. Furthermore, it was looked at in detail by physicists, engineers, and software engineers from within the company who weren't even working on the project. The code base was thus open to examination by the engineers and scientists who use it, as well as by outsiders.

But, Shannon Rose wasn't talking about the nuclear engineering industry. No. This article concerns a particular set of useful but not life-threatening scientific findings that the author disagrees with, and where there are people who oppose learning and understanding what the truth is and what it means. Maybe that idea violates some person's understanding of some biblical text somewhere, because somehow humans happen to be responsible for mucking up a whole planet.

So, when that biblical text was written, who did peer review on that text? The interpretation?

The fact is, I can think of countless examples where people who work on scientific software, who after all believe that they are responsible for what they do, take extra precautions to verify their results and subject their work to review. (Because, after all, we don't want another Three Mile Island, or another Y2K.)

How many examples do I have to cite where there are responsible people before we can reject the idea that "no one" does peer review of any scientific software?

A footnote: the author also wrote that they think the Obama administration is "bringing martial law to the U.S." And why is this? Because the Bush administration was too incompetent to put heinous criminals on trial, so we won't get to convict them.

And somehow this mistake is Obama's fault. If McCain had won, it would presumably be his fault instead, because one president couldn't be responsible enough to begin with. Khalid Sheikh Mohammed was actually put under military tribunal and allegedly "confessed." Who did peer review on the trial of Khalid Sheikh Mohammed?

Answer: No one. There was a document written up. No one signed it.

No. This article is not about science, or software. It claims instead that there is a general, widespread failure by the people who work on and review scientific software. That claim is irresponsible on its own. Its follow-up conclusion, that this "completely irresponsible" software review can be cleaned up by so-called "responsible" politicians (such as those who brought us Abu Ghraib, torture, and the possible untriability of Khalid Sheikh Mohammed), is beyond hogwash.

Peer review certainly is an issue and always requires addressing. But I think this author should take their overactive bile somewhere else. </rant>


Also these climate models are likely calculated on Intel hardware: a company notorious for floating-point bugs. Even if we verify the software is correct, unless we see the RTL that describes the logic behind these chips, it's all just hocus-pocus as far as I'm concerned.


You can't verify that software is correct; it can always fail at time t+1.

Basic configuration testing on PPC or other architectures would highlight a lot of the error-sensitive paths in the math libraries.
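To make that concrete, here is a toy example (in Python, with invented numbers) of the kind of error-sensitive path such testing shakes out: naive accumulation silently drops terms that compensated (Kahan) summation preserves.

    # Naive vs. compensated (Kahan) summation. Adding many tiny terms to a
    # large one loses them entirely in the naive loop; the compensated sum
    # recovers the rounded-off bits.
    import math

    def naive_sum(xs):
        total = 0.0
        for x in xs:
            total += x
        return total

    def kahan_sum(xs):
        total, comp = 0.0, 0.0
        for x in xs:
            y = x - comp            # re-inject the error from the last step
            t = total + y           # low-order bits of y may be lost here...
            comp = (t - total) - y  # ...and are captured into the compensation
            total = t
        return total

    xs = [1.0] + [1e-16] * 1_000_000   # true sum is 1 + 1e-10
    print(naive_sum(xs))               # 1.0 -- every tiny term rounds away
    print(kahan_sum(xs))               # ~1.0000000001, matching math.fsum(xs)
    print(math.fsum(xs))               # exactly rounded reference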



