
My main beef is how, in parallel to the pure data and experiments, there’s all this commentary reaching back to other events (Chernobyl) or to expected results (“our models would predict”) to seed the idea that some of this data is just not significant.

To draw a bad analogy, if we had a plane crash and had to study how 5% survived, we wouldn’t be saying ‘previous crashes had everyone dead, we should not pay too much attention to this set of survivors’ or ‘our models predict bulky males to survive, young kids surviving here is just an oddity that has nothing to do with our study’.

Also, if there’s actual data not matching the model, shouldn’t we think long and hard about the model? And if we do, the argument wouldn’t be that the data doesn’t match the model, but that we have additional data explaining the root cause of the oddities.

I am not saying we must tie everything back to a single source, and for instance the fact that having more screening than before actually affects the rate of detection and changes the statistics is a very valid point among many.

But brushing away swaths of oddities because they’re not expected feels lazy. Basically I’m wondering whether we would find causes aside from direct radiation that could explain some of the issues. For instance, the whole country’s rising cancer rate shouldn’t be an argument to dismiss Fukushima having something to do with it; hell, the government shipped contaminated soil around the country, there were multiple food mislabeling issues, etc. We should pin down an actual cause of the rise of cancers around the country before saying it’s unrelated.




Well, to be fair, the paper you picked did no experimentation. It's a review paper. Not only that, it's a review paper whose express purpose was to put forth the idea that the high thyroid rates in Fukushima prefecture are likely not due to the disaster (they say as much in the conclusion). It is very possible that it suffers from confirmation bias (I don't know enough about the topic to comment one way or another). You can find review papers like this on just about any topic.

If I understand your comment, though, I think you are still misreading the paper. The paper states that the data is consistent with existing models. It is not consistent with the idea that the thyroid problems were caused by the disaster. As the paper states, we do not know what the rate of thyroid problems was before the disaster, because we did not screen people in the same way or at the same rate. The equipment being used now is much better at detecting thyroid problems. Other places using this equipment (for example, South Korea) are also finding high rates of thyroid problems. When screening is done in other prefectures, they also find high rates of thyroid problems. Detection of thyroid problems is on the rise all over the world. (All this according to the paper -- I haven't the foggiest clue if it is true.)
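To make the screening point concrete, here is a toy back-of-the-envelope sketch. Every number in it is invented by me, not taken from the paper; the only point is that detected cases can jump by an order of magnitude when you screen a much larger fraction of people with much more sensitive equipment, even if the true underlying rate never moves:

    # Hypothetical illustration of screening/detection bias.
    # All numbers are invented for illustration, not taken from the paper.

    def detected_cases(population, true_prevalence, screened_fraction, sensitivity):
        # Expected number of cases a screening program will find.
        return population * true_prevalence * screened_fraction * sensitivity

    pop = 300_000        # children in a hypothetical prefecture
    prevalence = 0.003   # assumed *constant* true rate of thyroid abnormalities

    before = detected_cases(pop, prevalence, screened_fraction=0.05, sensitivity=0.4)
    after  = detected_cases(pop, prevalence, screened_fraction=0.80, sensitivity=0.9)

    print(before, after)  # 18.0 vs 648.0 -- a ~36x jump with no change in the true rate

Again, those are made-up numbers. The point is just that "more cases found" and "more cases occurring" are different claims, and the paper's whole argument is that the Fukushima screening data looks like the former.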

In other words, the paper is explicitly saying that while the rates are high, the high rates are likely due to a variety of other causes. The authors go to considerable effort to document in detail what those causes are. You seem fixated on the idea that the high rates are especially unusual, or outside the boundaries of what we would expect had there not been a disaster. This entire paper was written to dispute that point of view. It is not the case that they are simply ignoring it. I think the biggest thing to understand is that they claim there are places in the world with higher rates that did not have a nuclear disaster. Since the data does not match that of areas that did have nuclear disasters, but does match data from areas that did not, they conclude that the thyroid problems are likely caused by something other than the disaster.

If you dispute that, it's entirely up to you. Like I said, I have no idea if it is true or not -- I'm just telling you what they wrote. Generally, I don't like review papers like this because of the problem with confirmation bias. People start with the conclusion they want to have and then find evidence to support it. You can always find that kind of evidence. It's not even a well written paper... but I didn't choose it ;-)

I think the best way to learn whether or not these researchers know what they are talking about is to read the citations they make. They make a lot of claims about detection rates of thyroid problems, the rate at which these turn into cancers, etc. If they are wrong that the current rates can be explained by non-disaster causes (or that the data does not support a disaster cause), then you should be able to find the problem in those citations. Remember, it's a review paper! They are only gathering data from other papers and putting it together.

I suppose, if I'm being particularly uncharitable: if the authors of the paper are lazy, then I'd ask you to at least reach their bar. If the data is rife with swaths of oddities, then write a review paper that points them out. Provide citations showing that the current data does not match our models. I bet you don't even need to do that. I bet there is at least one group of researchers in the entire world that has tried to write such a paper. Find it and we can have a much, much better conversation.





