
In 2012, NCEI -- then known as the National Climatic Data Center (NCDC) -- reviewed its methodology on how it develops Billion-dollar Disasters. NCEI held a workshop with economic experts (May 2012) and worked with a consulting partner to examine possible inaccuracies and biases in the data sources and methodology used in developing the loss assessments (mid-2013). This ensures more consistency in the numbers NCEI provides on a yearly basis and gives more confidence in the year-to-year comparison of information. Another outcome is a published peer-reviewed article, "U.S. Billion-dollar Weather and Climate Disasters: Data Sources, Trends, Accuracy and Biases" (Smith and Katz, 2013). This research found the net effect of all biases appears to be an underestimation of average loss. In particular, it is shown that the factor approach can result in an underestimation of average loss of approximately 10–15%. This bias was corrected during a reanalysis of the loss data to reflect new loss totals. It is also known that the uncertainty of loss estimates differs by disaster event type, reflecting the quality and completeness of the data sources used in the loss estimation. In 2019, six of the fourteen billion-dollar events (i.e., three inland flood events, the California/Alaskan wildfires, and tropical cyclones Dorian and Imelda) have higher potential uncertainty around the loss estimates due to less coverage of insured assets and data latency. The remaining eight events (i.e., the severe storm events producing tornado, hail, and high-wind damage) have lower potential uncertainty surrounding their estimates due to more complete insurance coverage and data availability. Our newest research defines the cost uncertainty using confidence intervals, as discussed in the peer-reviewed article "Quantifying Uncertainty and Variable Sensitivity within the U.S. Billion-dollar Weather and Climate Disaster Cost Estimates" (Smith and Matthews, 2015). This research is a next step to enhance the value and usability of estimated disaster costs given data limitations and inherent complexities.

In performing these disaster cost assessments, these statistics were developed using the most comprehensive public- and private-sector sources and represent the estimated total costs of these events -- that is, the costs in terms of dollars that would not have been incurred had the event not taken place.
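
To make the quoted "factor approach" and its 10–15% bias correction concrete, here is a minimal sketch in Python. The insured-loss figures, the 2.0 insured-to-total multiplier, and the 1.125 correction are invented for illustration; they are not NCEI's actual values.

    # Toy factor approach: scale insured losses up to estimated total
    # losses, then apply a correction for systematic underestimation.
    # Every number below is hypothetical.
    insured_losses = [1.2e9, 0.8e9, 2.5e9]  # insured losses in USD
    insured_to_total = 2.0                  # assumed insured-to-total multiplier
    bias_correction = 1.125                 # midpoint of a ~10-15% underestimate

    totals = [x * insured_to_total * bias_correction for x in insured_losses]
    print([f"${t / 1e9:.2f}B" for t in totals])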

So these are numbers that were revised upward by consultants, that aren't just a measure of the direct impact of the event but include a made-up estimate of what the economics would have been had it not taken place, counted as "economic damage," and that are built on comprehensive but uncertain data that is then compensated for with a model overlay that isn't disclosed or discussed.

This is a case where the more comprehensive the model you develop becomes, the higher the estimated economic impact of every event.

Also, the data is CPI-adjusted. The part that's left out is that they are including the kitchen sink: estimated damage to economic activity, not just actual physical damage. That is a methodological change to how damage is estimated, and it produces much higher numbers, so current events look more expensive than historical ones.
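
The CPI adjustment itself is simple arithmetic: scale a historical nominal loss by the ratio of index levels. A minimal sketch, using approximate CPI-U annual averages (check the actual BLS series before relying on these):

    # Inflate a nominal historical loss to later-year dollars using CPI.
    cpi = {1980: 82.4, 2019: 255.7}  # approximate CPI-U annual averages

    def cpi_adjust(nominal, from_year, to_year):
        """Rescale a dollar amount by the ratio of CPI index levels."""
        return nominal * cpi[to_year] / cpi[from_year]

    # A $1B loss in 1980 comes out to roughly $3.1B in 2019 dollars:
    print(f"${cpi_adjust(1.0e9, 1980, 2019) / 1e9:.2f}B")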

I’d really like to see more objective effort in the design of this research, as it smacks of agendizing the conclusion a little. Especially considering that the methodology for calculating economic losses was designed to produce a report called the U.S. billion-dollar weather and climate disaster loss estimate report (NCDC 2014). What were they going to do? Develop models that didn’t hit the billion-dollar threshold?

Especially when basically this is all designed around “Monte Carlo simulations,” which is a fancy-pants way of saying “we used Excel to produce an average with the AVERAGE, STDEV.P, and VAR.P functions.” The problem there is that a Monte Carlo simulation requires accurate and unbiased distributions for its many input values to build a model of “uncertainty” or risk, and then it spits out results by averaging the outputs to obtain an estimate. But the results of a Monte Carlo simulation are subject to statistical variability. While the simulations provide estimates and probabilities, they are not precise predictions, and it’s easy to change the outcome by selecting different inputs.
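
Here is a minimal sketch of that input sensitivity: two invented but plausible-looking loss distributions for the same event, each run through the same Monte Carlo procedure. The mean and the 90% range both move with the choice of inputs.

    # Monte Carlo: draw many hypothetical losses from an assumed input
    # distribution, then report the mean and a 90% interval. Swapping
    # the input distribution shifts the answer. Parameters are invented.
    import random

    def simulate(draw, n=100_000, seed=42):
        rng = random.Random(seed)
        samples = sorted(draw(rng) for _ in range(n))
        mean = sum(samples) / n
        lo, hi = samples[int(0.05 * n)], samples[int(0.95 * n)]
        return mean, lo, hi

    normal_draw = lambda rng: max(0.0, rng.gauss(1.5e9, 0.4e9))
    lognormal_draw = lambda rng: rng.lognormvariate(21.1, 0.5)  # median ~$1.5B

    for name, draw in [("normal", normal_draw), ("lognormal", lognormal_draw)]:
        mean, lo, hi = simulate(draw)
        print(f"{name}: mean ${mean / 1e9:.2f}B, "
              f"90% range ${lo / 1e9:.2f}B to ${hi / 1e9:.2f}B")

Both distributions could be justified from the same sparse historical record, and they produce different means and tails, which is exactly the input-selection problem.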

https://www.ncei.noaa.gov/access/billions/



