"He and Bernal provide a meta-analysis of the disparate measurements with a “Bayesian” statistical approach. It separates measurements into separate classes that are independent from one another — meaning that they don’t use the same telescope or have the same implicit assumptions. It can also be easily updated when new measurements come out. “There’s a clear need — which you would’ve thought statisticians would’ve provided years ago — for how you combine measurements in such a way that you’re not likely to lose your shirt if you start betting on the resulting error bars,” said Peacock. He and Bernal then consider the possibility of underestimated errors and biases that could systematically shift a measured expansion rate up or down. "

I am not sure how to parse this quote, and it seems unlikely that appropriate statistical models weren't already available.




They've made a framework for combining datasets from different sources, something that is tricky in general because of the need to guard against unknown differences between experiments (toy sketch of the idea below).

It's also not obvious that there should be a general statistical framework, because combining datasets is usually done carefully by people familiar with both experiments, so essentially it's handled on a case-by-case basis rather than wholesale across the field.
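
For anyone curious, here's a toy numpy sketch of the general idea as I understand it from the quote: group measurements into independent classes, give each class a nuisance parameter that can inflate its quoted error bars, and marginalize over it, so a class whose internal scatter exceeds its stated errors gets down-weighted automatically. To be clear, the numbers, the class names, and the flat prior on the rescaling factor are my own illustrative assumptions, not the actual Bernal–Peacock model.

    import numpy as np

    # Hypothetical H0 measurements (km/s/Mpc), grouped into classes that
    # share systematics (e.g. distance-ladder vs. early-universe).
    # Values are illustrative, not the paper's actual data.
    measurements = {
        "ladder": [(73.0, 1.4), (74.0, 1.9)],
        "early":  [(67.4, 0.5)],
    }

    h0 = np.linspace(60.0, 80.0, 2001)   # grid over the expansion rate
    alphas = np.linspace(1.0, 3.0, 41)   # per-class error-rescaling factors
    dh = h0[1] - h0[0]

    def class_logl(data, alpha):
        """Gaussian log-likelihood of one class, errors inflated by alpha."""
        logl = np.zeros_like(h0)
        for value, sigma in data:
            s = alpha * sigma
            logl += -0.5 * ((h0 - value) / s) ** 2 - np.log(s)
        return logl

    posterior = np.ones_like(h0)
    for data in measurements.values():
        # Marginalize each class over its own rescaling factor (flat
        # prior); the -log(s) normalization term penalizes needless
        # error inflation, so alpha > 1 only wins when the data demand it.
        logls = np.array([class_logl(data, a) for a in alphas])
        like = np.exp(logls - logls.max()).sum(axis=0)
        posterior *= like / like.max()

    posterior /= posterior.sum() * dh
    mean = (h0 * posterior).sum() * dh
    std = np.sqrt(((h0 - mean) ** 2 * posterior).sum() * dh)
    print(f"Combined H0 = {mean:.1f} +/- {std:.1f} km/s/Mpc")

With mutually inconsistent classes like these, the marginalization widens the combined error bar relative to a naive inverse-variance average, which is the "not losing your shirt" property Peacock is describing.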



