
“Supernova (SN) cosmology is based on the key assumption that the luminosity standardization process of Type Ia SNe remains invariant with progenitor age. However, direct and extensive age measurements of SN host galaxies reveal a significant (5.5σ) correlation between standardized SN magnitude and progenitor age, which is expected to introduce a serious systematic bias with redshift in SN cosmology. This systematic bias is largely uncorrected by the commonly used mass-step correction, as progenitor age and host galaxy mass evolve very differently with redshift. After correcting for this age bias as a function of redshift, the SN data set aligns more closely with the cold dark matter (CDM) model” [1].

[1] https://academic.oup.com/mnras/article/544/1/975/8281988?log...



I know the team that did this. In fact, I was listening to their seminar just a few days ago. They are very careful and have been working on this for a long time. One caveat they readily admit is that the sample used to build the luminosity–age relation has some biases, such as galaxy type and relatively low redshift. They will be updating their results with Rubin LSST data in the next few years.

Exciting times in cosmology after decades of a standard LCDM model.


> after decades of a standard LCDM model

Could you help me understand this sentence: "After correcting for this age bias as a function of redshift, the SN data set aligns more closely with the cold dark matter (CDM) model”?


The CDM model has no dark energy, unlike the LCDM model. The L stands for Lambda, which is the dark energy term in the Einstein equations. So they are saying when accounting for this effect, our universe looks more like a universe without dark energy, at least when only considering the supernovae probe.
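The observational lever here is the supernova distance modulus: at a given redshift, a universe with dark energy predicts fainter (more distant) SNe than a matter-only one. A minimal sketch, assuming a flat FLRW universe and illustrative parameter values (H0 = 70, Omega_m = 0.3 vs 1.0 — not values from the paper):

```python
# Sketch: distance modulus mu(z) in a flat FLRW universe, comparing
# LCDM (Omega_Lambda = 0.7) against a matter-only CDM model (Omega_Lambda = 0).
# Parameter values are illustrative, not taken from the article.
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s
H0 = 70.0            # Hubble constant, km/s/Mpc (assumed)

def distance_modulus(z, omega_m, omega_lambda):
    """mu = 5*log10(d_L / 10 pc) for a flat universe, by numeric integration."""
    zs = np.linspace(0.0, z, 2001)
    # E(z) = H(z)/H0 for a flat universe: sqrt(Om*(1+z)^3 + OL)
    inv_e = 1.0 / np.sqrt(omega_m * (1 + zs) ** 3 + omega_lambda)
    # comoving distance (Mpc): (c/H0) * integral of dz/E(z), trapezoid rule
    d_c = (C_KM_S / H0) * float(np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zs)))
    d_l = (1 + z) * d_c             # luminosity distance, Mpc
    return 5 * np.log10(d_l * 1e5)  # 10 pc = 1e-5 Mpc

mu_lcdm = distance_modulus(1.0, 0.3, 0.7)
mu_cdm = distance_modulus(1.0, 1.0, 0.0)
# LCDM predicts a larger mu (fainter SNe) at z=1 than matter-only CDM;
# the 1998 accelerating-universe result was that observed SNe looked
# too faint for a CDM universe. An age-dependent luminosity bias would
# eat into exactly that faintness difference.
print(mu_lcdm, mu_cdm, mu_lcdm - mu_cdm)
```

At z = 1 the two models differ by roughly half a magnitude, which is why a systematic bias in SN standardization at the tenths-of-a-magnitude level matters so much.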


That's only if you consider the supernovae data alone. In combination with other probes like BAO, etc, the combined data are pointing to a Universe with a dynamical (or time varying) dark energy model.


Just curious, is this dark matter holding back the universal expansion?


Our best guess is “maybe?”


Is there a recording of their seminar anywhere?


It's not publicly available. Maybe for the best, haha. The speaker at some point went on a bit of a tirade against many people in the supernova cosmology community. I think he endured many years of being ignored or belittled.


Did he yell "They LAUGHED at me at Heidelberg! They said I was mad. MAD!"?

It is a very fundamental shift, though. The whole "Dark Energy/Matter" hypothesis has always seemed to me to be a bit of a "Here there be dragons" kind of thing, but I am nowhere near the level of these folks, so I have always assumed they know a lot that I don't.


It is, but that’s also kinda the point. It’s just a variable to stand in for “whatever tf mass we’ve been missing this whole time” or what-have-you.


I've never really gotten this criticism. Science has worked on "here be dragons" ever since it became a "thing".

Neutrinos took like 40 years to discover after earlier experiments showed that either all of modern particle physics was wrong, or there was something we couldn't see.


It wasn't a criticism. At least, not from me. It was just an observation.


I did a deep dive into cosmology simulations about a year ago. It was striking how much is extrapolated from the brightness of small numbers of galaxy-surface pixels. I was looking at this for galaxies and stars, and observed something similar. The cosmology models are doing their best with sparse info, but to me the predictions about things like Dark Matter and Dark Energy are presented with more confidence than the underlying data supports. Not enough effort is spent trying to come up with new models. (Not to mention the tendency to shut down alternatives to Lambda-CDM, or to resist a better understanding of the consequences of GR and of the assumptions behind applying Newtonian instant-effect gravity in simulations.)

Whenever I read things like "This model can't explain the bullet cluster, or X rotation curve, so it's probably wrong" my internal response is "Your underlying data sources are too fuzzy to make your model the baseline!"

I think the most established models are doing their best with the data they have, but there is so much room for new areas of exploration based on questioning assumptions about the feeble measurements we can make from this pale blue dot.


That fuzziness can be quantified: it's called error bars. Whenever physicists perform a measurement, they derive a confidence interval from the instruments they use. They take great care in accounting for the limits of each individual instrument, perform error propagation, and report the uncertainty of the final result.

Consider figure 5 of the following article for example:

https://arxiv.org/abs/1105.3470

The differently shaded ellipses represent different confidence levels. For the largest ellipse, the probability of the true values being outside of it is less than 1%. We call that 3-sigma confidence.

> Whenever I read things like "This model can't explain the bullet cluster, or X rotation curve, so it's probably wrong" my internal response is "Your underlying data sources are too fuzzy to make your model the baseline!"

Well, then do some error analysis and report your results. Give us sigmas, percentages, probabilities. Science isn't based on gut feelings, but cold hard numbers.
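The sigma-to-probability translation above can be made concrete. A minimal sketch (assuming Gaussian errors; the function names are mine, not from any cosmology pipeline), showing the two conventions used for confidence contours:

```python
# Sketch: converting "n-sigma" into outside-probabilities, the quantity
# behind statements like "less than 1% chance the true values lie
# outside the 3-sigma ellipse". Assumes Gaussian errors.
import math

def outside_prob_1d(n_sigma):
    """1D Gaussian: probability of landing more than n_sigma from the mean
    (two-tailed), via the complementary error function."""
    return math.erfc(n_sigma / math.sqrt(2))

def outside_prob_2d(n_sigma):
    """2D Gaussian: probability outside the ellipse at Mahalanobis
    distance n_sigma (chi-square with 2 degrees of freedom)."""
    return math.exp(-n_sigma ** 2 / 2)

for s in (1, 2, 3):
    print(s, outside_prob_1d(s), outside_prob_2d(s))
# In 1D, 3 sigma leaves about 0.27% outside; in the strict 2D convention,
# about 1.1%. Parameter-constraint plots in papers usually label contours
# by the 1D-equivalent probabilities (68.3%, 95.4%, 99.7%), which matches
# the "less than 1% outside" reading of a 3-sigma ellipse.
```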


It's not just a question of instrumental error though. There are also assumptions being used in interpreting the data from the instruments, and it's not generally possible to assign them reliable probabilities.

e.g. the first line of the article's abstract quoted above:

"Supernova (SN) cosmology is based on the key assumption that the luminosity standardization process of Type Ia SNe remains invariant with progenitor age."

If the results reported in the article are right, the confidence we should have in this assumption, and therefore any results relying on it, have just radically changed.


That's moving the goalposts. I was specifically responding to concerns about fuzzy data.

It's true that assumptions have to be made, and those can and should be questioned, but that wasn't the concern of the comment I replied to.


My concern is model accuracy holistically: analyzing likelihood-of-being-correct including all assumptions. I think the post you are responding to is in context.


On the todo list! Not enough bandwidth, but hoping to get to that in the next year. Great point.

edit: That 1% figure doesn't sound possible unless it has its own set of assumptions that need a confidence!


Yeah, a lot of stuff seems to be based on these fuzzy data, which I also think are unreliable.



