Under our best current theory (map), general relativity, total energy might not be conserved globally. The covariant divergence of the stress-energy-momentum tensor is zero, which means energy is conserved locally within a small region of spacetime.
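For concreteness, the local statement (standard GR notation, added here for reference; $T^{\mu\nu}$ is the stress-energy-momentum tensor, $\nabla_\mu$ the covariant derivative):

```latex
\nabla_\mu T^{\mu\nu} = 0
```

This holds at every point, but in a general curved spacetime it does not integrate to a globally conserved total energy.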
Physics is about producing models that make accurate predictions; it is a map, not the territory itself.
The 'crisis in cosmology' (e.g. the Hubble tension) is most likely a sign that current models of the universe are incomplete.
Energy is conserved in static spacetimes and asymptotically flat spacetimes.
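One standard way to make that precise (textbook GR, added for reference): if the spacetime admits a timelike Killing vector $\xi_\nu$, the current $J^\mu = T^{\mu\nu}\xi_\nu$ is covariantly conserved, and integrating it gives a globally conserved energy:

```latex
\nabla_{(\mu}\xi_{\nu)} = 0
\;\;\Rightarrow\;\;
\nabla_\mu\!\left(T^{\mu\nu}\xi_\nu\right)
= (\nabla_\mu T^{\mu\nu})\,\xi_\nu + T^{\mu\nu}\,\nabla_\mu\xi_\nu = 0
```

The first term vanishes by local conservation, the second because the symmetric $T^{\mu\nu}$ is contracted with the antisymmetric $\nabla_\mu\xi_\nu$.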
The Friedmann-Robertson-Walker spacetimes that cosmology often uses are neither static nor asymptotically flat.
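For reference, the FRW line element, with scale factor $a(t)$ and curvature parameter $k$; the explicit time dependence of $a(t)$ is what removes the timelike Killing vector used in the construction above:

```latex
ds^2 = -dt^2 + a(t)^2\left(\frac{dr^2}{1-kr^2} + r^2\, d\Omega^2\right)
```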
It is widely discussed, but all models are wrong; some are useful.
Noether's theorem is a powerful tool for finding useful models.
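Roughly, Noether's theorem ties each continuous symmetry of the action to a conserved current, with time-translation symmetry giving energy conservation; schematically:

```latex
\delta S = 0 \text{ under a continuous symmetry}
\;\;\Rightarrow\;\;
\partial_\mu j^\mu = 0,
\qquad Q = \int\! d^3x \; j^0 \text{ is conserved}
```

An expanding FRW background has no time-translation symmetry, which is why the corresponding globally conserved energy is missing.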
> Physics is about producing models that make accurate predictions
Very few models in physics ever make accurate predictions -- only in very limited experimental circumstances, mostly ones inaccessible at the time these models were developed.
The ability to craft these experimental conditions, which enable accurate prediction, is predicated on the models actually describing reality. How else would one control the innumerable causes, and construct relevant devices, if these causes did not exist and the devices weren't constructed to measure reality?
No no, the hard sciences are not concerned with prediction at all. They are concerned with explanation -- it is engineers who worry about predictions, and they quickly find that vast areas of science -- esp. physics -- are nearly impossible to use for predictive accuracy.
I mostly agree with you, but I think of prediction versus explanation as more of a spectrum where you can weight both. I mostly think about it from a machine learning perspective: if you do the matrix inversion you can say exactly where the coefficients come from, but with a random forest you might only get a SHAP value, and a transformer will never give you the exact answer as to how it arrived at a solution, since it is working in a latent space. In physics you want a system of equations that can describe some dynamics, and if it is terrible at doing that, you are not going to trust the model much. But the power of a model comes from its predictive ability. The Ptolemaic model mostly gets the planets right, but for the wrong reasons; Newton's law of gravitation gets them mostly right for the right reasons, and it didn't need the regular adjustments the Ptolemaic model did. So in that example both predictive ability and explainability matter, in different ways.
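A toy sketch of that contrast (my own illustration, not from the comment; assumes numpy and scikit-learn, with made-up synthetic data): a closed-form least-squares fit hands you coefficients you can read off against the generating process, while a random forest gives you predictions plus only aggregate importances.

```python
# Toy contrast: closed-form regression (readable coefficients) vs. a black-box model.
# Illustrative only; the data-generating process below is invented for the example.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=500)  # known mechanism

# Ordinary least squares via the normal equations: each coefficient has a direct reading.
X1 = np.column_stack([np.ones(len(X)), X])        # add an intercept column
beta = np.linalg.solve(X1.T @ X1, X1.T @ y)       # solves (X'X) beta = X'y
print("OLS coefficients:", np.round(beta, 2))     # roughly [0, 2, -1, 0]

# Random forest: similar predictive accuracy, but only feature importances, not a mechanism.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("RF feature importances:", np.round(rf.feature_importances_, 2))
```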
I recommend Galit Shmueli's paper "To Explain or to Predict?". I also like the "Two Cultures" paper by Leo Breiman. These are both machine learning / statistics views on this topic.
Techniques (e.g., ML or non-ML) do not decide between explanation and prediction. It's common in ML to speak as many computer scientists do, completely ignorant of science, and suppose it is somehow the algorithm, or how we "care about" it, which matters -- no.
It is entirely due to the experimental conditions, which impose a causal semantics on the data; this is not given in the data or in the algorithm -- something the experimenter or scientist will be aware of, but nothing the computer scientist will even have access to.
Regression is explanatory if the data set is causal: it has been causally controlled, the data represent measures of causal properties, those measures are reliable under the experimental conditions, the variables in question each stand in causal relationships, and so on. These are conditions entirely absent from the data, from the algorithm, and from anything to do with ML.
In the large majority of cases where ML is applied, the data might as well be a teen survey in Cosmo magazine and the line drawn an instrumental bit of pseudoscience. This is why the field is not part of scientific statistics: it aims to address "data as number", not "data as causal measure". The computer scientist thinks that ML can be applied to mathematics, or to games like chess, which is nonsense scientifically (since there are no empirical measures of the causal properties of chess).
ML is the algorithms of statistics without any awareness of, or use of, the scientific conditions on the data-generating process.
The common joke about spherical cows in physics also points to the predictive, descriptive nature of the field.
The empirical equivalence of various QM interpretations also suggests that scientific realist views are incorrect.
Rice's theorem, Gödel, the Wada property, etc. also demonstrate the problems of confusing the map with the territory.
There are further topics, like indecomposable continua, that arise frequently and naturally in non-pathological dynamical systems, especially with time-delayed ODEs, Hamiltonian systems, etc.
Are you arguing that Hamiltonian systems aren't 'physics'?
The value of Western reductionism is finding 'effective procedures', but teaching it as reality is more about didactic convention and convenience.
'Hard science' is a term for studying the universe through theories, hypotheses, and experiments.
It is still about making predictions that match observations.
You're assuming an idealization is a fictionalization, rather than a way of getting at an essential property (i.e., a stable, real, causal feature) of a system.
Treating a cow as spherical is a means of selecting its real property of volume, as it is causally efficacious in, say, a gravitational field -- whilst discarding the accidental, random variations in volume across all cows.
That we can treat cows as spherical, and obtain the relevant dynamics, should show that this early 20th C. instrumentalism is false. By idealization one selects the actual properties of objects for explanatory modelling -- one does not invent them or otherwise construct a merely instrumental fiction. Cows have volume, whose variation across cows is accidental; expressing that volume as a sphere selects for what is essential about it.
Very few, if any, theories of physics are predictive in almost any situation without this idealization -- because it is impossible to describe, e.g., the volume of any actual cow. An actual cow has uneven density, shape, etc. and would require a significant amount of data to describe -- nearly all of which does not bear on the role its mass plays in a gravitational field.
What idealization does is create hypothetical scenarios in which, in imagination, all irrelevant causes are controlled and all accidental properties are uniform (or of a known distribution) -- so that the model can focus on explaining the essential property in question.
These hypotheticals are not inventions; they are means of targeting what is being explained.
If you look at the predictive accuracy of scientific models, as applied in any actual scenario, they fall apart -- almost nothing at all can be predicted, because all actual situations comprise innumerable accidental features which cannot be modelled.
Determinism is falsified by quantum measurement, not superposition (which is deterministic), and even then only pragmatically, rather than philosophically.
Quantum superposition is indecomposable; measurement is an artifact of the Copenhagen interpretation.
You can view wave function collapse as invalidating your priors and it still works without observation.
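One hedged way to read "collapse as invalidating your priors" (my gloss, not the commenter's): the projection (Lüders) update after an outcome in the subspace picked out by $P_i$ has the same shape as Bayesian conditioning on new evidence:

```latex
|\psi\rangle \;\longmapsto\; \frac{P_i\,|\psi\rangle}{\lVert P_i\,|\psi\rangle\rVert}
\qquad\text{vs.}\qquad
P(A) \;\longmapsto\; P(A\mid B) = \frac{P(A\cap B)}{P(B)}
```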
The special case is quantum superposition being an indecomposable function, or one you can't curry, if you prefer.
QM is actually lucky that there are only two exit basins; n ≥ 3 is where you get the stronger form of indeterminism.
Classical chaos is deterministic given infinite precision, but riddled basins require absolute precision, which is stronger, and the Wada property is still not deterministic even with absolute precision.
An example: the Newton fractal and the Wada property (sketched below).
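A minimal sketch of that example (my own illustration, assuming numpy and matplotlib): Newton's method on $z^3 = 1$ has three attracting roots, and the boundary between their basins has the Wada property, so every boundary point touches all three basins.

```python
# Newton's method basins for z^3 - 1 = 0: three roots, Wada-type basin boundaries.
# Purely illustrative sketch; grid size and iteration count are arbitrary choices.
import numpy as np
import matplotlib.pyplot as plt

roots = np.array([1, np.exp(2j * np.pi / 3), np.exp(-2j * np.pi / 3)])

# Grid of complex initial conditions.
re, im = np.meshgrid(np.linspace(-1.5, 1.5, 600), np.linspace(-1.5, 1.5, 600))
z = re + 1j * im

# Iterate the Newton map z <- z - f(z)/f'(z) with f(z) = z^3 - 1.
for _ in range(40):
    z = z - (z**3 - 1) / (3 * z**2)

# Label each starting point by the root it ended nearest to (its exit basin).
basin = np.argmin(np.abs(z[..., None] - roots), axis=-1)

plt.imshow(basin, extent=(-1.5, 1.5, -1.5, 1.5), cmap="viridis")
plt.title("Exit basins of Newton's method for z^3 = 1")
plt.show()
```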
Classical mechanics makes sufficiently accurate predictions to enable essentially all the engineering we do, and it was invented after we had been engineering things for a few millennia.
If you construct highly controlled experimental conditions predicated on classical mechanics being true, then in those scenarios, the model predicts.
But in almost all cases it fails to predict, because the situation is vastly too complex to model. You are only able to construct devices (e.g., steam engines, balloons, etc.) which are "simple" in the relevant ways because classical mechanics successfully explains real properties of objects.
If it didn't, you'd have no idea how to turn an ordinary situation like "dropping some objects off a cliff" into one where you could actually predict where they will land (i.e., by waiting for a day with no wind, by shaping the objects to limit drag, and so on -- without controlling for these accidental features, you'd not be able to predict where anything would land other than "down there somewhere").
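To make the cliff example concrete (my own toy numbers, not the commenter's): the idealized model gives a clean fall time, while adding even a simple quadratic drag term shifts the answer in a way that depends on shape and air, exactly the accidental features one controls for.

```python
# Toy comparison: idealized free fall vs. fall with quadratic air drag.
# Illustrative only; mass and drag coefficient below are made-up values.
import math

g = 9.81     # gravitational acceleration, m/s^2
h = 100.0    # cliff height, m
m = 1.0      # mass, kg (assumed)
k = 0.05     # quadratic drag coefficient, kg/m (assumed)

# Idealized (vacuum) prediction from h = g t^2 / 2.
t_ideal = math.sqrt(2 * h / g)

# With drag: integrate dv/dt = g - (k/m) v^2 with a simple Euler step until the drop covers h.
t, v, y, dt = 0.0, 0.0, 0.0, 1e-4
while y < h:
    v += (g - (k / m) * v * v) * dt
    y += v * dt
    t += dt

print(f"vacuum model: {t_ideal:.2f} s,  with drag: {t:.2f} s")
```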