If x and z are strongly correlated, you can't determine causality for either within linear models. They are practically interchangeable, and removing one will not decrease the apparent effect of the other. "Controlling for" one, i.e. keeping both on the right-hand side, will shift the estimated effect from one to the other in unexpected ways, which is why strong collinearity yields strange results in linear models.
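A quick numpy sketch of that behavior (the data-generating process here, y = 2x with z as a near-copy of x, is made up purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # x and z are practically interchangeable: z is x plus a tiny bit of noise.
    x = rng.normal(size=n)
    z = x + rng.normal(scale=0.01, size=n)
    y = 2.0 * x + rng.normal(scale=1.0, size=n)

    def slopes(y, *cols):
        """Least-squares fit with an intercept; returns the slope coefficients."""
        X = np.column_stack([np.ones_like(y), *cols])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1:]

    print("y ~ x alone:", slopes(y, x))     # close to [2.0]
    print("y ~ z alone:", slopes(y, z))     # also close to [2.0]
    print("y ~ x + z:  ", slopes(y, x, z))  # the split between x and z is unstable;
                                            # change the seed and it moves around wildly

Either regressor alone looks like the cause; put both in and the coefficients get split between them essentially arbitrarily.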
If multicollinearity were such an issue, you'd be rejecting a wide range of techniques, such as including a lag term as a regressor, as in autoregressive models.
In fact, you could consider this situation as an autoregressive process, using generations instead of traditional time steps.
Multicollinearity is a huge issue when fitting autoregressive and distributed lag models. In fact, it is even mentioned as such in the introduction section of the Wikipedia article on distributed lag [0].
Imagine a second-order autoregressive model for a time series that is a flat constant value for all time. How would you assign the coefficients for the first and second lag terms? 0.5 to both? 1.0 to one and 0.0 to the other? In the presence of perfect correlation like that, it would be indeterminate how to assign effect sizes. Prediction accuracy of the model might be fine, but any causal interpretation of the effect sizes would be nonsense.
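Here's a minimal sketch of that thought experiment, assuming a constant series of 5.0 and no intercept term:

    import numpy as np

    # A time series that is the same constant forever.
    y = np.full(100, 5.0)

    # AR(2) setup: predict y[t] from y[t-1] and y[t-2].
    lag1, lag2 = y[1:-1], y[:-2]
    target = y[2:]

    # Two very different coefficient assignments...
    pred_a = 0.5 * lag1 + 0.5 * lag2   # "both lags matter equally"
    pred_b = 1.0 * lag1 + 0.0 * lag2   # "only the first lag matters"

    # ...give identical, perfect predictions, so the data cannot tell
    # them apart: the individual effect sizes are indeterminate.
    print(np.allclose(pred_a, target), np.allclose(pred_b, target))  # True True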
This also happens with lagged regressors in more standard regression problems, and you often have to do something like z-scoring them and then combining the correlated regressors into some type of pre-treatment aggregate score, like an average... effectively reducing N correlated lag terms to 1 composite score when the correlations are high enough to cause multicollinearity problems. [1] has some further details.
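Something like the following sketch (hypothetical setup: three noisy, highly correlated lagged measurements of one underlying pre-treatment signal, with plain numpy least squares standing in for whatever model you'd actually use):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500

    # Three highly correlated lagged measurements of the same underlying signal,
    # plus an outcome driven by that signal.
    signal = rng.normal(size=n)
    lags = np.column_stack([signal + rng.normal(scale=0.1, size=n) for _ in range(3)])
    y = 1.5 * signal + rng.normal(size=n)

    # Z-score each lag, then average them into a single composite regressor.
    zscored = (lags - lags.mean(axis=0)) / lags.std(axis=0)
    composite = zscored.mean(axis=1)

    # Fit y on the composite alone instead of on 3 collinear columns.
    X = np.column_stack([np.ones(n), composite])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta)  # one stable coefficient for the aggregate, instead of three unstable ones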
In the context of pure forecast accuracy, you may not care so much about multicollinearity as long as the overall prediction is highly accurate.
But since this comment thread was about causality, I think it's important to point it out here. It can cause problems for causal inference in many classes of models, and it is a reason why you have to do careful pre-treatments, instrumental variable methods, etc., in some cases.
Those are good points. Your initial comment criticized the endogeneity issue, though, not multicollinearity.
Further, controlling for parents' income is essentially an AR(1) model, which doesn't suffer from the multicollinearity interpretation issues you described. The problem would be if the child's willpower were too highly correlated with the parents' income (two collinear regressors), not if parents' income were highly correlated with the child's income (a regressor correlated with the outcome).
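To make that distinction concrete with a toy simulation (the generative story and every coefficient in it are invented purely for illustration):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 2000

    # Made-up story: parents' income and child's willpower both feed into
    # child's income, and willpower itself tracks parents' income closely.
    parent_income = rng.normal(size=n)
    willpower = 0.9 * parent_income + rng.normal(scale=0.2, size=n)
    child_income = 0.5 * parent_income + 0.5 * willpower + rng.normal(size=n)

    # The correlation that can cause multicollinearity trouble is between the
    # two regressors on the right-hand side...
    print(np.corrcoef(parent_income, willpower)[0, 1])

    # ...not between a regressor and the outcome, which is just the signal
    # we are trying to estimate in the first place.
    print(np.corrcoef(parent_income, child_income)[0, 1])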