
> you need to do hard analysis that there cannot be [such an outcome] once optimized, with all the operation re-orderings that can happen

The previous post demonstrated that this is extremely difficult, beyond the ability of cutting-edge research for mainstream languages.

https://research.swtch.com/plmm

I'm a bit tired of reading motivated blog posts like this that cite enough related work to make clear the author knows solutions exist to the problems being raised, yet for some reason felt it unnecessary to mention those solutions. This line in particular is plain false:

> None of the languages have found a way to formally disallow paradoxes like out-of-thin-air values, but all informally disallow them.

The post neglects to mention the Promising Semantics paper of 2017, which resolves the out-of-thin-air problem to (as far as I know) everyone's satisfaction, despite citing the earlier work that raised the problem. Similar things are true for ARM's memory model, etc.--this has all been mostly resolved within the last few years as proof techniques have caught up with compilers and the hardware. Ironically, the thing that's been hardest by far to formalize in a useful way (outside of C++ consume) is--surprise, surprise--sequentially consistent fences!
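
For anyone who hasn't seen it, the canonical OOTA litmus test is tiny. Here is a minimal sketch in C++11 atomics (my own example, not one from the post); the paradoxical outcome is r1 == r2 == 42, a value neither thread ever computes:

    #include <atomic>
    #include <thread>

    std::atomic<int> x{0}, y{0};
    int r1, r2;

    void t1() {
        r1 = x.load(std::memory_order_relaxed);  // read x...
        y.store(r1, std::memory_order_relaxed);  // ...and copy it into y
    }

    void t2() {
        r2 = y.load(std::memory_order_relaxed);  // read y...
        x.store(r2, std::memory_order_relaxed);  // ...and copy it into x
    }

    int main() {
        std::thread a(t1), b(t2);
        a.join(); b.join();
        // A naive reading of the C++ axiomatic model admits r1 == r2 == 42:
        // each read would be justified by the other thread's write, in a
        // cycle. No real compiler or hardware produces this; the hard part,
        // which Promising Semantics tackles, is forbidding it formally
        // without also forbidding legitimate optimizations.
    }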

It also handwaves away, as though it were some unimportant point, the reason compilers provide things like relaxed accesses: it's not just (or even primarily) about the hardware, but about enabling useful compiler optimizations. Even if all hardware switched to sequentially consistent semantics, don't expect languages that aim for top performance to abandon weak memory. And personally, I think it's ironic that at a time when even Intel is struggling to maintain coherent caches and TSO, and modern GPU APIs don't provide sequential consistency at all, people are acting as though hardware vendors will realize "the error of their ways" and go back to seqcst.
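
To make the optimization point concrete, here is a hedged C++ sketch (mine, not the article's). The relaxed store compiles to a plain store and leaves the compiler free to reorder it with the surrounding non-atomic write; the seq_cst store blocks that reordering and lowers to fenced instructions:

    #include <atomic>

    int payload = 0;                  // ordinary, non-atomic data
    std::atomic<bool> done{false};

    void publish_relaxed() {
        payload = 42;
        // Promises no inter-thread ordering: the compiler may reorder this
        // store with the payload write or fuse repeated stores, and on x86
        // it is a single plain mov.
        done.store(true, std::memory_order_relaxed);
    }

    void publish_seq_cst() {
        payload = 42;
        // Must take a slot in the single global order of all seq_cst
        // operations: the compiler cannot sink the payload write past it,
        // and codegen needs a fence (xchg or mov+mfence on x86, stlr on
        // ARMv8).
        done.store(true, std::memory_order_seq_cst);
    }

(That freedom is also why the relaxed version is wrong as an actual publication pattern--a reader that sees done == true may still see a stale payload; real code would use release/acquire. The point is only that the weaker orderings exist to buy the compiler latitude, not just to match hardware.)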


I had not looked at the Promising Semantics paper of 2017. Thank you for the reference.

That said, what I have learned from watching this space for a decade is that even formal proofs are no match for the complexity of this general problem. You have to get the definitions right, you have to prove the right statements, and so on. There is still plenty of room for error. And even a correct, verified proof is not really an insight as to why things work.

Experts were telling us in 2009 that OOTA had been resolved to everyone's satisfaction, only to discover that it wasn't, really. Maybe Promising Semantics is the final answer, but maybe not. We need to get to the point where everything is obvious and easy to explain, and we're just not even close to that yet.

Looking at citations of Promising Semantics in Google Scholar, I found this Pomsets with Preconditions paper from OOPSLA 2020: https://dl.acm.org/doi/abs/10.1145/3428262. It contains this sentence:

"As confirmed by [Chakraborty and Vafeiadis 2018; Kang et al. 2018], the promising semantics [Kang et al. 2017] and related models [Chakraborty and Vafeiadis 2019; Jagadeesan et al. 2010; Manson et al. 2005] all allow OOTA behaviors of OOTA4."

I take that to mean the jury still seems to be out on excluding all OOTA problems. Maybe the canonical example has been taken care of, but not other variants. And again we don't really understand it.
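
For flavor, here is the kind of conditional variant people worry about--a generic illustration in C++ relaxed atomics, not the paper's specific OOTA4 example:

    #include <atomic>

    std::atomic<int> x{0}, y{0};

    void t1() {
        if (x.load(std::memory_order_relaxed) == 42)
            y.store(42, std::memory_order_relaxed);
    }

    void t2() {
        if (y.load(std::memory_order_relaxed) == 42)
            x.store(42, std::memory_order_relaxed);
    }

    // An outcome where both loads return 42 would be self-justifying: each
    // store happens only because the other one did. Models that rule out
    // the canonical data-dependence cycle can still admit variants like
    // this, which is why "excluding all OOTA behaviors" stays slippery.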


Memory models aside, OOPSLA is not a good source for PL research.
