I'm of the general opinion that what people think volatile does and what compilers actually do are quite different. Perhaps the situation has improved since this paper appeared.
Formally ignoring or forbidding volatile where it presently doesn't mean anything (despite fervent wishing) wouldn't break anything. Compilers would of course continue supporting whatever they do, but would be allowed to warn about dodgy uses.
On MSVC, volatile has traditionally meant something akin to atomic. It still will, regardless of what the Standard says and other compilers do.
You can't call non-volatile member functions on volatile objects (just as you can't call non-const member functions on const objects), and you can overload member functions on volatility just as on constness. volatile may no longer carry the "don't cache this in a register" weight it did in the single-core C era, but it can't simply be discarded.
Code that compiles today would still compile if restrictions were lifted, modulo overload resolution changes that might introduce new ambiguities or name collisions.
"Can't" is a big word when applied to the ISO committee. They would want to consider the magnitude of consequences. Since all implementations would provide a "make like before" switch, consequences would be limited.
They can choose to make breaking changes, but they can't eliminate "volatile" without breaking anything. I've definitely seen code that relies on it, albeit for strange reasons.
No, at best you saw code that expects the compiler not to apply certain optimizations, which C++ neither suggests nor enforces. That's a compiler matter, so if anything that code needs to be compiled with the appropriate flags.
No, there are other reasons to use volatile declarations that have nothing to do with the original purpose, and that are completely unaffected by any potential effect on optimization. In the code I've seen, it is used simply as a tag to generate a type related to T that is neither T nor const T.