Why? It was stupid and unsafe 10-15 years ago when MySQL was doing it, too, and all the devs who had been using more mature DBs (Oracle, DB2, etc.) complained about how bad it was.
I was nodding in agreement right up until the word "Oracle". Essentially any history of databases will note that for years, Oracle was not an RDBMS even by non-strict definitions (the claim is that Ellison didn't originally understand the concept correctly), and it certainly did not offer ACID guarantees.
Possibly Oracle had fixed 100% of that by the time MySQL came out, but then we're just talking about the timing of adding in safety, again -- and both IBM and Stonebraker's Ingres project (the Postgres predecessor) had RDBMSs with ACID guarantees in the late 1970s, and advertised the fact, so it wasn't a secret.
Except in the early DOS/Windows world, where customers hadn't learned of the importance of reliability in hardware and software, and were more concerned simply with price.
Oracle originally catered to that. MySQL did too, in some sense.
In very recent years, it appears to me that people are re-learning the same lessons from scratch all over again, ignoring history, with certain kinds of recently popular databases.
I am curious as to why. The underlying systems have only gotten more reliable and faster than they were 10-15 years ago. 10-15 years ago writing to disk was actually _more_ of a challenge than it is now with SSDs that have zero seek time.
I don't think it's gotten any easier to verify that something was actually persisted to disk though.
The hard part has always been verifying that the data actually reached the hardware. The number of layers between you and the physical storage has increased, not decreased, and so has the number of those layers with a tendency to lie to you.
For some systems a write isn't considered persisted until it's been written to n+1 physical media, for exactly these reasons. The OS could be lying to you by buffering the write, the disk's driver software could be lying by buffering the data, and even the physical hardware could be lying by buffering the write in its onboard cache.
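The OS-buffering layer of that, at least, has a well-known mitigation on POSIX systems: fsync the file data, rename atomically, then fsync the directory so the rename itself is durable. A minimal sketch (this addresses only the OS page cache; it cannot defeat a drive that acknowledges writes still sitting in its own volatile cache):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data to path so a crash leaves either the old or the
    new contents, never a torn mix (POSIX semantics assumed)."""
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)          # push file data out of the OS page cache
    finally:
        os.close(fd)
    os.rename(tmp, path)      # atomic replace on POSIX filesystems
    # fsync the containing directory so the rename (the new directory
    # entry) is itself persisted, not just the file's data blocks.
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)

durable_write("example.txt", b"hello")
```

Even this is only as honest as the layers below it: if the device lies about flushing its cache, `fsync` returning success proves nothing, which is exactly the point being made here.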
In many ways writing may have gotten more reliable, but verifying the write has gotten much harder.