I've been working with databases in extremely high OLTP workload environments for 20 or so years.
We're talking enterprise products: mostly Sybase, some PostgreSQL, and very little Oracle.
Have I encountered bugs?
Sure, tons of them. Some of them grave enough to render the specific version of the database software unusable in the context of the project I worked on.
However, in all this time I've probably dealt with no more than 3-5 corrupt databases, and none of them went corrupt due to a database bug. It was usually hardware failure.
Arguing that database corruption is inherent in the design of the product is, from a database perspective, beyond the pale.
A database "breaking" is absolutely not the same as a database blasting your data into corrupt confetti.
If you actually go through the various things posted, you find a recurring theme: people who lose data tend to fall into a pattern of "well, they told me not to do this, but I did it anyway, so now it must be their fault".
Which, I think you'll find, is a far cry from "database corruption is inherent in the design".
But hey, learning that sort of thing would require reading; much easier to jump on a bandwagon, badmouth a product and downvote anyone who disagrees, amirite?