Yes, but I've been using Oracle databases for almost a decade and have never known them to drop data on the floor through bugs (only through user error). Not saying it doesn't happen, just that it's not a common event. It seems with Mongo you should expect data loss.
This is the difference between a database product that has been developed for more than 40 years and a still-developing software product that has been public for 2.5 years. If Mongo has the longevity that Oracle has had with their DB product, my guess is that in 40 years we will not be talking about Mongo data loss. (However, my guess is that in 40 years we will not be talking about Mongo.)
Mongo people may say that's a good thing - if you aren't planning for data loss, you are just begging for a disaster. And Mongo will force you to deal with recovery early on.
That's no excuse for the DB being buggy, but some of Mongo's problems are due to hard design constraints - it's not easy to make a DB that is fast, reliable, and easy to configure. Others are due to it being immature. Some of it is concerning - it seems it can crumble under heavy write load, which is not so great for a DB whose selling point is "fast at scale".
Part of Mongo's charm is how it works on a stock system. Traditional DBs cache data in RAM, then the OS caches the same data in its own page cache and swaps the DB's cache out to disk. Then you modify something, the OS swaps the DB cache from disk back to RAM, the DB tells the OS to write the change to disk, invalidating the OS disk cache, which then ... you get the picture. Mongo (and Couch) use the OS's cache directly, which is suboptimal on a tuned machine, but optimal on something you just threw together.
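To make that concrete, here's a minimal sketch of the mmap-style approach in plain C (the "data.db" file name and the record layout are hypothetical, and real engines add journaling on top): map the data file and let the kernel's page cache be the only cache.

```c
/* Minimal sketch of the mmap-style storage approach; "data.db" and the
 * record layout are hypothetical, for illustration only. The kernel's
 * page cache holds the file's pages, so there is exactly one copy of
 * each page in RAM instead of a DB buffer pool mirrored by the OS. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.db", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    size_t size = 4096;
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    /* Map the file: the OS page cache now *is* the database cache. */
    char *db = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (db == MAP_FAILED) { perror("mmap"); return 1; }

    /* "Writing" a record is a plain memory store; the kernel decides
     * when the dirty page reaches disk -- hence the durability risk. */
    strcpy(db, "hello, page cache");

    /* msync forces the page out now, roughly what journaling/fsync buys. */
    if (msync(db, size, MS_SYNC) < 0) perror("msync");

    munmap(db, size);
    close(fd);
    return 0;
}
```

The catch is visible in the last step: between the memory store and the msync, a crash can lose or tear the write, which is exactly the durability trade-off being argued about in this thread.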
No, just that there's an upside to their risky design philosophy.
I like Mongo because of its documentation. It's really, really great. And good documentation = widespread adoption, and a team who actually cares about users' needs. What they really need is a lengthy tutorial on backups (which they have already written), linked from every page in their documentation. Because their reliability is not something they should be hiding.
Sure, there is an upside, nothing against that, but the trade-off they made should have been advertised on their front page (before they fixed the defaults) in large, bold, flashing letters -- "you might lose your data if you use this product with default options". That is all.
Why? Because they are making a database, not an RRD logger or an in-memory caching server.
> What they really need is a lengthy tutorial on backups.
As I put it in the grandparent post: as a general rule, avoid products whose mission is, by design, to teach you backup discipline. That is all.
> a team who actually cares about users' needs.
You know what is a better way to care about users' needs? Not losing their data because of a bad design. We are not talking about generating the wrong color for a webpage, or even about exceptions being thrown and the server needing a restart; we are talking about data being corrupted silently, without users noticing. Guess what: even backups become useless. You have no idea your data is corrupted, so you keep backing up corrupted data.
If you're suggesting that "traditional" databases operate without thinking about the OS cache, unbuffered I/O when called for, mmap, etc., I strongly believe you're way off.
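For anyone unfamiliar with the "unbuffered I/O when called for" part, here's a minimal, Linux-specific sketch (file name hypothetical) of how a traditional engine can bypass the page cache with O_DIRECT and keep caching decisions entirely in its own buffer pool.

```c
/* Minimal sketch of unbuffered (direct) I/O -- the technique a
 * traditional DB uses to bypass the OS page cache and rely on its own
 * buffer pool instead. Linux-specific; "data.db" is hypothetical.
 * O_DIRECT requires the buffer, offset, and length to be aligned
 * (typically to 512 B or 4 KiB, depending on the device). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.db", O_RDWR | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT demands an aligned buffer; 4096 covers common block sizes. */
    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }
    memset(buf, 0, 4096);
    strcpy(buf, "this write skips the page cache");

    /* The write goes straight toward the device: the DB's buffer pool,
     * not the kernel's page cache, decides what stays in RAM. */
    if (pwrite(fd, buf, 4096, 0) < 0) perror("pwrite");

    free(buf);
    close(fd);
    return 0;
}
```

The alignment dance is part of why this path is only used "when called for": the engine takes on all the buffering work the kernel would otherwise do.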