
Writing to disk and transaction logs are nice, but they aren't magic bullets. What if a data center catches fire? More mundanely, I've heard that ~6% of hard drives fail per year. Only replication can help you there.

I'd argue that durability is a sliding scale: you have to figure out how much risk you're willing to take, and you cannot have a perfectly durable system.
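To make that concrete, this sliding scale is exactly the knob MongoDB exposes as write concerns, from fire-and-forget up to journaled and majority-replicated. A minimal sketch with a modern pymongo client (the connection string and collection names are assumed, and the strongest setting presumes a replica set):

    from pymongo import MongoClient
    from pymongo.write_concern import WriteConcern

    client = MongoClient("mongodb://localhost:27017")
    db = client.test

    # Cheapest, least durable: don't even wait for an acknowledgment.
    fast = db.get_collection("events", write_concern=WriteConcern(w=0))

    # Survives a process crash: wait for the write to hit the on-disk journal.
    journaled = db.get_collection(
        "events", write_concern=WriteConcern(w=1, j=True)
    )

    # Survives a lost node: wait until a majority of replicas have the write.
    replicated = db.get_collection(
        "events", write_concern=WriteConcern(w="majority", j=True)
    )

    fast.insert_one({"risk": "high"})
    journaled.insert_one({"risk": "medium"})
    replicated.insert_one({"risk": "low"})

Each step down that list costs latency and buys a stronger guarantee; none of them gets you "perfect" durability.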




Traditionally, durability isn't considered a sliding scale; it's a goal/priority that requires you to implement multiple features and fallbacks to handle everything from invalid writes and crashes mid-write to the data center catching on fire.

Thinking about durability this way may work great for MongoDB, but it isn't how durability is framed in the rest of the database world.


A) Just because something is 'traditionally' done doesn't mean it's mandatory. Databases 'traditionally' spoke SQL, but I don't see you dinging anyone for breaking that tradition. You've used the appeal-to-tradition fallacy (look it up on Wikipedia) many times, and it adds nothing to your argument.

B) Durability is an important goal at a system-wide level, but that doesn't mean it needs to be handled at the database layer. In addition to the replication and transaction log methods already mentioned, it can be handled at the block or filesystem layer using snapshots, or by admins using backup tools. It can even be handled by having a different Database of Record and using Mongo as an operational store. Mongo as software is agnostic: we provide the tools, but it is up to the user or admin to make the best decisions for their technical and business interests. If another layer of the stack provides sufficient protection against data loss, it is unnecessary to pay the performance costs of doing it in the DB layer.
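For illustration, here's what that "Database of Record plus Mongo as operational store" split might look like; a rough sketch where durability lives in the record of truth, and the Postgres schema and all names are hypothetical:

    import psycopg2                       # durable system of record (assumed)
    from pymongo import MongoClient
    from pymongo.write_concern import WriteConcern

    pg = psycopg2.connect("dbname=orders")            # assumed local Postgres
    mongo = MongoClient("mongodb://localhost:27017")  # assumed local mongod

    def record_order(order_id, payload):
        # 1. Durability is handled here: the write commits to the
        #    record of truth before we do anything else.
        with pg, pg.cursor() as cur:
            cur.execute(
                "INSERT INTO orders (id, payload) VALUES (%s, %s)",
                (order_id, payload),
            )
        # 2. Mongo is just the fast operational view, so a cheap,
        #    unacknowledged write is an acceptable trade-off: if it is
        #    lost, it can be rebuilt from the record of truth.
        view = mongo.app.get_collection(
            "orders", write_concern=WriteConcern(w=0)
        )
        view.insert_one({"_id": order_id, "payload": payload})

    record_order(42, '{"sku": "widget", "qty": 3}')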



