
From my perspective, a filesystem is critical infrastructure in an OS, and failing here and there and not fixing those bugs because they're not common is not acceptable.

Same for the RAID5/6 bugs in BTRFS. What's their solution? A simple warning in the docs:

> RAID5/6 has known problems and should not be used in production. [0]

Also, the CLI discourages you from creating these things. Brilliant.
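For the curious, this is roughly what that looks like (device names purely illustrative):

    # Ask for RAID5 for both data (-d) and metadata (-m) across three drives.
    # Current mkfs.btrfs versions print a warning about RAID5/6 at this point
    # (exact wording varies by version) but, last I checked, still proceed.
    mkfs.btrfs -d raid5 -m raid5 /dev/sdb /dev/sdc /dev/sdd

So the guardrail is a printed warning you can scroll past, not an actual refusal.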

This is why I don't use BTRFS anywhere. An FS should be bulletproof. Errors must only come from hardware problems, not random bugs in the filesystem.

[0]: https://btrfs.readthedocs.io/en/latest/mkfs.btrfs.html#multi...




Machines die. Hardware has bugs, or is broken. Things just bork. It's a fact of life.

Would I build a file storage system around btrfs? No, not without proper redundancy at least. But I'm told Synology, at least, does.

I'm pretty sure there are plenty of cases where it's perfectly usable - the feature set it has today is plenty useful, and the worst-case scenario is a host reimage.

I can live with that. Applications will generally break production ten billion times before btrfs does.


> Machines die. Hardware has bugs, or is broken. Things just bork. It's a fact of life.

I know, I'm a sysadmin. I care for hardware, mend it, heal it, and sometimes donate it, cannibalize it for parts, or bury it. I'm used to it.

> worst-case scenario is a host reimage...

While hosting PBs of data on it? No, thanks.

> Would I build a file storage system around btrfs? No, not without proper redundancy at least.

Everything is easy for small n. When you store 20TB on 4x5TB drives, everything can be done. When you have >5PB of storage across racks, you need at least a copy of that system running hot standby. That's not cheap in any sense.

Instead, I'd use ZFS, Lustre, anything, but not BTRFS.

> I can live with that - applications will generally break production ten billion times before btrfs does.

In our case, no. Our systems don't stop just because a daemon decided to quit when one server among many fried itself.


I have worked on and around systems with an order of magnitude more data, and a single node failing did not matter. We weren't using btrfs anyway (for the data drives), and it definitely was not cheap. But storage never is.

But again, most systems are not like that. Kubernetes cluster nodes? Reimage at will. Compute nodes for VMs backed by a SAN? Reimage at will. Btrfs can actually make that reimage faster, and it's pretty reliable on a single flash drive, so why not?


Well, that was my primary point. BTRFS is not ready for the kind of big installations handled by ZFS or Lustre at this point.

On the other hand, BTRFS’ single-disk performance, especially for small files, is visibly lower than EXT4’s and XFS’s, so why bother?

There are many solutions for EXT4 that allow versioning, and if I can reimage a node (or 200) in five minutes flat, why should I bother with the overhead of BTRFS?
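A minimal sketch of one such approach - plain LVM snapshots under EXT4 (volume and mount names hypothetical):

    # Take a point-in-time snapshot of an ext4 logical volume...
    lvcreate --snapshot --name root-snap --size 10G /dev/vg0/root
    # ...and mount it read-only to browse or restore old versions.
    mount -o ro /dev/vg0/root-snap /mnt/root-snap

Nothing fancy, but it covers the "versioning" use case without a CoW filesystem underneath.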

It’s not that I haven’t tried BTRFS. Its features are nice, but from my perspective it’s not ready for prime time yet. What bothers me is the mental gymnastics pretending that it’s mature at this point.

It’ll be a good file system. An excellent one, in fact, but it still needs to cook.


My impression of btrfs is that it's very useful and stable if you stay away from the sharp edges - until you run into some random scenario that leaves you with an unrecoverable filesystem.

But it has been that way for 14 years now. Sure, there are far fewer sharp edges now than there were back then. For a host you can just reimage, it's fine; for a well-tested, fairly restricted system, it's fine. I stay far away from it for personal computers and my home-built NAS, because just about any other fs seems to be more stable.


The thing is, none of the systems I run have the luxury of a filesystem that can randomly explode at any time because I pressed a button the developers didn't account for yet.

I was bitten by ReiserFS' superblock corruption once, and that time I had plenty of time to rebuild my system leisurely. My current life doesn't allow for that. I need to be able to depend on my systems.

Again, I believe BTRFS will be an excellent filesystem in the long run. It's just not ready for "format, mount and forget" yet, from my perspective. The only thing I'm against is the "it runs on my machine, so yours is a skill issue" take, which is harmful on many levels.


Synology uses btrfs on top of classic mdadm RAID; AFAIK they don't use btrfs's built-in RAID, or even any of btrfs's more advanced features.
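In that layout the redundancy lives entirely in the md layer, and btrfs just sees one block device. A minimal sketch of the layering (device names hypothetical):

    # Classic mdadm RAID5 across three drives...
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    # ...with btrfs on top as a plain single-device filesystem, so none of
    # btrfs's multi-device RAID code is ever involved.
    mkfs.btrfs /dev/md0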


You do you.

Personally, btrfs just works and the features are worth it.

Btrfs RAID always gets brought up in these discussions, but you can just not use it. The reality is that it didn't have a commercial backer until now, with Western Digital.


If it works for you, then it's great. However, this doesn't change the fact that it does not work for many others.

If I'm just not gonna use BTRFS' RAID, I can just use mdadm + any file system I want. And in that case, "any file system" becomes "anything but btrfs" from my point of view.

I've been burnt by ReiserFS once. I'm not taking the same gamble with another FS, thanks.



