In a similar vein, inodes can run out. On most conventional Linux file systems, inode numbers are 32 bits.

For many, this is not going to be a practical problem yet, as real volumes will run out of usable space before exhausting 2^32 inodes. However, it is theoretically possible with a volume as small as ~18 TiB (using 16 TiB for 2^32 4096-byte or smaller files, 1-2 TiB for 2^32 256- or 512-byte inodes, plus file system overheads).
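
For anyone who wants to sanity-check that figure, here's a quick back-of-the-envelope in C, using the same assumptions as above (one 4096-byte data block per file, 256- or 512-byte on-disk inodes):

    /* Rough arithmetic behind the ~18 TiB figure above. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        const uint64_t inodes     = UINT64_C(1) << 32;  /* 2^32 inode numbers */
        const uint64_t block_size = 4096;               /* one data block per file */

        uint64_t data       = inodes * block_size;      /* file data */
        uint64_t tables_min = inodes * 256;             /* 256-byte inodes */
        uint64_t tables_max = inodes * 512;             /* 512-byte inodes */

        printf("data:   %llu TiB\n", (unsigned long long)(data >> 40));
        printf("inodes: %llu-%llu TiB\n",
               (unsigned long long)(tables_min >> 40),
               (unsigned long long)(tables_max >> 40));
        return 0;
    }

which comes out to 16 TiB of file data plus 1-2 TiB of inode tables, before any other file system overhead.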

Anticipating this problem, most newer file systems use 64-bit inode numbers, and some older ones have been retrofitted (e.g. the inode64 mount option in XFS). I don't think ext4 is one of them, though.
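
If you want to see how close a mounted file system is to that limit, statvfs(3) reports total and free inode counts (roughly what df -i prints). A minimal sketch:

    /* Print inode usage for a path, roughly what df -i shows. */
    #include <stdio.h>
    #include <sys/statvfs.h>

    int main(int argc, char **argv) {
        const char *path = argc > 1 ? argv[1] : "/";
        struct statvfs sv;
        if (statvfs(path, &sv) != 0) {
            perror("statvfs");
            return 1;
        }
        printf("%s: %llu inodes total, %llu free\n", path,
               (unsigned long long)sv.f_files,
               (unsigned long long)sv.f_ffree);
        return 0;
    }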



It does happen in prod. Usually due to virtual FSes that rely on get_next_ino: https://lkml.org/lkml/2020/7/13/1078


That method is wrapping and not checking for collisions? I would not call that a problem of running out then. It's a cheap but dumb generator that needs extra bits to not break itself.
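
To illustrate what I mean (a simplified sketch, not the actual kernel code): a 32-bit counter handed out sequentially will eventually wrap and hand back a number that a long-lived inode may still be using.

    /* Simplified sketch of a wrapping 32-bit inode-number generator
       (in the spirit of get_next_ino, not the real implementation). */
    #include <stdio.h>
    #include <stdint.h>

    static uint32_t next_ino;          /* wraps silently at 2^32 */

    static uint32_t alloc_ino(void) {
        return ++next_ino;             /* no check against live inodes */
    }

    int main(void) {
        next_ino = UINT32_MAX - 1;     /* pretend ~2^32 inodes were already handed out */
        uint32_t long_lived = 1;       /* allocated long ago, still in use */

        for (int i = 0; i < 3; i++) {
            uint32_t ino = alloc_ino();
            printf("allocated %u%s\n", ino,
                   ino == long_lived ? "  <-- collides with a live inode" : "");
        }
        return 0;
    }

The counter doesn't so much "run out" as silently start handing out duplicates once it wraps.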


There is a limit on reliable usage of the FS. Call it what you want. The user doesn't particularly care.


What I'm trying to say is that the problem you're describing is largely separate from the one kbolino is describing. They are both real, but they're not the same thing.


... which also caused its own issues: 32-bit applications without large file support now fail to stat files on file systems that use 64-bit inode numbers.
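
For the curious, the symptom on the application side is typically EOVERFLOW: a 32-bit binary built without -D_FILE_OFFSET_BITS=64 uses the legacy struct stat, whose st_ino field can't hold a 64-bit inode number. A minimal sketch of where it shows up:

    /* stat() a path; on a 32-bit, non-LFS build this fails with
       EOVERFLOW when the inode number doesn't fit in the old st_ino. */
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 2;
        }
        struct stat st;
        if (stat(argv[1], &st) != 0) {
            fprintf(stderr, "stat(%s): %s\n", argv[1], strerror(errno));
            return 1;
        }
        printf("ino = %llu\n", (unsigned long long)st.st_ino);
        return 0;
    }

Build it with gcc -m32 and without -D_FILE_OFFSET_BITS=64 to reproduce the failure on a file system handing out inode numbers that don't fit in 32 bits.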



