  Once you pushed an app beyond the level of usage the developer
  had performed in their initial tests, it would crawl to a near-halt
With HFS (unsure about HFS+), the first three extents are stored in the file's extent data record. After that, extents go into a separate "overflow" file stored at the end of the filesystem. How much data fits in those three extents depends on a lot of things, but it does mean it's actually pretty easy for things to get fragmented.
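
To make the layout concrete, here's a rough C sketch of the HFS extent record along the lines of what Inside Macintosh documents (field names from the classic headers, from memory; treat it as illustrative rather than a drop-in definition):

  #include <stdint.h>

  /* One extent: a contiguous run of allocation blocks. */
  typedef struct {
      uint16_t xdrStABN;    /* starting allocation block number */
      uint16_t xdrNumABlks; /* length as a count of contiguous blocks
                               (16-bit on HFS; HFS+ widened this) */
  } ExtDescriptor;

  /* The extent data record embedded in a fork's catalog entry:
     only the first three extents fit here. Anything further spills
     into the extents overflow file. */
  typedef ExtDescriptor ExtDataRec[3];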

A bit more detail: the first three extents of the resource and data forks are stored as part of the file's entry in the catalog (for a total of up to six extents). On HFS each extent can be 2^16 blocks long (I think HFS+ moved to 32-bit lengths). Anything beyond that (due to size or fragmentation) has its info stored in the extents overflow file, which is a.) itself a B*-tree file on the volume and b.) keyed by the CNID of the file itself (plus the fork type and starting block). The catalog, by contrast, is keyed by the CNID of the parent directory plus the name. If memory serves this means that the catalog file itself can become fragmented and the lookups themselves are a bit slow. There are little shortcuts (thread records) that are keyed by the CNID of the file/directory itself, but as far as I can tell they're only commonly written for directories, not files.
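
For reference, overflow lookups use a key shaped roughly like this (again a from-memory sketch of the classic HFS structure; exact names may differ):

  #include <stdint.h>

  /* Key for the HFS extents overflow B*-tree. Records are found by
     the file's own CNID, not the parent directory's, plus which fork
     and how far into the fork the extent run starts. */
  typedef struct {
      int8_t   xkrKeyLen;  /* key length */
      int8_t   xkrFkType;  /* fork type: 0x00 = data, 0xFF = resource */
      uint32_t xkrFNum;    /* CNID of the file itself */
      uint16_t xkrFABN;    /* starting file allocation block within the fork */
  } ExtKeyRec;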

tl;dr For either fork (data or resource), once you go beyond the capacity of the three inline extents, or you start modifying things on a fragmented filesystem, performance will go to shit.
