The Origins of DS_store (2006) (arno.org)
534 points by edavis 3 days ago | 257 comments

Aside from this file, the "fork" concept of Mac file systems caused some wtf moments. Fork here isn't fork() but the two-pronged idea in that file system: each file existed as a pair, a resource component and a data component. One held metadata, the other the file contents. In Unix, the metadata lived in the directory block inode and wasn't uniquely bound to the file by any formalism; it had to be represented by distinct structures in tar, cpio, or zip. Implementing Mac-compatible file support in Unix meant treating the resource fork as first class, and the obvious way to do that is to keep a .file beside each file.

You couldn't map all the properties of the resource fork into a UFS inode block of the time. It held stuff like the icon. More modern filesystems may have larger directory block structures and can handle the data better.


> One held metadata, the other the file contents.

I’d say this is not the right way to describe a resource fork. Instead, think of it as two sets of file contents—one called "data" and one called "rsrc". On-disk, they are both just bytestreams.

The catch is that you usually store a specific structure in the resource fork—smaller chunks of data indexed by 4-byte type codes and 2-byte integer IDs. Applications on the 68K normally stored everything in the resource fork. Code, menus, dialog boxes, pictures, icons, strings, and whatever else. If you copied an old Mac application to a PC or Unix system without translation, what you got was an empty file. This meant that Mac applications had to be encoded into a single stream to be sent over the network… early on, that meant BinHex .hqx or MacBinary .bin, and later on you saw Stuffit .sit archives.

That’s why these structures don’t fit into an inode—it’s like you’re trying to cram a whole goddamn file in there. The resource fork structure had internal limits that capped it at 16 MB, but you could also just treat it as a separate stream of data and make it as big as you want.
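
From memory, that internal structure looked roughly like this (a sketch in C; field names are mine, and the authoritative layout is in Inside Macintosh: More Macintosh Toolbox):

    #include <stdint.h>

    /* Sketch of the resource fork layout, from memory. */
    struct ResForkHeader {
        uint32_t dataOffset;    /* offset to the resource data area */
        uint32_t mapOffset;     /* offset to the resource map */
        uint32_t dataLength;
        uint32_t mapLength;
    };

    struct ResTypeEntry {       /* one per 4-byte type code in the map */
        uint32_t type;          /* e.g. 'MENU', 'CODE', 'STR#' */
        uint16_t countMinusOne; /* resources of this type, minus one */
        uint16_t refListOffset; /* offset to this type's reference list */
    };

    struct ResRefEntry {        /* one per resource */
        int16_t  id;            /* the 2-byte integer ID */
        uint16_t nameOffset;    /* 0xFFFF if the resource is unnamed */
        uint32_t attrAndOffset; /* 1 attribute byte + 24-bit data offset */
        uint32_t handle;        /* scratch space for the loaded Handle */
    };

That 24-bit data offset in each reference entry is where internal limits like the one mentioned above come from.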


From https://en.wikipedia.org/wiki/Resource_fork:

> While the data fork allows random access to any offset within it, access to the resource fork works like extracting structured records from a database.

So, whatever the on-disk structure, the motivation here is that from an OS API perspective, software (including the OS itself) can interact with files as one "seekable stream of bytes" (the data fork), and one "random-access key-value store where the values are seekable streams of bytes" (the resource fork).

So not quite metadata vs data, but rather "structured data" (in the sense that it's in a known format that's machine-readable as a data structure to the OS itself) and "unstructured data."

The on-disk representation was arbitrary; in theory, some version of HFS could have stored the data and resource forks contiguously in a single extent and just kept an inode property to specify the delimiting offset between the two. Or could have stored each hunk of the resource fork in its own extent, pre-offset-indexed within the inode; and just concatenated those on read / split them on write, if you used the low-level API that allows resource forks to be read/written as bytestreams.

With this in mind, it's curious that we never saw an archive file format that sends the hunks within the resource fork as individual files in the archive beside the data-fork file, to allow for random access / single-file extraction of resource-fork hunks. After all, that's what we eventually got with NeXT bundle directories: all the resource-fork stuff "exploded" into a Resources/ dir inside the bundle.


> So, whatever the on-disk structure, the motivation here is that from an OS API perspective,

There are multiple layers to the OS API. There is the Resource Manager, which provides the structured view. Underneath it is the File Manager, which gives you a stream of bytes. You can use either API to access the resource fork, and there are reasons why you would use the lower-level API.

One example from the documentation was to provide a backup. For various reasons, it was possible for a resource fork to become corrupt—this is back in the day when Mac OS had no protected memory (for shame!), disk was slow, and we didn’t use journaling filesystems. Some programs kept around backup copies of whatever file you were working on. If your data was stored in the resource fork, well, there’s an easy way to get a backup… just open the resource fork as a stream of bytes and copy it to another place on disk. You could copy it to a data fork, and some people even copied it to the data fork of the same file.

The other main reason you would use the lower-level API is because you are writing a program like MacBinary or Stuffit.

> With this in mind, it's curious that we never saw an archive file format that sends the hunks within the resource fork as individual files in the archive beside the data-fork file,

Well, there are advantages and disadvantages to that approach. You can already access resources inside a resource fork inside various archive formats, like MacBinary, AppleDouble, and AppleSingle. But you probably do want to preserve the actual byte stream of the resource fork itself. (And there’s also an undocumented compression format for single resources.)
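
For reference, the AppleSingle/AppleDouble container is pleasantly simple; a sketch of the header per RFC 1740 (all fields big-endian on disk):

    #include <stdint.h>

    /* Magic is 0x00051600 for AppleSingle, 0x00051607 for AppleDouble
       (AppleDouble keeps the data fork as a separate plain file). */
    struct ASHeader {
        uint32_t magic;
        uint32_t version;     /* 0x00020000 for version 2 */
        uint8_t  filler[16];
        uint16_t numEntries;  /* followed by numEntries ASEntry records */
    };

    struct ASEntry {
        uint32_t entryID;     /* 1 = data fork, 2 = resource fork,
                                 3 = real name, 9 = Finder info, ... */
        uint32_t offset;      /* from the start of the container */
        uint32_t length;
    };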


I am not old enough to know how resource forks were implemented on Mac OS but this is definitely not the case today. Resource forks are implemented (or maybe "emulated" is a better word to use? Not sure how much effort is put into them) as random-access. You can use POSIX APIs to interact with them (using _PATH_RSRCFORKSPEC) and these are typically faster than other interfaces.
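
For example, a minimal sketch (macOS-only; _PATH_RSRCFORKSPEC comes from <sys/paths.h> and expands to "/..namedfork/rsrc"):

    #include <sys/paths.h>   /* _PATH_RSRCFORKSPEC */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        if (argc < 2) return 1;
        char path[4096];
        snprintf(path, sizeof path, "%s%s", argv[1], _PATH_RSRCFORKSPEC);
        int fd = open(path, O_RDONLY);  /* the fork is just a byte stream */
        if (fd < 0) { perror(path); return 1; }
        off_t len = lseek(fd, 0, SEEK_END);
        printf("resource fork of %s: %lld bytes\n", argv[1], (long long)len);
        close(fd);
        return 0;
    }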

Back in the day, you used the Resource Manager to open a resource fork. The resource manager provides functions to load individual resources, query which resources exist, and add or modify existing resources.

The Resource Manager made it to Mac OS X as part of Carbon. The main part of Carbon is gone, but a part of it called CarbonCore survives, and that contains the resource manager. If you dig through the docs, you can find it. It was deprecated in 10.8 (which seems really late… the writing was on the wall about resources back when 10.0 hit).

https://developer.apple.com/documentation/coreservices/carbo...

The modern Resource Manager functions in CarbonCore, I think, just use the POSIX API underneath. Undoubtedly, there’s some test suite at Apple that makes sure it works correctly. Also undoubtedly, there are some application vendors who wrote code using resources in the 1990s and still have some of that shipping today.


In Unix, it's said that "Everything is a file" - i.e. that everything on the system that applications need to manage should either be actual files on disk or present themselves to the application as if they were files.

This adage translated to classic MacOS becomes "Everything is a resource". The Resource Manager started out as developer cope from Bruce Horn for not having access to SmallTalk anymore[0], but turned out to completely overtake the entire Macintosh Toolbox API. Packaging everything as type-coded data with standard-ish formats meant cross-cutting concerns like localization or demand paging were brokered through the Resource Manager.

All of this sounds passe today because you can just use directories and files, and have the shell present the whole application as a single object. In fact, this is what all the ex-Apple staff who moved to NeXT wound up doing, which is why OSX has directories that end in .app with a bunch of separate files instead. The reason why they couldn't do this in 1984 is very simple: the Macintosh File System (MFS) that Apple shipped had only partial folder support.

To be clear, MFS did actually have folders[1], but only one directory[2] for the entire volume. What files went in which folders was stored in a separate special file that only the Finder read. There was no Toolbox support for reading folder contents, just the master directory, so applications couldn't actually put files in folders. Not even using the Toolbox file pickers.

And this meant the "sane approach" NeXT and OSX took was actually impossible in the system they were developing. Resources needed to live somewhere, so they added a second bytestream to every file and used it to store something morally equivalent to another directory that only holds resources. The Resource Manager treats an MFS disk as a single pile of files that each holds a single pile of resources.

[0] https://www.folklore.org/The_Grand_Unified_Model.html?sort=d...

[1] As in, a filesystem object that can own other filesystem objects.

[2] As in, a list of filesystem objects. Though in MFS's case it's more like an inode table...


One of the most important technical details about resources in early Mac OS is that the system could swap resources in and out by using doubly indirect pointers (aka handles), with the lock bit stuffed into the upper 8 bits of the 32-bit master pointer. Stealing the extra flag bits from the upper bits, instead of increasing the alignment to make a few lower bits available, was fine on the 68000 and 68010 with their 24-bit address space, but exploded in your face on an 020/030 with a real 32-bit address space. It was a nightmare to develop and debug. A mix of assembler, Pascal, and C without memory protection, but at least you could use ResEdit to put insults into menu entries on school computers.
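
Roughly what that looked like (illustrative only, not Apple's actual definitions; from memory the lock/purge/resource flags sat in bits 7-5 of the master pointer's top byte):

    #include <stdint.h>

    typedef char **Handle;   /* a pointer to a master pointer */

    enum {
        kLockBit  = 0x80,    /* block may not be moved */
        kPurgeBit = 0x40,    /* block may be purged under memory pressure */
        kRsrcBit  = 0x20     /* block belongs to a resource */
    };

    /* Fine on a 24-bit 68000/68010, catastrophic once the 020/030
       treated all 32 bits as address: */
    #define MASTER_FLAGS(h) ((uint8_t)((uintptr_t)*(h) >> 24))
    #define MASTER_ADDR(h)  ((uintptr_t)*(h) & 0x00FFFFFF)

This is why "32-bit clean" became such a badge of honor for late-80s Mac software.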

Good ol' purgeable resources: one of the reasons the early Mac could get away with 128 KB and lots of floppy swapping.

>> One held metadata, the other the file contents.

> I’d say this is not the right way to describe a resource fork. Instead, think of it as two sets of file contents—one called "data" and one called "rsrc". On-disk, they are both just bytestreams.

I think it's a perfectly fine way. You're just coming at it from a wildly different level of abstraction.

One could say yours is not the right way either and jump down into quantum fields as another level.


The resource fork used to contain all the stuff you could edit with ResEdit (good old times!), right? Icons, various GUI resources; it could hold text and translation assets too. For example, Escape Velocity plugins used custom resource types, and a ResEdit plugin made them easy to edit there.

A lot of Classic Mac apps just used the resource fork to store all their data. It was basically used as a Berkeley DB, except the keys were limited to a 32-bit OSType plus a 16-bit integer, and performance was horrible. But it got the job done when the files were small, had low on-disk overhead, and was ridiculously easy to deploy.

Once you pushed an app beyond the level of usage the developer had performed in their initial tests, it would crawl to a near-halt, thrashing the disk like crazy on any save. Apple's algorithm would shift huge chunks of the file multiple times per set of updates, when usually it would be better to just rewrite the entire file once. IIRC, part of the problem was an implicit commitment to never strictly requiring more than a few KBs of available disk space.

In a sense, the resource fork was just too easy and accessible. In the long run, Mac users ended up suffering from it more than they benefited. When Apple finally got rid of it, the rejoicing was pretty much universal. There was none of the nostalgia that usually accompanies disappearing Apple techs, especially the ones that get removed outright instead of upgraded (though one could argue that's what plists, XML, and bundles did.)


The rejoicing was definitely not universal. It really felt like the NeXT folks wanted to throw out pretty much the entire Mac (except keep its customer base and apps), and any compatibility had to be fought for through customer complaints.

Personally, MacOS X bundles (directories that were opaque in the Finder) seemed like a decent enough replacement for resource forks. The problem was that lots of NeXT-derived utilities munged old Mac files by being ignorant of resource forks and that was not ok.


The 9->X trapeze act was a colossal success, but in retrospect it was brutally risky. I can't think of a successful precedent involving popular tech. The closest parallel is OS/2, which was a flop for the ages.

A large amount of transition code was written in those years. One well-placed design failure could have cratered the whole project. Considering that the Classic environment was a good-enough catch-all solution, I would have also erred on the side of retiring things that were redundant in NeXT-land.

Resource forks were one of the best victims, 1% functionality and 99% technical debt. The one I mourned for was the Code Fragment Manager. It was one of Apple's best OS9 designs and was massively superior to Mach-O (and even more so wrt other unices.) Alas, it didn't bring enough value to justify the porting work, let alone the opportunity cost and risk delta.


The Challenges of Integrating the Unix and Mac OS Environments (from 2000)

https://www.usenix.org/techsessionssummary/challenges-integr...


I'm still mourning file name extensions and the loss of the spatial Finder.

Me too and I switched from Mac to Linux in 2005.

MacOS X bundles are actually NeXTSTEP bundles, and the same idea is behind Java JAR files with their META-INF directory and .NET resources, due to Objective-C's legacy on all those systems.

> When Apple finally got rid of it, the rejoicing was pretty much universal. There was none of the nostalgia that usually accompanies disappearing Apple techs

Here's some https://arstechnica.com/gadgets/2001/08/metadata/


  Once you pushed an app beyond the level of usage the developer
  had performed in their initial tests, it would crawl to a near-halt
With HFS (unsure about HFS+), the first three extents are stored in the extent data record. After that, extents get stored in a separate "overflow" file at the end of the filesystem. How much data goes in those three extents depends on a lot of things, but it does mean that it's actually pretty easy for things to get fragmented.

A bit more detail: the first three extents of the resource and data forks are stored as part of the entry in the catalog (for a total of up to six extents). On HFS each extent can be 2^16 blocks long (I think HFS+ moved to 32-bit lengths). Anything beyond that (due to size or fragmentation) will have its info stored in an overflow catalog. The overflow catalogs are a) normal files and b) keyed by the id (CNID) of the parent directory. If memory serves, this means that the catalog file itself can become fragmented, but also the lookups themselves are a bit slow. There are little shortcuts (threads) that are keyed by the CNID of the file/directory itself, but as far as I can tell they're only commonly written for directories, not files.

tl;dr For either of the forks (data or resource), once you got beyond the capacity of three extents, or started modifying things on a fragmented filesystem, performance went to shit.
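
For the curious, the classic HFS on-disk types were roughly as follows (per Inside Macintosh: Files; HFS+ widened the fields to 32 bits and keeps eight extents inline instead of three):

    #include <stdint.h>

    typedef struct {
        uint16_t startBlock;   /* first allocation block of the extent */
        uint16_t blockCount;   /* contiguous allocation blocks */
    } ExtDescriptor;

    /* Stored inline in the catalog record, once per fork; anything
       beyond these three extents spills to the extents overflow tree. */
    typedef ExtDescriptor ExtDataRec[3];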


> When Apple finally got rid of it

Oh, they're not gone -- still very much part of APFS. You can read the contents of the resource fork for a file at path `$FILE` by reading `$FILE/..namedfork/rsrc`

The resource fork is still how custom icons for files and directories are implemented! (Look for a hidden file called `Icon\r` inside any directory with a custom icon, and you can dump its resource fork to a `.icns` file that Preview can open)


Hehe yep, but if we're doing vestigial nitpicks, I'd like to see an OpenResFile app that was ported to OS X and kept using the resfork to save its data. FAIK such a recalcitrant beast might even exist.

They also back transparent filesystem compression

NSUserDefaults, the modern programmer's fork DB :)

I credit ResEdit hacking partially for steering my path towards becoming a programmer. I had my Classic Mac OS installs thoroughly customized, as well as the other various programs and games that stored their assets in resource forks.

It was a lot of fun and something I’ve missed in modern computing. Not even desktop Linux really fills that void. ResEdit and the way it exposed everything, complete with built-in editors, was really something special.


ResEdit and using it to modify Escape Velocity is 100% the reason I’m still in this industry.

Same here, but only for joining the industry. Now it's the opposite: that webdev still hasn't reached the maturity of classic Mac OS makes me want to quit.

The other big thing in the resource fork was the executable code segments that made up the application. In fact, applications typically had nothing in the data fork at all. It was all in the resource fork.

I always thought the resource fork as a good idea poorly implemented. IMO they should have just given you a library that manipulated a regular file. Then you could choose to use it or not but it would still be a single file. It could have a standard header to identify it and the system could look inside if that header was there.

One of the big problems with resource forks was that no other system supported them so to host a mac file on a non-mac drive or an ftp server, etc, the file had to be converted to something that contained both parts, then converted back when brought to the mac. It was a PITA.


NTFS has alternate data streams. I think they're hardly ever used.

https://en.wikipedia.org/wiki/NTFS#Alternate_data_stream_(AD...


Very commonly used to hide malware and other things you don't want the average user or windows admin to find.

The article said most browsers mark downloaded files.

That's done as part of xattr, or extended attributes. It's a very flexible system. For example you can add comments to a file so they are indexed by Spotlight.
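
For instance, the quarantine flag browsers set is just an xattr. A small sketch reading it (note that macOS's getxattr takes extra position/options arguments compared to Linux):

    #include <sys/xattr.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc < 2) return 1;
        char buf[256];
        /* macOS: getxattr(path, name, value, size, position, options) */
        ssize_t n = getxattr(argv[1], "com.apple.quarantine",
                             buf, sizeof buf - 1, 0, 0);
        if (n < 0) { perror("getxattr"); return 1; }
        buf[n] = '\0';
        printf("com.apple.quarantine = %s\n", buf);
        return 0;
    }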

Except NTFS does not have "extended attributes" in the Linux/Irix/HPFS sense.

Every FILE object in the database is ultimately (outside of some low-level metadata) a map of Type-(optional Name)-Length-Value entries, of which file contents and what people think of as "extended attributes" are just random DATA-type entries (an empty DATA name marks the default stream you get when you do file I/O).

It's similar to ZFS (in default config) and Solaris UFS, where a file is also a directory.


> Except NTFS does not have "extended attributes" in the Linux/Irix/HPFS sense.

Except actually NTFS does have "extended attributes" in the HPFS sense, which were added to support the OS/2 subsystem in Windows NT. And went on to be used by other stuff as well, including the POSIX subsystem (and its successors Interix/SFU/SUA) and more recently WSL (at least WSL1, not sure about WSL2), for storage of POSIX file metadata.

In NTFS, the streams of a regular file are actually attributes of `$DATA` type; the primary stream is an unnamed `$DATA` type attribute, and any alternate data stream (ADS) is a named `$DATA` type attribute. By contrast, extended attributes are not stored in `$DATA` type attributes, they are stored in the file's `$EA` and `$EA_INFORMATION` attributes. I believe `$EA` contains the actual extended attribute data, whereas `$EA_INFORMATION` is an index to speed up access.

Alternate data streams are accessed using ordinary file APIs, suffixing the file name with `:` then the stream name. Actually, in its fullest form, an NTFS file or directory name includes the attribute type, so the primary stream of a file `foo.txt` is called `foo.txt::$DATA` and an ADS named bar's full name is `foo.txt:bar:$DATA`. For a directory, the default stream is called `$I30` and its type is `$INDEX_ALLOCATION`, so the full name of `C:\Users` is actually `C:\Users:$I30:$INDEX_ALLOCATION`. You will note in `CMD.EXE`, `dir C:\Users:$I30:$INDEX_ALLOCATION` actually works, and returns identical results to `C:\Users`, while other suffixes (e.g. `:$I31` or `:$I30:$DATA`) give you an error instead. Windows will let you create named `:$DATA` streams on a directory, but not an unnamed one.
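
To illustrate the "ordinary file APIs" point, a quick sketch (Windows-only; error handling elided):

    #include <windows.h>

    int main(void) {
        /* Creating a named $DATA attribute (an ADS) is just CreateFile
           with ":streamname" appended to the file name. */
        HANDLE h = CreateFileA("foo.txt:bar", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        DWORD written;
        WriteFile(h, "hidden", 6, &written, NULL);
        CloseHandle(h);
        /* "dir /r foo.txt" in CMD will now list foo.txt:bar:$DATA. */
        return 0;
    }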

By contrast, extended attributes are accessed using dedicated Windows NT APIs, namely `NtQueryEaFile` and `NtSetEaFile`.

I'm not sure why Windows POSIX went with EAs instead of ADS; I speculate it is because if you only have a small quantity of data to store, but want to store it on a huge number of files and directories, EAs end up being faster and using less storage than ADS do.


EaData & EaFile remind me of the murky memories of OS/2 APIs.

HPFS had a different approach to handling EAs internally, but OS/2 did create an extra file on FAT16 filesystems to store EAs, which may point to the origin of $EA. (HPFS itself has special EA handling implemented in its FNODE, the equivalent of an inode/FILE entry.)

I don't recall EAs actually being used anywhere by new code though, so I'm quite shocked by the mention of WSL. The old POSIX subsystem originated before ADSes, I think, and might have decided to avoid creating more data types.

My quip about the difference from Linux/Irix xattrs relates to the architectural design of the APIs - the Irix-style xattr API (copied by Linux) is rather explicitly designed for short attributes. I don't know if it's still current, but I recall something about the API itself limiting attributes to a single page each? Come to think of it, that would match certain aspects of Direct IO that AFAIK were also imported from Irix...

Oh, and BTW - NTFS internal structures being accessible as "normal" files is one of the design decisions inherited from Files-11 on VMS, one I quite like from an architecture-cleanliness POV at the very least.


> I don't recall EAs actually being used anywhere by new code though, so I'm quite shocked by the mention of WSL.

This explains it: https://learn.microsoft.com/en-au/archive/blogs/wsl/wsl-file...

uid, gid, mode, and POSIX format timestamps are stored in an EA. It also mentions file capabilities being stored in an ADS. On Linux, capabilities and ACLs are stored in xattrs, so that seems to imply that xattrs are stored in ADS not EA.

> Old POSIX subsystem originated before ADSes I think, and might have decided to avoid creating more data types.

I'm not sure about that; I think support for ADS has been in NTFS from its very beginnings - it was designed in from the start.

Actually, from what I understand, the original design for NTFS – which was never actually implemented, at least not in any version that ever shipped to customers – was to let users define their own attribute types. The reason why their names all start with $, is that was supposed to reserve the attribute type as "system", user attribute types were supposed to start with other characters (likely alphabetic). And that's the reason why they are defined in a file on the filesystem, $AttrDef, and why the records in that file contain some (very basic) metadata on validating them (minimum/maximum sizes, etc). If they were never planning to support user-defined attribute types, they wouldn't have needed $AttrDef, they could have just hardcoded it all in the code.


the dollar sign convention predates NT, it's one of the things inherited from Files-11, where the metadata-files were not hidden from end user, just marked with strict enough permission checks. (A lot of VMS APIs used dollar signs for namespacing, too, and I believe some aspects of the naming scheme come from specific PDP assemblers when referring to some names?)

Looking at NTFS from the on-disk structure side, it always seemed quite obvious to me that a lot of the accolades given to BeFS applied to NTFS too - what's lacking is actually using those abilities. And IIRC a lot of the indexing system is actually used by Windows Search, which in tech spaces I always found mentioned as a "useless thing I disabled", yet I later found offices where people are very much dependent on the component (it helps that MS Office installed document handlers to index its documents in it).


> Looking at NTFS from the on-disk structure side, it always seemed quite obvious to me that a lot of the accolades given to BeFS applied to NTFS too - what's lacking is actually using those abilities

Microsoft had some very grand plans in this area... Cairo, OFS, WinFS... but they just kept on getting delayed, cancelled, pulled from the beta for too many issues. I think contemporary Microsoft has lost interest in this (it was something Bill Gates was big on) and moved on to other ideas.


I used to dual boot OS X and Windows on my Mac in the late 2000s. I am pretty certain that when I opened the HFS+ volume and copied things to the NTFS volume, some stuff became alternate data streams. Windows even had a UI to tell me about it. I didn't understand it then, but my guess would be that it was the resource fork.

OS/2's HPFS also had alternate data streams, called Extended Attributes. You'd make two calls to DosQueryFileInfo() - the first time to get the size of any EAs so you could allocate a buffer, then call it again to read the contents into the buffer.

It got used occasionally - not a lot. I had a newsgroup reader that would store the date of the last time you downloaded items for a group in an EA (of the file that had the items).


Rarely used because it's invisible and quite awkward to use as a user - basically unusable to most, with no GUI. Also because it will just silently be demolished if you copy to/from a FAT filesystem like a typical flash drive, so it's completely unreliable.

Many cross-platform applications which store metadata in xattrs on Unix-based systems will use ADS for the same purpose.

E.g. Dropbox, which syncs some extended attributes (and uses some for internal metadata), seems to store them in the ADS on Windows.


the 'trusted flag' (my term) == the thing that you touch when you Unblock-File (pwsh) or uncheck in the file properties UI => lives in an alternate data stream (Zone.Identifier).

NTFS ACLs (aka file permissions) are stored in alternate data streams.

I work on ReFS and a little bit on NTFS. Alternate data streams are simply seekable bags of bytes, just like the traditional main data file stream. Security descriptors, extended attributes, reparse points and other file metadata are represented as a more general concept called an "attribute".

You can't actually open a security descriptor attribute and modify select bytes of it to create an invalid security descriptor, as you would if it were a general purpose stream.


Help me understand the terminology. I thought alternative data streams were just non-resident attributes. Attributes like "$SECURITY_DESCRIPTOR" have reserved names but, conceptually, I thought were stored in the same manner as an alternative data stream. (Admittedly, I've never seen the real NTFS source code-- I've only perused open source tools and re-implementations.)

Essentially, attribute names directly specify the attribute type - so $SECURITY_DESCRIPTOR declared the entry in FILE attribute list to be a security descriptor. DATA attributes have another name field to handle multiple instances

> Essentially, attribute names directly specify the attribute type - so $SECURITY_DESCRIPTOR declared the entry in FILE attribute list to be a security descriptor. DATA attributes have another name field to handle multiple instances

If you look at the Linux kernel source code, `fs/ntfs3/ntfs.h` contains the following:

    struct ATTRIB {
        enum ATTR_TYPE type; // 0x00: The type of this attribute.
        __le32 size;         // 0x04: The size of this attribute.
        u8 non_res;          // 0x08: Is this attribute non-resident?
        u8 name_len;         // 0x09: This attribute name length.
        __le16 name_off;     // 0x0A: Offset to the attribute name.
        __le16 flags;        // 0x0C: See ATTR_FLAG_XXX.
        __le16 id;           // 0x0E: Unique id (per record).
        union {
            struct ATTR_RESIDENT res;     // 0x10
            struct ATTR_NONRESIDENT nres; // 0x10
        };
    };
So the name field isn't specific to `$DATA` attributes, every attribute has it. However, for most attributes either the name is zero bytes, or it is a hardcoded name (like `$I30` for directories). Is `$DATA` the only one that can have different instances of the attribute with arbitrary names?

Arguably from the point of On Disk Structure (to reuse terminology from NTFS' ancestor in VMS), all attributes can have names as well.

Now, implementation in ntfs.sys is another thing and I have no idea if it's just an unused code path or if something would explode, and from what I heard Microsoft ended up in a situation where people are scared to touch it - not because of code quality, but because of being scared of breaking something.


> Now, implementation in ntfs.sys is another thing and I have no idea if it's just an unused code path or if something would explode,

ntfs.sys has validation checks in it which prevent you from directly creating anything other than named or unnamed $DATA attributes on a regular file, and named $DATA attributes on a directory; other stuff (directories, file names, standard attributes, EAs) gets created (indirectly) through the appropriate APIs. If you try to do anything funky, you'll get an "Access Denied" error code returned by ntfs.sys.


I was thinking more of "ntfs.sys encounters filesystem structure with names set for normally unnamed attributes".

The API preventing arbitrary messing up is a separate (and good and valid) concern.


> I was thinking more of "ntfs.sys encounters filesystem structure with names set for normally unnamed attributes".

From reading the source code of the Linux kernel NTFS driver (the ntfs3 one in the latest Linux kernel, not the older one it replaced), its (pretty reasonable) strategy is just to ignore things it doesn't expect. But I don't know what ntfs.sys does in such a scenario, I've never tried.


I see. So there's one more layer of indirection there that I'm missing.

Used by malware mostly, I think.

> the two-pronged idea in that file system: each file existed as a pair, a resource component and a data component. One held metadata, the other the file contents.

Application metadata (describing what file types an application could open, and what icons to use for those file types if they matched the application’s creator code) was stored in the resource fork of the application, but file metadata never was stored in the resource fork. The file type, creator code, and the lock, invisible, bozo, etc. bits were always stored in the file system.

See for example the description of the MFS disk format at https://wiki.osdev.org/MFS#File_Directory_Blocks


It was all of the forked data that made dual format CDs/DVDs "interesting". In the beginning it was a trick. Eventually, the Mac burning software made it a breeze. Making a Mac bootable DVD was also interesting.

I recall seeing CD-ROMs that had both Mac and Windows software on them, and depending on which OS they were mounted on, they would show the Windows EXE or the Mac app... I wonder how that's done. I'm guessing there was a clever trick so files on both filesystems share the same data (e.g. if the program/game had a movie, it would only store the bytes of the movie once, but it's addressable as a file on each filesystem), but that sounds like a nightmare.

I can probably look it up and figure it out myself, ah, the joys of learning about obsolete tech!


As it starts about 32k in, the ISO 9660 superblock doesn't inherently conflict with an Apple partition map which starts at the beginning. Apple also had proprietary ISO 9660 extensions that add extra metadata to the directory entries much like the RockRidge extension does. Those would get ignored by non-Apple implementations of ISO 9660.

Microsoft went a different route with its long filename extensions (Joliet) – they simply created a whole different (UCS-2/UTF-16 encoded) directory tree. An ISO 9660 implementation that's compatible with Joliet will prefer the Unicode directory hierarchy and look there for files.


You can hide files from Windows by setting a property on the file. You can hide files from macOS by inserting their names in a file called ".hidden".

There were also the audio CDs that had data on them. Audio CD players would just play the audio, but a CD-ROM drive could access both. Some had games on the data track that would play the audio portion as the game's soundtrack.

If you want to know about the different types of CDs, you'll want to know about the various colors: https://en.wikipedia.org/wiki/Rainbow_Books


Some PlayStation 1 discs were set up to also play the game soundtrack if you put them in an audio CD player.

MechWarrior 2: Mercenaries (for PC) was the same way. Rocking soundtrack. Beautiful game, provided you had a Voodoo 2.

The Mac version of the original Descent was like this too, with a great redbook audio soundtrack. The game wasn't locked to the original disc though, you could pop out the CD in the middle of the game and replace it with any other audio CD and it'd play that just as well.

I remember this site from the 00's: http://cdrfaq.org

I remember listening to the Warcraft 2 soundtrack from the game CD-ROM in the living room audio CD player.

IIRC from that time, those CD-ROMs contained two tracks, one formatted with ISO 9660 and another with HFS+. Windows didn't come with HFS+ drivers so it ignored it, and probably MacOS prioritized mounting the HFS+ track.

I've seen some where the combined file size exposed on each track would be larger than a CD could hold, so there had to be something more going on. StarCraft and Brood War come to mind with the large StarDat.mpq / BrooDat.mpq files.

Oh, StarDat.mpq - that brings back memories. It was one of the major reasons I'm in this industry now: the file itself is a "virtual file system" (MoPaQ, with Mo being IIRC the author's initials) file with some CRC and obfuscation. As a kid, I was hell-bent on learning how it works, writing code to decode and encode it, and then using it in my own hobby projects. I learned a lot of concepts from that little rabbit hole. Hell, the way StarDat.mpq, BrooDat.mpq, and Patch_something.mpq interacted was what you'd call an "overlay FS" today.

TL;DR ISO9660 provided an area to stuff type-tagged extension information for each directory entry.

In addition, the first 32 kB of ISO 9660 are unused, which allowed tricks like putting another filesystem's metadata there.

By carefully arranging metadata on disk, it was then possible to make essentially overlapping partitions, stuffing each filesystem's metadata into areas unused by the other, with files reusing the same space.


> Implementing Mac-compatible file support in Unix meant treating the resource fork as first class, and the obvious way to do that is to keep a .file beside each file.

Prefixing the file name with a single dot - is this a file system convention? Or just a "good idea"?


Prefixing the filename with ._ is Finder convention whenever you copy metadata to a filesystem which doesn't support resource forks, like FAT32

It's the Unix convention for hiding: dotfiles are hidden from ls unless -a is used, but cd .config/ works fine. It matched the use of . for "this dir" and .. for "parent dir", which are also hidden by default. It was in v7 on a PDP-11, my first experience of Unix in 1980, and probably pre-dated that.

Oh sure. I started with v6 on a pdp-10 in 1979. And the leading dot is ingrained in my brain.

But what I'm wondering about is the idea of associating (for example) "myfile.xyz" and ".myfile.xyz". I've never heard of this as a convention for associating metadata.


resource and data forks were hfs(+) features that appeared in pre-osx versions of macos. post-osx made use of the bsd fast filesystem and a rather nice unix style convention from nextstep where the on-disk representation of a .app or .pkg (which would appear as a single entity in the gui) was actually a directory tree. this would rather elegantly include ui resources as well as multiple binaries for cross platform support.

You have the same "resource fork" concept in Unix xattrs and NTFS streams.

No disagreement; both came later IIRC. Melbourne uni's work on AppleTalk and Apple file system support was in the late 80s, and I believe the POSIX xattr spec work was mid-nineties; NTFS was '93 or so. The fork model in the Apple file store was eighties work.

GP wasn’t arguing about timelines.

NTFS ADS were created to accommodate Mac OS resource forks on network volumes when using AFP.


Gotcha! I assumed they were invented for Windows-centric reasons.

The concept of extended file attributes was introduced by HPFS, in OS/2, in 1989.

From HPFS it was taken by SGI XFS (the ancestor of Linux XFS) and MS NTFS, both in 1993.

From there it has spread to various other file systems and specifications.

The concept of resource forks is earlier, but both are examples of using alternate data streams in a file.


I remember there used to be ways to turn off the creation of .DS_Store, but they removed them. I can't figure out for the life of me why they would make such a change. I had to write a program [0] that watches the entire file system and deletes .DS_Store files as soon as they're created.

[0] https://github.com/slmjkdbtl/dskill
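
For anyone curious, the core of the approach is just an FSEvents stream with file-level events; a minimal sketch (my own reconstruction, not the linked program's actual code):

    /* Build on macOS with: cc dskill_sketch.c -framework CoreServices */
    #include <CoreServices/CoreServices.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void cb(ConstFSEventStreamRef stream, void *info,
                   size_t numEvents, void *eventPaths,
                   const FSEventStreamEventFlags flags[],
                   const FSEventStreamEventId ids[]) {
        char **paths = eventPaths;  /* plain C strings without kFSEventStreamCreateFlagUseCFTypes */
        for (size_t i = 0; i < numEvents; i++) {
            const char *base = strrchr(paths[i], '/');
            base = base ? base + 1 : paths[i];
            if (strcmp(base, ".DS_Store") == 0 && unlink(paths[i]) == 0)
                printf("removed %s\n", paths[i]);
        }
    }

    int main(void) {
        CFStringRef dir = CFSTR("/");  /* watch the whole filesystem */
        CFArrayRef paths = CFArrayCreate(NULL, (const void **)&dir, 1,
                                         &kCFTypeArrayCallBacks);
        FSEventStreamRef stream = FSEventStreamCreate(
            NULL, cb, NULL, paths, kFSEventStreamEventIdSinceNow,
            1.0 /* latency, seconds */, kFSEventStreamCreateFlagFileEvents);
        FSEventStreamScheduleWithRunLoop(stream, CFRunLoopGetCurrent(),
                                         kCFRunLoopDefaultMode);
        FSEventStreamStart(stream);
        CFRunLoopRun();                /* runs until killed */
        return 0;
    }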


You can turn it off for network volumes:

  defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool TRUE
https://support.apple.com/en-us/102064

I don't recall there ever being a way to turn it off for local volumes.


I set up my samba config to veto .DS_Store files, which also seems to work (although not sure if it creates more overhead as MacOS tries to recreate it each time...)

Is it also possible to do this for removable media?

It seems to be the first time I'm seeing Apple themselves officially recommending a "defaults write" command.


Not sure if this still works, but:

  defaults write com.apple.desktopservices DSDontWriteUSBStores -bool TRUE
Re: defaults:

https://support.apple.com/guide/terminal/edit-property-lists...


There was https://github.com/binaryage/asepsis but Apple broke it IIRC.

It was a hack, not anything Apple ever supported:

> At core Asepsis provides a dynamic library DesktopServicesPrivWrapper which gets loaded into every process linking against DesktopServicesPriv.framework. It interposes some libc calls used by DesktopServicesPriv to access .DS_Store files. Interposed functions detect paths talking about .DS_Store files and redirect them into a special prefix folder. This seems to be transparent to DesktopServicesPriv.

> Additionally Asepsis implements a system-wide daemon asepsisd whose purpose is to monitor system-wide folder renames (or deletes) and mirror those operations in the prefix folder. This is probably the best we can do. This way you don’t lose your settings after renaming folders because rename is also executed on folder structure in the prefix directory.

Unsurprisingly, you can no longer do anything like this with SIP. If you're willing to disable SIP, there are forks of the project that apparently still work.


Apple really made a mess when they started adding .<stuff> to the root of a filesystem and never cleaned it up.

I can see how, after .DS_Store was allowed, it was no problem for other engineers to approve .fseventsd or .Spotlight-V100 or the other nonsense that has cropped up over the years.

And I can't tell you how many filesystems I've had "corrupted" with these sorts of files.

Mostly SD cards, usb flash drives, but occasionally something horrible.

for these kinds of things I usually run:

  rm -rf .DS_Store .Trashes ._.Trashes .fseventsd .Spotlight-V100
and quickly eject the drive before something else is written.

If you've had a disk that is going bad and you need to copy stuff off of it, the LAST thing you want is to index the whole thing and start writing to it.

seriously, there should be a setting.


> github.com/slmjkdbtl/dskill

You have a good coding style.


find / -name ".DS_Store" -exec rm {} \; 2>/dev/null

Put that in a script and add it to your crontab.


That's gonna be incredibly slow on most developer machines. node_modules, __pycache__, Cargo target/ folders, Yocto build folders, .git folders, etc etc etc -- all my machines which are ever used for development end up with such a gargantuan amount of small files across the filesystem that any operation which involves iterating through all of them takes forever.

Besides, there are .DS_Store files I really don't wanna delete. Notably, there are git repos which have erroneously committed .DS_Store files; I don't wanna make those repos dirty by deleting them.


Cool. This was a response to OP's GitHub repo, which does the same as the command I commented with. Your use case is different.

That doesn't seem to be true? Looking at the C source code, it seems to be using fsevents to delete .DS_Store files when they're created, not periodically scanning every single file on the system to delete .DS_Store files.

Ah ok. Thanks for pointing that out.

-delete is faster

Even better!

Why? I just ignore them.

Google sent a copyright violation notice for each .DS_Store anyone at my company uploaded to Drive for nearly a year (yes, many support tickets were filed).

It wasn't Apple's fault, but it still would have been nice if there was a way to turn them off.


That's scary considering how willingly they'll shut down accounts for tripping their automated copyright violation service.

For sure! I made sure to have an open ticket with them until it was resolved so I'd have someone to call if some other automated system decided to shut down our services for it.

Why? Somehow DS_Store is claimed as a copyrighted file?

White noise was claimed on YouTube[0].

When is someone going to copyright .gitignore? You could register gitignore.me right now! Fame, riches, lunch with Myhrvold[1][2]!

[0]: https://www.bbc.com/news/technology-42580523

[1]: https://en.wikipedia.org/wiki/Intellectual_Ventures

[2]: https://www.amazon.com/Modernist-Cuisine-Science-Stainless-S...


The process likely went something like this:

1. Pirates uploaded a folder full of copyrighted files to Google Drive, accidentally including some DS_Store files along with the actual media.

2. The copyright owner filed a DMCA takedown on the whole folder, accidentally claiming ownership of a bunch of generic DS_Store files.

3. The above two steps have likely happened many times, not just once.

4. Google's takedown system now automatically flags DS_Store files as having multiple copyright violations.

5. A Google employee might be able to whitelist a user's individual DS_Store files to temporarily suppress the violation on their account, but since they can appear in different folders with different data and are constantly receiving new copyright claims, their system likely errs on the side of caution and continues to flag them as copyright violations so that Google doesn't accidentally lose its safe harbor protections.

In theory, a Google engineer could code in a special case to avoid this problem, but good luck finding and talking to one who's authorized to do so; Google is notorious for having one of the lowest employee:revenue ratios in the world and writing useless FAQs instead of having a proper support channel for when things go wrong.


> In theory, a Google engineer could code in a special case to avoid this problem

And then in this alternate universe, pirates start naming all of their files ".DS_Store"!


That's a good question. I get the impression the system is fairly opaque even to the people working there. I was told it was "resolved" and had my ticket closed a bunch of times, only to have another 30+ copyright violation emails the next time someone uploaded a batch of files from MacOS.

If the person who finally managed to figure it out ends up reading this, thanks for the resolution :)


Holy cow, that’s crazy!

Missing Stair effect - ignoring a problem does make everything progressively worse for everyone because the problems pile up.

That’s the easy solution. But some people are absolute control freaks and would rather go nuts about a hidden file than actually spend their energy creating things. Very telling.

Some people can't.

DS Store seems so unfortunate. Yes, it serves a purpose. Yes, you can work around it in various ways. But the reality is that it's basically proliferated as file litter to 99% of the people who come across it. It’s uncharacteristically un-Apple in terms of UX polish.

Growing up with both System 7.5 / OSX and Windows machines, the Macs never seemed inclined to make me see extraneous files, filetypes, and other “how the computer works” implementation details. It’s just so odd to my mental model of it all to see this file end up everywhere.


For those who live their whole life within Apple's walls, they will never see .DS_Store files, unless they use the Terminal. Finder (with hidden files shown) doesn't even show them anymore.

It is very ugly when files are shared from a Mac to people on Windows though. I think it gives a bad first impression for anyone who might be thinking of transitioning to the Mac.


They pop up in code repositories too, depending on contents and whether the engineer in question noticed it.

absolutely essential to add a line for .DS_Store in every .gitignore, unfortunately.

Enough to teach people to use a global git core.excludesfile, IMO.

Same place you should put rules for Emacs / Vim swap files.
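
For anyone who hasn't set one up, it's a one-time thing (the file name here is just a convention):

    git config --global core.excludesfile ~/.gitignore_global
    printf '.DS_Store\n._*\n*.swp\n*~\n' >> ~/.gitignore_global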


Totally correct. Files which are unrelated to the project don't belong in .gitignore.

This may be technically correct, and I do have .DS_Store in my global, but I also put it in projects, because I know not everyone on my team is going to do that. I add it to the .gitignore in projects to save me from other people junking up the project. It’s a lot easier to add some lines to a file than it is to micromanage the global file for every potential future contributor.

This touches on something I've learned to be more mindful of: the "right answer" (especially to a techie) is often not the right answer in real world cases.

I’m fine with the occasional .DS_Store getting added, because you can just remove it afterwards. Most of my work is either my own projects or at work, and whether people at work commit .DS_Store files is a question that touches on code reviews, company onboarding guides, etc.

Maybe the benefits / drawbacks would be different for an open-source project with a lot of contributors.


Just add a line to your onboarding docs teaching what the global ignore file is and how to manage it, then have them add a line for .DS_Store.

It makes sense to add an ignore for .* though, and then specifically unignore only those dotfiles/directories that you actually want checked in.

or just ignore them globally once.

I've banned people before because they couldn't stop themselves from continuously uploading those useless DS_Store files.

Seems like you have control issues.

Nah, those people have issues controlling their machines. It's fine if you upload useless spam a few times, but at some point you need to quit it because you're creating unnecessary work for others.

Well I was trying to tell you nicely, but you’ll figure it out before long.

Banning someone because they commit a file you don’t like is definitely a sign of a controlling person.


Repeatedly. Besides, if you repeatedly take so little care, how can anything you do be trusted?

> It’s uncharacteristically un-Apple in terms of UX polish.

Apple's polish has always been more about the surface than the internals.


I remember playing around with setting up a Hackintosh and finding all those errors in the system logs, then realising that an actual working Mac generates much the same (ignorable) errors.

To be fair to Apple here, so does every other operating system. Linux system logs are filled with errors too. In general, keeping the logs of even a moderately complex application "clean" - so that the only errors logged are real errors, in some poorly defined meaning of "real" - is very hard.

For operating systems it must be straight up impossible.


> Linux system logs are filled with errors too.

Mostly only due to misbehaving hardware. Something that should really not happen on a Mac. And "filled" is way hyperbolic, there usually isn't a lot of it.


It's difficult to accept as competency when they control both the software and hardware.

Never understood why it had to be in the same folder. Can’t the OS have its own little db somewhere that has a reference to each path?

The idea was that metadata, for example a file’s label, would travel across to whichever device you use the network drive from.

But classic Mac OS stored “Desktop DB” and “Desktop DF” at the root of each mounted drive, IIRC.

It seems like a better solution.


Yeah, because such devices are only made by Apple and can or should understand Apple's internal format.

Putting it in the folder is also nice in that it naturally gets deleted when the folder is deleted.

All file operations have been watched by Spotlight since forever at this point.

Except for ignored file types and folders you marked as private.

Or those on network volumes or removable media. When somebody else on another machine removes them, your local database is out of sync pronto.

This also happens with .DS_store files if the other computer on the network isn’t a Mac. It’s irrelevant.

computers other than a mac don’t need DS_Store anyways, so still relevant.

No, the objection to not keeping .DS_Store per folder and doing it per file system instead was that a non-Mac might make changes that would not be seen. The point is that this can already happen for a lot of different reasons! So keeping it per folder doesn't eliminate the failure mode, it just makes it slightly less common… at the cost of annoying all non-Mac users and any Mac user who needs to interact with Git etc. The tradeoff analysis for doing it per folder is bad.

> Those files should only be created if the user actually makes adjustments to the view settings or set a manual location for icons in a folder. That’s unfortunately not what happens and visiting a folder pretty much guarantees that a .DS_Store file will get created

This is my number one frustration with the Finder.

You can customize the look and size of individual folder windows in many interesting ways, à la the Classic Mac OS Finder, which is a really great feature. But if you blow through that same folder in a browser window, then most of those customizations are lost, overwritten with the settings of that browser window, even if you never change anything.

What's the point of allowing all of these great customizations when they're so easily clobbered?

I have a global hot key to bring up the Applications folder. I'd love to customize the look of that window, but it's pointless. Whenever I hit that hot key I have no idea what I'm going to get. It's always getting reset.

By the way, the reason it does this is because the Finder has no way to set a default browser window configuration. So instead, it just leaves behind the current browser settings in each folder it visits. Super frustrating.


It used to be, before Darwin, that every open folder corresponded to one window and there was only one user, so that approach worked. I really miss that; it was nice having the same window pop to the front with everything just like you last had it.

> I have a global hot key to bring up the Applications folder

Not global, but as long as you're in the Finder cmd-shift-A opens the Applications folder. cmd-shift-U opens the Utilities folder.


As a non-Mac user, I always find it somewhat annoying when I download some .tgz published on Github or something and find .DS_Store littered inside.

I guess macOS probably just uses GNU tar? It's kind of surprising it wasn't modified or configured by default to ignore .DS_Store.


> It's kind of surprising it wasn't modified or configured by default to ignore .DS_Store.

It was, but not by default.

If you export COPYFILE_DISABLE=true then tar will skip .DS_Store files.


Most of Mac's Unix utils come straight from FreeBSD without any special sauce from Apple.

Apple patches most of their utilities. This is frequently annoying when trying to track down what they did :)

They had the chance to get rid of DS_store, but they put it in MacOS anyway?

Er, as the article says - it was created for OS X, not classic Mac OS.

Ah, that reminds me I committed a few last week and never cleaned them up…

Global ignore.

It's worth mentioning how to turn off the creation of .DS_Store files by default while browsing network volumes - otherwise the directory modified timestamps are updated as you browse using the Finder, which is Just Plain Terrible.

https://old.reddit.com/r/MacOS/comments/lvju40/comment/gpc8i...


macOS is tricky now. I just looked via Finder to see if I had any .DS_Store files on my network volumes, and it appeared not. However, when I went to Terminal, sure enough, they were there. I now can't trust Finder's ability to show hidden files, as it only shows the hidden files it thinks a user should care about, rather than all hidden files. Not good.

Since my network shares are on a local Synology, it's not a big deal for me. I have run into them at work before, and it does create quite the mess.


My synology NAS drops turds in lots of directories too.

Your Synology has its own way to store xattrs and alternate file streams, in the @eaDir, so some of the turds may be dropped by your Windows or Mac client machines. But yes, it also does a few of its own things for the other software running on your box, like SYNOINDEX_MEDIA_INFO for known media files.

If I remember correctly there is an option in Synology DSM to not let clients create .DS_Store files in network shares.

It is a Samba feature, called veto. You can define there what you don't want on your shares, from .DS_Store and Thumbs.db to *.mp3, for example.
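
The relevant smb.conf fragment looks something like this (from memory; check man smb.conf for the exact semantics of veto files and delete veto files):

    [share]
        veto files = /.DS_Store/._*/Thumbs.db/
        delete veto files = yes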

Personally I make sure mac users do this before they get write access to a network share. It's just a matter of common curtesy IMHO.


curtsy (feminine bow) -> courtesy (polite act)

Yes, but it’s a common courtesy to perform a curtsy in the appropriate situation.

And failure to do so might come off as curt, see?

If you run Samba you can also configure Samba to just ignore such creations.


Also there are those dot underscore files. Is there any way to disable creating these files on the network shares?

[0] https://superuser.com/questions/212896/is-there-any-way-to-p...


Every time I see it I think Nintendo DS.

Thankfully Emacs's file manager Dired lets me easily pretend this pesky little file, as well as those produced by a LaTeX run, doesn't exist.

  (require 'dired-x)                            ; dired-omit-mode lives in dired-x
  (add-hook 'dired-mode-hook #'dired-omit-mode) ; it's buffer-local, so setq won't enable it
  (setq dired-omit-files
        "^.*\\.\\(DS_Store\\|aux\\|bak\\|bbl\\|bcf\\|blg\\|dvi\\|ent\\|idx\\|ilg\\|ind\\|log\\|orig\\|out\\|pdf-view-restore\\|pdf#\\|reg\\|run.xml\\|synctex.gz\\|toc\\)$")

"easily" you say?

Well sure it's a bit noisy but it's just a bit of regex.

What's really astonishing is that no one at Apple dared to fix the bug that creates these files...

FYI there is a tool on macOS called `dot_clean` that will "Remove dot-underscore files" https://ss64.com/mac/dot_clean.html

I bought a secondhand MacBook M2 and I was genuinely shocked at how awful Finder is (no offense to this guy). It sort of reminds me of the Android file manager that hides directories and files and tells you "you can't do that, you can't just use a folder!" That was really off-putting. I ended up installing Asahi, despite having gotten the Mac for access to the awful apps I can't get on Linux, like MS Word and Photoshop. I hate this landscape tbh.

> For Mac OS X we decided to rewrite the Finder from scratch.

I would think that the file manager for an entirely separate operating system being written from scratch would be a foregone conclusion.


NeXT OS had a perfectly good file manager/GUI, but I guess it was pretty different from what Mac users were used to.

Whenever I move a file from Windows into WSL via Explorer, I get a Zone.Identifier file. I assume it’s the same thing, but it's quite annoying.

See also: 'look for (nearly) empty directories and delete them' - https://alexwlchan.net/2024/emptydir/

I miss the old pre-OSX Finder that could copy files without opening a second window and dragging into it.

I'll never get how some rocket scientist (Ive, I suspect) removed Apple's best Finder feature, colored file folders, which made for easy sorting. To make matters worse, they added stupid dot labels instead. What a cluster.

Oh well. Still, a bad day on a Mac is better than a great day in Windows.


See John Siracusa's comments on the Finder: https://arstechnica.com/gadgets/2003/04/finder/

Why doesn't Windows need such a file in every directory to store folder customizations in Explorer?

Explorer uses a hidden desktop.ini file for this.

Negative. desktop.ini doesn't get edited when you switch (for example) from Details to List.

Also, I think only the desktop allows moving icons around freely.


I guess a more correct answer would have been that desktop.ini is used for some folder customizations.

> only the desktop allows moving icons around freely

I'm pretty sure Windows used to allow you to move icons around, I clearly remember making a mess on some Windows 98 folders. Maybe they removed that feature recently?


There's also the .fseventsd directory which I've also seen on non-UNIX systems.

I am MacOS's biggest fanboy and Tim Cook's strongest soldier but I will also say the Finder is one of the dumbest file explorers I've ever experienced in my life

> I am MacOS's biggest fanboy and Tim Cook's strongest soldier

wow.


> Internally, those two components were known as Finder_FE and Finder_BE (Frontend and Backend).

Interesting to see that apps were split into front and back ends (indeed, I'm surprised the terms even existed) back in 1999.


Why are you surprised? I have been writing client server apps since the late 80s.

Originally a central DB and a PC front end. But the server could be doing business processing, e.g. feeds and processing of stock prices.

Client/server predates the web.


Most informative post ever on Hacker News. Now I know!

Rixstep's take on .DS_Store, a much more enjoyable read: https://rixstep.com/1/20030521,00.shtml

And a follow-up article: https://rixstep.com/2/20061212,00.shtml


Funny all the people complaining about “this makes Mac ugly compared to Linux” meanwhile every Linux tool I install craps dotfiles not only in ~/ but in working folders as well. It’s a fact of life that tens of thousands of smart programmers realized is necessary.

Every Linux tool? Simply not true. Most tools keep it in a ~ subdir like ~/.config

MacOS creates a junk file/folder just by visiting any folder. It's not comparable.


Any Linux desktop environment is going to do the same, as the DE will need to store previews and thumbnails among other things. Except on a Linux machine this data would be stored in some other location that you'll have to look up and find.

You're right, I didn't mean 100% of all Linux tools, like coreutils. But every piece of Linux software I've purchased a license for does exactly this.

I think people who get upset about this just need something to fret about, at least in my experience. They probably trim their speaker cables to the same length to match impedance, too.


Not to mention that it's an obnoxious and incompetent design. Look at the fact that Mac OS litters every other computer it visits with turds, for its own (and in fact only one user's) benefit. It's doubly stupid because the next browsing Mac that comes along trounces the previous one's turd.

If Apple wanted to store view settings for remote volumes (or even local volumes), the competent design would have been to store them locally (and per user) in a central location on the machine doing the browsing.

I remember the promised re-write of Finder and thought it never happened. Nothing seems to have improved for the user. I could post a list of decades-old defects that persist today.

The one thing I can think of that has finally been fixed (and this was long after the "rewrite") was that you can now finally sort the file list properly: with folders at the top.

Now I wish someone would explain something that might actually be worse than DS-turds: the presence of a "Contents" subdirectory in every goddamned Apple package. I mean... who thought you needed to create a directory called "Contents" to hold the contents of the parent directory? It's mind-boggling.


> Not to mention that it's an obnoxious and incompetent design. Look at the fact that Mac OS litters every other computer it visits with turds, for its own (and in fact only one user's) benefit. It's doubly stupid because the next browsing Mac that comes along trounces the previous one's turd.

It also kind of reveals an underlying attitude of the OS developers: That it's OK to use the user's filesystem (particularly directories owned by the user as opposed to the OS) as their dumping ground for all this metadata. As if it's their hard drive rather than mine.

I'm OK with Apple putting whatever it wants in /System and /Library, but I'd expect the rest of my filesystem to contain only files I put there.

Same goes for you, Microsoft: You can have C:/WINDOWS and I should get the rest of the filesystem.


> That it's OK to use the user's filesystem (particularly directories owned by the user as opposed to the OS) as their dumping ground for all this metadata.

There are more offenders of this type than I can possibly count, dumping myriad dotfiles and dotfolders in your home folder on *nixes instead of adhering to platform conventions or XDG or anything, really. Worse, these programs won't function properly if you set your home folder to be read-only (leaving subdirectories writable) to keep it clean. Drives me nuts.


Oh, yea. I didn't mean to give Linux/Unix a pass. Those systems can be equally cavalier about leaving their configuration droppings all over my filesystem, too.

The issue is where this information should go.

If it's kept in a central place, what happens when the original directory is moved? How does the metadata get updated? (On Unix it would be another file somewhere; on Windows it could be in the registry.)

With Apple it is kept with the directory.

The underlying issue is that a directory needs some metadata, and the Unix "everything is a file" design doesn't let the directory carry it without adding another file somewhere.

The POSIX file system is not the perfect thing.


You really want to look at Haiku. The only sane hierarchy for desktop OSes. Native apps respect the hierarchy, though some ported apps create garbage .files where they shouldn't (Haiku reserves /home/config/apps/name/… for garbage). /system is read-only as a bonus.

oh man, don’t get me started on gui applications usurping the Documents folder.

I can see the appeal for removable media, at least. It’s pretty common for those to have only a single user toting them around between home/work/school and for that case it makes a lot of sense to store that info on the media so settings stick across different machines. It probably made even more sense back when removable media was the norm for data transfer because network access was spotty or slow.

It really should be turned off by default on network volumes though.


The funny part is actually that it's not supposed to create .DS_Store everywhere:

> There is also an unfortunate bug that is not fixed to this day that result in an excessive creation of .DS_Store file. Those files should only be created if the user actually makes adjustments to the view settings or set a manual location for icons in a folder. That’s unfortunately not what happens and visiting a folder pretty much guarantees that a .DS_Store file will get created

I get the sense that if you are annoyed by it, you aren't the target audience of Mac OS; the target audience is technologically illiterate people for whom it really doesn't matter (they barely know what folders are anyway), so to Apple there is no reason to ever invest any effort in fixing it.


By that logic, though, there was never any reason to implement it in the first place.

> Now I wish someone would explain something that might actually be worse than DS-turds: the presence of a "Contents" subdirectory in every goddamned Apple package. I mean... who thought you needed to create a directory called "Contents" to hold the contents of the parent directory? It's mind-boggling.

It’s because not every bundle does include that folder. Here you go: https://en.m.wikipedia.org/wiki/Bundle_(macOS)


I will raise you: desktop.ini and Thumbs.db

Windows is polite enough to not write them on network shares, unlike .DS_Store.

Now, yes. It used to be a really irritating problem there too.

Back in Windows XP days, yes; it's pretty much never a problem nowadays, and hasn't been for almost two decades. Thumbnails have been stored in the user profile folder since Vista (though network folders are handled differently and may still be a problem). And desktop.ini files you'd only ever encounter in predefined system folders (like Pictures, etc.) or if you manually customize a folder via its Properties → Customize tab, such as changing the folder "type" to one of those predefined ones or changing its icon. (Not the same as changing size/thumbnail size/columns/etc., though; that's stored elsewhere too.)

That's still a weaker hand: macOS also has a ton of ._ files. Would have been better to have folded than raised.

No, macOS does not.

The issue is the file system.

Apple file systems allow a file to have extended attributes or resource forks, so a file is not a simple stream of bytes.

When you copy a file to a file system (e.g. FAT) that does not understand these attributes, macOS writes them into a ._ AppleDouble sidecar file. (If the target file system were NTFS you could probably convert them to alternate data streams, but I don't think anyone does.)

Copying a file out of an Apple environment loses data. (OK, the data is metadata, and usually no one cares.)
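
You can watch this happen from Terminal; a rough sketch (file and volume names are examples):

  xattr -l report.pdf              # list extended attributes (macOS)
  cp report.pdf /Volumes/FAT_USB/  # copy to a FAT-formatted stick
  ls -a /Volumes/FAT_USB/          # shows report.pdf and ._report.pdf,
                                   # the AppleDouble sidecar holding the
                                   # metadata FAT itself can't store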


> the competent design would have been to store them locally (and per user) in a central location on the machine doing the browsing

Not sure but it could be the case that when you mount a network drive there isn't a stable identifier that can be used to track it.


Sure, that wouldn't work if the network volume was accessed by different URIs. But it would work in 95% of cases, which is good enough.

Exactly. And if the same machine used two URIs, there'd simply be two entries for settings. And the settings cache could flush old entries periodically.

Like two websites that look the same, except one captures your creds?

You don't want user prefs to apply to multiple locations solely based on URI.


Just because two URIs might appear to be similar doesn't mean they are identical. Using the URI string as a hash key wouldn't be vulnerable to this.
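
A minimal Python sketch of that idea (a hypothetical design, not what Apple actually does): settings live in a local per-user store keyed by the exact URI string, so lookalike URIs simply get separate entries.

  view_settings = {}

  def save_settings(volume_uri, folder, settings):
      # "smb://fileserver/share" and a lookalike "smb://fi1eserver/share"
      # are different keys; prefs never bleed between similar-looking URIs
      view_settings[(volume_uri, folder)] = settings

  def load_settings(volume_uri, folder, default=None):
      return view_settings.get((volume_uri, folder), default)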

Store a single .DS_Store in the root of the disk that stores either the reference or all of the data for that filesystem?

Users rarely mount network drives at the root, so I'm not sure how this would work.

Also the conflict resolution to support concurrent updates would be crazy.


I think it's likely that there is a reasonably stable path for any kind of mount, but I don't know a ton about networks so I'll leave it to someone else to weigh in.

But the stakes are very low here, so settings can be invalidated and discarded if they can't be resolved or they age out of the local cache. And if the mount is of a type that can't be reliably identified later, the default should have been to do nothing. Spewing junk all over every computer visited, especially junk that won't even survive the next Mac user's visit... is amateur-hour and obnoxious at best.


We detached this subthread from https://news.ycombinator.com/item?id=40870645.

> Back in 1999 I was the technical lead for the Mac OS X Finder at Apple. At that time the Finder code base was some 8 years old and had reached the end of its useful life. Making any changes to it require huge engineering effort, and any changes usually broke two or three seemingly unrelated features. For Mac OS X we decided to rewrite the Finder from scratch.

Not that I don't appreciate your work from back then, but as a longtime daily Mac user I cannot wait for the day that this is done once again. The Finder has so many bizarre quirks and it's so slow to proliferate updates that it's just embarrassing. Not to mention it's actually capable of locking up waiting for network access in some circumstances.

I don't know what the Finder source code looks like today but I bet it's a similar kind of hell project as the Classic Finder was back then when they first rewrote it, considering how reluctant they are to do anything to it.


When they rewrite it, I’m afraid we’ll get an iPad-esque nerfed and incomplete monstrosity, like we have with the Home or Settings apps.

Exactly my thought. When they replace Finder, it’ll almost certainly be with a port of the useless iPad Files app.

Apple unfortunately isn’t in the business of making powerful, efficient (user-facing) software anymore.


>The Finder has so many bizarre quirks and it's so slow to proliferate updates that it's just embarrassing

Say what you will about Windows, but the Explorer file manager has always been pretty rock solid.


I will say, network drives feel local on Windows. On macOS they feel like network drives. I think I’d say the same about external drives. I stopped using them, because I got sick of waiting for them to spin up anytime Finder had to do some work.

Up until 7, and even afterward in some areas, Windows got things right from an interface standpoint. People seem to forget that Microsoft dumped large amounts of time and money into figuring out how people use computers and developed their desktop environment accordingly. I've used Windows, macOS, and more Linux DEs than I care to admit. The only thing that tops the Windows DE is KDE, which isn't a massive departure from Windows. macOS has legacy as an excuse, but I don't know what can be said about the various Linux DEs that don't Work Right for the sake of spiting ideas that do.

Windows 11 has pretty severely fucked up Explorer. Named directories can't have their path copied (I think 10 did this bullshit, too). The context menu getting insane whitespace, missing options, and having things dynamically load into it is a travesty. It is heartbreaking that mobile-inspired trash is ultimately going to be the way you're forced to interact with a computer.

People let their distaste for somebody's bad behavior and/or old things stop them from admitting that we're in a pretty severe backward slide.


Dynamically loaded context options (with any user-perceptible lag whatsoever) have to be one of the greatest UX sins I can think of. Like apps stealing focus on startup (looking at you, Adobe!)

> with any user-perceptible lag whatsoever

About that part... Modern computers are insanely fast. How does every single piece of software manage to fill half a minute of CPU or disk I/O enumerating some 3 or 4 items?

It's absurd.

I use Firefox inside eatmydata nowadays, because it spends 10 minutes enumerating the same 2 directories every time it starts up (hundreds of thousands of times). The start menu and equivalents everywhere are already famous. Windows can't search files nowadays; not only does it not work, it never ends either... The list is endless.
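
For anyone unfamiliar: eatmydata is an LD_PRELOAD shim that turns fsync() and friends into no-ops, which masks exactly this kind of pathological flushing. Usage is just prefixing the command:

  eatmydata firefox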


> I use Firefox inside eatmydata nowadays, because it spends 10 minutes enumerating the same 2 directories every time it starts up (hundreds of thousands of times).

What have you got, like a 10-year-old profile or something?

Librewolf starts up instantly for me, and I saw no performance difference using eatmydata.


Why would an old profile cause it to be scanned hundreds of thousands of times? (Yeah, I'm resetting it next time just in case... 10 years is amateur numbers :) )

Anyway, there are a lot of people reporting the same thing on the internet. I've found 3 different bugs opened for the same thing.

But yeah, as far as I remember, Iceweasel doesn't do it either. Maybe I should change my browser.


Hmm. Wasn't it completely unreliable for moving around large numbers of files at the same time? Like if file #243 of 400 failed for some reason, you could actually lose data?

I don't know any more because I use Total Commander on Windows...


I'm not aware of any bugs like that. Got any links maybe?

No, it may have been Windows 95 :)

I prefer the good ole two pane file managers and I actively avoid both the finder and explorer most of the time.


Explorer can’t even sort folders by size…

That's because folders have no size; it requires calculating children's sizes recursively.

It could be done quickly by reading the MFT. WizTree can calculate the size of all 236k directories/800k files on my system in two seconds. For some reason, Explorer takes ~10 seconds to calculate the size of a single directory (Program Files, 17k directories, 240k files). If Explorer just did what WizTree does, it could actually show and sort by directory sizes.
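
That recursive walk is the whole cost. A minimal Python sketch of what a file manager has to do per folder without an index like WizTree's MFT scan (no error handling for unreadable directories):

  import os

  def dir_size(path):
      """Sum file sizes under path, recursing into subdirectories."""
      total = 0
      for entry in os.scandir(path):
          if entry.is_file(follow_symlinks=False):
              total += entry.stat(follow_symlinks=False).st_size
          elif entry.is_dir(follow_symlinks=False):
              total += dir_size(entry.path)
      return total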

Based on how well the System Preferences → Settings rewrite went: please don't.

They did apparently rewrite it in Cocoa back in ~2008. Although that was 16 years ago so I'm sure it's accumulated a fair bit of tech debt since then.

Finder remains one of those apps I still can't make effective use of. Windows File Explorer, for all its warts and changes, still "just makes sense" to my brain versus how Finder lays things out and expects you to browse.

I’ve long since moved to command line or dual pane explorers but it’s something that makes me pause every time I do find myself in Finder for some reason.


I wholly agree with you on this one. Windows has its fair share of issues, but Windows Explorer feels like peak file browsing to me.

For MacOS I can recommend Forklift [0]. I've been using it for years and it is a bit closer to the Windows Explorer way of doing things. Does what it is meant to do. Affordable. No nags. Gets out of the way. Not perfect, but soooo much better than the horrific experience that is Finder.

[0] https://binarynights.com/


How’s Forklift 4?

I have a paid Forklift 3, and it’s nagging me to upgrade and pay for next version.

I mostly went back to Finder for now, as I remember having some kind of issue with Forklift 3 not being performant, though I don't remember the details.


It seems fine to me. To be honest I don't recall what actually changed from v3.

That said I only work on local files and don't use any of the remote workflows. The most advanced feature I use is synchronising files between local storage and SD card. And that works fine.

One thing that did break in v4 is that search doesn't work anymore when using the text only toolbar. I reported that ~10 months ago but it's still broken. Maybe I'm the only person who was actually using it.


I never quite understand why the Finder gets so much hate. Personally I think it’s quite ok. I especially like columns navigation, quite effective for me to get around.

It does make me wonder though, how do you feel about System 7.0 Finder?


I have similar feelings about the Finder and also don't quite get the love for Windows Explorer. It's just ok and if it were practical to replace it with just about any common Linux file manager on my Windows boxes I'd do so without a second thought.

The NeXT/Mac column view is great and should be table stakes in a file manager, in my opinion.


I found myself in a similar situation. Learning some of the hotkeys in Finder for common tasks really helped me curb that feeling

Command + O to open files/folders in Finder was a bit challenging to remember since Enter/Return just works in Explorer


Command + down arrow also works to open

Command + up arrow is a good shortcut to go up one level, surprisingly hard via gui


> Command + O to open files/folders in Finder was a bit challenging to remember since Enter/Return just works in Explorer

...and in Finder, Enter is rename, which is a lot more puzzling, so much so that many others have commented on it and some have even tried to justify it:

https://apple.stackexchange.com/questions/6727/why-does-the-...

https://old.reddit.com/r/MacOS/comments/16hxjrn/why_is_the_d...


Arrow keys are where it’s at. Command up to go up one level, command down to go down one level (open). Always felt like I had to move my hands more on Windows.

“O” as in “Open”. It’s the same shortcut in every app.

Oh boy, Windows does the same thing (regarding hidden files to sort out FS stuff), but they hide it (just like Apple). We WSL2 users found out the hard way and Microsoft refuses to offer a solution. Relevant issue: https://github.com/microsoft/WSL/issues/7456

Apologies for my post getting snipped, The latest iOS beta keeps randomly eating my text. Apple is aware.


Unless I'm misunderstanding something, these files don't actually exist separately but reside in NTFS's alternate data streams, and only display as separate files in WSL because ext4 doesn't support ADS, right?

Which then is the same with Apple's ._ files
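
You can inspect those streams from PowerShell; a sketch (file name is an example):

  # list every NTFS stream attached to a downloaded file
  Get-Item .\setup.exe -Stream *
  # strip the Zone.Identifier stream that marks it as downloaded
  Unblock-File .\setup.exe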

Unix file systems are not sufficient, you need a layer on top.


Which is annoying, as I liked the NeXT file manager.

Agreed on dual-pane file managers though. I used them on Windows from Windows 3 onwards, and various macOS ones, except the writers of the macOS ones had nice early versions and then decided to rewrite them as memory hogs that stopped working (e.g. Cocoatech Path Finder). It's simply a file browser; don't keep adding stuff.

