I haven't had it fail once, and I've created many addresses. I do have a custom domain hooked up to my iCloud account; whether that makes any difference, I don't know.
There are places where people will help you with your photography problems (dpreview.com), or who will get close to you and enjoy your avatar and praise you (VRChat), and then there are places where people believe there are two kinds of people and you'd better know what kind I am (Twitter clones).
IMO even a home-level, per-user tmp directory isn't ideal (though it is better). In a single-user environment, where malware is the biggest concern in current times, what difference does it make if it's a process running under a different user or one that is running under your current user that is attacking you?
In other words, on many systems a home-level temp directory is virtually the same as /tmp: other than system daemons, all applications are started as a single user anyway.
And that might be a security regression. On servers you're spinning up most services at boot, and those should either be running fully sandboxed from each other (containerization) or at least as separate system users.
But malware doesn't necessarily need root or a daemon's user id to inflict harm if it's running as the human user's id and all temp files are in $HOME/.tmp.
What you really want is transient, application-specific disk storage that is isolated to the running process and protected, so that malware trying to attack another running application's temp files can't, because it doesn't have permission even when both processes are running under the same user id.
At that point malware requires privilege escalation to root first to be able to attack temp files. And again, if we're talking about a server, you're better off running your services in sandboxes when you can because then even root privilege escalation limits the blast radius.
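One building block in that direction is O_TMPFILE (a minimal sketch, assuming Linux 3.11+, a filesystem that supports it, and Python purely for illustration; it's not a full sandbox, since a same-uid process can still reach the fd via /proc or ptrace): the temp file never gets a name in the directory, so nothing else can open it by path.

    import os

    # Open an unnamed temporary file in /var/tmp: it has no directory entry,
    # so other processes can't find it by name; the data is reachable only
    # through this fd (or fds we deliberately hand out).
    fd = os.open("/var/tmp", os.O_TMPFILE | os.O_RDWR, 0o600)
    os.write(fd, b"scratch data visible only through this fd\n")
    os.lseek(fd, 0, os.SEEK_SET)
    print(os.read(fd, 64))
    os.close(fd)  # storage is reclaimed automatically, nothing to clean up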
> In a single-user environment, where malware is the biggest concern in current times, what difference does it make if it's a process running under a different user or one that is running under your current user that is attacking you?
In these systems, the responsibility passes to EDRs or similar. But neither a $HOME/.tmp nor /tmp matters in these scenarios. _Shared_ systems are where the concept of $HOME/.tmp might be more interesting.
> In a single-user environment, where malware is the biggest concern in current times, what difference does it make if it's a process running under a different user or one that is running under your current user that is attacking you?
Very true, and this is a real weakness of the UNIX (and Windows, even worse!) style security model in the modern environment. Android/iOS do a lot better.
So something crummy but usable-enough for experts (SELinux?) worse-is-better'd its way onto the Linux scene, and now it has matured enough that on the one hand it can't be displaced but on the other its model is ossified and can't be untangled or simplified. Makes sense.
I love Linux and many of the fruits of its messy evolution, but such fruits are certainly not all equally delicious. :(
"Modern" Windows apps actually have a sandbox (which includes a private temporary folder), and require permissions to access pretty much anything outside. This is all implemented in terms of the existing Win32 security model, fundamentally.
In principle, there's nothing precluding e.g. having a separate user per app on Linux, either...
I'm guessing, but I would think that the idea is to have all the junk in one place so that it can be safely cleared at startup and excluded from backups.
If the per-user tmp files were placed in /tmp/${USER}/, that would achieve the same goal.
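Something roughly like this (a sketch only; the ownership/mode check matters because another user could pre-create or symlink that path in a world-writable /tmp):

    import getpass, os, stat

    # Create /tmp/$USER with mode 0700, then verify it's a real directory
    # that we own with the expected mode before trusting it, so a planted
    # directory or symlink can't intercept our temp files.
    user_tmp = os.path.join("/tmp", getpass.getuser())
    try:
        os.mkdir(user_tmp, 0o700)
    except FileExistsError:
        pass
    st = os.lstat(user_tmp)
    if not stat.S_ISDIR(st.st_mode) or st.st_uid != os.getuid() \
            or stat.S_IMODE(st.st_mode) != 0o700:
        raise RuntimeError(f"refusing to use untrusted {user_tmp}")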
You'd need to pin pages in physical memory to guarantee it stays in physical memory. What happens if an 'attacker' (or accidental user) exceeds available physical memory? OOM-kill other applications? Just refuse to accept temp data, leading to failures in operations requested by the user or system?
Pages in physical memory are not typically zeroed out upon disuse. Yes, they're temporary... but only guaranteed temporary if you turn the system off and the DRAM cells bleed out their voltage.
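For reference, pinning looks something like this (a minimal sketch calling mlock(2) through ctypes; it counts against RLIMIT_MEMLOCK, and as noted it does nothing about the data lingering in DRAM unless you overwrite it yourself):

    import ctypes, ctypes.util, mmap

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

    size = 4096
    buf = mmap.mmap(-1, size)  # anonymous mapping
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    # Lock the pages so they can't be swapped out; fails with ENOMEM/EPERM
    # if RLIMIT_MEMLOCK is too small.
    if libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(size)) != 0:
        raise OSError(ctypes.get_errno(), "mlock failed")
    buf[:6] = b"secret"
    buf[:size] = b"\x00" * size  # overwrite before unlocking if remanence matters
    libc.munlock(ctypes.c_void_p(addr), ctypes.c_size_t(size))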
By default a tmpfs has a really low RAM priority, so the OS will try to move it to swap space if memory gets low. tmpfs size is specified on creation of the tmpfs (and can't be larger than the total memory available, which is swap + RAM), but it's only "occupied" as files begin to fill the tmpfs.
If it gets too full for regular OS operations, you get the fun of the OOM Killer shutting down services (tmpfs is never targeted by the OOM Killer) until the entire OS just deadlocks if you somehow manage to fill the tmpfs up entirely.
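You can watch that distinction from userspace, by the way (illustrative sketch; assumes /tmp really is a tmpfs on the machine):

    import os

    # On a tmpfs, f_blocks reflects the size= limit given at mount time,
    # while the used figure only grows as files actually fill it.
    st = os.statvfs("/tmp")
    total = st.f_blocks * st.f_frsize
    used = (st.f_blocks - st.f_bfree) * st.f_frsize
    print(f"tmpfs limit: {total / 2**30:.1f} GiB, used: {used / 2**30:.1f} GiB")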
Only if memory gets low; otherwise it'll stay in RAM and give the benefit GGP intended. IIRC tmpfs data shouldn't be evicted to swap just to make more room for cache, or because an app has requested a large chunk of memory it isn't using; it should only be evicted to make room for application pages that are actively in use.
Normal case: tmpfs data stays in RAM
Worst case: it is pushed to swap partitions/files, which is no worse than it being in a filesystem on physical media to start with (depending on access patterns and how swap space is arranged it may still be a little more efficient).
It isn't quite the same as /tmp being on disk but staying in cache under normal loads, because with an on-disk /tmp the data will usually get written to disk even if it is only ever read back from cache, and the cached disk data will be evicted to make room for caching other data, where tmpfs data is less likely to be.
True overall, but I think it makes a lot of sense to evict rarely used data in tmpfs to swap, so that the DRAM it occupied can be used for valuable caches instead of holding some obscure temporary data that will be rarely or never accessed.
Rarely used data that got evicted then behaves more or less like a normal /tmp filesystem when it does eventually get accessed, i.e. it gets read in from disk, while other data still gets all the benefits from tmpfs (e.g. ephemerality).
(If you take the thought experiment to its logical conclusion, you'll anyway end up in transparent hierarchical storage a la AS/400, where all data is just addressed by a single pointer in a very very large address space and the OS decides where that currently points to, but let's stay within the confines of what we're mostly used to...)
Using /tmp on a regular file system has almost the same behavior because the kernel has a file system cache… if you’re using the file, it will remain available in RAM. There are some subtle differences, but I’ve seen enough benchmarks around this to have realized that tmpfs doesn’t really have an impact.
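A crude way to see this for yourself (rough sketch only; the paths and the 64 MiB size are arbitrary, and a serious benchmark would need to account for fsync and dropped caches):

    import os, time

    def roundtrip(path, data):
        # Write the file, read it straight back, and delete it.
        start = time.perf_counter()
        with open(path, "wb") as f:
            f.write(data)
        with open(path, "rb") as f:
            f.read()
        os.unlink(path)
        return time.perf_counter() - start

    data = os.urandom(64 * 2**20)  # 64 MiB of incompressible data
    print("disk-backed /tmp:", roundtrip("/tmp/bench.bin", data))
    print("tmpfs /dev/shm:  ", roundtrip("/dev/shm/bench.bin", data))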
That depends on how you view swapspace; on most devices, swapspace is either created as a separate partition on the disk or as a file living somewhere on the filesystem.
For practical reasons, swapspace isn't really the same thing as keeping it in an actual storage folder - the OS treats swapspace as essentially being empty data on each reboot. (You'd probably be able to extract data from swapspace with disk recovery tools though.)
On a literal level it's not the same as "keep it in RAM", but practically speaking swapspace is treated as a seamless (but slower) extension of installed RAM.
Well I guess you could tell Linux not to use some memory addresses using the BadRAM feature, then set up an `mtd` device over those memory addresses and create a RAM-based block device, then use `cryptsetup` to encrypt it. If your Linux box is headless and you have a GPU whose RAM is mostly sitting unused, you could use the VRAM.
> *nix started from a better _initial_ posture as it was multi-user, permissioned, and network-aware from the start (vs. corporate MS-DOS => single user => GUI => networked)
Windows NT started as a multi-user, permissioned, and network-aware OS. The team that built NT came from DEC, not the MS-DOS team.
Windows Me was the last version of Windows that had any form of DOS underpinnings.