Hacker News new | past | comments | ask | show | jobs | submit | nullindividual's comments login

I haven't had it fail once, and I've created many addresses. I do have a custom domain hooked up to my iCloud account; whether that makes any difference, I don't know.


> I'm at a crossroads and unsure of how to proceed.

Have you considered that perhaps online communities are simply unhealthy and withdrawing (at least to lurking) is the best course of action?


I think some are healthy and some aren’t.

There are places where people will help you with your photography problems (dpreview.com), places where people will get close to you, enjoy your avatar, and praise you (VRChat), and then there are places where people believe there are two kinds of people and you'd better know which kind I am (Twitter clones).


Cox Automotive generated the data[0] sourced in the second "paragraph" from the original article.

[0] https://www.coxautoinc.com/wp-content/uploads/2024/10/Kelley...


Doesn't look like this is a ping[0]! Which is good. Rather it is a socket stream connecting over tcp/443. Ping (ICMP) would be a poor metric.

[0] https://github.com/mda590/cloudping.co/blob/8918ee8d7e632765...
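For the curious, the measurement boils down to timing a bare TCP handshake. A minimal sketch of that technique (the hostname in the comment is illustrative; this is not the exact code from the repo):

```python
import socket
import time

def tcp_connect_latency(host, port=443, timeout=5.0):
    """Time a bare TCP handshake. No TLS is negotiated and no payload
    is sent, so only the SYN/SYN-ACK round trip is measured."""
    start = time.monotonic()
    sock = socket.create_connection((host, port), timeout=timeout)
    latency = time.monotonic() - start
    sock.close()
    return latency

# e.g. tcp_connect_latency("dynamodb.us-east-1.amazonaws.com")
```

Because the socket is closed immediately after connecting, the TLS handshake cost never enters the number.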


Ping is synonymous with echo-request, which is largely transport-agnostic.

But you're right.


Why 443? Are you assuming SSL here? Serious question; I'm not sure. But if it is, wouldn't it be hard to disregard the weight of SSL in the metric?


The code closes the connection immediately after opening a plain TCP socket, so no SSL work is done. Presumably 443 is just a convenient port to use.


tcp/443 is likely an open port on the target service (DynamoDB, based on the domain name). TLS is not involved.

ICMP ECHO would be a bad choice as it is deprioritized by routers[0].

[0] https://archive.nanog.org/sites/default/files/traceroute-201...


The script connects to the well-known 'dynamodb.' + region_name + '.amazonaws.com' server, which expects HTTPS.


Not any longer. They now use a 105 mm howitzer, among other means.

https://wsdot.wa.gov/travel/operations-services/avalanche-co...


Why not both, like Windows?

$HOME/.tmp for user operations and /tmp for system operations?

EDIT: I see from other posters it can be done. Why the heck isn't this the default?!


IMO even a home-level, per-user tmp directory isn't ideal (though it is better). In a single-user environment, where malware is the biggest concern in current times, what difference does it make if it's a process running under a different user or one that is running under your current user that is attacking you?

In other words, for many systems a home-level temp directory is virtually the same as /tmp, since other than system daemons, all applications are started as a single user anyway.

And that might be a security regression. On servers you're spinning up most services at boot, and those should either run fully sandboxed from each other (containerization) or at least as separate system users.

But malware doesn't necessarily need root or a daemon-process user ID to inflict harm if it's running as the human user's ID and all temp files are in $HOME/.tmp.

What you really want is transient, application-specific disk storage that is isolated to the running process and protected, so that any malware that tries to attack another running application's temp files can't, since it doesn't have permission even when both processes run under the same user ID.

At that point malware requires privilege escalation to root first to be able to attack temp files. And again, if we're talking about a server, you're better off running your services in sandboxes when you can because then even root privilege escalation limits the blast radius.
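On POSIX systems, the closest stock approximation of that per-application isolation is a freshly created directory with owner-only permissions, which the standard library already provides. A sketch (the `myapp-` prefix is just an example):

```python
import os
import tempfile

# mkdtemp creates the directory with mode 0o700 regardless of umask,
# so only the creating user can read, write, or traverse it. Processes
# running as *other* users can't touch it; anything running as the
# same user still can, which is exactly the limitation discussed above.
scratch = tempfile.mkdtemp(prefix="myapp-")
print(oct(os.stat(scratch).st_mode & 0o777))
os.rmdir(scratch)  # clean up when done
```

True per-process isolation (safe even against same-user processes) still needs OS support such as namespaces or a sandbox.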


> In a single-user environment, where malware is the biggest concern in current times, what difference does it make if it's a process running under a different user or one that is running under your current user that is attacking you?

In these systems, the responsibility passes to EDRs or similar. But neither a $HOME/.tmp nor /tmp matters in these scenarios. _Shared_ systems are where the concept of $HOME/.tmp might be more interesting.


> In a single-user environment, where malware is the biggest concern in current times, what difference does it make if it's a process running under a different user or one that is running under your current user that is attacking you?

Very true, and this is a real weakness of the UNIX (and Windows, even worse!) style security model in the modern environment. Android/iOS do a lot better.


> Android/iOS do a lot better.

They would if they were designed with the user's security in mind, instead of Google's/Apple's control.

But I disagree, they don't do better at all. Any software that wants to get access to everything just needs to insist.


Check out pledge/unveil under OpenBSD. You get isolated software while keeping your freedom.


I've recently packed some Linux software in flatpak. It's surprisingly good.

Not as good as a real capability-based access control, but quite good compared to the other things that are usable on Linux.


Why are capabilities restrictions not the norm when the concept is so old and seemingly so sound?


Linux doesn't have a good capability system.

And no good system makes it into Linux, because Linux already has a huge, well-supported one, plus some three other candidates pushing to get in.


So something crummy but usable-enough for experts (SELinux?) worse-is-better'd its way onto the Linux scene, and now it has matured enough that on the one hand it can't be displaced but on the other its model is ossified and can't be untangled or simplified. Makes sense.

I love Linux and many of the fruits of its messy evolution, but such fruits are certainly not all equally delicious. :(


They're really annoying to use.

Also the "UNIX ideal" is composable tools, which doesn't combine very well with any kind of sandboxing.


The thing about capabilities is that they compose very well.


"Modern" Windows apps actually have a sandbox (which includes a private temporary folder), and require permissions to access pretty much anything outside. This is all implemented in terms of the existing Win32 security model, fundamentally.

In principle, there's nothing precluding e.g. having a separate user per app on Linux, either...


I'm guessing, but I would think that the idea is to have all the junk in one place so that it can be safely cleared at startup and excluded from backups.

If the user tmp files were placed in /tmp/${USER}/ then that would achieve the same goal.
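A per-user directory under /tmp can be wired up so that well-behaved programs pick it up automatically via TMPDIR. PAM modules such as pam_tmpdir do this at login; the snippet below just illustrates the mechanism:

```python
import getpass
import os
import tempfile

# Create /tmp/$USER with owner-only permissions and point TMPDIR at it.
user_tmp = os.path.join("/tmp", getpass.getuser())
os.makedirs(user_tmp, mode=0o700, exist_ok=True)
os.environ["TMPDIR"] = user_tmp

tempfile.tempdir = None  # drop the cached default so TMPDIR is re-read
print(tempfile.gettempdir())
```

Anything that respects TMPDIR (which includes Python's tempfile, mktemp(1), and most C libraries) then lands its scratch files in the per-user directory.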


What system operations exist that need temp storage but shouldn't have a separate user anyhow?


I see where you're going with your question, but like Windows' Services/scheduled tasks, most of those 'users' don't have a $HOME folder.

Not to say they couldn't have one!


Services on Windows do have a home folder, e.g. in \Windows\ServiceProfiles\LocalService.


You'd need to pin pages in physical memory to guarantee the data stays in physical memory. What happens if an 'attacker' (or accidental user) exceeds available physical memory? OOM-kill other applications? Just stop accepting temp data, leading to failures in operations requested by the user or system?

Pages in physical memory are not typically zeroed out upon disuse. Yes, they're temporary... but only guaranteed temporary if you turn the system off and the DRAM cells bleed out their voltage.


By default a tmpfs has a really low RAM priority, so the OS will try to move it in swapspace if memory gets low. tmpfs size is specified on creation of the tmpfs (and can't be larger than the total memory available, which is swap + RAM), but it's only "occupied" as files begin to fill the tmpfs.

If it gets too full for regular OS operations, you get the fun of the OOM Killer shutting down services (tmpfs is never targeted by the OOM Killer), until the entire OS just deadlocks if you somehow manage to fill the tmpfs up entirely.
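For reference, that size cap is set at mount time. An illustrative /etc/fstab entry (the size and options are examples, not recommendations):

```
tmpfs  /tmp  tmpfs  size=2G,mode=1777,nosuid,nodev  0  0
```

The `size=` option caps how much of RAM+swap the mount may consume; `mode=1777` reproduces the sticky, world-writable permissions expected of /tmp.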


> OS will try to move it in swapspace if memory gets low

That defeats the idea GP presented.


Only if memory gets low; otherwise it'll stay in RAM and give the benefit GGP intended. IIRC tmpfs data shouldn't be evicted to swap just to allow more room for cache, or because an app requested a large chunk of memory it isn't using; only to make room for application pages that are actively in use.

Normal case: tmpfs data stays in RAM

Worst case: it is pushed to swap partitions/files, which is no worse than it being in a filesystem on physical media to start with (depending on access patterns and how swap space is arranged, it may still be a little more efficient).

It isn't quite the same as /tmp being on disk but sitting in cache under normal loads: disk-backed data will usually get written out even if it is only ever read from cache, and cached disk data will be evicted to make room for caching other data, where tmpfs data is less likely to be.


True overall, but I think it makes a lot of sense to evict rarely used data in tmpfs to swap, so that the DRAM it occupied can be used for valuable caches instead of holding some obscure temporary data that will be rarely if ever accessed.

Rarely used data that got evicted then behaves more or less like a normal /tmp filesystem when it does eventually get accessed, i.e. it gets read in from disk, while other data still gets all the benefits from tmpfs (e.g. ephemerality).

(If you take the thought experiment to its logical conclusion, you'll anyway end up in transparent hierarchical storage a la AS/400, where all data is just addressed by a single pointer in a very very large address space and the OS decides where that currently points to, but let's stay within the confines of what we're mostly used to...)


Use of /tmp on a regular file system has almost the same behavior because the kernel has a file system cache; if you're using the file, it will remain available in RAM. There are some subtle differences, but I've seen enough benchmarks around this to have realized that tmpfs doesn't really have an impact.
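A quick, unscientific way to see this for yourself; the numbers will vary wildly with hardware, kernel, and cache state, and /dev/shm being tmpfs is a Linux convention:

```python
import os
import tempfile
import time

def time_write(dirpath, size=16 * 1024 * 1024):
    """Write `size` bytes to a temp file in `dirpath` and time it."""
    data = os.urandom(size)
    start = time.monotonic()
    with tempfile.NamedTemporaryFile(dir=dirpath) as f:
        f.write(data)
        f.flush()  # pushes into the page cache / tmpfs; no fsync
    return time.monotonic() - start

for d in ("/tmp", "/dev/shm"):  # /dev/shm is tmpfs on most Linux systems
    if os.path.isdir(d):
        print(f"{d}: {time_write(d):.4f}s")
```

Because neither path issues an fsync, both writes typically land in RAM first, which is why the gap between a disk-backed /tmp and tmpfs is often much smaller than people expect.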


Yeah, that's why I think the prime feature of tmpfs is more ephemerality than anything else.


That depends on how you view swapspace; on most devices, swapspace is either created as a separate partition on the disk or as a file living somewhere on the filesystem.

For practical reasons, swapspace isn't really the same thing as keeping it in an actual storage folder - the OS treats swapspace as essentially being empty data on each reboot. (You'd probably be able to extract data from swapspace with disk recovery tools though.)

On a literal level it's not the same as "keep it in RAM", but practically speaking swapspace is treated as a seamless (but slower) extension of installed RAM.


> On a literal level it's not the same as "keep it in RAM"

I read the GP as 'literal level' in-RAM. If I interpreted that incorrectly, apologies to GP.


It may or may not be what the OP was talking about, depending on your threat model.


> exceeds available physical memory?

shm and memory mounts use half the available system memory by default, so this is not typically possible.

> are not typically zeroed out upon disuse

They're zeroed when they're reallocated.

> and the DRAM cells bleed out their voltage.

This occurs in less than a second in almost every room temperature environment.


Well, I guess you could tell Linux not to use some memory addresses via the BadRAM feature, then set up an `mtd` device over those addresses to create a RAM-based block device, then use `cryptsetup` to encrypt it. If your Linux box is headless and you have a GPU whose RAM is mostly sitting unused, you could use the VRAM.


I use this with a size of a few GB: https://wiki.archlinux.org/title/Tmpfs


> *nix started from a better _initial_ posture as it was multi-user, permissioned, and network-aware from the start (vs. corporate MS-DOS => single user => GUI => networked)

Windows NT started as a multi-user, permissioned, and network-aware OS. The team that built NT came from DEC, not the MS-DOS team.

Windows Me was the last version of Windows that had any form of DOS underpinnings.


> Suffering can be alleviated

Lol. What world are you living in?

Sure, I suppose if you give someone enough downers and obliterate their mind, suffering becomes a non-issue.

Don't pretend this world can alleviate 'all suffering'. It's simply false.


> How do you keep yourself excited and focused on growth?

Do something that connects you with the greater world. For me? Hiking and traveling in a State with a lifetime of natural wonders to explore.

Try nature. See how amenable it is to you.

(I also play video games... too much)

