The 9P protocol is designed to deal with arbitrary blocking, as synthetic filesystems commonly block reads until some data is present. There are also automatic reconnection proxies (aan), and when a mountpoint is entirely unresponsive (lost connectivity), you can just unmount it. The pending calls will all fail gracefully.
The abstraction works well. The main issue is when you lose access to your root device. Just like on a Linux machine booted from NFS, the connection to the root disk must be stable (how do you unmount without being able to read the binary?). All other mounts can be arbitrarily flakey.
> how do you unmount without being able to read the binary
We have enough RAM now that there's no reason (even on embedded devices!) to ever unload the initrd/initramfs. An OS should be configurable such that you can unmount your rootfs whenever you like, automatically get de-pivot-root'ed back into your initramfs, and mount the rootfs again from there.
What's your definition of embedded? It's not embedded if you're running a full-blown OS on a regular ARM chip.
Anyway, you could make a ramdisk, put things in it and put it in your path. I doubt that a Linux box will survive a dead root without hardcore sysadmin skills, but it should work on Plan 9. The initrd is not meant to stay around after boot, though.
It doesn't put more trust in the network than NBD or NFS does. You don't access your root filesystem over a 3G connection or a flakey VPN on any platform. But, if you mount a remote filesystem that isn't root, disconnecting isn't fatal, you just have to reconnect when the connection is back (aan can handle that).
One of the plan9 users put a lot of work into a terminal boot disk that supports rootfs over flakey connections.
Also, last I checked, 9front supports mounting root over aan, which handles roaming between networks and flakey connections.
There is no "them" or "us". "They" are just people. They are rarely even people we disagree with - they're innocent civilians who were unlucky enough to live in what turned into a war zone, or soldiers simply following orders to protect what they consider their home. The decision is whether or not to take hundreds, thousands, or even millions of lives exactly like that of the navy officer with the launch code embedded in his chest. Lives that had nothing to do with the conflict at all.
The "bad guys", the ones you might actually refer to as "them", are sitting somewhere comfortable, far out of reach of any of this. Their only interaction in this is that they pressed a button.
It's scary that even here on HN, a comment like this would get downvoted. Seeing the world through a lens of "us" and "them", and seeing "them" as less worthy of our humanity and consideration, has led to more human atrocities throughout history than almost anything else.
It's pretty clear that the grandparent meant that, for better or worse, ingroup-outgroup bias is a very real thing in human psychology and sociology, and there wouldn't be many wars without demonization of the outgroup. I saw no indication that they actually thought this is a good distinction to make. Is-ought problem.
You can label whatever group you find yourself in to be "us", and the rest as "them", but they're no different than you. You're just being selfish in valuing your life more than theirs.
This selfishness is of course natural for any human, but that does not make it better.
It is naive to think we are all one happy human race. I wish it were so, but it isn't. The "others" will have no problem killing you and dancing on your grave. I guess that is why the saying goes, "If you want peace, prepare for war."
It doesn't matter which army is battling me, whether it is Russian, Chinese or American. They will have no problem killing me, because that's their job, and there is war. Dancing on my grave would be difficult, because that would require retrieving my body.
I'm not saying that war is never necessary, but pretending that you're fighting zombies thirsting for nothing but your blood is idiotic. The soldiers you're fighting joined their army to protect their people. The people themselves most likely just want to passively continue with their lives, not understanding what the war is about, knowing nothing but the date of the first bomb. The people who started the war are sitting somewhere cozy, out of reach.
You're not reading the same history books, obviously.
Because again and again, humans are well-documented to NOT dance on each other's graves. We consistently pull back from the brink of destroying the "other" and not only let them live, but help them rebuild.
Armchair psychology: I suppose the media can be partly blamed for this view. Bad news sells, and for faraway places, stereotypically, only the worst of the news is deemed worth reporting. Thus, people in faraway places become associated with fear and chaos.
Also: baseline human psychology is tribalistic and violent. We need education and culture to behave like modern humans - and sustenance and safety.
I have no one labelled as "my own", at least not in the nationalist/patriotic sense. My family and friends may go into that category if we really must, but that's it.
If you put a gun to my head, I'm certainly not going to care if you're my local baker or a North Korean soldier, just to name a random nation. Hell, I'll probably have sympathies for the soldier, as they are just doing what they were told, like any other soldier would. Getting angry at them would be pointless, although I'd of course do my best to fight for survival, and if I came out the winner, I would be upset that the life I had to take was not the life that was to blame.
For the baker, it would be hard to tell if the hand was forced, or if the idea of slaughter was of their own making.
I don't like Java, but don't underestimate the performance of the JVM. Unless you're a crazy perf wiz, your average C code won't beat your average Java code. It's fast enough for short processes, and for long processes the JIT is pretty darn good. Also note that a JIT can make runtime assumptions and optimize code based on what is currently the case, which an AOT compiler cannot.
It takes a lot more than a toolchain to write fast code.
I'd like to point out, however, that installing a new version of git is not in any way blocked by either Microsoft or Apple. If you install git with homebrew, you get the newest version, which will take precedence over the Xcode variety unless you mess with your $PATH. Tricking you into using the old version would require execution rights on the machine. You can also remove the /usr/bin/* binaries if you boot the machine with the system integrity features disabled; you can boot back to normal after the modification.
It is inconvenient that these dev tools are not updated frequently (bash, zsh, and many other command line tools are terribly out of date), but it is not terribly difficult to install a fresh version in parallel.
> installing a new version of git is not in any way blocked by either Microsoft or Apple. If you install git with homebrew, you get the newest version, which will take precedence over the Xcode variety unless you mess with your $PATH
Sure, but the fact remains that OS X deliberately hides things from you, and you have no way of knowing that you've found all the hidden things. And for a developer, I think that's unacceptable. I want full root access to my development machine, not a dumbed down version of "root" that doesn't let me mess with certain things. This is one of the key reasons why I will not use a Mac as a development machine.
It doesn't hide anything related to git. The binary in /usr/bin is there so that xcode-select works, and it points to /Applications/Xcode.app/Contents/.../bin/git. It's not hidden, it's merely a convenience so that you actually get git and stay at least a little up to date. Installing a new version, modifying your path, and so on is done just like on any other machine.
The binaries are protected by "rootless mode"/System Integrity Protection, but you can disable this and get full root access to your development machine. Just run "csrutil disable" from recovery mode (It wouldn't really help if you could disable it from a running system).
The only locked-down things on OS X are proprietary GUI stuff, such as windowserver or some menu bar APIs. Regular operation is not locked down.
The link between /usr/bin/git and /Applications/Xcode.app/Contents/.../bin/git is hidden.
I don't know how you would discover this: ls indicates /usr/bin/git is a regular file rather than a symlink; stat -f "%i" says the two files have different inodes, so they're not hardlinked.
What is the nature of the link, and how would you find this if you didn't already know?
man xcode-select - It's right there in the manpage.
/usr/bin/git is a "toolshim" that effectively calls "xcrun git" (it actually calls xcselect_invoke_xcrun, from /usr/lib/libxcselect.dylib, if you really want the details - this can be found by inspecting the binary). xcode-select's manpage tells you that these shims call the respective binary in the active developer directory, whereas xcrun's manpage describes its capabilities in more detail.
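For illustration only, a hypothetical C sketch of roughly what such a shim boils down to. This is not Apple's actual code (which goes through xcselect_invoke_xcrun, as described above); it just approximates the observable behavior by re-execing xcrun:

    /* Hypothetical toolshim sketch: re-exec the tool through xcrun,
       which resolves it inside the active developer directory. */
    #include <libgen.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        char *args[argc + 2];
        args[0] = "/usr/bin/xcrun";
        args[1] = basename(argv[0]);      /* e.g. "git" */
        for (int i = 1; i < argc; i++)    /* forward the original arguments */
            args[i + 1] = argv[i];
        args[argc + 1] = NULL;
        execv(args[0], args);
        perror("execv");                  /* reached only if exec fails */
        return 1;
    }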
Stop with this "because." Of course! OS X and Windows are deliberately made to be popular operating systems. The "duh" bell just isn't loud enough, is it?
The old saying that Linux is for geeks and other OSes are for regular computer users is still quite true. The truth is, half of the time I don't touch the root of OS X. Why would I need to? I use brew to install my git, and I'm good. Similarly, I don't modify my Ubuntu workstation unless I have a reason to. I can't remember the last time I needed to edit anything really special to customize my Mac, let alone attach a debugger to a running process on the Mac. Well, my work doesn't involve troubleshooting Mac software, so there is no incentive. I spend more time customizing my Vim than customizing OS X.
So back to reality: use what fits your needs; your mileage may vary. I just need a computer with a Linux-like environment so I can navigate shit around and complete my work on a nice, polished machine.
It's interesting, lately I've been changing my view on this and looking at everything outside of my home directory as the realm of my OS maintainer (Apple or some Linux distro's community). I configure my package managers (brew, pip, gem…) to install to my home directory and set up my PATH accordingly. This way, I don't have to worry at all about the state of my OS (e.g. whether an upgrade will damage some customization).
In theory I could even switch operating systems completely (within the *NIX family) with minimal work.
> lately I've been changing my view on this and looking at everything outside of my home directory as the realm of my OS maintainer
As an ordinary user, this can be perfectly reasonable. I specifically said "as a developer" to make clear that my requirements in this respect are not necessarily the same as those of an ordinary user. Developers need a level of control over their machines and configurations that ordinary users, as a rule, don't.
As a developer, and occasionally of the kernel variant, I also prefer to keep the OS stock. It's convenient, and allows for very speedy reinstalls. It also makes my environment identical across operating systems.
Having control does not mean you have to change anything. Also, most developers are regular users.
> What kinds of things do you count as needing "that level of control"?
As a developer, I want complete control over everything on my development machine. It's not enough to just control my home directory. I want to be able to control exactly what system services are running, so that I can test services in the same environment they'll be running in, in production. I want to be able to control exactly what versions of things are installed as system binaries, not just in my home directory, so that I can be sure there is no possibility of a version being there that I don't want there. I want to be able to control exactly what device drivers and kernel modules are running. And so on.
Perhaps not all developers take this attitude; it probably depends on what kinds of things you are developing.
I understand this point of view, and this was also my usual approach. The reason I stopped was that the result was unmanageable systems. If you wanted to change something that you had configured at the system level a year before, or even just wanted to replicate it, it became a major overhead. More creative modifications could also sometimes cause conflicts during larger upgrades.
So instead, I keep my systems small. I do not install anything I don't need, and do not touch something that is not necessary. On my laptop, I have 3 "full" applications, 15 convenience tools from homebrew (bash, git, nmap, ...) and 3 kernel extensions (including one of my own) installed. Nothing else that counts as a system-wide modification. Most of my servers are completely stock Alpine, Arch or Ubuntu systems, only running static binaries I provided.
All this saves me from dependency hell, and means that I do not need to hesitate to wipe a machine for whatever reason. It takes me 5 minutes to set a new one up, including my own local work environment.
There's a difference between having control (which I have, including on my OS X machine) and actually exercising it.
> The reason I stopped was that the result was unmanageable systems.
I haven't had this issue; but I don't leave things that I test sitting around on my development machine when I'm done testing them. So the "baseline" configuration of my development machine doesn't change much; it has the basic development tools I need and that's it. In fact, I'm not sure I see how the kind of development system you're describing is that different from the kind of development system I was describing.
> having control (which I have, including on my OS X machine)
How do you deal with the issue that prompted the original article discussed in this thread? (I assume you use the csrutil disable method that you described elsewhere in the thread?)
Imagine that you are corporate IT managing a fleet of developers with Macs. You can push a newer version of git to them, and you can even change their default PATH so that the version of git you pushed comes before the git that ships with Apple. But you still cannot remove the one that comes with Apple, and you cannot prevent it from being used.
Well, you can, but it's inconvenient and requires manual intervention on each machine. I would argue, though, that if you have developers who intentionally circumvent the version of git you provided them with, despite being told that it's there for security reasons (and is newer, better, flashier and all), then you're dealing with people who can't be helped, and shit will happen regardless.
I'm not arguing that it's okay that Apple bundles an affected version of git, but if they start undoing what you did to protect them, I don't think they can be helped. I'm a bit pessimistic in this sense, but I keep getting surprised by the kind of crap that makes its way onto people's machines, sometimes machines of people who really should know better.
That might be unintended. For example, some software might stupidly hardcode /usr/bin/git instead of using the default one from PATH, or the PATH itself is actually quite tricky on a Mac (bashrc controls what you get by default in bash, but the PATH in GUI applications is controlled by other files).
Ah, yeah, stupid applications are hard to guard against, but stupid applications might/will have their own share of code execution bugs, which you also have to control. Everything sucks.
As for the environment, that's the same on any UNIX, though. .bashrc is run only if you start bash. Getting an Ubuntu distribution to set up your environment variables in GUI applications certainly won't be fixed with a .bashrc. It might inherit /etc/profile, if you're lucky.
For OS X, launchd handles the environment by simply being the one responsible for starting the applications that you inherit your environment from (such as Finder, Dock and Spotlight), and .bashrc is just a file that bash executes itself that might set additional environment variables. This is not unlike a Linux setup, where you only inherit environment variables written in .bashrc if you started the application from bash.
(OS X does have a path management system for shells in the form of path_helper and /etc/paths.d/, but that's run through the profile, which won't affect GUI applications.)
I'm not really trying to defend OS X here, other than pointing out that if you cut out the proprietary GUI stuff, it's basically just your run-of-the-mill custom UNIX dist. As a long-time Linux user (I use a Mac as a laptop, because screw trying to get Linux working perfectly on a laptop), I find everything to be an equal pain in the ass to deal with. systemd or launchd, X11 or windowserver, Finder, Nautilus, Konqueror or even Windows Explorer - they all suck. Pick your poison.
> Ah, yeah, stupid applications are hard to guard against
Stupid applications aren't the ones to worry about. As an attacker, if I know that every Mac has a git vulnerability, and all I have to do is hardcode a path to it, then I'm going to do that.
This seems to be a recurring topic: If you're writing an application, why bother hardcoding a path to a git version with a known RCE? You're already running on the machine.
Hell, if you want to cover your tracks, bundle a random tool or lib that you know has an issue and exploit that. It'll be much more stable than relying on a local binary.
> Tricking you into using the old version would require execution rights on the machine
Not really, they just need to be able to modify a single file owned by the user (.bashrc), or just the current shell session (again, with a variable owned by the user). Re-ordering `/usr/bin` and `/usr/local/bin` in $PATH isn't that hard.
With the ability to modify .bashrc, you have execution rights on the machine. If not before, to modify the file, then after, because you modified a shell script that is automatically run all the time. The machine is already pwned, and one privilege escalation bug away from being completely lost.
With this level of privileges, you can, on any machine, mask existing binaries with whatever you want. It is hardly related to the issue with git.
> With the ability to modify .bashrc, you have execution rights on the machine.
You have the user's execution rights only; you don't have root access.
> The machine is already pwned, and one privilege escalation bug away from being completely lost.
This is true of any program the user runs. The fix is for the user to not run untrusted code. Trusted code should not be modifying .bashrc without the user's knowledge.
> With this level of privileges, you can, on any machine, mask existing binaries with whatever you want.
And the user can unmask them just as easily. It's not at all the same as having root access.
> You have the user's execution rights only; you don't have root access.
That is also what I said - code execution. Like the git RCE gives you. But, it would be rather redundant to use code execution as a local user to obtain code execution as a local user, no? With .bashrc, it doesn't matter how new your git is, there's no reason to exploit it.
Also, privilege escalation bugs.
> This is true of any program the user runs. The fix is for the user to not run untrusted code. Trusted code should not be modifying .bashrc without the user's knowledge.
Exactly my point. If the .bashrc has been modified in an evil way, as you suggested, you're screwed because someone is executing code as your user, which is usually exactly what they need. Add one privilege escalation, and they can do whatever they want, but that's not really necessary, depending on what they want to do.
In essence, if I can write to a file of my liking on your machine as your user, then I have code execution rights as your user (potentially with a time delay, depending on what I tamper with).
> And the user can unmask them just as easily. It's not at all the same as having root access.
Exactly, like installing a different version of git with homebrew and masking the old one. Also, privilege escalation bugs. I think you're forgetting how common they are. A decent set of privilege escalation bugs is part of any decent hacker's toolkit.
If someone modifies .bashrc, as you initially suggested ("Not really, they just need to be able to modify a single file owned by the user (.bashrc), or just the current shell session (again, with a variable owned by the user)"), then the git RCE is redundant, as you already have obtained code execution on the machine.
Code execution how? How are you going to execute code on my machine if you don't have physical access to it? That's why the RCE makes a difference.
> it would be rather redundant to use code execution as a local user to obtain code execution as a local user, no?
As I understood the comment that started this subthread, it was talking about using a remote code execution vulnerability to modify .bashrc. Nobody, as far as I can tell, in this discussion is talking about having local (i.e., physical) access to the machine.
The original comment was about circumventing a user's measure to use a new git from homebrew by having the bad guy modify .bashrc. If the bad guy has that ability, he does not need an RCE. My argument being that using homebrew to install a new git is fine. The counterarguments being "evil applications" and "evil .bashrc", which already means that you're executing the bad guy's code.
> And the user can unmask them just as easily. It's not at all the same as having root access.
I don't think so. Running as your user, I could add an entry to your .bashrc that execs your shell with an injected shared library that hides itself (e.g. any child process that reads your .bashrc sees an unmodified version). Same for GUI apps, by touching other files. The only way to detect it would be to log into another account or single user mode, just like a real rootkit may only be detectable if you use another system to examine the disk.
Access to a user's account is no less damaging to them than root access — the damage just doesn't extend to the rest of the machine which, in many cases, doesn't matter.
I would argue it's not even that difficult. In the README.md, put some kind of note that there's a bug in the current version of git (and maybe point to some random Google group posting) and mention that on OS X it's better to use the Xcode version.
This is indeed an issue. I would like to think that a person capable of using git would have enough of a critical mindset not to do what random people on the internet tell you to do, but... StackOverflow kinda proves the opposite.
I don't think you'll ever be able to help people who follow internet advice blindly; for these people, it might be better not to bundle anything at all.
No, the dev tool binaries are magic binaries that check if the "Command Line Tools" package is installed, and if not, asks you if you want to download it. They're protected by System Integrity Protection, so you have to temporarily boot with that disabled to remove or modify them.
I think it's fairly common to reinstall a lot of what OS X comes with through homebrew, simply because OS X's versions tend to be annoyingly outdated (like bash 3.2.57 vs. 4.3.42 from homebrew).
A little, but you have enough to get started. Rather an old bash than no bash. First task on OS X is usually to use the bundled curl and ruby to get homebrew. :)
Hah, and that might not even take that long at the current rate! But we still got a better terminal emulator by default over here! Terminal.app is a pretty darn good terminal emulator, although iTerm2.app is better. That is actually one of the things I like about OS X - I have not found a terminal emulator for Linux (and certainly not for Windows) that compares with iTerm2. Not in speed, nor in interface.
A rewrite in C++ would not make it any safer. Rewriting it to make it easier to understand could make it safer, but you do not need a different language for that.
If you can change the name_path struct to a standard STL data structure (std::vector<std::string> might work just fine), you can easily rewrite the function such that all memory allocation and all exploitable memory access is delegated to STL containers. If your STL std::string has a buffer overflow issue, you’re of course still in trouble, but the same is true if there are bugs in the STL/implementation of Go/Rust/Python/Java/Haskell…
Something like:
    std::string path_name(std::vector<std::string> const& dirs,
                          std::string const& name) {
        std::string p;
        for (auto const& dir : dirs) {
            p += dir;
            p += '/';
        }
        p += name;
        return p;
    }
should work. If you feel like it, you can also add something like:
    std::size_t len = name.size();
    for (auto const& dir : dirs) { len += dir.size() + 1; }  // +1 per '/' separator
    p.reserve(len);
Of course, len may overflow, but even if it does, all the harm that causes is that the string will have to reallocate memory during growth until further memory allocation fails and throws std::bad_alloc.
Right, and you can do the same in C, using libraries that provide this functionality wrapped up as well. There are plenty of std::string- and std::vector-like containers that handle the magic for you. But even then, you can work with struct { size_t len; void* data; } to get your vector, and replace void* with char* to get a string. A simple vector and a simple string are in no way difficult to implement, and many implementations exist.
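A minimal sketch of such a length-carrying string in C, with made-up names, just to show how little is needed:

    /* Minimal sketch of a length-carrying C string; names are made up.
       A zero-initialized str_t counts as a valid empty string, and once
       data exists it stays null-terminated for interop with C APIs. */
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        size_t len;
        char  *data;
    } str_t;

    /* Append src to s, growing the buffer. Returns 0 on success. */
    static int str_append(str_t *s, const char *src) {
        size_t n = strlen(src);
        char *p = realloc(s->data, s->len + n + 1);
        if (!p)
            return -1;
        memcpy(p + s->len, src, n + 1);   /* also copies the terminator */
        s->data = p;
        s->len += n;
        return 0;
    }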
I'm just trying to point out that the convenience of some C++ standard library features is not exclusive to C++, and that C++ is not a "memory safe" language by any meaning of the word.
The difference is that using STL containers such as std::string or std::vector is very much the default in C++ (much the same way 'safe' code is the default in Rust), whereas you have to do some manual work in C to get them. The result is that using std::string and std::vector here is the natural solution in C++, whereas likely very experienced C programmers stuck to the manual approach.
Oh, yes, sorry, that should of course have been `dir`, not `dirs` (and the formatting was horrible, too). Though I suppose it would have been a compile-time error, so at least it shouldn't be exploitable :')
Yes, it would, because this is basically string processing, and C is the worst possible language one could use for that. I've detailed in another comment how std::string could be used.
You're talking about naked char[]. You can just as easily make a struct { int len; char* str; } in C, which, combined with the "n" variants of the string operations, would work just fine with common tools.
Again, C++ does not make anything safer, and its types can easily be replicated in any other language.
Being possible doesn't mean the majority of C developers make use of it.
The "n" variants are a joke in terms of security, even the C99 annex, that was demoted to optional in C11.
I call them a joke, because tracking the pointer and length separately is hardly an improvement in terms of security.
The only improvement the "_s" variants added is that the terminating null character is always written, unlike strncpy, which only adds it if there is enough space.
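To make that concrete, a small self-contained demonstration of the strncpy pitfall and the usual manual fix:

    /* strncpy does not null-terminate when the source doesn't fit. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buf[4];

        strncpy(buf, "abcdef", sizeof buf);
        /* buf now holds {'a','b','c','d'} with no terminator; printing
           it with %s would read past the end of the array. */
        printf("terminated: %s\n",
               memchr(buf, '\0', sizeof buf) ? "yes" : "no");

        buf[sizeof buf - 1] = '\0';       /* the usual manual fix */
        printf("truncated copy: %s\n", buf);
        return 0;
    }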