Companies like Apple and Microsoft prevent you from modifying the software installed on your computer to improve your security.
Ironically, when they do that, they also make it difficult, impractical, or impossible for you to upgrade or disable vulnerable software (in this case, an old, insecure version of git with a remote-code-execution vulnerability).
People like Richard Stallman have been warning about this sort of thing for decades.
I'd like to point out, however, that installing a new version of git is not in any way blocked by either Microsoft or Apple. If you install git with homebrew, you get the newest version, which will take precedence over the Xcode variety unless you mess with your $PATH. Tricking you into using the old version would require execution rights on the machine. You can also remove the /usr/bin/* binaries if you boot the machine without the system integrity features. You can boot back to normal after the modification.
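A quick way to check which git actually wins (assuming Homebrew's default /usr/local prefix, which comes before /usr/bin in the stock $PATH):

    brew install git
    which -a git              # every git on the PATH, in order; /usr/local/bin/git should be first
    git --version             # should report the Homebrew version
    /usr/bin/git --version    # the Apple-shipped one, still present underneath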
It is inconvenient that these dev tools are not updated frequently (bash, zsh, and many other command line tools are terribly out of date), but it is not especially difficult to install fresh versions in parallel.
> installing a new version of git is not in any way blocked by either Microsoft or Apple. If you install git with homebrew, you get the newest version, which will take precedence over the Xcode variety unless you mess with your $PATH
Sure, but the fact remains that OS X deliberately hides things from you, and you have no way of knowing that you've found all the hidden things. And for a developer, I think that's unacceptable. I want full root access to my development machine, not a dumbed down version of "root" that doesn't let me mess with certain things. This is one of the key reasons why I will not use a Mac as a development machine.
It doesn't hide anything related to git. The binary in /usr/bin is there so that xcode-select works; it points to /Applications/Xcode.app/Contents/.../bin/git. It's not hidden, it's merely a convenience so that you can actually get git and stay at least a little up to date. There's nothing hidden about it, and installing a new version, modifying your PATH, and so on is done just like you would on any other machine.
The binaries are protected by "rootless mode"/System Integrity Protection, but you can disable this and get full root access to your development machine. Just run "csrutil disable" from recovery mode (It wouldn't really help if you could disable it from a running system).
The only locked down things on OS X are proprietary GUI stuff, such as windowserver or some menubar API's. Regular operation is not locked down.
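For reference, the csrutil dance looks like this (the disable/enable subcommands are only honored from the recovery environment):

    csrutil status      # from a normal boot: reports whether SIP is enabled
    # then boot into Recovery (Cmd-R), open Terminal, and:
    csrutil disable
    reboot
    # ...make your changes as root, then back in Recovery:
    csrutil enable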
The link between /usr/bin/git and /Applications/Xcode.app/Contents/.../bin/git is hidden.
I don't know how you would discover this: ls indicates /usr/bin/git is a regular file rather than a symlink; stat -f "%i" says the two files have different inodes, so they're not hardlinked.
What is the nature of the link, and how would you find this if you didn't already know?
man xcode-select - It's right there in the manpage.
/usr/bin/git is a "toolshim" that effectively calls "xcrun git" (it actually calls xcselect_invoke_xcrun, from /usr/lib/libxcselect.dylib, if you really want the details - this can be found by inspecting the binary). xcode-select's manpage tells you that these shims call the respective binary in the active developer directory, whereas xcrun's manpage describes its capabilities in more detail.
Stop with this "because." Of course! OS X and Windows are deliberately made to be popular, mass-market operating systems. The "duh" bell just isn't loud enough, is it?
The old saying that Linux is for geeks and other OSes are for regular computer users still holds quite true. The truth is, half of the time I don't touch the root of OS X. Why would I need to? I use brew to install my git, and I am good. Similarly, I don't modify my Ubuntu workstation unless I have a reason to. I can't remember the last time I needed to edit anything really special to customize my Mac, let alone attach a debugger to a running process on it. Well, my work doesn't involve troubleshooting Mac software, so there is no incentive. I spend more time customizing my vim than customizing my OS X.
So back to reality: please use whatever fits your needs; your mileage may vary. I just need a computer with a Linux-like environment so I can navigate shit around and complete my work on a nice, polished machine.
It's interesting, lately I've been changing my view on this and looking at everything outside of my home directory as the realm of my OS maintainer (Apple or some Linux distro's community). I configure my package managers (brew, pip, gem…) to install to my home directory and set up my PATH accordingly. This way, I don't have to worry at all about the state of my OS (e.g. whether an upgrade will damage some customization).
In theory I could even switch operating systems completely (within the *NIX family) with minimal work.
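As a rough sketch of that setup (the pip and gem flags shown are their standard per-user install options; adjust paths to taste):

    # ~/.bashrc: prefer binaries installed under $HOME
    export PATH="$HOME/bin:$HOME/.local/bin:$PATH"

    # per-user installs instead of system-wide ones
    pip install --user requests        # lands under ~/.local or ~/Library/Python
    gem install --user-install rake    # lands under ~/.gem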
> lately I've been changing my view on this and looking at everything outside of my home directory as the realm of my OS maintainer
For an ordinary user, this can be perfectly reasonable. I specifically said "as a developer" to make clear that my requirements in this respect are not necessarily the same as those of an ordinary user. Developers need a level of control over their machines and configurations that ordinary users, as a rule, don't.
As a developer, and occasionally of the kernel variant, I also prefer to keep the OS stock. It's convenient, and allows for very speedy reinstalls. It also makes my environment identical across operating systems.
Having control does not mean you have to change anything. Also, most developers are regular users.
> What kinds of things do you count as needing "that level of control"?
As a developer, I want complete control over everything on my development machine. It's not enough to just control my home directory. I want to be able to control exactly what system services are running, so that I can test services in the same environment they'll be running in in production. I want to be able to control exactly what versions of things are installed as system binaries, not just in my home directory, so that I can be sure there is no possibility of a version being there that I don't want there. I want to be able to control exactly what device drivers and kernel modules are running. And so on.
Perhaps not all developers take this attitude; it probably depends on what kinds of things you are developing.
I understand this point of view, and this was also my usual approach. The reason I stopped was that the result was unmanageable systems. If you wanted to change something that you had configured on system level a year before, or even just wanted to replicate it, it became a major overhead. More creative modifications could also sometimes cause conflicts on larger upgrades.
So instead, I keep my systems small. I do not install anything I don't need, and do not touch something that is not necessary. On my laptop, I have 3 "full" applications, 15 convenience tools from homebrew (bash, git, nmap, ...) and 3 kernel extensions (including one of my own) installed. Nothing else that counts as a system-wide modification. Most of my servers are completely stock Alpine, Arch or Ubuntu systems, only running static binaries I provided.
All this saves me from dependency hell, and means that I do not need to hesitate to wipe a machine for whatever reason. It takes me 5 minutes to set a new one up, including my own local work environment.
There's a difference between having control (which I have, including on my OS X machine) and actually exercising it.
> The reason I stopped was that the result was unmanageable systems.
I haven't had this issue; but I don't leave things that I test sitting around on my development machine when I'm done testing them. So the "baseline" configuration of my development machine doesn't change much; it has the basic development tools I need and that's it. In fact, I'm not sure I see how the kind of development system you're describing is that different from the kind of development system I was describing.
> having control (which I have, including on my OS X machine)
How do you deal with the issue that prompted the original article discussed in this thread? (I assume you use the csrutil disable method that you described elsewhere in the thread?)
Imagine that you are corp IT, managing a fleet of developers with Macs. You can push a newer version of git to them, and you can even change their default PATH so that the git you pushed comes before the one Apple ships. But you still cannot remove the Apple-supplied git, and you cannot prevent it from being used.
Well, you can, but it's inconvenient and requires manual intervention with each machine. I would argue, though, that if you have developers that intentionally circumvent the version of git you provided them with, despite being told that it's there for security reasons (and is newer, better, flashier and all), then you're dealing with people that can't be helped, and shit will happen regardless.
I'm not arguing that it's okay that Apple bundles an affected version of git, but if they start undoing what you did to protect them, I don't think they can be helped. I'm a bit pessimistic in this sense, but I keep getting surprised by the kind of crap that makes its way onto people's machines, sometimes people that really should know better.
That might be unintended. For example, some software might stupidly hardcode /usr/bin/git instead of using whichever git comes first in PATH, or the PATH itself might not be what you expect. PATH handling is actually quite tricky on a Mac: .bashrc controls what bash sees by default, but the PATH seen by GUI applications is controlled by other mechanisms.
Ah, yeah, stupid applications are hard to guard against, but stupid applications might (will) have their own share of code-execution bugs, which you also have to control. Everything sucks.
As for the environment, that's the same on any UNIX, though. .bashrc is run only if you start bash. Getting an Ubuntu distribution to set up your environment variables in GUI applications certainly won't be fixed with a .bashrc either. It might inherit /etc/profile, if you're lucky.
For OS X, launchd handles the environment by simply being the one responsible for starting the applications that you inherit your environment from (such as Finder, Dock and Spotlight), and .bashrc is just a file that bash executes itself that might set additional environment variables. This is not unlike a Linux setup, where you only inherit environment variables written in .bashrc if you started the application from bash.
(OS X does have a path management system for shells in the form of path_helper and /etc/paths.d/, but that's run through the profile, which won't affect GUI applications.)
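This is easy to observe; path_helper just concatenates the entries in /etc/paths and /etc/paths.d/*:

    cat /etc/paths /etc/paths.d/*    # the system-wide path fragments
    /usr/libexec/path_helper -s      # emits the PATH assignment a login shell evaluates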
I'm not really trying to defend OS X here, other than pointing out that if you cut out the proprietary GUI stuff, it's basically just your run-of-the-mill custom UNIX distribution. As a long-time Linux user (I use a Mac as a laptop, because screw trying to get Linux working perfectly on a laptop), I find everything to be an equal pain in the ass to deal with. systemd or launchd, X11 or windowserver, Finder, Nautilus, Konqueror or even Windows Explorer - they all suck. Pick your poison.
> Ah, yeah, stupid applications are hard to guard against
Stupid applications aren't the ones to worry about. As an attacker, if I know that every mac has a git vulnerability, and all I have to do is to hard code a path to it, then I'm going to do that.
This seems to be a recurring topic: If you're writing an application, why bother hardcoding a path to a git version with a known RCE? You're already running on the machine.
Hell, if you want to hide the fact that it's your fault, bundle a random tool or lib that you know has an issue and exploit that. It'll be much more stable than relying on a local binary.
> Tricking you into using the old version would require execution rights on the machine
Not really, they just need to be able to modify a single file owned by the user (.bashrc), or just the current shell session (again, with a variable owned by the user). Re-ordering `/usr/bin` and `/usr/local/bin` in $PATH isn't that hard.
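Which is to say, the whole "attack" is a one-line append, as a sketch:

    # appended to the victim's ~/.bashrc: the vulnerable system git comes first again
    echo 'export PATH="/usr/bin:$PATH"' >> ~/.bashrc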
With the ability to modify .bashrc, you have execution rights on the machine. If not before (to modify the file), then after, because you've modified a shell script that is automatically run all the time. The machine is already pwned, and one privilege escalation bug away from being completely lost.
With this level of privileges, you can, on any machine, mask existing binaries with whatever you want. It is hardly related to the issue with git.
> With the ability to modify .bashrc, you have execution right to the machine.
You have the user's execution rights only; you don't have root access.
> The machine is already pwned, and one privilege escalation bug away from being completely lost.
This is true of any program the user runs. The fix is for the user to not run untrusted code. Trusted code should not be modifying .bashrc without the user's knowledge.
> With this level of privileges, you can, on any machine, mask existing binaries with whatever you want.
And the user can unmask them just as easily. It's not at all the same as having root access.
> You have the user's execution rights only; you don't have root access.
That is also what I said - code execution. Like the git RCE gives you. But, it would be rather redundant to use code execution as a local user to obtain code execution as a local user, no? With .bashrc, it doesn't matter how new your git is, there's no reason to exploit it.
Also, privilege escalation bugs.
> This is true of any program the user runs. The fix is for the user to not run untrusted code. Trusted code should not be modifying .bashrc without the user's knowledge.
Exactly my point. If the .bashrc has been modified in an evil way, as you suggested, you're screwed because someone is executing code as your user, which is usually exactly what they need. Add one privilege escalation, and they can do whatever they want, but that's not really necessary, depending on what they want to do.
In essence, if I can write to a file of my liking on your machine as your user, then I have code execution rights as your user (potentially with a time delay, depending on what I tamper with).
> And the user can unmask them just as easily. It's not at all the same as having root access.
Exactly, like installing a different version of git with homebrew and masking the old one. Also, privilege escalation bugs. I think you're forgetting how common they are. A decent set of privilege escalation bugs is part of any decent hacker's toolkit.
If someone modifies .bashrc, as you initially suggested ("Not really, they just need to be able to modify a single file owned by the user (.bashrc), or just the current shell session (again, with a variable owned by the user)"), then the git RCE is redundant, as you already have obtained code execution on the machine.
Code execution how? How are you going to execute code on my machine if you don't have physical access to it? That's why the RCE makes a difference.
> it would be rather redundant to use code execution as a local user to obtain code execution as a local user, no?
As I understood the comment that started this subthread, it was talking about using a remote code execution vulnerability to modify .bashrc. Nobody, as far as I can tell, in this discussion is talking about having local (i.e., physical) access to the machine.
The original comment was about circumventing a user's measure of using a new git from homebrew by having the bad guy modify .bashrc. If the bad guy has that ability, he does not need an RCE. My argument being that using homebrew to install a new git is fine. The counterarguments being "evil applications" and "evil .bashrc", which already mean that you're executing the bad guy's code.
> And the user can unmask them just as easily. It's not at all the same as having root access.
I don't think so. Running as your user, I could add an entry to your .bashrc that execs your shell with an injected shared library that hides itself (e.g. any child process that reads your .bashrc sees an unmodified version). Same for GUI apps, by touching other files. The only way to detect it would be to log into another account or single user mode, just like a real rootkit may only be detectable if you use another system to examine the disk.
Access to a user's account is no less damaging to them than root access — the damage just doesn't extend to the rest of the machine which, in many cases, doesn't matter.
I would argue it's not even that difficult. In the README.md, put some kind of note that there's a bug in the current version of git (and maybe point to some random Google group posting) and mention that on OS X it's better to use the Xcode version.
This is indeed an issue. I would like to think that a person capable of using git would have enough of a critical mindset not to do what random people on the internet tell you to do, but... StackOverflow kinda proves the opposite.
I don't think you'll ever be able to help people that follow internet advice blindly; for these people, it might be better not to bundle anything at all.
No, the dev tool binaries are magic binaries that check whether the "Command Line Tools" package is installed and, if not, ask you if you want to download it. They're protected by System Integrity Protection, so you have to temporarily boot with that disabled to remove or modify them.
I think it's a fairly common measure to replace a lot of what OS X comes with using homebrew, simply because OS X's versions tend to be annoyingly outdated (like bash 3.2.57, vs. 4.3.42 from homebrew).
A little, but you have enough to get started. Rather an old bash than no bash. First task on OS X is usually to use the bundled curl and ruby to get homebrew. :)
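The install one-liner of the day leans on exactly those two bundled tools:

    ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"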
Hah, and that might not even take that long at the current rate! But we still got a better terminal emulator by default over here! Terminal.app is a pretty darn good terminal emulator, although iTerm2.app is better. That is actually one of the things I like about OS X - I have not found a terminal emulator for Linux (and certainly not for Windows) that compares with iTerm2. Not in speed, nor in interface.
> People like Richard Stallman have been warning about this sort of thing for decades.
Any GNU/GNU, GNU/Linux, or GNU/anything system could do exactly the same thing without violating any terms of the GPL. Requiring a reboot into a special "I know I'm messing with the system" mode in order to mess with important core binaries is not an unreasonable way to protect the user from malware, and does not in any way abridge any of the core freedoms Stallman promotes.
(and yes, you literally can run a command and reboot your Mac to be able to make changes to the protected stuff, and the process to do this is well-documented)
Router companies have the exact same problem. They package software that will probably be out of date by the time it ships to the customer and there is rarely a mechanism to update it except "buy a new one."
Isn't this why projects such as Homebrew thrive? For me personally, I just `brew install git`, and I keep it updated that way (`brew update && brew upgrade`)...
Sure, Apple should ship a fix, but there are ways around it for now.
It's a bit too precarious to be an adequate solution, in my opinion. It depends on /usr/local/bin always being ahead of /usr/bin in $PATH, and on scripts never invoking the system git via its full path, and on Homebrew never accidentally uninstalling git due to a botched upgrade. Not to mention the fact that Homebrew itself uses the system git to install itself.
> Not to mention the fact that Homebrew itself uses the system git to install itself.
To me, this is the biggest problem, and it's not just Homebrew. Any source package manager that uses Git will potentially have this problem. With a vulnerable Git on your system, you have to second-guess every build script you ever run that might make use of Git, to make sure it obeys the path you set instead of choosing its own.
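A crude but useful audit is to grep a build tree for hardcoded paths before running anything in it:

    # flag scripts that bypass $PATH and call the system git directly
    grep -rn '/usr/bin/git' .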
If you prefix it with something already in there then it's effectively rearranged.
You wouldn't do this deliberately, of course, but it happens a lot - people end up with massive PATHs because they blindly prefix when something's gone wrong and it _may_ be the solution.
It's times like this that prove I made the right choice sticking to Linux.
There's nothing I hate more than the inability to fix things that are broken on my system or the fact that I would have to jump trough a lot of unnecessary hoops to do it.
The few small advantages are just not worth it in the end for me.
There really isn't a hoop to jump through... Everyone uses homebrew, which happens to work extremely well (to the point where it's apparently being ported to Linux now – go figure).
With the homebrew git installed, the stars really have to align for that vulnerability to be exploited. You can possibly get a user to clone from your repository. But if you can also get him to use the git version you want, we're at the point where you apparently have control of the system already.
This vulnerability pales in comparison to the-common-void-that-shall-not-be-talked-about, i.e. the around 400 gems, npms, brews, pips, go(es?), roles and PPAs installed & running on a typical dev workstation, no matter whether it's Linux or Mac. It's just a matter of time until someone gets his version of leftpad installed on >100,000 workstations & servers before he flips the switch to turn them into cryptolocked hostages.
Also: I'd hate it if I had to fiddle around with kernel parameters to get printing, sleeping, networking, waking, font-displaying, account-switching, video-playing, time-knowing or up-backing to work every time Canonical decides it's time for a new <x>subsystem. I love Linux on the server, but maintaining a functional desktop system is simply a waste of time. A somewhat evil waste of productivity, actually, because it feels like work and at the same time provides those frequent little victories that can turn it into an obsession.
> Also: I'd hate it if I had to fiddle around with kernel parameters to get printing, sleeping, networking, waking, font-displaying, account-switching, video-playing, time-knowing or up-backing to work every time Canonical decides it's time for a new <x>subsystem. I love Linux on the server, but maintaining a functional desktop system is simply a waste of time. A somewhat evil waste of productivity, actually, because it feels like work and at the same time provides those frequent little victories that can turn it into an obsession.
On the other hand, once you master this (which is not such a huge undertaking), you know your system.
Which is a valuable "smart skill" when developing (even web apps). For example, knowing how to use awk and sed instead of having to start a node or ruby instance shows that a developer actually knows how to run Linux, and not only "how to run stuff on Linux".
You'll also understand $PATH, which apparently is a thing most Mac users do not understand. Having to start a "docker-shell" because they don't know how to extend their $PATH is a freaking joke and a workflow killer.
I understand that Macs are comfortable to use and maintain. But as developers we should embrace leaving the comfort zone and face the real deal. We shouldn't be some bunch of kids who need a Mac because it's comfortable.
Let's grow from little kids that need "mama Mac" to take care of our stuff into grown-ups that can handle a system, because they know the system.
Long time Mac user here, I use awk & sed on a regular basis. Been familiar with $PATH since the DOS days and am equally comfortable SSH'd into a CentOS box as I am on the Mac.
I don't think I'm special here, but I'm for sure a subset of Mac users. My point is, seeing a Mac at someone's desk shouldn't make you assume they're an idiot, just like you don't assume someone's a l33t-ub3r-h4ck3r because they're walking around with a Lenovo.
My point was exactly that 'knowing the system' isn't useful for a webdev when the system is the graphics subsystem. I'll gladly learn it when it becomes relevant - indeed, I spend half the day in a console and can configure a Linux cluster with the best of 'em. When there's time left, I prefer to choose the subject of my studies myself. Right now, I prefer dabbling in AI to triaging obscure Linux bugs.
I don't get the idea that Mac users are attracted to the system because they cannot handle Windows or Linux – they just don't want to. Isn't it kind of obvious that the stereotype can't survive when you see >3/4 of all Google employees using Macs?
But hey, maybe I should write an App that randomly introduces bugs into my stack to finally learn a bit more about it. And when my car breaks down, I'll be thankful for the learning experience.
Yeah, but a script that intentionally invokes /usr/bin/git has already achieved the non-privileged access the git vulnerability could provide. A script that invokes it unintentionally (i.e., not to exploit it) would then need to be combined with a malicious repository, which may be tricky.
But I don't want to dismiss this vulnerability – it's so easy to fix on Apple's part that they don't have an excuse. There are a few too many neglected corners of their OS where they seriously have to get their act together. But in practical terms, people focus too much on the technologically exciting or Apple/MS/<other divisive entity>-drama provoking vulnerabilities, while there's probably like one or two people working in software who actually verify every hash of every download and audit the source code for every version of every vim plugin they install.
While I agree that Canonical can be a pain, the answer to that is simple: just don't use Ubuntu.
If I need an Ubuntu fork that's quite stable and has most things already configured, I prefer Mint; I generally use it at work. As my personal desktop I use Arch, mostly because hardware compatibility required the latest kernel at the time.
I like playing with the latest features and not having to install the OS every other year because some major update from canonical broke everything.
As far as fiddling with the kernel goes, I never had to do anything like that to get the things you mentioned working. The most I had to do was install some software and configure it correctly.
In the years I've been running Arch on the desktop, it only breaks on average 2 or 3 times a year, which is quite decent considering it's a rolling release, and I haven't had any major issues with Mint since I started using it about 2-3 years ago.
I crashed the window manager a few times, but that's about it. In comparison, Unity used to crash on me constantly, and the entire experience of using plain Ubuntu as a desktop was awful, so I understand why you would be against using it if that is all you knew of Linux as a desktop.
I would much rather have a system that just works but requires a "hoop to jump through" in order to use git, vs. the multitudes of hoops you have to jump through in order to get a Linux desktop system functional in a corporate environment.
At home when I'm futzing around, I don't mind (and quite enjoy it). But at work I don't have time to diddle my device drivers, and OMG, the xorg.conf crap I had to deal with in the past still gives me nightmares...
Really? Talking about how homebrew is pretty painless is "Apple fanboyism at its best?" Sure it's another thing to do, but look at any thread about running Linux on a laptop. How many people are like X laptop runs great you just need to add Y kernel parameter to the boot options. Isn't that a "hoop to jump through" too? Or all of the people fighting to keep their Windows install from upgrading to Windows 10. Isn't that a hoop to jump through?
Installing homebrew is a hoop; in an ideal world, your OS vendor would provide a means for installing such packages safely (the Mac App Store would count if Apple cared about it).
This is also a bit weird, since getting git installed on OS X in the first place requires its own hoop: "buying" the free copy of Xcode and installing it is required for the command line tools that homebrew relies on.
To be fair, if you're pulling from a compromised repo, you're already in a bad spot. There's a good chance you're going to be building and running the code you cloned, at which point you'll execute whatever arbitrary code anyway. If it's executed from a random script, there's a good chance you're not checking the result before building, either.
Sure, as with much other software with vulnerabilities, but with local software like brew (I also wrote [dotsoftware](http://g14n.info/dotsoftware) for the same reasons) the vulnerable version isn't first in your PATH, so you are not using it.
Using local software has many benefits, among others, a shorter release cycle.
As I said, one config mistake, or running a script which uses /usr/bin/git, and you're subject to RCE. For example, many GUI programs, such as the Atom editor, don't see your shell's PATH when not launched from a terminal.
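One workaround (assuming an OS X version where launchctl has the setenv subcommand, as 10.10 and later do) is to set the PATH that launchd hands to GUI applications:

    # GUI apps launched after this will inherit the adjusted PATH
    launchctl setenv PATH /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin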
If you do not have Xcode installed but do have the Command Line Tools, you will find the vulnerable git at /Library/Developer/CommandLineTools/usr/bin/git
I'm pretty sure git isn't the only thing that is (or will be) vulnerable. Vulnerabilities happen, it's a fact of life. You will have to constantly update your systems no matter which OS you run.
Yes we did, and the author appears to be posting with the primary intent of spreading anti-Apple sentiments. The opening paragraph begins by insulting startups for being full of Macs. The rest of the post is full of snide comments. They go off on a tangent about System Integrity Protection and the fact that OS X is not Linux ("Apple... keeps you from twiddling", "Well, sorry. You also can't chmod", "I'll just strace it to see what it execs! Oh wait, this isn't Linux."). This could easily have been posted as a simple statement of the CVEs in question and the version of git shipped with latest OS X patch. Not difficult to post facts about a specific issue without insulting an entire operating system - and taking cheap shots at the people that use it.
>> If you rely on machines like this, I am truly sorry. I feel for you.
I don't feel sorry for myself. Odd that a stranger finds it necessary to offer me their sympathy, let alone condescending pity.
"Sometimes I think about all of those pictures which show a bunch of people in startups. They have their office space, which might be big, or it might be small, but they tend to have Macs. Lots of Macs. A lot of them also use git to do stuff, perhaps via GitHub, or via some other place entirely. There are lots of one-off repos all over the place."
If you can see a cheap shot in that paragraph, then you are reading things that aren't there. A blog post gives a certain amount of freedom for the author to elaborate on a theme.
You seem a bit defensive. I'm not sure why, but I certainly don't think that Rachel was attacking those who use Macs.
I didn't get a negative vibe from the opening paragraph on its own. If you read the rest of the post, you find a lot of unnecessary negativity regarding the entire Apple ecosystem.
>> I know, I'll just strace it to see what it execs! Oh wait, this isn't Linux. Uh, I'll dtruss it to see what it execs!
Someone who is not in the process of bashing as much as possible would have simply said something like "I'll dtruss it (OS X's equivalent of Linux's strace) to see what it execs". All the exclamation marks and the passive-aggressive "Uh", "Oh wait", "Well, sorry" phrasing are there to drive home just how terribly awful the operating system is.
The closing section with the "If you rely on machines like this, I am truly sorry" sealed it. This clear dislike for the ecosystem adds - at least for me - new meaning to the opening paragraph. Typical startup bashing for "trying to be hip and trendy". It's hardware and an operating system. Everyone has a preference for the tools they use; there's no need for passive aggressive hostility.
All that first sentence points out is that Rachel was more familiar with Linux than OS X, and mistakenly thought that strace would be on the system, and then reverted to using dtruss. She's just showing her process for trying to troubleshoot Unix processes to see what is going on under the hood, in a conversational tone.
As for the "If you rely on machines like this, I am truly sorry" - I don't really blame her. She says she's sorry because she was trying to administer a system that a. has vulnerable software she couldn't easily upgrade or even remove, and b. some of the common utilities that she uses to troubleshoot Unix systems just don't work and this makes a competent Unix admin's life harder than it needs to be.
And Rachel, by all accounts, is a very competent - no, scratch that - talented administrator. So she feels the pain of not being able to use commonly available tools and not being able to keep systems as secure as she would like.
If you feel offended by this, then it seems to me you are actively looking to be offended and you have your own agenda.
The sentiment against OS X might not show that strongly, but it is clearly there. She doesn't even mention that what protects /usr/bin is System Integrity Protection, or that it can be disabled. The very idea that people might not be using the system git is not even mentioned.
All in all, it really feels as if it was written from the perspective of someone that does not usually work with OS X and does not know the system well. In other words, she has not done her homework. Which is fine, if you acknowledge what you don't know. But then the condescending tone would be totally out of place.
The sentiment is there, and it does not help in spreading the message, unless what you really want is to flare up all the emotions. Otherwise, it's not the best course of action.
Fun fact, since we're comparing default system installations, my Ubuntu apparently still has git 2.5.0. I suppose I should find some PPA or something to update it.
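The commonly used one, if you trust it, is the git maintainers' PPA:

    sudo add-apt-repository ppa:git-core/ppa
    sudo apt-get update && sudo apt-get install git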
That point about Linux distros such as Ubuntu is something I was looking to mention, but I'm not 100% sure I have my facts straight. That said, here's what I believe is the case. ;)
The repositories can be similar to OS X in terms of providing really outdated versions of many packages. The same day Ubuntu releases a new version of the OS, packages can already be over a year out of date from the releases made by the software's developer.
The distros won't update the official repositories with newer versions of software once the version is pinned during the testing phase of the OS, due to the extensive amount of quality assurance that goes into ensuring system-wide stability. Their reasons are justified, but the end result still means you're typically not running the best and latest of anything.
Things are a little more difficult to understand with versioning in Linux. You may have git 2.5.0, but if you're on a release of the distro for which support is still ongoing, those CVEs are probably fixed due to backported security patches that don't bump the software's version number. In this manner, official repositories on Linux distros usually give you outdated software in terms of new features, but keep you entirely up to date in terms of security.
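You can see this directly on such a system: the package changelog lists the backported CVE fixes even though the upstream version string never moves:

    dpkg -s git | grep '^Version'        # e.g. 1:1.9.1-1ubuntuX; still "1.9.1"
    apt-get changelog git | grep -i cve  # the security fixes folded into that build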
I've been using Macs for 30 years, including OS X since the 10.0 beta, and the recent changes have left me with a "sentiment against OS X" not unlike the "sentiment against Mac OS" that we had in the 90s when things went pear shaped.
There were people saying we shouldn't speak ill of System 7.5 back then, too.
OS X isn't a systemically marginalized group; it's a product that people are increasingly unhappy with. You may disagree as to why, or with the trade-offs involved, but we're not as ignorant as you think we are; SIP likely isn't mentioned because it's obvious.
14.04 is an LTS release, which is, by definition, stable. 2.8.0 (latest, what I'm running) is vastly different from 1.9.1.
Stability is not the same thing as insecurity. As long as a stable release is supported, the maintainer promises to keep it secure. If your version of git had that vulnerability, Ubuntu would have backported the patches/fixes and made it available to you.
The version number 1.9.1 is a release identifier, not a security status identifier.
Yes, indeed, but Mac OS X users who use Homebrew are a subset of all Mac OS X users. The problem is in the default software. Apple's update model isn't good for this type of software, so the fact that it is possible for a user to install secure versions from Homebrew (or compile their own) doesn't matter.
Yes, but if Homebrew is malicious, you'd have more problems than a specially-crafted repository that exploits a git vulnerability. You are installing something that has all user access rights.
You _could_ get recent git by some other means prior to installing homebrew. Anything short of compiling from source wouldn't require ever installing XCLT.
You can just install the command line tools without Xcode. Of those tools, most can be replaced with their homebrew versions, so you only need them for bootstrapping. But if you want to do hardware- or OS X-related development, you will need to keep the tools around. CUDA, for example, needs clang et al.
They don't take much space, though, and the toolchain is treated a bit better by Apple than utilities like git, vim etc.
I don't know about homebrew, but macports has clang and other toolchain stuff, so I think it might be possible to uninstall Xcode after installing all necessary tools through macports?
If you're going to be that snarky and nonconstructive about it, you're going to get snarky and nonconstructive comments back. Or at the very least, you're not going to inspire any thoughtful and interesting observations from anyone.
Xcode is distributed and released over the App Store and can be rev'd at any frequency, independently of the OS; Apple's update model does not prevent an expedient update.
Perhaps the main cause for delay is the associated QA efforts to make sure that other components in the stack which depend on git don't break in the case that git has broken binary compatibility (i.e. changed its public interface).
If things are tied up in QA, that is a problem in and of itself, because timeliness is an important quality for a security bugfix to have. If my system is compromised today, it will do me little good that the bugfix Apple ships next month was tested extensively for compatibility with Xcode.
It is too late for there to be an expedient update from Apple. The vulnerability was disclosed to oss-security over a month ago, on March 15[0]. SUSE had a patch out the next day[1]. By March 24, Debian, Ubuntu, Red Hat, CentOS and Oracle had all issued fixes.[2]
Isn't this the perfect setting where an attacker will ask you to replace this binary with a custom binary with an additional backdoor?
In the best case the attacker would fake an email that looks like it came from your IT department. Even if you were suspicious, a quick search on the web would confirm that Apple really ships a vulnerable binary. So you believe the email is real. Then you go along and replace the binary with the malicious binary provided in the mail.
> So, what's the big deal? Crappy C code gets exploited every day, and we upgrade it, and then we're "safe" until the next huge hole that's been there forever is reported. (In the meantime, people party with their private stash of vulnerabilities.)
I looked at the offending function, and it would be trivial to rewrite it in C++ in an easy to understand and safer way. It probably wouldn't be as safe as Rust, but still a lot better.
A lot of Linux C utilities would benefit from such a treatment.
If you're going to rewrite in any case surely it's worth going to Rust (or Ada, or another language that simply doesn't have the myriad unsafe possibilities that C++ does).
There are many arguments to be made in favor of C++, such as talent availability, experience from other porting projects, the possibility of doing incremental porting, etc.
My impression is that some projects are already experimenting with or using C++ in their C code bases, so C to C++ is quite likely.
Most languages have some C interop, and that satisfies the minimal definition of "incremental".
However, using C and C++ together in a project is especially easy. If I were to do this, I would first get the code base compiling with a C++ compiler, which already brings some extra type safety and is not particularly difficult.
Then I'd start replacing C code blocks with safer C++ code. This could mean changing a function, some parameter-passing conventions, replacing char* with std::string, etc.
This has the biggest chance of success, I feel, and there are already success stories and strategies available that describe this method, e.g. GCC.
Rust <-> C interop is pretty strong. AFAIK, there is equivalent call overhead between Rust & C as there is between C & C++ (i.e. none). You're right that incrementally porting to Rust would be a little bit more overhead than C++, simply because you'd need two compilers, and instead of rewriting function definitions in-place you'd need to write a fresh function in Rust and delete it in C. But at link time everything is sane and once you set up the build, it's really not hard.
I'm not suggesting that C -> Rust is easier than or quite as easy as C -> C++, just that it is much easier than C -> most other languages, and that it is close enough to C -> C++ that it is worth investigating. It is definitely more robust than the minimal definition of "incremental."
That still gives you Lisp, Ada, Go, Haskell, and Java, if you make the (imo incorrect) assumption that "low level" tools can only be written in languages which compile down to bytecode.
Of course, Mercurial gives the lie to this assumption.
I don't like Java, but don't underestimate the performance of the JVM. Unless you're a crazy perf wiz, then your average C code won't beat your average Java code. It's fast enough for short processes, and for long processes the JIT is pretty darn good. Also note that a JIT can perform runtime assumptions and optimize code based on what is currently the case, which an AOT compiler cannot.
It takes a lot more than a toolchain to write fast code.
@falcolas -- I think you mean machine code not bytecode. And w.r.t. Mercurial afaik the project's hot paths are all written in Cython extensions, and there's ongoing work to improve the Python part by working with the PyPy folks. So, there's definite technical advantages in using Python for greater developer productivity, but there's also a cost.
My impression is that projects that mix C and C++ tend to use the C-like subset of C++ (with classes), so they wouldn't gain any safety advantage from moving to C++.
Many other languages support C linkage in a way that's comparable to C++.
C++ is far safer than C precisely for the reasons listed. The standard library is safe by default (it's not opt-in). If you want or need backward compatibility with C, then you can use the more error-prone C constructs for that. Otherwise, pure C++ code is safe.
People like myself who have known C++ since the "C++ Annotated Reference Manual" tend to keep using STL to designate the standard library, but I guess you already knew that.
If it makes you happy I can use the ANSI C++ section number instead.
Wasn't sure if that's where you were going. So, who in this day and age forbids the use of the standard library? That seems pretty bizarre. What would be the motivation behind such a policy?
Whenever someone complains to me that the standard library is slow or "bloated" (whatever that means), I ask if they've ever profiled their code vs. the standard library version. 9 times out of 10 they have not, and are operating out of mythology rather than measurement.
I do like to use C++ a lot on personal projects, but there I can make full use of C++ best practices.
At work, I tend to avoid using it, because most C++ developers I have met in my career actually use it as a Better C, keeping all the safety loopholes from C.
I had my share of spending weeks tracking down memory corruption issues.
My comment wasn't meant to knock C++; it was to point out that Linus' opposition to the language is well-documented, and an official git-rewrite in C++ is very unlikely.
Linus is almost entirely uninvolved with git these days, though. `git log --author=Torvalds` finds no commits in 2015 or so far this year, three patches in 2014, none in 2013, five in 2012, four in 2011... if the git core team felt using C++ was a possible course, Linus would not be involved in the decision.
(That's probably why I misunderstood your statement as an argument against C++ on its own merits.)
A rewrite in C++ would not make it any safer. Rewriting it to make it easier to understand could make it safer, but you do not need a different language for that.
If you can change the name_path struct to a standard STL data structure (std::vector<std::string> might work just fine), you can easily rewrite the function such that all memory allocation and all exploitable memory access is delegated to STL containers. If your STL std::string has a buffer overflow issue, you’re of course still in trouble, but the same is true if there are bugs in the STL/implementation of Go/Rust/Python/Java/Haskell…
Something like:
std::string path_name(std::vector<std::string> const& dirs,
                      std::string const& name) {
    std::string p;
    for (auto const& dir : dirs) { p += (dirs + "/"); }
    p += name;
    return p;
}
should work. If you feel like it, you can also add something like:
std::size_t len = name.size();
for(auto const& dir : dirs) { len += dir.size(); }
p.reserve(len);
Of course, len may overflow, but even if it does, all the harm that causes is that the string will have to reallocate memory during growth, until further allocation fails (with a std::bad_alloc, rather than anything exploitable).
Right, and you can do the same in C, using libraries that provide this functionality wrapped up as well. There are plenty of std::string- and std::vector-like containers which handle the magic for you. But even then, you can work with struct { int len; void* data; } to get your vector, and replace void* with char* to get a string. A simple vector and a simple string are in no way difficult to implement, and many implementations exist.
I'm just trying to point out that the convenience of some C++ standard library features is not isolated to C++, and C++ is not a "memory safe" language in the usual meaning of the word.
The difference is that using STL containers such as std::string or std::vector are very much the default in C++ (much the same way ‘safe’ code is the default in Rust), whereas you have to do some manual work in C to get them. The result is that using std::string and std::vector here is the natural solution in C++, whereas likely very experienced C programmers stuck to the manual approach.
Oh, yes, sorry, that `dirs + "/"` should have been `dir + "/"`, and now it's too late to fix (both that and the horrible formatting). Though I suppose it would have been a compile-time error, so at least it shouldn't be exploitable :')
Yes it would, because this is basically string processing where C is the worst possible language one could use. I've detailed in another comment how std::string could be used.
You're talking about naked char[]. You can just as easily make a struct{ int len; char* str; } in C, and combined with the "n" variants of string operations, would work just fine with common tools.
Again, C++ does not make anything safer, and its types can easily be replicated in any other language.
Being possible doesn't mean the majority of C developers make use of it.
The "n" variants are a joke in terms of security, even the C99 annex, that was demoted to optional in C11.
I call them a joke, because tracking the pointer and length separately is hardly an improvement in terms of security.
The only improvement that the "n" variants added is that the null character is always added to the end, instead of how strncpy does it, by only adding the character if there is enough space.
(EDIT: Some stuff has changed since February, so some of these claims are now out of date. But not all of them, the overall situation is still similar.)
Rewriting in any language won't help with this one unless the language has builtin integer overflow checking.
The solution here is actually quite trivial: just restrict filenames to 255 bytes and nesting to 255 levels, which limits paths to 64KB at most. Anyone trying to use git repositories exceeding either of those limits should be considered insane.
I think that using std::string would prevent the bug, because it would throw length_error on append. string::resize can be used to avoid excessive allocations.
Sanity checks would provide extra safety, but the code should fail cleanly even without them.
> just restrict filenames to 255 bytes and nesting to 255 levels, which limits paths to 64KB at most. Anyone trying to use git repositories exceeding either of those limits should be considered insane.
Funny, because I've actually been locked up for being insane five times. Speaking from experience, they'd only consider me insane if I said something like 255 was an important number because there are two sides to every problem and five fingers on each hand, and the path, which two feet take, is six one way / half a dozen the other, to the four corners of the earth, which is the natural limit.
So... you don't really know what insane is. Jus' sayin'.
As someone who has also spent time in a mental institution, I just want to say that I really think you need to be less sensitive on this one. That's just an expression and not in any way aimed at people who have mental illness.
You'll need to put an updated subprocess.py in your Python path, editing the one provided by OS X is prevented by the System Integrity Protection... ¯\_(ツ)_/¯
Are you not using virtualenv? I always use virtual environments as much as possible to avoid being locked in on an older version (for example Python in this case).
For those of us who have only installed the Command Line Tools, you will find the Apple supplied git at /Library/Developer/CommandLineTools/usr/bin/git
so the command to disable this vulnerable version is:
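    # assuming /Library/Developer is not itself covered by SIP:
    sudo chmod a-x /Library/Developer/CommandLineTools/usr/bin/git

(Stripping the execute bit is one plausible approach; renaming the binary works equally well if the directory is writable by root.)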
I'm still trying to understand why Apple installs the Xcode command line utilities into an area of the system that is protected by SIP.
There is nothing hugely special about their tools - every other app apparently has to install into a different location as best practice, so I don't understand why Apple doesn't follow their own guidelines!
And as has also been asked - why do they symlink to the /usr/bin directory in the first place then?
They have three perfectly reasonable directories they can install into, which is what they recommend that application developers do when installing their software:
/Applications
/Library
/usr/local
So why do they feel the need to link git into /usr/bin? They could link it into /usr/local/bin - the $PATH variable includes this directory already.
You state that plenty of comments explain where the Xcode git actually is, but that misses the point entirely. Why is it linked into /usr/bin?
There are no symlinks in /usr/bin, they are wrapper programs determining whether you already have the Command Line Tools or full Xcode installed. If not, they will open dialogs asking you to install these additional software packages.
There's a security feature on El Capitan that essentially makes certain files and directories on your host system immutable. Presumably it's some set of filesystem flags, but not even root can change them. There's a magic nvram command you can use to disable the security feature (which, if you're a developer, you'll have to do eventually, since you can't touch anything in /lib, /usr/bin or /bin). The command in question is:
nvram boot-args="rootless=0"
It seems to me that this is related to that feature. Although, I don't get why it looks like the two files are hardlinked but you can modify one -- maybe it's because this magic was only applied to the /usr/bin directory (and thus a hardlink to the file can still be modified). That's a bit dumb IMO, but I can imagine this being a bug in the OS X kernel.
Unfortunately, I'm not sure how to actually create such files (maybe if I disable System Integrity Protection, create a file and hardlink to it and then re-enable SIP). If I figure it out on my friends' MacBook I'll comment below.
EDIT: Okay, so it might not be what I said earlier. If I do something along the lines of:
1. Disable SIP (csrutil disable in recovery).
2. Create a file in /usr/bin and hardlink it to a non-SIP location (like $HOME).
3. Re-enable SIP (csrutil enable).
4. Try to change the permissions on either of the hardlinks.
It will fail. So presumably SIP correctly propagates permissions to all dentries. I'm not sure what's happening then.
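In shell form, the experiment looks roughly like this (the csrutil steps happen from the recovery partition):

    # with SIP disabled (csrutil disable from Recovery, then reboot):
    sudo touch /usr/bin/sip-test
    sudo ln /usr/bin/sip-test ~/sip-test-link
    # with SIP re-enabled (csrutil enable from Recovery, then reboot):
    chmod 644 ~/sip-test-link    # fails with "Operation not permitted"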
> It will fail. So presumably SIP correctly propagates permissions to all dentries. I'm not sure what's happening then.
/usr/bin/git is a small wrapper that invokes the actual git from either the command line developer tools or from xcode (whatever you have selected with `xcode-select`)
I don't think it's intentional subversion. AFAIR, having command line wrappers in /usr/bin pointing to command line tools or xcode tool chain predates SIP. If anything, protecting developer tools was either neglected, or changing the concept of how it works to suit SIP was deemed not worth the cost.
I mean, SIP is generally only partial protection anyway, isn't it?
I meant installing the actual binary executed into a non-protected directory. What is the point of having SIP if the only process that can bypass it (the App Store installer) doesn't break the code.
Yes, I know that wrappers isn't new. But if they wanted to secure it they could've kept the actual binary in a SIP-protected directory.
SIP relies on the sandboxing rules in /System/Library/Sandbox (notably rootless.conf iirc; I am on 10.10 still for locally important if slightly idiosyncratic reasons). Below /S/L/S/ is the Profiles directory which is full of .sb files which are descriptions in a restricted R5RS Scheme of what system calls are allowed under what circumstances.
One of the features of SIP is that barring interference (e.g. using csrutil in the recovery boot mode) one can be fairly confident about the contents of a number of directories to the extent that one can (in the boot time trampoline that system upgrading uses) delete everything in them and install in their place the contents of signed installation media, with no worries that this accidentally conflicts with local state (e.g. locally installed versions of system software or dynamic libraries or the like).
Actually doing a "Reinstall the SIP-protected parts of Mac OS X" thus has some pretty good guarantees of non-destructiveness and thoroughness, and "Safe Mode" can ignore anything that isn't SIP-protected, thus producing a much more predictable post-boot environment than in previous version of Mac OS X.
Some thought went into the initial design, and SIP evolved during the beta process; until fairly late, one could subvert SIP with union mounts for example, and there was to-and-fro on what third-party things one could simply move from /System/Library to /Library (notably from /S/L/Filesystems to /Library/Filesystems).
It's interesting to compare with the approach taken by SmartOS (for example; https://wiki.smartos.org/display/DOC/Zones), which in the global zone and in Solaris zones is strictly read only for everything under /usr, and which in the global zone refuses to persist changes under /etc and a few other places. Some of the reasoning is the same (known state can make for safer version upgrading); but some of the reasoning is to take advantage of other virtues too, specific to SmartOS's focus on hosting VMs.
Going back further, network-mounted read-only filesystems like /usr were fairly commonplace in environments where UNIX workstations were plentiful, with per-workstation customization often made user-specific (e.g. via moira, back in the day... http://kb.mit.edu/confluence/pages/viewpage.action?pageId=39...) and non-persisted, along the lines of the guest login in modern Mac OS X.
Back to your last sentence: Apple tools (including the App Store) can't work around the sandboxing without a system restart -- once the sandboxing service is running, it stays running and cannot be disabled, and a complex trampoline is needed to work around sandboxing early in the startup process (even in single user mode). Xcode does not install anything that really requires a reboot, and is wholly optional, so forcing a reboot to install or uninstall it seems heavy-handed compared to trampolines that rely on xcode-select (similar to other optional installs like X11). Moreover, Xcode itself is signed, and by default you will still get warned (and your tool will not be run) if something under Xcode has been modified or substituted.
Nothing really prevents them from protecting optional installs with SIP if they decide it's better that way. And indeed, local admins familiar with sandboxing can extend SIP protections to them via /Library/Sandbox, where one can add further (but not weaker than /S/L/S) rules.
AIUI /usr/bin/git, and various other utilities, are tiny wrapper programs which will either 1) prompt you to install the Xcode Command Line Tools if they're not installed, or 2) redirect to the actual binary in the Xcode Command Line Tools if they are installed. This means that if you don't have them installed, instead of an error saying that bash can't find git, you get a nice graphical prompt telling you where to get it.
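You can observe the shim behaviour with standard tools (the annotations here are from memory, so treat them as approximate):

    file /usr/bin/git         # a regular Mach-O executable, not a symlink
    xcode-select -p           # the developer directory the shim resolves against
    xcode-select --install    # the same GUI prompt, triggered manually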
It can't be removed because it's a file that comes with OS X and is therefore covered by System Integrity Protection, which prevents you from deleting or tampering with system components, even as root.
I have a MacBook with the Command Line Tools installed, and Homebrew as well. I think I installed Python via brew, but "which git" tells me I'm using /usr/bin/git.
Is there a list somewhere of dangerously old software Apple have shipped to me that I should reinstall with brew immediately?
Starting in OS X El Capitan there's a framework to protect sensitive system files which disallows even root from modifying them. The only way to make changes is to (as root) disable that framework, then reboot, make the changes, re-enable and reboot again.
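Concretely, the round trip looks like this; csrutil only takes effect from the Recovery partition's Terminal (hold Cmd-R at boot):

    csrutil disable    # in Recovery
    reboot
    # ...make your changes from the running system, then back in Recovery:
    csrutil enable
    reboot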
On my Chromebook, a linux system, the root filesystem is mounted read only. To modify a file in /usr/bin I would need to activate developer mode then remount it rw. I expect this is probably something similar and done for similar reasons.
The big market isn't developers but end users who have no idea how to protect themselves from real threats and who don't use git. And for them sandboxing, signed executables, verified boots and other measures make life far better.
If you are a developer using a Mac you most likely use brew or some sort of vm/container system anyway.
So what Apple have designed is a "system integrity protection" system that actively prevents you from mitigating security issues.
Only a very large company could come up with such an amazingly awful idea. I can just imagine the meeting where this was decided that it would "protect" users where someone said "Freeze all the system binaries even from the end users, that will make them more secure!".
Yes. This is by design, since the whole point is to defend against malware that has gotten root privileges; requiring recovery mode ensures that the physical user of the computer consents to the change.
They didn't change the user experience in any way - most updates do not require a reboot, the ones which do are fast - so I'm guessing that the kernel checks the code signature on the process doing the write when deciding what to block and possibly even requires a valid Apple signature on the new file.
I'd still love to know their thinking behind putting git into a directory that SIP deliberately makes hard to update! git is additional software, not even part of their base operating system; my understanding was that SIP was meant to prevent people from tampering with the underlying system software and installing rootkits.
git (and ssh, for that matter) isn't going to cause rootkits by itself - and all this does is force people to use Homebrew to install versions that aren't protected by SIP!
I think the idea is that they protect everything which they ship, so you can add other things but not replace Apple-provided components.
From a sysadmin's perspective this makes a lot of sense: beyond malware, I've seen security and stability problems caused by installers from large companies, developers cowboying up with "sudo make install", etc. But it definitely puts the onus on Apple to ship updates promptly.
That's the issue I have - not so much the immutable part of the filesystem (though I find that bizarre and flawed), but that Apple don't ship updates often or fast enough.
I basically think that if Apple want to lock down their ecosystem and prevent folks from updating their own software, then they have a duty to provide timely updates that address security bugs. Currently they don't seem to be doing that.
:-) all good - I'm more curious to know how Apple updates their own operating system. Evidently it's possible to modify these protected files; I'm now curious how they do it.
We use a Mac server as a build machine. It's pretty annoying, given that you need to log in graphically every now and then to accept a license in order to use lldb (a non-graphical debugger), among other things.
In our case I think rebooting the server works, but you can't be aware of all usecases.
You are missing the wider point, I fear. You seem so focused on git that you miss that there are other system utilities installed in /usr that you might need to patch.
And if I want to troubleshoot something odd, it would be nice to be able to hook in dtrace without rebooting the server and flipping a switch to disable the "feature" just to troubleshoot my server's issue.
No, I'm just bored of faux-outrage and agenda pushing.
If this is such a critical issue, then reboot your Mac and disable SIP. 5 minutes and done. In the time it's taken you to post all your comments here, you could have fixed it.
[edited to add]
It's not like SIP was a secret - it was one of the major features of El Capitan. Didn't you do any research before upgrading your mission critical server to El Capitan?
You're reading in what is not there. I know about SIP already, but I've always been surprised by it, as it seems pretty flawed to me - it turns out that if you can get root access, you can easily bypass the "protection" with a small utility that loads a kernel extension.
Incidentally, calm down a bit - you sound pretty outraged yourself!
You might want to address the dtrace issue though - let's say you didn't want to disable the protection that SIP provides in making the /usr filesystem immutable. How do you then run dtrace on system utilities when troubleshooting?
Genuinely curious how you answer that.
Edit to ask another question, as you seem to have the answers here: why does Apple install git in a directory that is under the control of System Integrity Protection? Why not under /usr/local? It's not exactly a "system utility" - it's a DVCS and not in any way critical to the running of the system. Hell, I'd not even consider it system software.
And how does Apple do this? The last time I installed the Xcode command line tools, I don't recall having to reboot my system, so it looks like Apple do indeed have an update mechanism to overwrite the files. In which case it is one exploit away from disabling the file immutability protections afforded by SIP...
From their readme:
"P.S.: 10.11.4 update removed csr_set_allow_all() function used to enable/disable SIP. It means this code does not work on El Capitan 10.11.4 or newer versions when released."
Also even when it did work it needed you to get a Kernel Extension signing certificate from Apple - which they could (probably) revoke pretty easily when they saw it being misused.
Ah, bummer. Thanks for this info, good to know :-)
Of course, there is a utility that sets the flag in recovery mode; unless there is a specially built kernel that only exists in that mode (I'm no OS X expert - there could be), there must be some way of bypassing the protections. If you can still load a kernel extension, then it occurs to me that you can still bypass it.
> Edit to ask another question, as you seem to have the answers here: why does Apple install git in a directory that is under the control of System Integrity Protection? Why not under /usr/local? It's not exactly a "system utility" - it's a DVCS and not in any way critical to the running of the system. Hell, I'd not even consider it system software.
I can imagine git might be necessary for applying some updates (on FreeBSD svn is a critical part of the base system, because one way to update is by svn updating the source and rebuilding).
In that case, git is a vector through which SIP can be bypassed. But it's not used to apply updates, as it's part of the Xcode command line utilities and not the core system.
I'm more amused than upset. You seem rather steamed up there though... Now how about answering my actual question. What if I don't want to turn off SIP but I want to use dtruss to troubleshoot my system?
If Apple truly doesn't want me to use dtruss, then why do they bundle it?
You can use dtruss, you just can't use it on certain binaries. A friend said that you can disable certain parts of SIP to enable functionality like this but I haven't tried myself.
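If memory serves, the partial disable being alluded to is csrutil's --without flag; from the Recovery Terminal (the exact flag names may have shifted between builds):

    csrutil enable --without dtrace
    reboot

That keeps the filesystem protections but lets dtrace/dtruss attach to protected binaries again.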
But that merely enumerates pathnames that contain the word "git", not programs that might still invoke the vulnerable version of git directly via "/usr/bin/git".
No... tadfisher's point was that even if a fixed version of git is installed, some other program -- say, a wrapper script for git -- might still invoke OS X's vulnerable version if it directly references "/usr/bin/git". So, such a system could become compromised if that wrapper script was used to access an untrustworthy repo.
All your invocation of find does is enumerate every file (or directory) under /usr named "git" and execute it with the -v option. In addition to dumping a lot of error messages, all that would do is eventually run "/usr/bin/git -v" and inform you that yep, your system still has the vulnerable version of git installed.
In other words, tadfisher's point, which I now wish had been made explicitly, is that simply installing the fixed version of git is possibly insufficient to secure your system. You also ought to either disable /usr/bin/git or convince yourself that no program will invoke it. Disabling /usr/bin/git is probably easier.
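If you'd rather convince yourself, a crude sweep for hard-coded callers might look like this - the directories searched are just examples, and this obviously won't catch compiled programs:

    grep -rl "/usr/bin/git" /usr/local/bin "$HOME/bin" 2>/dev/null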
So... to try and get an answer to the unanswered question in the blogpost. The files are probably hardlinks to a single "superbinary" image somewhere. The actual code is probably checking the value of argv[0] to dispatch to the actual git code (or whatever command name was invoked). The question is, why is Apple doing this? Is it some sort of diskspace saving scheme (to avoid duplicating common boilerplate code)? I suspect some part of this answer includes the words "for security" but it's already been proven to be BS at this point.
Nah, the binary doesn't actually contain git. It's just a stub (in /usr/bin) that locates Xcode on the system (/Applications/Xcode.app by default, but configurable) and execs the real git binary from there. See also the xcode-select and xcrun manpages mentioned elsewhere in the thread.
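You can verify this without any disassembly; the stub links against the xcselect library rather than containing git:

    otool -L /usr/bin/git
    # expect /usr/lib/libxcselect.dylib among the listed dependencies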
OK great. So now the question is why? I've seen this "multiplexed binary" scheme implemented elsewhere in more constrained circumstances but it doesn't really make sense to me for a desktop OS to be doing it.
Based on some Googling and my memory... the history of toolchain installation on OS X is a big mess.
In the old days, pre-2011, you would manually download the Xcode installer from Apple's site (after a free registration), and it would install stuff into both /Developer and /usr. IIRC /usr/bin/gcc was good enough for native compilation, but for iOS you would have to locate the compiler under /Developer/Platforms/something-or-other. (Today, with LLVM's ability to target multiple architectures in one compiler, the toolchain binaries are the same and only the sysroot depends on target.)
Then in 2011, Xcode was moved to the then-new Mac App Store, but the app you got on the store was just an installer (blatantly ignoring the App Store rules, and later sandboxing restrictions, that apply to everyone else), and the installation path was the same. Oh, and the store version was initially $4.99, which pissed everyone off until it was made free in the next minor update.
In 2012, Xcode was overhauled so that /Applications/Xcode.app was directly installed by the App Store and the toolchain was located inside the app bundle. This was a big improvement in part because it made the App Store's delta app updates work properly for Xcode - previously you had to keep the installer app around so it could delta update that, which doubled disk cost, and then you still had to run the installer and wait for it to re-copy the whole IDE and toolchain. It also made it easier to have multiple copies of Xcode installed side by side, and uninstallation was now a matter of dragging the app to the trash, like any other. But if you wanted standard Unix builds to work (as opposed to building within Xcode or manually specifying the compiler path), you now had to download the separate "Command Line Tools Package", which could be done either from within Xcode or directly from Apple's developer site (the latter welcomed as it saved bandwidth and disk for anyone who didn't want to download the several-GB Xcode package), and it would install a duplicate toolchain to /usr.
Finally, in 2013, the shim binaries in /usr/bin were introduced; this system has lasted to the present and has a few advantages:
- The Command Line Tools can now be mostly segregated into /Library/Developer/CommandLineTools rather than getting mixed up in /usr with the rest of the OS - however, the installer still dumps headers into /usr/include.
- If the user has the full Xcode installed, it isn't necessary to install the Command Line Tools, as the shims will just execute the binary from /Applications/Xcode.app (or wherever Xcode is installed - you can switch globally or with an environment variable; see the sketch after this list). That is, unless you need /usr/include to exist.
- Of course, installing outside of /usr/bin made it easier to introduce SIP.
- Convenience: the shims are shipped with the base OS, so if you try to run a developer command and neither Xcode nor the Command Line Tools are installed, rather than 'command not found', you get a nice GUI dialog that downloads and installs the latter in one click. (Well, "nice"; if you have a configure script that probes for the compiler by trying to run it, but doesn't depend on it, and you don't want to install it, you now have to deal with a GUI dialog popping up every time you run configure. At least, this is my experience with the similar shim that exists for javac. This should be improved.)
I suppose the advantages of the first three points could mostly have also been accomplished if Apple built some system into /etc/profile or whatever to automatically add the relevant toolchain to the PATH. Why Apple decided to go with shims instead I can only guess at, but all in all it's a decent system, and it's nice that each toolchain is now mostly self-contained in one directory so they're easy to manage. Compared to most Linux distros, where compilers are just distro packages but there's no easy way to install a package to an alternate path or have multiple versions installed simultaneously (unless you build from source), I'd say it's an improvement.
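For reference, the switching mentioned in the list above works roughly like this (the beta path in the last line is purely illustrative):

    xcode-select -p          # show the active developer directory
    sudo xcode-select -s /Applications/Xcode.app/Contents/Developer   # switch globally
    DEVELOPER_DIR=/Applications/Xcode-beta.app/Contents/Developer git --version   # per-invocation override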
If you execute csrutil disable && reboot, you will be able to disable System Integrity protection and do what you want including fixing /usr/bin/git. What is the problem?
The problem is that System Integrity Protection was put in place for my benefit and marketed to me as a feature "designed to help prevent potentially malicious software from modifying protected files and folders on [my] Mac". Now that I have it and have paid money for it (or for the Apple hardware it runs on), I find out that it is making me less secure by preventing me from removing a software component with a remote code execution vulnerability - and that the only way around that is to disable the feature from Recovery, which updates my computer's NVRAM.
Yes. (a) You are not a typical user; (b) disabling SIP takes five minutes plus whatever productivity loss is caused by rebooting; (c) it isn't actually necessary to disable SIP to make git inaccessible, as described later in the post; (d) even if it were, you would get most of the protection by just installing your own git in a different location and changing PATH; and (e) the vulnerability in question is incredibly minor anyway, considering the percentage of the time that most people follow checking out a repository by intentionally running arbitrary code from it, which (f) would at least partially justify Apple not backporting the patch, but then again, nobody in this thread has even verified that they haven't.
It is not necessary to run anything except "git clone". An attacker can construct a repository such that merely attempting to clone it would execute arbitrary code. This is why the vulnerability was given a CVSS base severity score of 9.8 (out of a possible 10).
I know, but Git is mostly used for software development, and people mostly clone unfamiliar Git repositories with the intent to build and/or run the included software, mostly without performing a full manual inspection of the code beforehand; even if they only build, most build systems allow specifying arbitrary commands to run. Thus, in the common case, exploiting the vulnerability gives the author of a malicious repository only the power they would have gotten shortly afterward anyway. Of course, there are exceptions to all of those "most"s (especially the first one, I think), but my conclusion is still that the overall danger of that particular vulnerability is pretty minor.
Sometimes I think that Apple are increasingly trying to lock down OS X to prevent anything from being installed outside of its own walled garden.
The poster's point, as you haven't understood it, is that by preventing updates of utilities like the system git, vulnerabilities remain available on the system. This makes the system less secure, and the only way to fix the security issue is to disable the security feature that is preventing the security vulnerability from being fixed.
In other words - by making system files immutable even to root, it's not exactly making the system any more secure.
> The poster's point, as you haven't understood it, is that by preventing updates of utilities like the system git, vulnerabilities remain available on the system. This makes the system less secure, and the only way to fix the security issue is to disable the security feature that is preventing the security vulnerability from being fixed.
I understood the point perfectly well, and that's why I think it's a bit overblown. This security feature is precisely designed to prevent you from modifying your system and encourage you to defer that to Apple. It should be obvious that such a feature will also prevent you from fixing things yourself, which Apple either hasn't gotten around to fixing or refuses to. But since they give you some way of disabling it, you just do that and fix it yourself (and presumably SIP will then protect your fixed version of git?)
I believe that toggling a boot-arg only disabled SIP on early developer betas of OS X 10.11. For shipping releases of El Capitan you need to use csrutil from the recovery partition as a parent comment mentioned.
Debian 'stable' releases mainly target people who like fixed upgrade cycles. No feature/API/etc. changes for the lifetime of a release, only security patches. Means that you can have your servers routinely applying updates with relatively little worry that something is going to stop working because of a change. Even minor feature changes can break things, especially when run in scripts, so they prefer not to risk non-security updates, even small ones. Then you upgrade to the next release at a time scheduled to debug any problems that come up. I believe RHEL releases work somewhat similarly.
For my developer machine I prefer rolling updates, so I run Debian 'testing' on that one, which is basically a snapshot of what is going to be the next stable release, with daily updates (there are also distros like Arch that only do rolling release).
"testing" is probably the worst choice. It's auto-generated from unstable with a 10 (iirc) day delay if no critical bugs are open. However, if a critical bug is fixed in unstable and another one found that affects both unstable and testing, then the release fixing the first bug won't migrate to testing, so you'll have an even more broken version for a longer time. Testing also receives no security updates. Unstable doesn't get security updates either (in the sense that the security team doesn't provide updates to unstable), but it usually gets fixed versions pretty quickly, and certainly before testing.
"unstable" with apt-listbugs installed works quite well for me. Sometimes I have to boot grml to roll back packages (check out the grml-rescueboot package), but that's very rare.
If you're on Debian and want to try running testing/unstable, you probably want to install "apt-listbugs" -- it will warn you about open bugs in software you're installing/updating. Another option is to run Debian stable and keep a testing and/or unstable chroot, managed via schroot: https://wiki.debian.org/Schroot (that page and the manpages could use an update, but a schroot backed by LVM and set up to mount /home works well enough for running X11 apps (the MIT auth cookie is in $HOME)).
I would recommend trying to keep things separate like that (or via something like docker/kvm/vagrant/etc) -- rather than trying to mix'n'match. You'll likely be the only one trying a particular combination of versions, and it's unlikely to be much fun.
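For anyone who hasn't used schroot: a minimal entry in /etc/schroot/schroot.conf looks something like this (the chroot name, path, and user are illustrative):

    [sid]
    description=Debian unstable chroot
    type=directory
    directory=/srv/chroot/sid
    users=alice

After that, "schroot -c sid" drops you into the chroot as your own user (assuming you've debootstrapped unstable into that directory first).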
Generally if you want to use only debian packages, the answer is "stop wanting that".
But, you can get a little closer if you run debian Stable and also include apt sources from testing and/or unstable, and do some clever things in /etc/apt/preferences to pin package priorities. It can get messy fast, though.
You can override per-package, yes, if you really want to, and you can pin specific versions. As others have said, in practice a newer version of one package often depends on a newer version of another (the system will generally stop you rather than blindly install incompatible things), and obviously you'll miss out on the testing of an integrated system that's the selling point of Debian stable, but you can do it.
Not completely, but there are ways to achieve this partially. You can add multiple sources (say, stable and unstable) and set the priorities so that by default, stable is preferred. Then you can choose to install certain packages from unstable. However, since these frequently depend on newer libraries (e.g. libc6), it's hard to pull this off without upgrading most fundamental libraries to the version from unstable. In that case, you might just as well run unstable.
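A sketch of that setup in /etc/apt/preferences, with illustrative priorities (anything above the default 500 keeps stable preferred; a low priority for unstable means it's only used on request):

    Package: *
    Pin: release a=stable
    Pin-Priority: 700

    Package: *
    Pin: release a=unstable
    Pin-Priority: 200

Then "apt-get install -t unstable somepackage" pulls just that package (plus whatever dependencies it drags along) from unstable.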
The problem with this concept of "stability" is that it depends on a particular software development philosophy that isn't shared by all projects.
It works well if developers make sure that bugs in older versions of their software are fixed even after the next version is released. I think that is the case with a lot of infrastructure sort of software.
But if developers do not maintain old versions and fix bugs only by releasing a newer version of their software, then debian's approach leads to stability only in the sense of a reliably buggy system.
Debian's system does not require developers to maintain older versions, only to clearly indicate security fixes. Debian patches its own older versions to include the security fixes.
> I understand that, but it means that debian inevitably distributes buggy software in a supposedly 'stable' distribution.
Bugs are a matter of perspective. If alleged bugfixes actually make you modify your currently working setup, then I don't consider that much of an actual bugfix - just something that makes me do work for no real benefit.
I like Debian stable. Two years is an entirely reasonable amount of time to be able to have most software in my OS immutable except for security fixes. For the tiny amount of software for which I may want the bleeding edge, there are language-specific "package" "managers" (lol npm) or I can just backport the software myself.
Many people are unhappy about living with old bugs or missing features for years, even if you are not. And as a result we get a proliferation of many different update mechanisms on the same system. Some of them interactive and unscriptable. Some less than secure. This is not an ideal situation by any stretch.
It's not a huge problem either as long as Linux is used almost exclusively by professionals and mostly on servers.
You know what people really hate? Change. Ask around how many people like it when Facebook changes its UI. Most don't. Sure, if there's a bug people hate living with the bug, but people really hate change even more.
I agree with you that this is a very strong sentiment. People don't want everything to change all the time underneath them, especially not the UI.
But freezing everything for years puts too many people in a situation where they just have to upgrade for one reason or another. It's not always their choice and it's rarely a desire for change that makes them do it.
No, as long as "dpkg -l git" shows that you have package version 1:2.1.4-2.1+deb8u2 or later. Debian backported the fixes:
    git (1:2.1.4-2.1+deb8u2) jessie-security; urgency=high
      * Non-maintainer upload by the Security Team.
      * Fix remote code execution via buffer overflows
        (CVE-2016-2315, CVE-2016-2324) (Closes: #818318)
     -- Salvatore Bonaccorso <carnil@debian.org>  Fri, 18 Mar 2016 06:20:38 +0100
The Debian Changelog (where you find this stuff out) is linked from the package page at https://packages.debian.org/jessie/git (on the right hand side under Debian Resources) or you can look at /usr/share/doc/git/changelog.Debian.gz on your system.
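So, concretely, checking a jessie box comes down to something like this (version string per the changelog entry above):

    dpkg -l git    # patched if it shows 1:2.1.4-2.1+deb8u2 or later
    zgrep CVE-2016-2324 /usr/share/doc/git/changelog.Debian.gz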
Apple clearly has some frequency of updates (10.11.x updates, security updates, etc.) so the fact that they’re a corporation or using closed-source, etc. is not necessarily relevant. If they want to patch something tomorrow, they can.
Since they haven’t changed this package, there may be another reason besides security. For example, some important group at Apple or a big customer may have created a dependency on "git" functionality, and they want to carefully test any change on a large scale before proceeding. Just because it isn’t wise for important things to depend on fragile environments doesn’t mean they can ignore those environments when making changes.
The problem with simple versions is that an update seems to be all or nothing: you can’t easily fix a small security hole when starting from a few versions ago because you have to consider anything else that changed. Ideally, systems are designed in enough layers that small updates really are practical without affecting other features.
Then you will get caught out by some GUI that uses /usr/bin/git. Be aware that changes to PATH in your shell startup files do not affect graphical applications at all.
Just installing git from Homebrew or MacPorts is not enough to be safe from this remote code execution.
Yosemite and El Capitan prioritize /usr/local/bin over /usr/bin, so if you install the latest version of git via Homebrew, you're mostly okay. If you're using Mavericks or older, it is as described above.
That said, any program that invokes /usr/bin/git directly instead of /usr/bin/env git would still be vulnerable.
It's not as big of a problem as the article makes it out to be so long as you install a newer version of git.
If you're really concerned, the filesystem restrictions mentioned can be bypassed by booting into safe mode. Though it's still not a good idea to mess with the default program installations since Apple may depend on that particular version of git for some program.
The article made it look like you were basically fucked. I stopped using guis for git about a week into starting using git. Just avoid guis, use homebrew, and you'll avoid this problem.
Yes, that is why you set the PATH variable in your .bash_profile so that "which git" points not to /usr/bin/git but to /usr/local/bin/git. You might also want to sudo chmod a-x /usr/bin/git.
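Something like this in ~/.bash_profile does it, assuming Homebrew's default /usr/local prefix:

    export PATH="/usr/local/bin:$PATH"
    hash -r       # drop bash's cached command lookups in the current shell
    which git     # should now print /usr/local/bin/git

(Note that the chmod will fail under SIP unless you've disabled it first, as discussed elsewhere in the thread.)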
It's a shame that Git uses GPLv2; this would not be permitted under v3. Unless, I suppose, Apple does actually provide a means of replacing these programs.
Of course, the overall issue of not being able to modify the software on your own system still holds.
Please correct me if I'm misunderstanding, as I'm not familiar with the system.
GPLv3[0] introduced protections against Tivoization:
--
"“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM)."
You can install your own version of Git, yes. But Apple, as a distributor, cannot prevent you from installing a replacement for the software that it distributes. If, wherever they provide the source code to Git, they also provide a method to replace the currently installed software, then that wouldn't be a problem.
See, this I don't understand. We have open-source distributions that provide massive numbers of packages, for free, on a continual basis, with nowhere near Apple's profits. Yet Apple, who have a lot of users and must patch severe security flaws, don't have a reasonable way of updating system software.
Instead, we get updates every so often, and these security updates don't even come close to fixing all the security bugs in the software on my OS X system.
It's really not very good. Vendors such as Apple and Microsoft are far, far too slow in releasing updates.
OK, the Xcode 7.2 security notes mention Git vulnerability CVE-2015-7082 (older than the ones in question), and the Xcode 7.3 notes don't mention any fixed vulnerabilities in Git, so I guess it's vulnerable and NOT patched.
Running Ubuntu in VMware and doing all the critical stuff there is also a good solution, as you still get perfect hardware support through the underlying OS X layer.
On git-scm.com, it says the latest build for OS X is 2.8.1. However, if you actually go to https://git-scm.com/download/mac to try to download it, they give you the 2.6.4 installer.
1. OS X ships with a "git" command in /usr/bin that merely looks for the real "git" command inside Xcode or somewhere else and executes it.
2. The vulnerability is inside the real "git" (shipped with Xcode/the Command Line Tools) that Apple apparently cannot be bothered to update.
3. The author complains about not being able to make /usr/bin/git non-executable because of SIP.
Why not just make the real "git" command non-executable and be done with it?
And since /usr/bin/git apparently just delegates, the git vulnerability at least won't endanger most users, since they don't have the real (old, vulnerable, thanks Apple) git installed.
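Locating the real binary to neuter is straightforward, for what it's worth:

    # Ask the toolchain machinery where the real git lives
    xcrun --find git
    # typically prints /Applications/Xcode.app/Contents/Developer/usr/bin/git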
That's a somewhat more complicated issue: they're shipping the last version of bash using GPL v2. Upgrading would be a significant legal question and I'd be surprised if the long run outcome wasn't removing bash.
Minor nit: this post mentions v2.7.1 as safe from these vulnerabilities, but it is not. v2.7.4 is the first safe version (this mistake was inherited from the bogus oss-security post that is linked in the article).
So ... has anybody actually checked to see if Apple has patched the vulnerability in git without bumping the version number, like pretty much every linux distribution does for their stable releases?
You're joking, right? When I use software and I look at the version number, I like to know that it is what it says it is. If they are going to patch the vulnerability, then I want to know if they have done so.
Besides which, this whole question is missing the point somewhat. There is no easy way of updating the system supplied tools like git, even if you wanted to. The latest version of git is v2.8, and I think it would be grand if we could use a version less than 6 months old!
Hell, the same goes with any other system software.
> The latest version of git is v2.8, and I think it would be grand if we could use a version less than 6 months old!
Exactly. Apple are update nazis themselves (support for old versions is being dropped quickly and users are forced to update), so they should at least apply the same logic to system tools.
Inability to manually update the copy of a tool located in /usr is par for the course with most Linux distributions - in that case the kernel won't stop you from replacing the files (unless your distro has the root FS mounted RO), but the package manager will typically replace them right back the next time there's a minor update (same for OS X updates pre-SIP), and it's easy to accidentally break things. Instead, the recommended method is to generally install non-distro software in /usr/local or elsewhere and put the location on your PATH. Which, of course, works just as well on OS X, and it's what Homebrew and MacPorts do. I don't see the problem.
Perhaps they should. But then again, a large percentage of Linux users use distros that lag even further behind (albeit arguably with a better security fix backporting process)... stability is valuable too.
I'm not sure about this. Anyone who wants to use a decent Linux distro will get fairly frequent updates - Fedora, Debian, SuSE, Ubuntu and even Slackware get frequent updates and Fedora, Debian and SuSE are well known for backporting security fixes.
Apple are unfortunately known to be tardy in releasing security fixes. This whole point might be mitigated if they were more responsive, but they aren't. Your average Linux distro is far faster with updates, even on LTS releases, than Apple...
Windows is still targeted far more than Mac, it could be argued that the lack of impetus originates from this phenomenon.
I'm worried about when (not "if") this changes; people are snapping up Macs all around me, and thus the platform won't be protected from worms by "hacker disinterest" forever.
Windows has UAC that does something similar, and the Windows\System32 directory can't be written to by an ordinary user. The malware I see doesn't install in this directory, but in Program Files - and they can't really lock this down.
The way that folks get themselves into a mess and get malware installed is largely via programs with network access. Locking down the /usr directory isn't going to prevent this sort of thing from occurring - what will stop it is not allowing users to run as admins by default, which OS X is doing already.
At which point, the way malware will get installed is via software vulnerabilities, in things like git. It won't be occurring because Apple stopped me from turning off the execute flag on potentially vulnerable programs like the git that they install by default.
> the Windows\System32 directory can't be written to by an ordinary user.
You need to take ownership from TrustedInstaller, so administrators only: yes (you'll need to acquire a UAC token, but the GUI prompts you automatically). You'll also need to turn off installation integrity checking to prevent Windows from replacing the file. All documented. It isn't obfuscated in any way whatsoever, and I figured it out the first time in 2 minutes with no Google.
Security through obfuscation is known to be a broken concept.
Yep, I know. What you are basically saying is that you need to turn off some settings and security mechanisms to bypass the integrity of the system. Which is precisely what most folks are doing on OS X - except they have to reboot their system, which I personally find insane. But given this, I don't see the point you are making.
Isn't this security through obfuscation precisely what Apple are doing right now?
It seems Apple doesn't care much about its software. They would probably save billions. And we would probably see some nice contributions from Apple. And it would create a lot of goodwill for them.
Because Apple have purposefully made it more difficult than it should be for developers to find and disable the vulnerable versions.....and thus far, not released a patch.
It's really a moot point, as this was AND still is a common problem no matter the OS. I fail to see any release of any OS that's secure out of the box.
I clicked on the link expecting to read about a vulnerability. This is actually about blasting OS X for not having updated versions of software.
Software updates are super important, of course, and Apple should be better at pushing the latest updates, but I wish the title reflected that.
> This is actually about blasting OS X for not having updated versions of software.
No it's not, it's blasting OS X for shipping software with a known remote execution vulnerability, and not allowing the user to easily upgrade that software themselves due to new OS-wide security policies.
>and not allowing the user to easily upgrade that software themselves due to new OS-wide security policies.
But the author didn't try that. They merely speculated:
> upgrading over top of that will almost certainly screw something up later.
I upgraded the Subversion that shipped with Xcode 5 on OS X 10.9 (both old, I know) without any problems, simply by manually replacing the files in /Applications/Xcode.app/Contents/Developer/usr/bin.
I disagree that it is clickbait. There is a schema people use for announcing security vulnerabilities ("<issue> in <software> version <number> through <bigger number>"), and the RCE issue was recent, so to those familiar it was clear that it was exposition rather than disclosure.