Previously, I used Nix on Mac (as this blog post suggests) but I had enough “gotchas” with Nix on Mac that I decided to go full NixOS.
For those who ask why a VM or why I keep Mac around at all: I like macOS for everything else besides dev work. I use iMessages heavily, I like the Mac App ecosystem such as calendars, mail clients, etc. This gives me the best of both worlds.
I usually run this on an iMac Pro but also have a MacBook Pro. It runs great on either. It’s also really nice with Nix to update one machine and the other is just one command away from being equivalent.
I recognize this is a “weird” setup but wanted to point it out since it seems relevant to this post.
If you look at ChromeOS, it's actually relatively mainstream.
Most of us need a locked-down desktop environment more than we realize. The UNIX/POSIX shell environment works really well for our specific kind of work (writing software), but it's full of security holes and gotchas. A lot of people talk about Apple as wanting to "own" everything, but securing consumer OS environments is clearly trending in this direction, starting with research examples like Qubes and continuing through the mainstream security efforts in Windows and ChromeOS.
Working with a VM that you can torch and rebuild is ultimately the best of both worlds.
What would be really cool is a Mac OS app for managing Vagrant... Basically a competitor for Docker Desktop for Mac that used VMs instead of Docker.
> A lot of people talk about Apple as wanting to "own" everything, but securing consumer OS environments is clearly trending in this direction, starting with research examples like Qubes and continuing through the mainstream security efforts in Windows and ChromeOS.
This is because Google and Microsoft want to own everything, as well. It's not a coincidence that Apple, Google and Microsoft's security implementations enshrine each of them as ultimate gatekeeper and single source of truth for security on their respective operating systems.
It is interesting that you bring up Qubes OS, because its security model doesn't depend on Invisible Things Lab, the Qubes developer[1], deciding what can or can't run on Qubes OS.
In fact, if a security model depended on a company like Red Hat authorizing what can or can't run on Linux, it would rightly be criticized as Red Hat trying to "own" everything instead of developing a secure system at the OS level. Yet Apple, Google and Microsoft are doing just that.
I think we are really conflating "app store distribution" and "the POSIX userland isn't workable in a modern security environment for end-users". POSIX apps run as the user, when they really need to be run with their own identity. Most often we should use the identity of the "developer".
Do I want Firefox to have access to my Contacts? My Photos? Do I want Facebook to have that access? To all of them, or just ones that I select? Should my disk utility have the ability to send data over the network? My IDE? My IDE plugins?
This has nothing to do with "stores", and everything to do with the security model and the UI needed to make it work. E.g., when I run `grep -e "Something" -r ~/src`, I assume grep is going to read all of the files in ~/src. But in the basic POSIX environment it can access anything I can (including, say, ~/.ssh), open any old socket it wants, and send data wherever it chooses. Have you read every line of code of every package you've installed? Rather than relying on 'trust', 2021 requires a security model built not on trust but on explicit permissions and grants, with a workable UI for defining them. Wouldn't it be nice if bash arranged a read-only capability for the grep process covering all files under ~/src, based on the above command, and if the grep executable itself had no inherent permissions at all?
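Nothing arranges that grant for you today, but you can approximate it by hand with a sandboxing wrapper. A rough sketch using bubblewrap (bwrap), assuming a merged-/usr distro; the paths are illustrative:

    # grep gets /usr and ~/src read-only and nothing else:
    # no ~/.ssh, and no network thanks to --unshare-all.
    bwrap \
      --ro-bind /usr /usr \
      --symlink usr/bin /bin \
      --symlink usr/lib /lib \
      --symlink usr/lib64 /lib64 \
      --ro-bind "$HOME/src" "$HOME/src" \
      --unshare-all \
      --die-with-parent \
      grep -r -e "Something" "$HOME/src"

The point is that the grant has to be assembled manually; in the model described above, the shell would do this automatically from the command line itself.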
> Do I want Firefox to have access to my Contacts? My Photos? Do I want Facebook to have that access? To all of them, or just ones that I select? Should my disk utility have the ability to send data over the network? My IDE? My IDE plugins?
Modern security on Linux is hardly dependent on the POSIX security model.
Linux just got the Landlock feature[1] in the kernel[2] that does just this. Along with Landlock, features like cgroups, seccomp, AppArmor, Firejail and other sandboxing features go far beyond the traditional POSIX security model. Even parts of the ChromeOS security model are based on modern Linux sandboxing features, and don't rely on POSIX security.
To address your Firefox example, I use these features to sandbox Firefox so that it runs in a container, and limit its ability to run code or make certain syscalls, and it's only allowed to touch its profile and a dedicated Downloads folder. D-Bus can't even reach it.
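For a flavor of it, a firejail invocation in that spirit (flags and paths are illustrative, not my exact profile):

    # Firefox sees only its profile and Downloads; syscalls are filtered,
    # nothing in Downloads is executable, and D-Bus is cut off entirely.
    firejail \
      --whitelist=~/.mozilla \
      --whitelist=~/Downloads \
      --seccomp \
      --noexec=~/Downloads \
      --dbus-user=none --dbus-system=none \
      firefox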
> This has nothing to do with "stores"
I didn't say anything about stores. It has everything to do with Apple, Microsoft and (slowly) Google forgoing OS-level security for centralized control of what can and cannot run on their operating systems. This is not just enforced with "stores", but with certificate systems like Apple's Notarization and Gatekeeper programs and Microsoft's certificate program.
Instead of relying on the system to provide security, we're relying on Apple, Google and Microsoft to decide what we can run, or can't run, even though Apple is responsible for distributing 500 million copies of XcodeGhost[3], and Google's Play Store is the #1 vector for malware on Android[4].
> Instead of relying on the system to provide security, we're relying on Apple, Google and Microsoft to decide what we can run, or can't run, even though Apple is responsible for distributing 500 million copies of XcodeGhost[3], and Google's Play Store is the #1 vector for malware on Android[4].
The security stuff is a huge red herring.
The security model is really about limiting the ability of the biggest players in IP (Disney, Netflix, Nintendo, Tencent) to self-publish and reach consumers directly. 90% of App Store revenue is games. Most of the remaining revenue goes to subscription entertainment services. Security as engineers talk about it basically never affects the bottom line.
> Instead of relying on the system to provide security, we're relying on Apple, Google and Microsoft to decide what we can run, or can't run
This is highly misleading. System-level hardening is complementary to cryptographic trust, not a replacement for it. Hardening features do their best to prevent arbitrary malicious processes from controlling the system, and the certificate systems make it possible to revoke the ability of specific known-malicious authors to distribute software. In macOS, you can look at the entitlements system, sandboxing, and system integrity protection for examples of hardening.
Not to mention, at least on macOS and Windows, all the certificate verification features can be turned off.
> System-level hardening is complementary to cryptographic trust, not a replacement for it. Hardening features do their best to prevent arbitrary malicious processes from controlling the system, and the certificate systems make it possible to revoke the ability of specific known-malicious authors to distribute software. In macOS, you can look at the entitlements system, sandboxing, and system integrity protection for examples of hardening.
I agree. However, neither of these operating systems gives the user the level of control needed to address these concerns:
> Do I want Firefox to have access to my Contacts? My Photos? Do I want Facebook to have that access? To all of them, or just ones that I select? Should my disk utility have the ability to send data over the network? My IDE? My IDE plugins?
A system that implemented security that empowers users would allow you to say "I want my Email App to only access certain Contacts or certain Photos", allow you to supply dummy data to apps and give you better granularity.
Instead, what we get from Google is a system whose security is convenient for advertisers first, and not users, so it doesn't offer that level of empowerment even though some AOSP forks do. You either give it all of your contacts or pictures or none of them. Making a secure system that also empowers users would mean that they and their partners can't collect user data. We're shown scary warnings about installing apps outside of the Play Store and are assured that Play Protect means that everything on the Play Store is secured.
What we get from Apple is a security model that is convenient to their bottom line, so instead of addressing that user's needs, users are told they don't have to worry about their Email App doing anything bad with their contacts or photos because Apple vets apps and its developers. Like on Android, you either give the app all of your contacts or none. Making a secure system that also empowers users goes against their marketing that insists that their App Store and certificate racket are the only things keeping users private and safe.
> Instead, what we get from Google is a system whose security is convenient for advertisers first, and not users, so it doesn't offer that level of empowerment even though some AOSP forks do. You either give it all of your contacts or pictures or none of them. Making a secure system that also empowers users would mean that they and their partners can't collect user data. We're shown scary warnings about installing apps outside of the Play Store and are assured that Play Protect means that everything on the Play Store is secured.
This literally hasn't been true since Android 10, and it has been hardened even further in Android 11. Now apps ONLY see the pictures you, as the user, allow them to see. The same goes for file access, location access and plenty of other security model changes in the last two Android versions.
So your rant about how evil Google caters only to advertisers becomes weird when they have literally implemented the change you want.
iOS already lets you choose with granularity which photos to allow an app access to as of iOS 14. Both macOS and iOS do not allow any app to have access to photos, contacts, camera, etc, without prompting the user first. But there is a trade off between usability and security, and Apple is pushing that boundary towards security and privacy about as aggressively as any vendor is. People have been complaining about all the new entitlements pop-ups in macOS and saying they shouldn’t exist! But they are providing real benefits to users and increased safety.
Of course they should be doing more, but they are constantly improving, and the same can be said of any organization. Where are the Linux distros that do this stuff out of the box and don’t involve configuring these tools with the command line? SELinux and all of those other tools you mention aren’t helping users if users are forced to become sysadmins to benefit.
> iOS already lets you choose with granularity which photos to allow an app access to as of iOS 14.
How does this work? When an iOS app asks for permission to view photos, the user is asked which folders to grant it access to? How does the user grant one-time access?
> Both macOS and iOS do not allow any app to have access to photos, contacts, camera, etc, without prompting the user first. But there is a trade off between usability and security, and Apple is pushing that boundary towards security and privacy about as aggressively as any vendor is. People have been complaining about all the new entitlements pop-ups in macOS and saying they shouldn’t exist!
I think security prompts, viewed as interruptions, are a consequence of the Apple-owns-everything model. If I want to share a single photo with a friend using my messaging app, I don't really want to grant permission to view photos to the app. The fact that I have to is frustrating.
Instead, a step backwards towards the desktop metaphor, where the user owns the files, is a better place to step forward from: the user owns the files, and the apps are potentially hostile tools which the user sometimes requires. The app should ask the operating system for a photo, and the operating system should allow me to select the photo I want. The app then gets read access to this file right now. The app could certainly do something malicious with the read access it has received, but at least if it doesn't do anything malicious with it now, it can't do it later.
The limitation is that each app wouldn't be able to design its own photo selector.
Some apps would of course legitimately want a broader access than that, e.g. a backup app, but the broader requirement shouldn't determine the limited requirement.
The same for location, for instance: it would be a resource that the app requires and the user chooses to give the current location or the spoofed location or cancel the action. (An advanced option that could be available for that would be "automatically fulfil the request in the same way for the next day/week/month/till the next upgrade/forever".)
The cognitive load would be much less. A user doesn't need to evaluate how trustworthy the app is in general. The user doesn't need to think, "It will be much easier for me to give SMS access to this app for setup, but I don't want it to have general access, so which one should I choose?". The user doesn't need to deny access to any app (which is tiring, since we're typically most naturally inclined to cooperate). Every action taken is naturally limited, so the stakes are lower.
The current approach is another attempt to reduce the cognitive load by reducing gratuitous choices. But it also prevents necessary distinctions and forces people to make choices they really don't want to ("I want to send this photo" vs "I want this app to have access to my photos"). But users view permission dialogs as things that take away their control rather than as things that give them control, so this approach seems to have failed. As you say, making them more and more fine grained just means the user is reminded more and more often that they don't have a say.
> Where are the [free software systems] that do this stuff out of the box and don’t involve configuring these tools with the command line?
There are projects to build software in this way-ish. They are all ridiculously limited and impractical. The amount of effort it takes is incredible. It's not simply a change of configuration; operating systems and programs need to be written to operate in this way. For instance, systems would have to stop making assumptions about the global filesystem namespace: the GTK filepicker would have to return read access to a file, rather than a filepath which a program then asks to open read-only.
Can you imagine a team of three or four developers maintaining a secure fork of a web browser in their spare time? And without a web browser, everything is a toy.
What you’re describing is literally the iOS (and increasingly, macOS) security model that gets derided by power users for being too restrictive. The system owns the photo picker. When an app asks for photos, you can either give it access to the whole library, or you can choose specific photos to share with it from the system picker. The system owns location services. When an app asks for location, you can either choose to allow it indefinitely, choose to deny it, or choose to allow it once. Even if you allow it indefinitely, iOS will follow up with a prompt a while later explaining that the app has been using your location in the background, show you a map of the locations it has tracked over that time period, and ask if you want to continue allowing it indefinitely or deny access.
What you claim about reducing gratuitous choices is pretty clearly not how Apple views these things. They have been adding more and more granular permissions with every update, in response to developers abusing the wider permissions. I also have no idea what you mean by the user being reminded they don’t have a say. The user does have a say. What irritates the people who don’t like the prompts is they think they shouldn’t have to click through anything. It’s not a complaint that the permissions are too wide, it’s that they’re too granular.
I will also note that the cryptographic verification (and App Store review) side of things makes these measures a lot stronger, because developers can’t just hack around in private APIs and system-level frameworks to circumvent the restrictions. They are forced by a contract only to access the restricted resources through approved channels. Even though Apple is not perfect with review, the evidence from other platforms and their own data is that they catch the vast majority of abuse that happens and we never see it.
> I think security prompts, viewed as interruptions, are a consequence of the Apple-owns-everything model.
No. It's a consequence of finding a balance between a usable system and the fact that people are shit.
From the start iOS was very limited as to what apps could access. Funnily enough, the Android crowd derided it for this. "What, no file system access, ahahaha".
And a user's usage scenario isn't "I open photos and decide to share one with an app". The usage is "I open an app and want to select an unlimited random number of photos and videos to share with my friends in this app".
So the usability/permissions tradeoff was: an app sees nothing outside the sandbox, but can request access to a limited number of other system-defined sandboxes: photos, camera, microphone, location etc.
Guess what. People are shit. So they decided that unlimited access to photos is so very nice, and could be abused.
So the new permissions in iOS let you limit access even further: you select which photos an app has access to, and it only has access to those. Because what users want hasn't changed: they don't share photos with an app, they use an app to share photos.
The result is significantly more cumbersome than before and there's literally no way around it. It has nothing to do with "being Apple-owned". If you limit access, you will end up with security prompts and popups, access management dialogs etc.
> The same for location, for instance: it would be a resource that the app requires and the user chooses to give the current location or the spoofed location or cancel the action.
This is literally what iOS does. However, when iOS does it, suddenly it's "security prompts, viewed as interruptions, are a consequence of the Apple-owns-everything model"
> the GTK filepicker would have to return read access to a file, rather than a filepath which a program then asks to open read-only.
xdg-desktop-portal [1] does something pretty close, and it is how Linux applications shipped as Flatpaks access your files (at least when the Flatpak sandbox is in enforcing mode). Every sandboxed program has its own mount namespace which cannot see your home folder. When you want to open a file in a sandboxed program, you select the file in an out-of-process file chooser, and the portal puts a handle to that file in the program's view of the filesystem.
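You can poke at the portal from a shell to see the shape of the interface. Roughly (empty options dict for brevity):

    # Ask the FileChooser portal to show its out-of-process picker. The app
    # never walks the filesystem itself; the method returns a request handle
    # and the user's selection comes back through the portal.
    gdbus call --session \
      --dest org.freedesktop.portal.Desktop \
      --object-path /org/freedesktop/portal/desktop \
      --method org.freedesktop.portal.FileChooser.OpenFile \
      "" "Pick a file" "{}"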
Flatpak and the xdg-desktop-portal have some pretty interesting ideas.
It's a huge shame there's almost zero documentation, both in the code, and for users / developers. I've heard about this feature before, but I've no idea how to make it work.
When needing to grant access to photos on iOS, the user has three options:
- Deny access to photos
- Grant access to hand-picked photos
- Grant access to all photos.
Picking the middle option opens an OS-handled photo picker where you choose which photos the application can access. The model works pretty well in balancing flexibility, security and convenience.
Something else _many_ apps support, is just pasting a photo. So if I want to post a photo on Twitter, I can just copy it from my photo app, and paste it there. Twitter has access to no photos on the system at all, it merely receives that one.
That's pretty much selinux. We don't have a nice ui for it, because what you described is actually pretty complicated to achieve. But it actually works. Today.
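For instance, policycoreutils ships a sandbox(8) wrapper around it. Something in this spirit (what the process can actually reach depends on the loaded policy; this is illustrative):

    # Run a command in SELinux's sandbox_t domain: no network, a private
    # home and /tmp (-M), and a tightly restricted view of everything else.
    sandbox -M sh -c 'id -Z; ls ~'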
> Do I want Firefox to have access to my Contacts? My Photos? Do I want Facebook to have that access? To all of them, or just ones that I select? Should my disk utility have the ability to send data over the network? My IDE? My IDE plugins?
Wouldn't the POSIX way of handing this be to have Firefox run as its own "Firefox" user with different privileges?
It would, but you run into all sorts of impedance mismatch. For example, how do you revoke and grant access to your photos folder to some applications, on demand? The straightforward way (create groups for each important folder access class) blows up in terms of size very quickly, and group changes don't really happen on-the-fly, either. Things can get even murkier when it comes to device access, and you also have to deal with all sorts of caveats like suid binaries. You also don't have the API required to easily deal with any of this (e.g. no simple API to prompt the user for permission to access a path, no simple API to retry access until the user either grants access or denies it etc.).
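To make the mismatch concrete, the groups route looks something like this (names illustrative), and every step is root-mediated and coarse:

    # One system user per app, one group per resource class:
    sudo useradd --system ff-app
    sudo groupadd photo-readers
    sudo chgrp -R photo-readers ~/Photos && sudo chmod -R g+rX ~/Photos
    sudo usermod -aG photo-readers ff-app   # "grant": only takes effect at next login
    sudo gpasswd -d ff-app photo-readers    # "revoke": already-running processes keep access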
How do you then grant Firefox write access to your Downloads folder, read access to folders on the fly when you want to upload a file, “general area only” location information, access to the microphone and camera only during video conferencing, etc?
You can do some of this with groups, but that's difficult and runs into limitations pretty quickly for even comparatively simple use-cases. And making a bunch of custom ad-hoc groups requires root access on top, when ideally a user should be able to manage fine-grained control without escalating their own privileges.
Traditional POSIX file permissions are a huge blunt hammer that don’t scale well to the fine-grained type of access that things like iOS let you configure.
POSIX permissions are way too crude to be properly functional.
SELinux sort of solves that part of the equation with additional labels given to each file, but e.g. Android only uses SELinux for the base of the system. Modern security requires sandboxed applications with user-controllable permissions.
> This is because Google and Microsoft want to own everything, as well. It's not a coincidence that Apple, Google and Microsoft's security implementations enshrine each of them as ultimate gatekeeper and single source of truth for security on their respective operating systems.
To a close approximation, 99% of regular users (let's just say people who are not developers by profession) do implicitly trust their respective OS vendors. To suggest otherwise implies a level of paranoia which would make it impossible to ever have a stable system and get anything done.
It's funny, because I actually agree with both of you.
We should generally trust our primary OS vendor, because otherwise we're all going to need therapy.
But we shouldn't have to trust them. That is, the system should be designed such that we have some amount of "defense in depth" where a mistake by Apple during the review of one of the 3.5 million apps doesn't result in an exploit.
I think this is why Apple and the others are trending towards the design that they are, because a deep capability approach that's integrated into the UI helps offload some of the complexity and overhead of managing that liability at scale.
> I think this is why Apple and the others are trending towards the design that they are, because a deep capability approach that's integrated into the UI helps offload some of the complexity and overhead of managing that liability at scale.
I don't think this is an accurate description of what Apple is doing at all. If anything, they're moving more towards the direction where they are implicitly trusted and if you don't trust them you don't get any security.
Apple does have a pretty great security story. On iOS it's like state of the art, and it's OK on macOS as well. And no, the walled garden is not Apple's security; proper sandboxing is.
So even without extensive app review (which is, frankly, not extensive at all) a rogue application should not be able to do much damage.
Apple is responsible for distributing 500 million copies of XcodeGhost[1] to its users, and Google's Play Store is the #1 vector for malware on Android[2].
I think it's very misleading to put it like that. XcodeGhost came with an infected version of Xcode that was not distributed by Apple.
From Wikipedia: "Security firm Palo Alto Networks surmised that because network speeds were slower in China, developers in the country looked for local copies of the Apple Xcode development environment, and encountered altered versions that had been posted on domestic web sites."
Right, and despite Apple assuring us that it vets each app and developer via its App Store, 500 million copies of the malware were distributed via the App Store to users' devices.
This is the next sentence after your quote[1]:
> This opened the door for the malware to be inserted into high profile apps used on iOS devices.
> Even two months after the initial reports, security firm FireEye reported that hundreds of enterprises were still using infected apps and that XcodeGhost remained "a persistent security risk". The firm also identified a new variant of the malware and dubbed it XcodeGhost S; among the apps that were infected were the popular messaging app WeChat and a Netease app Music 163.
From the same article:
> Removing malicious apps from the App Store
> On September 18, 2015 Apple admitted the existence of the malware and began asking all developers with compromised apps to compile their apps with a clean version of Xcode before submitting them for review again.
Sure, a Wikipedia reading can give that impression. What actually happened is that systems with infected Xcode installations created infected executables. Those infected executables were then uploaded to the App Store. There were known to be thousands of infected executable images.
This is a false dichotomy. Just because I trust OS vendor to not be shipping malware or backdoors in their code doesn't mean I must let them decide what I can or cannot run on the OS for me.
> Apple, Google and Microsoft's security implementations enshrine each of them as ultimate gatekeeper and single source of truth for security on their respective operating systems.
Who do you propose should shoulder this responsibility? The vast majority of users lack the expertise to take it on themselves.
I don't think it's perfect, and I don't use it myself, but because it was mentioned in the GP comment I responded to, we can look at the security model of Qubes OS, which doesn't rely on its developer to decide what can and can't run on the OS.
Ideally, systems would be designed such that risks are minimized no matter where the app you want to run came from and no matter who signed it. Right now, though, here is how Apple, Google and Microsoft are dealing with security: they rely on developers signing software with certificates, and they revoke those certificates if they don't want that software to run on their operating systems. There was an incident last year that highlighted this system[1]:
> Last week, just after we covered the release of Big Sur, many macOS users around the world experienced something unprecedented on the platform: a widespread outage of an obscure Apple service caused users worldwide to be unable to launch 3rd party applications. Already being dubbed the “Apple Apocalypse” or “OCSP Apocalypse”, the cause was down to a little-known but essential service called “Online Certificate Status Protocol”
> The purpose of the OCSP call is to check whether a piece of software being launched has had its developer certificate revoked. Revoking developer certificates is one way that Apple deal with known malware. By using an OCSP responder service, Apple hope to prevent any software whose certificate has been revoked from launching on pretty much all Macs anywhere within minutes.
If a non-technical user runs his/her banking and some malware in the same VM, Qubes OS does jackshit about that.
It's not security in itself. You have to configure each app with the tightest set of permissions it can run with, and you also have to trust said application no matter what. Apple does both of these via properly pre-configured sandboxes for apps, as well as revocable credentials. The latter is needed when, e.g., there is a new serious vulnerability without a fix yet.
In my eyes the whole philosophy is wrong. They (we?) try to add abstractions and systems on top that catch code that abuses flaws in systems and software.
How about using the same energy to create systems and software that are more rugged and reliable, and won't fall apart when a three-year-old mashes the buttons long enough?
Fedora Silverblue is where the future of the Linux desktop might end up. It improves on security significantly: the system is read only, similar to macOS, and all user programs are sandboxed within Flatpaks (migrated from the preexisting RPMs of these programs).
Yes, I agree. It's kind of a pain to work with at the moment, but with some more improvements Silverblue will be cool. Getting everyone to agree on Flatpaks may be an issue.
openSUSE MicroOS Desktop is another distribution exploring this space. It’s a neat idea, but the SUSE folks are still working out a lot of quirks (the firewall had to be disabled in recent builds, for example). How mature is Silverblue? Can it be used as a daily driver with minimal fuss?
You’ll probably want to install it in a Toolbox [1] container. This is the recommended way to set up development tools; you’ll generally want as little as possible layered on the OSTree file system.
Flatpak/snap have serious downsides, and Nix does a much better job of package management. The sandbox aspect is interesting, but in practice it has so far mostly broken things in my experience.
I'm aware of it. It hasn't been updated in a long time, and you otherwise need to install Vagrant and a VM manager separately (fortunately it doesn't, ironically enough, tell you to do so via Homebrew...)
I think a first-class vagrant app for Mac OS would (a) be available in the App Store for simplified VPP deployments (b) install vagrant and (c) either come with a lightweight VMM over Hypervisor.framework and/or work with other active VMMs like Fusion, Parallels or even Docker Desktop.
Vagrant was born of the CLI, but I think in 2021 it needs to embrace its role as a bridge between the GUI and CLI. Just my opinion, though...
Yep. Crostini under ChromeOS operates more or less the same way. It's also why I personally am done with the homebrew approach.
I was using VMs to address the disaster that is the Windows Registry as a developer back in the early 2000s. Homebrew is such an amazing step backwards in that regard.
I have a feeling that Terminal.app is headed in this direction on Mac OS, but Apple is taking their sweet time navigating the migration, since everyone gets pissed off whenever they fix a security/reliability problem. Could you imagine the uproar if the next version of Mac OS worked the way ChromeOS does? (PS: there's nothing stopping you from running Firefox on ChromeOS; I've done it.)
The general issue one runs into on Windows is where you, say, install a bunch of dev tools and then, at some point, your registry gets corrupted and you find yourself needing to reinstall everything.
If you run your tools, and do your testing, inside of VMs, you can quickly "restart" from a known good state.
The Windows Registry is a lot like /etc in Unix-like environments. In either case, all that configuration can get co-mingled and corrupted and you have to start over.
VMs are a great way to solve that problem for developers.
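With Vagrant, say, the loop is short (snapshot name illustrative):

    vagrant snapshot save clean-dev      # checkpoint right after a fresh provision
    # ...install tools, experiment, bork things...
    vagrant snapshot restore clean-dev   # back to the known-good state in seconds
    vagrant destroy -f && vagrant up     # or rebuild entirely from the Vagrantfile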
Registry corruption has been pretty much solved since W2K. It's a database with a WAL-style transaction log. If registry corruption is something that happens to you more than once, I'd check the storage, since there's probably a lot more that's getting corrupted... Otherwise: keep full disk backups.
Chucking and restarting a VM is a lot faster than dealing with full disk backups.
Registry corruption isn’t just about transactional integrity. It’s also about whether or not something got misconfigured along the way and now you have to go figure it out. Or end up trying to uninstall and reinstall a bunch of stuff.
I’d rather just be able to start over from scratch.
All this work to treat our servers like cattle, but we insist on treating our clients like pets. Your local client should be just as replaceable as your servers.
It’s not so much corruption, it’s that every app uses it a little bit differently and half of them aren’t good at cleaning it up on uninstall. You can still get the system into a bad state where a rebuild (especially if you don’t have a lot of dependencies) is easier than trying to untangle whatever app or script or driver borked something.
I've experienced the registry getting bloated, and it is terrible for data restoration. I can't think of any meaningful similarity between the Windows Registry and Unix /etc. I've experienced orphaned config files but no other real issues. The orphaning is usually intentional, because the config file has been changed from the default.
Yes, this is what's so alluring about WSL, whereas with Homebrew on my M1 setup I have an utter clusterf&$& of a paper trail across my desktop that is a real PITA to plausibly reverse course on, modulo Time Machine backups.
I use a `brew bundle` to help manage this. Make a `Brewfile` listing homebrew installs, then `brew bundle cleanup` to delete everything that's not in that file. This way I can try something out and easily delete it (and the dependencies it installed). It also obviously makes it easy to setup a new machine.
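The whole loop, roughly:

    brew bundle dump --file="$HOME/Brewfile"      # snapshot current installs into a Brewfile
    brew bundle --file="$HOME/Brewfile"           # install everything listed (e.g. on a new machine)
    brew bundle cleanup --file="$HOME/Brewfile"   # list what's installed but not in the file
    brew bundle cleanup --file="$HOME/Brewfile" --force   # actually uninstall the strays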
This has been my MO with Homebrew as well. But it's not perfect. Unix-style apps don't always behave nicely when it comes to cleanup and it's easy to get borked, not to mention the difficulties that the Homebrew folks go through each time there's a new release of Mac OS. I really think the writing is on the wall and we'd all be better off dumping Homebrew and just using VMs.
You can use Homebrew and MacPorts concurrently. I will always choose the MacPorts version of a tool given the option, and only fall back to a Homebrew version when I absolutely have no other choice. It's frustrating to have to use two tools, but I've found it easier to keep my dev environment clean this way.
I haven't had to do anything fancy yet. I just install both and try to find the MacPorts equivalent first before using Homebrew.
For example, I use the MacPorts version of Python. If I need any Python dependencies I install them either via MacPorts or pip.
In fact, I only have two Homebrew packages installed: dockutil and mas. Both are tools I use as part of ansible scripts which don't have MacPorts equivalents. The remaining Homebrew packages are all cask apps (Firefox, Chrome, Discord, Slack, etc.). Those don't have any dependencies that would cause any conflicts.
For comparison, I have 139 MacPorts packages installed, 2 Homebrew packages, and 21 Homebrew cask apps.
What happens when your Windows system goes haywire (disk failure / OS update gone wrong)? Good luck getting it to work the same even if you have backups. One of the reasons I love NixOS is that I can get it to work exactly the same as it was before on completely new hardware.
I love how your reply (which mentions that you use NixOS) is at the top because it indirectly confirms my experience with Nix. I tried Nix on Debian only a couple of weeks ago, and the main issue I had with Nix was that whenever I searched for an answer or other info relating to Nix, most of what I found was written by users of NixOS, not standalone Nix. It seemed that the overwhelming majority of people sharing info about Nix are users of NixOS.
For example, notice how there are 4-5 times as many results when searching the web for "configuration.nix" (which is used with NixOS) than there are when searching for "config.nix" (which is used with standalone Nix):
I got bored of wading through irrelevant search results, so I uninstalled Nix. (My experience with Guix was even worse, but in a different way.) When I want software that isn't in Debian's repos, I'll stick to installing it from source like I've been doing forever.
Outside of configuring your system, almost everything from NixOS is applicable to other distros. The only notable exception I can think of is video drivers, and you will need nixGL for that.
Otherwise, I would recommend looking at home-manager; it's essentially NixOS but for any Linux distro, or even Darwin with nix-darwin. https://github.com/nix-community/home-manager
I used Nix on Ubuntu as my daily driver for nearly a year before installing NixOS. I also installed it on a server and multiple desktops before that. My main pain point with Nix on Ubuntu was that GUI stuff didn't work. But as a build tool it worked amazingly well.
Using Nix as an ad hoc package manager worked for non-GUI stuff, though, but that wasn't all that interesting. Except it let me have the latest Python and Haskell and so forth...
Not that weird of a setup. A few years ago I was responsible for dev environments for >500 developers. I wasn't aware of Nix at that point in time, so we provided prebuilt Linux VMs for local and remote use, which we would update with Chef scripts. In the meantime I also use Nix for setting things up for my personal dev environment. Currently trying to use WSL2, not sure yet whether that is really better than a VM. I can recommend using the Nix Home Manager for combined dotfile management and tools installation. With direnv and Nix you can set it up such that when you enter a directory you get the exact version of the tools you want for that project. Being able to easily manage different versions of the same package is IMHO the killer feature of Nix.
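The direnv half of that is tiny (assuming the project keeps a shell.nix describing its tools):

    # In the project root:
    echo "use nix" > .envrc
    direnv allow
    # cd in: direnv builds and enters the pinned environment from shell.nix
    # cd out: your shell is back to normal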
I forgot: ATM I use Ubuntu with the Nix package manager, not NixOS, the reason being that occasionally a package is not available on Nix, or doesn't work out of the box. I found for example that Visual Studio remote support does not work out of the box on NixOS.
You're running linux (with nix installed) on the bare metal of a mac, and then a VM on top of that running macos (with VFIO passthrough so it's not dog-slow?), is that right?
Performance is pretty reasonable? APFS works fine in this setup? No gotchas with drive encryption? ... actually, if your machine does use drive encryption, which is doing it, linux or mac? I don't know enough about mac hardware to know if there is some mac boot environment that handles encryption and then hands off to the OS (be it macos or linux).
> You're running linux (with nix installed) on the bare metal of a mac, and then a VM on top of that running macos (with VFIO passthrough so it's not dog-slow?), is that right?
You got it.
> Performance is pretty reasonable? APFS works fine in this setup?
Works well enough for my use cases. It's certainly less slow and fragile than running Linux in a VM via macOS.
APFS works fine in the VM, but I use HFS+ just because there are more mature tools to poke at HFS+ images than APFS, right now. The APFS FUSE drivers work well and support reading[1]. There's a closed source driver that supports writing and encryption, though[2].
> No gotchas with drive encryption? ... actually, if your machine does use drive encryption, which is doing it, linux or mac? I don't know enough about mac hardware to know if there is some mac boot environment that handles encryption and then hands off to the OS (be it macos or linux).
I don't use T2 disk encryption, I just let Linux handle partition encryption. Apparently there's support here[3] for it, though.
Do you run X Windows in this setup? Any issues with getting all the drivers working for trackpad, sound, wifi, etc? That's what has me hesitating to bother trying Linux as my host OS on a 2015 MBP.
2015 MBPs are great Linux machines, everything is supported except for some webcams, but there are out of tree drivers that work perfectly. Sound via Pulseaudio or Pipewire works well out of the box, too.
I run Wayland, and libinput[4] gives you pretty much all of the trackpad features and gestures you get in macOS like adaptive acceleration, tapping, multitouch gestures etc. Xorg works well, as well, except its gesture support is limited compared to Wayland's.
Depending on the model, WiFi has open source and closed source driver support that both work well, and have Bluetooth support.
This is a good reference to see how well your model is supported on Linux[1]. This project will remap your keyboard shortcuts to match the ones you're familiar with from macOS[2].
I've also posted about making Linux and Plasma Desktop act like macOS on HN[3].
I envy your patience. I had to drop MacOS altogether out of frustration that there were no good package managers, and VMs simply don't cut it for me. Nowadays, I run an inverse setup: I do pretty much everything in Arch, with a few Wine prefixes to solve Windows compatibility, and a healthy combination of Darling and QEMU to spin up MacOS instances.
Meanwhile, I find macOS sticky in part because of the package manager (homebrew).
In particular, I've not seen a package selection so good and so consistently up-to-date, with so very little fiddling or upkeep to add or maintain 3rd-party repos, since I was on Gentoo—and portage broke (in whole, or just had broken individual packages) a lot more than homebrew does (which is almost never). I can't think of a single thing I use that isn't available and well-maintained on it, open or closed source. Even obscure things often just install on my first guess at the package name, no muss, no fuss.
Now, what I don't use it to do is to manage project dependencies, but then I don't like using, say, the "system" PostgreSQL or Nodejs or whatever even on Debian, either, unless I'm using them directly and not making them part of a project that will need to be distributed or deployed elsewhere and worked on by others. I'm entirely fine keeping core OS parts, my personal software I use, and management of individual project dependencies, strictly separate—in fact, having worked this way, I prefer it.
I could be tempted away by a similarly-great and high-quality package selection, maybe, but that would be table-stakes before we even get into the rest of how the package manager operates.
Databases and other 3rd-party daemons go in Docker because I can install the same version with the same tools pretty much anywhere. Even if it's not being deployed with Docker, the docker-compose file documents exactly the version and config the project expects, and is guaranteed to be correct assuming everyone developing uses it. This also means it's trivial to run several versions, and to add/remove/update/start/stop them independently, including any data or "cruft" files. If Docker were to vanish, the answer would become Vagrant running some very common and stable Linux distro with either a config shell-script or ansible yaml, for similar reasons. Documentation, consistency, and isolation.
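A minimal sketch of the kind of compose file I mean (service and values illustrative):

    # docker-compose.yml -- documents the exact version and config the
    # project expects; `docker-compose up -d` brings it up anywhere.
    version: "3.8"
    services:
      db:
        image: postgres:13.3
        ports: ["5432:5432"]
        environment:
          POSTGRES_PASSWORD: dev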
Some people use the above for their projects themselves, for similar reasons, but I just use the now-ubiquitous env and version-manager programs for that (nvm, rvm, that kind of thing). Again: documentation, consistency, and isolation. Even if your dev set-up isn't used for deployment, if it's scripted then it's also accurately documented. As for language-specific libraries and tools, even if a few are in some package manager, I wouldn't rely on them. The language's own package manager will work everywhere, will probably be more (maybe much more) up-to-date, and you're going to need it for some stuff, inevitably, so you may as well use it for everything the project needs from that language.
[EDIT] notably, this way of working means none of my/our projects give a shit what OS or package manager anyone is using. Prefer macports to homebrew? Running Void Linux? Slackware? Probably even Windows (I dunno, who cares, it's Windows, but probably it works)? Go nuts with that, live your best life or whatever, the projects don't care. You'll likely need to manually create a Linux VM for Docker if you're on a BSD, but if you're on a BSD that will be the least inconvenience you face this week. I truly don't get why anyone would just "brew install postgresql" or "apt install postgresql" and use whatever happens to come down as an implicit project dependency in this, the year 2021. Because you actually want to directly use a local database to do database things? Sure. For a project that will be distributed or deployed elsewhere? No no no no no. Not even if you're running the same OS as your servers. Or Ruby, or Node, or even your compilers if you can help it. You have to be more specific and keep your shit tidier than that if you don't want a bunch of pain, both now and later, and almost all of that's very easy to do now.
Homebrew's repos are pretty lacking compared to most modern distros I've used, but if you haven't tried pacman on Arch, you're missing out. It will blow Homebrew out of the water.
Arch's default repos are definitely missing several cross-platform (at least Mac and Linux) programs that I've installed with Homebrew, without having to touch the config or add any extra repositories. I listed a few in another post, above. I've yet to see the opposite, because I simply haven't encountered anything I wanted that wasn't on homebrew. More details in another post elsewhere in the thread.
Arch does seem to be about as good as Gentoo's portage, which is the best selection I've yet to see in a Linux package manager. I'm not interested in a rolling-release base system anymore, though, because I just don't have the time or patience for "running update broke X/Wayland/audio/wifi/boot" these days, or fiddling with whatever it takes to keep that from happening. Rolling release applications? That I like.
Yeah, but it's got a much smaller set of packages available and, last I checked, had a lot more issues with broken packages and things like that (I assume because there are fewer eyes on it, and way less total time spent on maintenance of it)
On macOS brew? Yeah, tons. To even get somewhat close with Ubuntu or Debian you'd have to add a bunch of non-standard package repositories, and because of the way most Linux distros tie up user-facing programs with system releases, you'd have to do similar to stay up-to-date on packages that are in the default repos.
Arch does much better, but a quick search of ones I guessed might be missing turned up the following missing, from what I have installed on my Mac:
- rolldice
- android studio (package name android-studio in homebrew)
- tealdeer
- heroku (these are the Heroku command line tools)
- sublime text (package name: sublime)
- postman
- slack
Maybe you can add other repos to get them on there, but I usually just guess the package name and get what I want. I don't think I have a single thing on my Mac, as far as software I directly use (not project dependencies), that I didn't install with Homebrew, aside from first-party Apple software (Pages, Numbers, Safari, Xcode, Mail, and so on). And that's not bringing up all the Mac-only software I've installed with it, which obviously wouldn't be fair.
I haven't added a single extra repo. That's just with the defaults. You used to have to do a one-time invocation to enable binary-only packages (mostly closed-source stuff) and then use a different keyword for operating with them, but that's no longer the case and hasn't been for IIRC at least a year—it's all transparent. "brew install [guess at package name]" works almost every time I need it, and a "brew search" finds the rare failures. I basically never need to open a browser to find & download software, and it's all up-to-date, and it's all totally separate from my base system so is extremely unlikely to ruin my morning because I ran a blanket update-everything-to-latest and didn't pay enough attention.
Fwiw, all of those except rolldice are in Nixpkgs, and you can also use Nix on top of any Linux distribution, separate from the base system, just like you use brew separate from the base system on macOS. Nix on top of Ubuntu, for example, would support all those packages. You won't like the CLI though
PS: Debian has > 3x as many packages as Arch[1]. It would surprise me very much if that was just a matter of things being more split up in Debian, or Debian's higher count of virtual packages. When people talk about Arch's great selection of packages, they're including the AUR, which is actually massive. Imo it makes more sense, then, to include the AUR in little tests like this.
Default repos don't seem to carry a lot of those (no big surprise), but the AUR seems happy to serve them. All of the apps you listed are available there, presumably because it would be a bad idea to force people to build things from the primary repos. It's honestly a nice distinction, considering one of the main complaints I've heard from Homebrew users is that it's painful having binaries and build scripts mixed in.
Darling is more of a novelty at this point. Soon I'm going to try using it in my build system, if it's even remotely successful I'll do a write-up for it.
Another HN user did a write up on Homebrew[1] that I agree with.
My main gripe is that Homebrew would regularly break my Python virtualenvs with linking errors. Meanwhile on Linux, I have virtualenvs that I created 9 years ago that still just work.
These days you might be able to sidestep that issue by not using Python from Homebrew at all, but installing and managing it with Pyenv instead.
`brew pin python` and it'll stop that from happening.
but yeah, this is a constant source of pain, and I end up helping people at work fix it probably on a weekly basis (or more often). python bin-envs are such a hassle.
I need to run the latest Python versions, including alphas and betas, though, and have everything just work even if the system and local Python versions change under me. Pinning Python would make it difficult to contribute to CPython, or make sure my libraries work with new Python alpha/beta releases before stable versions are released.
Virtualenvs based on Pyenv's local Python installs on Linux works well, and virtualenvs that were based on old system Python versions still work even when I upgrade the system Python. This is where I'd get linking errors with Homebrew that I don't get with Linux.
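For reference, that Pyenv-based flow is short (version illustrative; assumes pyenv's shims are on your PATH):

    pyenv install 3.10.0
    cd myproject && pyenv local 3.10.0   # writes .python-version for this project
    python -m venv .venv                 # the venv links against pyenv's Python,
                                         # so a system/Homebrew upgrade can't break it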
at this point you're far beyond the intended use for a widely used package manager. install a stable python somewhere by hand and keep it there ¯\_(ツ)_/¯
There are definitely times when I think about just running linux in a VM and using that as my primary dev environment. I did that before when I was briefly forced to use a PC laptop for a couple of months, and it worked great.
Also homebrew has so many drawbacks that I'm increasingly planning on boycotting it fully.
I went down this route as well and am the happiest I've been when it comes to managing dependencies and creating builds. I ended up taking it one step further because Apple kept breaking my hypervisor (xhyve) with every new macOS release. I built a new workstation that runs NixOS and wayland/sway. I still have my MacBook Pro for when I need macOS comforts or if I don't feel like being at my desk I can just SSH into the workstation.
I used to work like this with a VirtualBox Ubuntu on a Windows host for front end dev. It worked great. I even had an AHK script that makes the VM fullscreen borderless, bar the Win 10 taskbar below; clean and super usable.
The issue is I got an M1 Mac mini for development, and I'd rather have the superior fonts and GUI of Sublime, MacVim, etc. Granted, they look pretty good already on Ubuntu.
I feel like a Docker container is the middle way here: you can keep the host clean from complex installs, and you can use everything on the host to edit your code, since it's so easy to map folders with Docker.
I don't really see a downside, although a graphical VM is also super convenient in that you can archive and back up your entire VM, along with all your OS preferences, editor settings, terminal settings etc.
edit:
I guess I could also use MacVim etc. on the host with shared folders, but those shared folders are slower and have all kinds of permission issues afaik; I think Docker mapped folders work better.
Last time I heard, Docker on Mac uses virtualization under the hood, so it's about the same as VirtualBox (unlike OS-supported "containers" on a Linux host), but I don't know if that is still true for M1?
> Last time I heard, Docker on Mac uses virtualization under the hood, so it's about the same as VirtualBox (unlike OS-supported "containers" on a Linux host), but I don't know if that is still true for M1?
Note that networking changed a ton in Big Sur because Apple doesn't allow kernel extensions anymore (or rather, it's too user-hostile for vendors to ship them). The VMware networking stack switched completely to Hypervisor.framework's networking stuff, which doesn't support a bunch of "advanced" features.
I only use it for a hostonly network + NAT and this works fine.
> I’m running some Linux vms under parallels on a m1 and the graphics performance could be better.
Not GP, but I'm using UTM, which is QEMU based. I need Windows to connect to my work, so I've set up an ARM Windows 10 VM. Graphics performance is indeed terrible, but I've found that RDP works just fine with whatever resolution your physical computer supports.
I do a similar setup to this on Windows (WSL) and previously on a Mac (Parallels + Ubuntu). In both cases it's exclusively SSH access using an outer-system terminal, and as far as editors, a mix of screen/vim/whatever in the terminal, and VSCode with remote editing.
Would it be possible as an alternative to remote editing to use the VM's "Shared folder" functionality?
Running a heavy IDE like IntelliJ inside a VM when you could also run it in the outer OS seems like stretching this setup beyond what it is meant for.
When I tried something like this, write-heavy activities were pretty slow. An extreme example was running something like "npm install" (writing many small files) on the host. That took about 10x the time compared to doing it in WSL. I've had similarly slow performance with shared folders in things like Vagrant.
Disclaimer: I did this 2-ish years ago, it may have changed by now.
Not sure exactly what the situation was here, but if it was an NTFS folder from Windows shared into the VM/WSL, then you are indeed running the npm operation in the inner system but paying the cost of writing thousands of tiny files across that translation layer.
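Concretely, the same install from the two sides of the boundary (paths illustrative):

    cd /mnt/c/Users/me/project && time npm install   # Windows drive through the 9p bridge: slow
    cd ~/project && time npm install                 # WSL2's native ext4 disk: fast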
I experimented with this but eventually gave up. The folder was natively the outer system's filesystem and "shared" inward, and that means things like case-insensitivity on Mac or lack of proper permissions on Windows, not to mention the extra sharing overhead whenever running a build or whatever.
By "gave up", do you mean the shared-folder idea, or the whole thing, i.e. using a Linux VM for dev from a Mac / Windows host?
I'm seriously pondering not having to deal with re-installing my dev environment on the Mac, because even with Homebrew and rbenv and such tools, dependency hell is around every corner.
Probably unlikely. The only way it might be better (for now) is if you need to use something that uses the GUI.
But with native x forwarding coming soon to WSL2, you'll get the same experience soon enough (given that WSL2 is a separate VM running in Hyper-V already).
To others who might be interested in trying Nix instead of Homebrew:
There is a `brew install` equivalent in vanilla Nix -- `nix-env` provides an imperative interface pretty similar to `brew`.
`nix-darwin` is a third-party thing that tries to give you a NixOS-like experience on macOS. It's not necessary if you're only interested in replacing Homebrew. You can have `shell.nix` files and ad-hoc environments without replacing your entire system configuration.
Vanilla Nix is very well isolated, and you can try it side by side with Homebrew to see how you like it -- one of the nice side effects of Nix-installed software being independent from the rest of your system is that, well, you can install whatever you want without breaking the Homebrew equivalents. For example instead of putting binaries in `/usr/local/bin`, Nix creates a directory of symlinks to files in `/nix/store`, and you can add that directory to your `PATH` if you wish.
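Concretely, trying it side by side might look like this (package name illustrative):

    nix-env -iA nixpkgs.ripgrep                  # roughly `brew install ripgrep`
    ls -l ~/.nix-profile/bin/rg                  # a symlink chain ending in /nix/store/...
    export PATH="$HOME/.nix-profile/bin:$PATH"   # opt in; remove the line to opt out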
I switched to Nix a few months ago and have found it... fun. It's not something I would recommend if you want things to just work, but if you're curious about Nix, you can learn quite a lot about it from the comfort of your MacBook.
Yep, I found the `shell.nix` setup to be too complicated and time consuming, even if it's technically "correct", so I just `nix-env -iA nixpkgs.pkg_name` every time I would have previously used `brew install pkg_name`. Since nix isolates each package from every other package, there's no risk of package incompatibilities when I do this (compare to homebrew, where package A needs one version of python and package B needs another version, and updating python can break both...), and so every project I work on just has all the tools it'll ever need.
For package management within a project, I just use whatever tool is appropriate for the language -- NPM, Poetry, Cargo, etc.
You don't strictly even need a `shell.nix`, you can just name packages verbatim on the command line a la `nix-shell -p python39.pkgs.ipython -p python39.pkgs.numpy --pure` and you're away.
I recently helped somebody get Nix set up on an M1 Macbook and it seemed to work fine overall. We just used the standard installer, then mostly reused a home-manager-based config that I use on my x86-64 Macbook. Core stuff like Git and Emacs worked with no fiddling, and the few problems we did encounter seemed to be from issues in my personal config, not Nix on aarch64 in general.
That said, we didn't try to build or use anything particularly tricky—just Git, Emacs, Python and a simple Haskell project so far.
Yeah, it, like, mostly works? But maybe only in multi-user mode? I don't know; I didn't read the whole thread. I don't have an M1 machine, but it seems like there are binaries cached for the arch in the official cache:
$ nix-env -qasA nixpkgs.git --argstr system 'aarch64-darwin'
--S git-2.31.1
Which implies that it's supported in some capacity.
(The "S" means "there's a substitute available;" i.e. git at least is cached for aarch64. Yeah, Nix's CLI... needs work.)
I don't know exactly what the state of it is -- i.e how many packages can build natively on M1 -- but I know that the Nixpkgs community has done a lot of work in the last year to support M1 nicely, and I thought that it was fully usable at this point. But there's a big difference between "Nix works on M1" and "every package in Nixpkgs works on M1."
A better future is on the horizon (or here, today, if you're willing to install a 2.4 pre-release): Flakes make it trivial to fall back to x86_64 when aarch64 fails, and the CLI experience is dramatically improved.
Alas, that path is probably not yet suitable for folks without prior Nix experience, given how new everything is.
Are flakes necessary for that? I would think you could install x86_64 binaries manually by setting `system`, even on the 2.3 branch. It seems I can at least do the reverse thing.
But maybe you just mean that it does this automatically? That's neat, if so. And I would expect this to be a lot more annoying within a default.nix file, having to keep track of two different nixpkgs imports.
Not automatic, but as a replacement for nix-env and nix-shell, it's great. When I need an x86_64 package, I find `nix shell nixpkgs#legacyPackages.x86_64-darwin.neovim` easier to recall and reason about.
(Or `nix profile install nixpkgs#...` for a persistent version of the same)
2. Run the installer with: `./install --daemon --no-channel-add`
3. Edit `/etc/nix/nix.conf` to include the line: `experimental-features = nix-command flakes ca-references`
4. Restart the daemon with: `sudo launchctl kickstart -k system/org.nixos.nix-daemon`
5. Done!
At this point you have a Nix installation, without any legacy channels, which uses aarch64-darwin by default. If you have Rosetta2 installed, Nix will automatically detect it and set `extra-platforms = x86_64-darwin` so you can seamlessly use / build software for other architectures. If you want to specifically get an Intel version of something in nixpkgs, you have to explicitly state it like: `nixpkgs#legacyPackages.x86_64-darwin.foo`
Nix / NixOS support AArch64 ( https://nixos.wiki/wiki/NixOS_on_ARM ); however, Apple/Mac devices don't have many drivers available, nor have people contributed enough time and resources to stand up a darwin-aarch64 build farm and binary cache.
Oh wow, I see it grew a lot since last time I checked.
I got interested in the cross-compile section. I normally work on a Mac, and whenever I need to create a Docker container I need to generate binaries for x86_64 Linux.
I typically did it by creating another nixpkgs instance, like:
linux_pkgs = import <nixpkgs> { system = "x86_64-linux"; };
This doesn't exactly do cross compilation, and it actually requires a Linux builder to work (I have one running in Docker). There's also an option to specify cross compiling there, but in my experience it requires rebuilding pretty much everything from scratch :/
Using pkgsCross seems appealing, but I can't find a Linux x86_64 target there. The closest I can find is musl64 (which is great, but I can't get everything to work with it).
Is it perhaps under another name that I don't recognize? And if it isn't, would it be possible to add it? It would greatly help with creating Docker images on a Mac.
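(What I tried with the musl64 target, for reference; `hello` is just a stand-in package:)

    $ nix-build '<nixpkgs>' -A pkgsCross.musl64.hello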
I realize that the current naming is confusing, so while writing a tutorial I opened https://github.com/NixOS/nixpkgs/issues/126949 that hopefully gets us to a naming scheme that's more consistent with the triplets in cross compilation world.
As for the binaries, you need to compile them once and then you can use http://cachix.org/ for free to cache them.
Is the `--darwin-use-unencrypted-nix-store-volume` switch still necessary when installing? This is just my personal opinion of course, but I think this should somehow be removed (I understand that on a modern Mac things will probably still be safe, but seeing this flag in the install command is just bad for a first-time user).
I've been using nix on macOS as my primary package manager for years. Really love it! Specifically, nix-shell is a blessing, as you can quickly try out things without worrying about breaking some other packages on the system.
Initially we only used nix for development at my company, but now we're also running most of our servers on NixOS. This allows us to use our dev package definitions in production, so our dependencies are always in sync between dev and prod. A while ago I wrote a blog post about all the advantages we get with nix in our framework IHP; if you're curious, check it out: https://ihp.digitallyinduced.com/blog/2020-07-22-why-ihp-is-...
If you're using an older Mac and have trouble installing nix, try out the new installer in this GH issue: https://github.com/NixOS/nix/pull/4289 (the command is in the "Try it out" section). It's already merged, but not released yet. So far this has solved all the issues on older Macs for me.
Also check out https://nix.dev/ which has many great resources to get started :)
Brew's slowness is a huge pain point. A basic `brew info` command takes 1.8 seconds and requires executing 6 ruby processes (with online analytics disabled).
And `brew search` is so slow that I've just made a function[0] to search taps locally for formula/cask filenames matching my query. Takes 0.08 seconds instead of 10-20 seconds.
Makes me want to try `nix`. Anyone have experience using them both together?
Nix generally performs pretty well... as long as you're using the "right" commands. For example `nix-env -i git` takes about 30 seconds on my laptop (it evaluates the entire package universe and does a bunch of sorting and string comparison), but `nix-env -iA nixpkgs.git` takes about 600ms (it... doesn't).
`nix search` is really fast, and gives much better results than Homebrew's search. It creates a local cache of Nix evaluation results and searches that, and you have to update the cache yourself, but that hasn't bothered me in practice.
There isn't really an equivalent of `brew info`, but this is close:
$ time brew info git
git: stable 2.32.0 (bottled), HEAD
... lots of other stuff omitted...
0.98s user 1.28s system 61% cpu 3.678 total
$ time NIX_PAGER= nix-env -qaA nixpkgs.git --description
git-2.31.1 Distributed version control system
0.13s user 0.02s system 95% cpu 0.165 total
You can have a much faster package manager if you're willing to put up with the ergonomics. I.e., Nix currently lacks an equivalent to `brew upgrade`, which is kind of bad. It has something that sounds like an upgrade (`nix-env --upgrade`), but it doesn't actually do what you expect. And it takes several minutes to run, depending on how many packages you have installed.
Another performance datapoint:
$ time brew update
...infinite output omitted...
82.11s user 99.19s system 95% cpu 3:10.58 total
$ time nix-channel --update
unpacking channels...
created 1 symlinks in user environment
4.13s user 11.29s system 97% cpu 15.812 total
I haven't updated Homebrew in a while, which might explain part of the slowness there. But as the original post points out, Nix updates take the same amount of time regardless of how long it's been since you last updated.
I don't know if the reason brew is so slow these days is that they changed the default 'brew search' command to query GitHub for a bunch of regex-matching open issues and return those results, or what. I don't want it looking for packages upstream; just search locally. If I want the latest list of packages not cached on my system, I'll just run 'brew update'. And maybe 'brew update' is so slow now just due to the size of the .git repo growing so large over time.
If you want to try this but balk at the ~rough install experience around the Nix volume/encryption: a bunch of work I did to sand this down last Fall was merged into master this Spring.
There are some headwinds wrt backporting those improvements to include them in the 2.3.x release series, but you should be able to take advantage of them in the meantime via the installer numtide publishes from master: https://github.com/numtide/nix-unstable-installer (no special flags needed).
[apt/dnf/brew] install package_name
vs.
nix-env -iA package_name
[apt/dnf/brew] search keyword
vs.
nix-env -qaP '*keyword*'
Is Nix going to become the git of package managers? Why can't there be a 'nix install' or 'nix search' for people who just want to use software?
That's why I like pip. I can tell a newbie to 'pip install' a package and they can immediately intuit what it does. It gets installed to their PATH, too.
2. How common is it to install from outside nixpkgs?
Depending on the answers to the above:
`nix install packagename` should default to nixpkgs and install into the currently active profile (if such a thing as a currently active profile exists in nix, of course)
`nix search packagename` should default to searching nixpkgs
Boom. You improved developer/user experience 100-fold with little effort.
Edit: 99.9% of examples in the linked page have nixpkgs# prefix for packages. Just drop it, and make it default. Everyone will thank you.
I think nix already does what you want for search, but the install suggestion is a good idea.
`nix install packagename` would fall outside the current command convention (since `profile` is a command and `install` is a subcommand of profile), so they would have to introduce a new top-level command "install" which is aliased to `profile install`. Given the `nix` command itself is a newer/experimental alternative to `nix-env`, et al., in a seemingly similar spirit to `apt-get/apt-cache` -vs- `apt`, I think creating such an alias would be a very reasonable thing to do.
`nix search packagename` afaik currently does what you (probably) want. It defaults to searching in both `packages.<system>` & `legacyPackages.<system>`, so nixpkgs will be included. I suspect the inclusion of `nixpkgs` in the examples above is actually narrowing the search, possibly for speed: doing `nix search packagename` is currently a bit slow.
Much more common than you would expect coming from a traditional package manager, since Nix freely allows multiple versions of packages to coexist. E.g., many folks run a stable release of NixOS but install select user packages from unstable, while most development environment and packaging scenarios will pin to a specific commit of nixpkgs.
Nevertheless, there is an alternative CLI that infers "nixpkgs" when omitted, but I can't seem to find it right now.
nix-env is a little worse than your examples suggest. It's really:
nix-env -iA nixpkgs.package_name
Can't forget the leading `nixpkgs.`. And searching is:
nix-env -qaP '.*keyword.*'
Because `--query` takes a regex -- unlike all of the other nix-env subcommands, which take globs.
But `nix search` has been available for years, and works great.
There's still no nice interface for installing packages, but that's not actually an operation you do very often with Nix, since you're mostly specifying environments in files, rather than mutating your own user environment. Of course, newcomers don't know that, and they want to manipulate things in their PATH like every other package manager, so that's not an excuse for the bad ergonomics. The upcoming version of Nix adds `nix profile install` which is a bit more intuitive.
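With the new (still experimental) CLI, that would look something like:

    $ nix profile install nixpkgs#git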
I think Nix's current CLI makes git look positively humane, but I'm encouraged that it's been steadily improving.
I immediately said to myself "nope, not going to use this". For some reason unbeknownst to me, developers think that a CLI must be as inscrutable and cryptic as possible.
You don't have to leak your tool's internals everywhere to provide a reasonable `nix search package-name`
Most people in the Nix world are moving away from stateful package management entirely, identifying it as precisely what causes complexity and mixed-up & stale dependencies in the first place. It's much simpler to either just name what you need when you're about to use it, or put together a list of packages for an environment you're going to repeatedly use.
The comparison in this case would be
nix-shell -p qemu
which is gone, and not junking up your system, as soon as you've finished with the shell it provides.
I don't understand the attraction of homebrew. Macports is effectively the same as the *BSD ports and supports the MacOS environment in terms of using the native compiler and toolchains.
I looked at Homebrew and saw that (at the time) it wanted to blast crap into /usr and did weird root shit. Macports does everything in /opt/macports and keeps itself clean and out of the way of either Apple's OS (and the locked down root volume) or other MacOS native apps that install in /Applications.
Homebrew only ever writes into `/usr/local`, which was unused by macOS until very recently (though unfortunately a few third-party tools also use it, leading to very occasional conflicts with Homebrew). The original rationale for using `/usr/local` over `/opt` was that, at the time, several widely-used software packages had hard-coded installation paths and would require extensive patching to work in other locations [1]. This hasn't been true for a long time, though. On systems with Apple M1, Homebrew finally made the move to `/opt/homebrew`.
> and did weird root shit
On the contrary, the philosophy of Homebrew is that package installations should never require superuser privileges. The only time Homebrew does “root shit” is when initially setting up `/usr/local` writability. Not weird at all. Though it does mean that Homebrew is effectively unusable (and a very bad idea!) on a multiuser system.
I'm a Macports user. Here are some things I find attractive about Homebrew:
1. While the range of packages available is only slightly larger, the support story of those packages tends to be MUCH better than Macports: a not-insignificant number of ports haven't been updated since they were first published.
2. Homebrew user docs are much better. The Macports contributor documentation is comprehensive and detailed, but their user docs are obtuse/non-existent. Homebrew contributor documentation is poor, but that lack affects a much smaller set of people than Macports' lack.
3. Homebrew is the supported install method for many pieces of open-source software: even if they also have a port, any support they give / tickets you read will be targeted at the Homebrew installation method.
The 1st and 3rd points may be mainly down to network effect but I feel like the 2nd point is very addressable and will also contribute to 1 & 3 in a big way.
Back in the day, homebrew allowed you to use system libraries for stuff like openssl while macports did not, which was nice for some things, especially at a time when there were no binary ports, meaning installing something like imagemagick would take a loooooong time because of all the dependencies.
Also brew cask was pretty nice, and generally the experience to add a port was not as seamless as adding a formula to homebrew.
Apple should create an integrated package manager for OSX.
And I agree: I have only had bad experiences with Homebrew and the "cuteness" of its user interface just becomes a massive annoyance when the core of it is not working.
That's not my experience with Homebrew at all. I use it on my work laptop, on which I don't have root access, and Homebrew has been extremely easy to configure to install everything in a local folder.
As a novice Nix user, I would say that I've hit a certain amount of this too. Basically every time you update your channel, it's a dice roll how much you're going to have to rebuild or re-download.
Now, it's easy to let it run in the background or interrupt/resume it at any time, so that at least is a plus. And it does provide a clear indication of how much it's downloading and how many packages are being built.
nix on OSX was my gateway drug to NixOS. NixOS is much more intimidating, and when you stray outside of what's supported by nixpkgs the learning curve really ramps up.
I wouldn't be surprised if we start seeing shell.nix and build.nix showing up in more and more OSS projects & it spreads mostly that way.
One thing I'd love to know is what the advantages of the different systems are, objectively. I've always used macports, from the very early days, and I just use it because it works, and I know its quirks. I remember brew starting, and thinking "oh, that looks interesting". Yet I never really felt a compelling reason to change.
This article is refreshingly honest about why the author decided to give nix a try, and I very much appreciated that. A large chunk is "it's new and I wanted to try it out", combined with deterministic update times, which I respect. Is there a detailed technical comparison of macports, homebrew and nix anywhere?
Nix comes with some friction, since its approach to packaging is so different. When you start using it, 95% of the time it's great. But because Nix does things in such a different way, it can be difficult to navigate that last 5%.
In terms of installing packages, it's an advantage of Nix that it works on both macOS and Linux. While not every package is as cross-platform as it could be (e.g. some desktop apps that are available on both are only packaged on Linux), it does mean it's easy to use, say, the same version of Tmux on all the computers you have access to.
Nix can be used for more than installing packages. E.g., nix-shell allows for dropping into a shell with particular programs installed, like a generalisation of rvm/nvm/whatever. Nix can be used to describe Docker images instead of using a Dockerfile. NixOS uses Nix to describe its OS configuration and services, rather than just packages. (A whole OS setup described from a single file is pretty neat.)
Nix+NixOS can also be used to describe ISOs (including installation ISOs), VM images, AMI images, Azure images, SD-card images, PXE-boot images, shell environments, and others.
Not to mention that nix can be used (with varying levels of success) on Windows, Intel Macs, M1 Macs, Redox OS, and others.
* Things like fzf have accompanying bash completion files. I have not figured out where these reside using nix.
* Things like mysql make heavy use of directories such as /usr/local/var/mysql or similar. I was unsuccessful in getting past the permission problems with these.
> * Things like fzf have accompanying bash completion files. I have not figured out where these reside using nix.
In general, ~/.nix-profile/share/bash-completion/completions/, although fzf in particular seems to have its own in ~/.nix-profile/share/fzf/completion.bash. To be honest, I'm not really sure why.
In general, ~/.nix-profile is more or less the Nix equivalent to /usr.
> * Things like mysql make heavy use of directories such as /usr/local/var/mysql or similar. I was unsuccessful in getting past the permission problems with these.
If you're not running the services as system-level services, override their state directories. In the case of MariaDB that'd mean something like `mysqld --datadir=~/var/mysql`.
> Things like fzf have accompanying bash completion files. I have not figured out where these reside using nix.
Bash completions get installed under Nix profiles (yours, root's (a.k.a., the ‘default profile’), or the ‘system’ profile, if you're using nix-darwin or NixOS), just like binaries.
On my system, I have some bash completions installed under my user profile.
If you use a shell installed from Nixpkgs, it'll automatically pick up on all such completions installed to your profiles (which come bundled with the packages they're for). (To try this with bash, make sure to install `bashInteractive` rather than just `bash`.)
If you use a shell configured by a Nix module system, like NixOS or Nix-Darwin, it will provide an option to enable the installation of such completions to your system profile, which you can enable like this: https://github.com/LnL7/nix-darwin/blob/a7492a8c76dcc702d0a6...
> Things like mysql make heavy use of directories such as /usr/local/var/mysql or similar. I was unsuccessful in getting past the permission problems with these.
It's clear that you're pointing to a real issue, but there's either a typo or a misunderstanding above. If your `mysql` binary really does spit out messages about /usr/local/, it didn't come from Nixpkgs. But if it comes from Nixpkgs, it might be emitting similar complaints about directories under /nix/store.
The explanation: Nix packages don't know or care anything about /usr/local, but they are compiled so that they think all of their ‘prefixes’ (e.g., what by default on a system are just /usr, /lib, /etc, etc.) live in package-specific directories under /nix/store. It's an important part of Nix's design that everything in the store be immutable, and so everything there is marked read-only.
The solution: to use stateful services like databases that you install from Nixpkgs, point them to writeable directories outside of the Nix store to use for their data storage. You can do this manually for mysql with options like `--datadir=/some/writeable/path/you/own` and similar. These same options are used inside Nix module systems when they enable configuration of services like MySQL, e.g., here: https://github.com/NixOS/nixpkgs/blob/nixos-21.05/nixos/modu...
I find that the isolation of Nix/Nixpkgs makes it pretty natural to use it in tandem with Homebrew.
I don't need any CLI tools that aren't in Nixpkgs anymore, so I use Homebrew exclusively as an installer-fetcher-and-runner rather than a source-based package manager (i.e., only for ‘casks’ rather than for ‘formulae’), and the combination is very nice.
Nix similarly plays nice with Pkgsrc and MacPorts, so if you're interested in switching away from Homebrew but unsure about Nix, there's no special work you have to do to install one of those alongside Nix for use as an escape hatch that doesn't involve Homebrew.
Interesting read as I am a huge OSX and CLI fan. Thanks for that! I too am worried by the root file system "security enhancements" in recent versions of OSX that make it harder and harder for me to do my job (develop website backends).
However I may wait to try Nix until it has better support for my primary tools (mainly php and ruby based websites) ... because I LOVE the simplicity of homebrew. And if there are any homebrew developers reading this, I frickin' love you guys. You make everything I need to do extremely easy.
Indeed, as a newcomer to Nix I was immediately put off by the arcaneness of everything. I don't want my first experience to be memorizing obscure paths and funky command-line arguments (such things may be justified _later_, when more advanced options are wanted). I'm glad it works for people, and I gather nix-env is a step in the right direction, but it's IMO not quite there yet.
It actually is! There are lots of examples of arcane Nix commands, but as of Nix 2.0, you actually can type `nix search git`. And it works much better than `nix-env -qa` ever did.
Your point stands, though. For example, why is it:
nix-env -iA nixpkgs.git
Instead of:
nix install git
(I know what the answer is, but still, it's a question that I think a lot of newcomers are going to ask.)
Cool response. In your opinion, what advantage does Macports offer over homebrew? I'll admit that when I read tutorials on new frameworks, tools, etc., they almost always assume everyone uses homebrew. So (perhaps lazily) I just did the same.
Homebrew has been getting worse and worse as they continue to add more dark patterns and remove features from the software. The new maintainers aren't even well liked by the original creator.
Homebrew was always about compiling and installing your own software the way you want to. Now it's been turned into a glorified binary package manager with centralized control.
It's one of the reasons why I like Ubuntu. Of course, there is also the possibility here that a package could be compromised, but I feel there is more scrutiny, especially as it is based on Debian.
I've become increasingly uncomfortable with running brew ruby scripts on my work machine, and try to keep usage of brew to a minimum.
I have been using this setup for a few months now, and generally it works really nicely, except for clang_12 on Mojave and Python packages.
When you combine direnv with shell.nix or default.nix by adding "use nix" to a `.envrc` file, it autoloads the Nix shell environment when cd'ing into a directory, and it is amazing for development!
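Concretely, the `.envrc` is just one line (assuming direnv is hooked into your shell and a shell.nix or default.nix sits next to it):

    # .envrc
    use nix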
How is Fink doing these days? It looks still actively maintained. The advantage with Fink is it uses dpkg; I winced as I watched years of Homebrew reinventing a Unix package system, badly.
After a long time dabbling with nix packages on OSX, never quite understanding what the heck I was doing syntax wise, I finally "got it" after watching these _excellent_ tutorials [1] by @burke. I still have no idea what CLI commands to use for anything, but I never need to understand that since I only use Home Manager and work directly in derivations.
The revelation that _everything_ is an attribute set was the key I think, along with understanding syntactic quirks like the multiline string ('') and attribute set merge (//). The latter is especially disorienting if you aren't familiar with it. With that small foundation you can easily dig through the source code for nix packages and craft whatever you need, or at least I so far have been able to. I am now fully on a nix packages / Home Manager setup except for various homebrew casks I haven't ported. That isn't fully necessary, but it would be nice to have everything in one place (although I _do_ manage my homebrew Brewfile by way of Home Manager).
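To illustrate those two quirks with a toy example (the values here are made up):

    let
      base = { editor = "vim"; font = "monospace"; };
      # // merges attribute sets; keys on the right win on conflict
      merged = base // { editor = "emacs"; };
      # ''...'' is a multiline string; common leading indentation is stripped
      motd = ''
        hello
        world
      '';
    in merged  # evaluates to { editor = "emacs"; font = "monospace"; }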
I really love the temporary installs feature of NixOS. Often I just want to try some software once and don't want to use it afterwards. The temporary install feature is perfect for that, and helps avoid building up bloat from forgotten uninstalls.
I think one thing that's interesting about "temporary" installation is that nix has this notion of "present" vs "installed."
When you run a nix-shell, the program is downloaded and, if necessary, built and put into the nix store; after leaving the shell it's all still right there on your disk ("present"), just not "installed" into the environment.
This is why the first time you run nix-shell with something new it downloads stuff, while subsequent invocations are immediate. It also means that if you liked ripgrep in a shell, installing it just has nix write out some new symlinks.
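A sketch of the distinction, using ripgrep as above:

    $ nix-shell -p ripgrep          # first run: fetched/built into /nix/store ("present")
    $ nix-shell -p ripgrep          # later runs: immediate, it's already in the store
    $ nix-env -iA nixpkgs.ripgrep   # "installed": profile symlinks now point at it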
I've wanted to like nix, but both times I tried it I got turned off by the config. Not the language per se, but how it's used. I was expecting some OCD space-church dream where all the configs for the packages had been coalesced into nice clean abstractions on top of this language; what I found was that most config felt kinda random and arbitrary. The naming of settings wasn't consistent, and older alternatives felt more consistent (whether it be --with flags in autotools or the USE flag system in gentoo).
I really like the reproducibility goals and the separation and GNU Stow-like structuring of installed packages; I just had a hard time with the inconsistencies in how the nix language was used to express configuration for packages.
The goal isn't to replace Linux with a "grand unified distribution". Rather, Nix complements and extends the usual way of doing things where the usual way is painful.
Nix grew mostly organically, as a community effort, and it shows. If you want a more consistent interface, Guix might be enticing. I'm not sure of its state on OS X, though.
This is really cool. Temporary development shells are a great reason to start using Nix. It makes new tools much easier to test without wreaking havoc on your existing projects.
A project I'm working on called flox (floxdev.com) is aiming to make Nix easier to use and install. You can define collections of packages ("profiles") that install with all their dependencies locally, and automatically sync across every machine where you have flox installed.
It works on all Linux distros now, and we're working to add macOS support.
> Unpredictable command times. I never know if running brew install or brew upgrade is going to take 5 seconds or 20 minutes. This usually means knowing if a program or dependency is being downloaded as a prebuilt binary or compiled on the spot.
Wait, does Nix solve this? And how? The author never mentions how Nix does.
I've never encountered a package manager with this capability, so just curious how such a "global progress bar" even gets implemented.
If you follow a Hydra jobset for nixpkgs (like the one below), it means that the infrastructure has at least attempted all possible builds, so if something starts building locally, it's likely broken. However, that's usually not the case. https://hydra.nixos.org/jobset/nixos/release-21.05
I spent about an hour looking into putting Nix on my work MacBook, but the installation seemed too complicated and I couldn't find any official documentation. I frankly just don't want to have to create a volume; that's a pain in the ass.
A colleague who tried Nix also informed me that he was coming across outdated packages like Terraform. Brew's packages are updated fairly often.
I don't think that's true, at least about terraform. I see[1] version 1.0.2 updated 8 days ago (same day it was released).
The installation[2] is also not that complicated.
The documentation also seems to be easily found on the project's site[3].
Nix has its issues, like a steep learning curve due to the language being functional and lazily evaluated, but it doesn't look like you even got to that point. If these things were causing you difficulties, perhaps you dodged a bullet.
I love nix, if only because updating my host with apt/yum/pacman won't break things for a given project, the environment is easily replicable, and cross-compile environments are a breeze. It's like Python's virtualenv on steroids, and it's freakin' awesome. The build never breaks until I decide to update something, even the compiler!
I installed nginx using macports, but it didn't have the uwsgi params module. I tried finding a solution on Google but couldn't. So I deleted macports and installed nginx via brew.
Nix generally only manifests temporary environments, rather than mutating global state. For a more Pythonic environment, try the direnv integration, which will let you both `use nix` and also `use python3` together, as long as it's in that order.
This exact thing is what trips me up about using Nix and I haven't been able to figure out how to get around it. Sometimes you want a global Python library. Here's one example from my own use case: I want to use a Weechat Python script that depends on a Python library called "pync". With e.g. MacPorts I can just run
sudo port install py-pync
and now the Weechat Python script will be able to "import pync". But this doesn't work with Nix since every package is completely isolated from each other.
I spent quite a bit of time trying to figure this out and was unsuccessful. I'd love to know what Nix's answer to use cases like this is.
The (idiomatic) answer is based on the fact that your global environment doesn't need pync; weechat needs pync. Therefore, you override weechat's dependencies and inject the package into its private environment, like so: https://nixos.org/manual/nixpkgs/stable/#sec-weechat
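From memory, the override in that manual section looks roughly like this (check the link for the exact form):

    weechat.override {
      configure = { availablePlugins, ... }: {
        plugins = with availablePlugins; [
          (python.withPackages (ps: with ps; [ pync ]))
        ];
      };
    }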
But you don't want to install globally. One of Nix's major strengths (for me at least) is that, when combined with nix-shell, you have something like virtualenv, but for all packages, not just the ones from python.
This means that someone else who checks out your project can get the same environment you had for the development.
If this is combined with direnv, then you don't even need to invoke nix-shell; you just enter the project directory and everything is there.
If you use something like poetry2nix[1] it will automatically have the dependencies your project has.
I do want to install globally, as outlined in my original comment. Unless you can tell me how to somehow modify the installation of Weechat to also include this one Python package.
I think Nix is a really cool concept and no doubt there are certain use cases that it solves really well. But my experience with it as a day to day package manager was that it just made simple things hard without a proportional benefit to make all the trouble worth it.
That Nix expression defines a package meaning 'just like Weechat, but with pync in the build paths'. There are examples with links to more docs on the unofficial NixOS wiki: https://nixos.wiki/wiki/Weechat
I understand that this may still feel like 'making a simple thing hard', but it does, I hope, answer your question. Modifying existing packages in just a few lines of code in order to include additional dependencies is very much part of Nix's paradigm. :)
There's a way you could write all that inline to install it with nix-env, too, but I can't be arsed.
Once you apply the override concept with Weechat, you'll have a feel for what is a very general suite of customization mechanisms in Nixpkgs, and you'll be unlikely to miss a similar opportunity with other packages.
NixOS Discourse is a great place to clear stumbling blocks like these, by the way :)
> I do want to install globally, as outlined in my original comment. Unless you can tell me how to somehow modify the installation of Weechat to also include this one Python package.
The problem that Nix tries to solve is making all implicit dependencies explicit. The proper way of doing this is to list these packages in the WeeChat package itself.
> I think Nix is a really cool concept and no doubt there are certain use cases that it solves really well. But my experience with it as a day to day package manager was that it just made simple things hard without a proportional benefit to make all the trouble worth it.
Yes, it indeed makes simple things hard, but this approach then makes hard things easy. Because every package in the Nix repo has explicit dependencies, it solves many other problems. You no longer have DLL hell, for example; a project that uses nix to build generally just builds without issues.
I use Nix for development, and the great thing is that I can use it to describe all the tooling that I need for development, so another person can replicate the same environment. I was able, for example, to write custom modules to standardize the dev environment. And if someone uses it, I can easily use the same mechanism to produce:
- a nix package
- a wheel package (it is a Python project)
- the application packaged into a Docker image with just the necessary pieces (it can also slim Python by only compiling features that are actually used)
When using nix for building, I can speed up the work, because Nix only rebuilds things when they change. For example, in a traditional pipeline, when you work on a dev branch and then merge it, the pipeline typically rebuilds the whole thing. Nix is smart and tracks what actually changed: if no files changed, the build returns immediately with the binary that was produced on the branch. If, during the merge, our branch was integrated with other changes that happened on master, Nix will know to rebuild it again. On top of that, average build time was cut in half for me.
Yes, it provides this feature and you can change the location of the store, but when you use a custom location you won't be able to rely on the precompiled binaries, and you'll end up recompiling everything.
This is because when nix links shared libraries, it modifies the binary to point back to a /nix/store/... location. This way it is completely isolated from the rest of the system.
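(You can see that rewiring directly; e.g. on Linux, assuming git was installed from Nixpkgs into your user profile:)

    $ ldd ~/.nix-profile/bin/git   # the "=>" paths all resolve into /nix/store/...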
Normally nix won't allow you to make /nix a symlink pointing somewhere else, because while that mostly works, some packages that resolve symlinks to the actual path might end up broken. You can define an env variable (I forget what it was, but you should be able to find it by searching the error message) that allows you to use this setup.
This actually was an issue when OS X Catalina first appeared. In that version Apple made the root disk read-only and also wiped anything that was manually created there. Ultimately the solution was to create /nix as a mount. The nix installer does this on Mac (in the past you needed to add an installation option; not sure if that's still needed).
A feature declarative programming gives Nix is being able to commit a list of dependencies in a default.nix file; when entering the project you just run `nix-shell` and have the right versions. And it works for my Linux colleagues as well.
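A minimal sketch of such a default.nix (package names are placeholders; a real project would also pin nixpkgs):

    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      buildInputs = [ pkgs.git pkgs.python39 ];
    }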
How does default.nix handle dependencies? From what I've seen (I'm on NixOS), you just see something like `gcc` or `openssl`, with no clue what version.
Tbh I've found Nix very, very lacking in the dependency department. What I _want_ is something like Cargo.toml from Rust (or maybe package.json/yarn.lock if you're familiar with NodeJS), which compiles down to a Cargo.lock. Instead what I have is a name for a dependency with no clue what version the writer expected, or what version my computer will install.
Pulling a new commit from the nixpkgs repo can lead to chaos, and random things update with no rhyme or reason.
Flakes, in alpha I believe, are the first thing from Nix that actually felt repeatable. But even with Flakes I still don't know what version anything is; at least I know what concrete commit SHA worked for me and can always revert as needed.
My next NixOS install will use Flakes with a very granular approach: the OS on one nixpkgs ref, and the other packages each on their own isolated refs. This will be verbose and bloaty, but being able to update Vim without my OS going crazy will be really nice.
I'd still like Flakes to behave more like Cargo.lock, but it's at least progress.
The way I currently pin packages is to just import a specific commit hash of nixpkgs as a variable. Niv[0] is a tool that makes doing that and updating very painless.
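The underlying pattern Niv automates looks roughly like this (rev and sha256 are placeholders):

    let
      pinned = import (builtins.fetchTarball {
        url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
        sha256 = "<sha256>";
      }) {};
    in pinned.git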
But you still have to look up the individual values, right? I'm using Flakes, so pinning commit hashes is automatic (lockfile), but I still have no clue what version any package is on.
As with a lot of comments about Nixlang, the tooling there seems lackluster at best. In this case I'd want tooling to easily see and change package versions, not sweeping changes to your entire package ecosystem. Nix might give me that, but resolving derivations manually is not a great UX, imo.
For languages with relatively uniform package ecosystems (like Rust and Go) there are tools that can generate Nix code for you from lockfiles with exact versions. In practice this usually suffices, because these ecosystems also have to remain compatible with other environments where OS-provided dependencies like C compilers and libs have wildly differing versions.
For most C-ecosystem stuff you can try overriding the version, but you'll quickly notice why this is a hard problem: different software versions want to be built in different ways! Someone would therefore have to maintain the quirks of each package version's build process, for every package, for all eternity; an insanely huge endeavor.
Also, even if it were possible, this would break the Nix binary cache model, because suddenly you'd have a combinatorial explosion of not just package versions, but also versions of their dependencies. In order to preserve purity and reproducibility, a change in a dependency's version means that all dependents have to be rebuilt, taking up insane amounts of CPU and storage on the build farm.
I don't think that your request is unreasonable (I'd love to have this feature!), but it's probably not fully possible in any software distribution ecosystem.
`gcc` is basically a function/package derivation. What it evaluates to depends on the given channel/nixpkgs source you use; in the derivation (something like development/../gcc/default.nix) you will find the exact commit/version that will be used. For packages that routinely need multiple versions, things like python2 and python3 are both available, but the whole point of nix is to let go of this notion of versioning; you can reference out-of-tree packages as well, hell, you can even use Y compiled with gcc and Y compiled with clang and reference them both.
So your remark that nix is lacking in dependency management is quite far from the truth: it is the only proper solution to the dependency-hell problem I've seen (along with guix, of course, which employs the same model).
> both available but the whole point of nix is to let go of this notion of versioning
To be clear, i'm on NixOS and i use Flakes to manage my imports.
Letting go of versioning feels like nonsense. I need to write my software against specific versions of other libraries. I can't just ignore version and hope for the best.
Why does Nix suddenly think Semver compatibility is no longer a concern?