The Gates to Hell: Apple’s Notarizing (cdfinder.de)
312 points by _xrjp on April 30, 2021 | 205 comments



Notarization has been a nightmare of a solution, and it isn't even effective. You can get practically as much security by pushing malware signatures to the client without the massive privacy overreach of having Apple archive each and every bit of code that you generate for distribution.

This is just Apple's overreach extended to the desktop. Excessive control that makes developers' lives hell while adding barely any security on top.


>You can get practically as much security by pushing malware signatures to the client without the massive privacy overreach of having Apple archive each and every bit of code that you generate for distribution.

Apple do this too, it's called XProtect: https://support.apple.com/guide/security/protecting-against-...

They also have a built-in malware remediation tool, which is presumably what was used when they killed the vulnerable Zoom web server on everyone's Mac: https://www.zdnet.com/article/apple-update-kills-off-zoom-we...

Notarization is clearly part of a defense in depth strategy for macOS.


> Notarization is clearly part of a defense in depth strategy for macOS.

Defense in depth means layering security. It's, for example, when you use password hashing but also full disk encryption. That way if someone gets your hard drive, even if they break the disk encryption, they don't get your password in plaintext. Even if they know how to crack the password hash, they first have to get past the disk encryption.

Notarization and signatures aren't two separate measures. They're the same measure implemented two different ways. That's basically useless. If some piece of code is identified as malware then it both gets revoked and added to the malware list, and then they both catch it. If it hasn't been identified then it's neither revoked nor on the malware list.

The things that make it past one also make it past the other. There is no defense in depth because there is no depth. The two measures would have to operate based on a different principle in order to achieve that.


Notarization is distinct from code signing/signatures, and has distinct security benefits. Notarization involves uploading the entire binary to Apple, whereas signatures involve you creating signatures on files that Apple is blind to.

Apple cannot guarantee they are revoking all certificates for a given malicious application with code signing, because they do not know what variants exist even if they have obtained one of them. Revoking just one code signing certificate may not be sufficient. With notarization, they can search for these variants and prevent new variants from being signed by new developer accounts -- protecting machines that, e.g., have outdated XProtect definitions.


It's linguistically confusing to try to distinguish between malware signatures and digital signatures when we're comparing them, so let's call one fingerprinting and the other one certifying.

> Apple cannot guarantee they are revoking all certificates for a given malicious application with code signing, because they do not know what variants exist even if they have obtained one of them.

This is making the case that certification/notarization is worse than fingerprinting because the same malicious application could have multiple independent certificates. But since notarization is the thing people are objecting to, that's no argument in its favor. (Though, of course, they could, and possibly do, refuse to notarize apps with known-malicious fingerprints, so if there is a difference there at all then it's only by implementation and not by necessity.)

> With notarization, they can search for these variants and prevent new variants from being signed by new developer accounts -- protecting machines that, e.g., have outdated XProtect definitions.

Let's think about this for a minute. You have a malicious application that at one point had a valid certificate. The user goes to run it.

If they have a working network connection to check whether the certificate is revoked, they have a working network connection to get the latest malware fingerprints. If they don't, they get neither. So what's it buying you?


> Notarization involves uploading the entire binary to Apple

And that's exactly what I have a problem with.


XProtect has a wider scope than notarization, though, and its detection rules are different. Notarization and XProtect are both focused on stopping malware, but they don't actually operate on the same principle: notarization happens in the cloud before deployment (to stop malware from being deployed), while XProtect runs continuously in the operating system (to stop malware from running), checking for malicious signatures. That functionality intersects with notarization, but it's not equivalent.


It operates based on the same principle. There is a list of known-malicious software and you reject that software. If the bad software is known, they can both reject it. If it isn't known, neither of them would.

Defense in depth requires one of the measures to catch things the other one wouldn't.


People seem to be having trouble with this, so let's go to the ogre analogy.

Defense in depth is layered, like an onion.

Combining automatic certification with fingerprinting is layered, like a cake. It's not the same thing.


> Notarization is clearly part of a defense in depth strategy for macOS.

The issue being called out here is that it comes at too high a cost to both developers and users compared to the benefits it provides.


I hardly notice the cost as a user. Sometimes I have to right-click an application to open it the first time, same as it’s been since Gatekeeper was introduced as far as I know.

As for developers, well. Apple clearly do not treat their developers right.


"defense in depth" should really be shortened to, and called, what it really is: "paranoia".


A defense in depth would be to have layers of trust for applications, identifying and preventing bad behaviours, rather than trying to lock developers in.


>A defense in depth would be to have layers of trust for applications

They do, it’s called…code signing and notarization?


Having to get an Apple code signing key to reach regular users is a barrier to entry for malware. However low it is, it is there. Moreover, it gives Apple the power to revoke certificates in the future to at least attempt to contain further malware activity.

Is it really that hard to get your code signed as a malware developer? No, not at all. Is that worth bothering developers so much? Maybe not. Is it a power grab? Probably. Does that together make notarization useless for security? No, not really.

Notarization is just a step in the chain. It disincentivizes malware, especially trivial malware (which is the largest quantity and the most relevant for the bulk of the users), by tipping the economics of it slightly less in the malware developer's favor. It does this at the cost of also tipping the economics less in regular developers' favor. You may disagree whether or not that's worth it (and I might be inclined to share that opinion), but that doesn't make notarization useless from a security perspective.


The economics also work in Apple's favor as it either requires using your real identity to commit fraud, committing identity theft by creating an LLC with someone else's identity, or paying for a registered agent in a third-world country to sign up for you (not sure how much that costs though, I've never looked!). I'm sure most malware cases they deal with are triaged for the possibility of filing a police report.


It turns out that getting access to an Apple developer account is not all that hard.


And how is any of that different from the Developer ID code-signing Apple had already? You still needed to register as either a corp or an individual using legal identifying documents just to generate the certificates. This is the step you seem to be attributing to notarization. It’s not new at all.

Moreover, Apple was also already using OCSP to check for revoked certificates when validating the code signature. They’d already revoked malware-producing Developer ID certificates several times in the past before notarization ever existed.


I'm explaining how it currently works - they have the legal resources to file police reports for serious reports of malware, or, if it's in a place with largely uncooperative police, to pursue a domestic federal investigation into the activity.


But the question is why they needed to require notarization; it adds nothing to this protection ability.


That’s been discussed a lot elsewhere in the thread. The parent of my comment specifically talks about how any barrier to entry (my addition: especially legal/criminal ones) deters most unsophisticated/undedicated attackers from widely distributing malware.


> You can get practically as much security by pushing malware signatures to the client without the massive privacy overreach

I think the issue with pushing malware signatures to the client is that it is reactive rather than proactive - i.e. by the time you have identified a malware signature, it is already too late (which leads to an inevitable cat-and-mouse / whack-a-mole game).


So, my take has been that Apple’s been doing a long push to switch incrementally from a Unix user/group/ACL security model to a capability model: the various entitlements, things like PowerBox not having an API, notarization, etc.

The big issue I’ve always had with capability security (as implemented here and in Fuchsia) is that, while it is a better security model in many ways, it’s also a lot easier to use against developers and power users, especially when you depend on PKI to implement your unforgeable tokens.


And it does not even work in every case, even when a signature is successfully identified. For example, if the malware has already taken down the network in some way, there is no chance for Apple to push the malware signatures and a fix to the client anymore.


> I think the issue with pushing malware signatures to the client is that it is reactive rather than proactive - i.e. by the time you have identified a malware signature, it is already too late (which leads to an inevitable cat-and-mouse / whack-a-mole game).

But notarization is the same. Apple isn't vetting notarized apps before they're distributed. All it does is impose a cost on the developer, who could still for all you know be a member of the Russian mafia. Or any random developer who has had their machine compromised and then used to sign the compromising party's malware.

It doesn't get revoked until somebody identifies the code as malware. It's the same reactive process as malware signatures.


Malware can change its signature and then it’s no longer on the exclusion list.

However if an inclusion list is used, then the malware changing its signature means that it loses the ability to execute.


Except that approval is automatic so they just modify the signature and submit it to be included again.


Honestly, I have never seen such a piece of crap as "Notarization". One of Apple's biggest failures ever (if not the biggest).

A nightmare (and not cheap) to deal with as a developer.


$100/yr is fairly cheap all things considered, and it takes thirty minutes max to set up a bash script to handle it.

And that’s just non-Xcode. If you use Xcode it’s often automatic.


Cheap considering what? Considering the 30% margins they take on any further sale? Considering the $25 one-time fee the Play Store takes?


For macOS (which is what we're talking about in a topic about notarization), you can sell your software any way you want outside of the App Store, without the 30% cut. There's still a $100/year developer account fee to be able to notarize new builds of your app.

This is not iOS where the App Store is the only way to install an app.

This comment is not an endorsement of any aspect of Apple's business model, I'm just correcting a factual error in your comment.


Just FWIW it's a 15% cut nowadays. (Unless you are doing >$1million a year of sales, which anyone quibbling over a $100 annual fee isn't.)


That is correct, but that is a special discount program you have to apply for, wait for judgement, and get approved for in advance. It's not the default.

It was a great step forward, but I don't understand why they made it so complicated with an approval process, when Google did the same thing afterwards and could just say "the first million dollars a year is 15%, after that it's 30%".

The total revenue difference for the different companies is probably negligible.

(or... outside of the App Store you can sign up for a PayPal account and accept payments at a 3% rate instantly)


They review applications because they want to make sure big developers with many apps aren't dividing their apps across lots of different developer accounts so as to get around the total sales cap. (The application form asks questions about other accounts you have, related businesses etc.)

If you are a small dev with just one developer account, you'll sail through the application process.


Cheap considering buying a proper certificate for signing and releasing on Windows will often cost you the same. ;P

If your bar to hit is Linux, you'll never be happy with anything.


On this note, does HN know where to acquire the cheapest possible code signing cert for Windows?


The cheapest base code signing certificate will be via a Sectigo (formerly Comodo, although they allow resellers to advertise either brand) reseller. I'm not affiliated with this site beyond being a customer, but the website 'codesigncert.com' is the absolute cheapest I've found for Windows signing (EV 3 years: $219/yr [0] / regular 3 years: $59/yr [1]).

Note that this landscape might change in the future. Microsoft is working on Azure Code Signing, which will mean Microsoft themselves manages issuing the certificate, doing the identity verification, etc - the only catch being that they probably don't want to have to deal with any lost keys or improperly stored keys, so they don't let you generate your own cert and you can only sign via the API or other integrations. All of this info is available via this talk [2] and it's the only public information available on this service that I've found.

0: https://codesigncert.com/sectigo-ev-code-signing

1: https://codesigncert.com/sectigocodesigning

2: https://youtu.be/Wi-4WdpKm5E?t=530


I just renewed a certificate using Sectigo, it was a painful experience.


Wasn't for me. That site's renew button simply starts an order for a new one (as renewal is really just replacing with a new, extended certificate) and Sectigo themselves re-did all the company verification, after which my cert was issued. Went smoothly except for waiting ~24 hours for it. If you were trying to get an EV certificate, the process is supposed to be more strenuous in making you prove your operation (sometimes) as well as prove that your certificate infrastructure is secure enough.


It wasn't an EV certificate, just ordinary code signing. I guess you were just lucky.


> If you use Xcode it’s often automatic.

You can't have read the same article I just read!


Not everyone has the same difficulties the author did.

Xcode notarization does work for many developers, perhaps even the majority! It is a fragile process, though, and the author is not the only one for whom it fails.


Maybe independent notaries are what you're after if privacy is the problem? The web seems to do fine with independent cert providers.


Yeah but that puts the burden of verification on Apple. This way the application author has to (legally) undersign their intentions.

Makes sense to me...


I wrote a little sh script to notarize HandBrake two (or maybe three) years ago, and that was it. It's not rocket science. But like every new thing, it required a bit of time to read the documentation and to understand what's going on.

The plugin issue described in the article is probably related to the hardened runtime, so it's unrelated to the actual notarisation process.


This is my notary script for a DMG: https://gist.github.com/lunixbochs/0ceeb23be5c3d5c6748165e61...

   ./notary.sh notarize app.dmg
   ./notary.sh staple app.dmg
I store a password as a keychain entry named AC_PASSWORD. If you're running this in headless CI you should run a notarize command once interactively so you can tell keychain to always allow altool to access the password.
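
For reference, a minimal sketch of what such a wrapper might look like, using the altool workflow current at the time (the bundle ID and Apple ID below are illustrative placeholders, not values from the linked gist):

  #!/bin/sh
  # notarize: upload the artifact to Apple's notary service
  # staple:   attach the resulting ticket for offline verification
  set -e
  CMD="$1"; FILE="$2"
  case "$CMD" in
    notarize)
      xcrun altool --notarize-app \
        --primary-bundle-id "com.example.myapp" \
        --username "developer@example.com" \
        --password "@keychain:AC_PASSWORD" \
        --file "$FILE"
      ;;
    staple)
      xcrun stapler staple "$FILE"
      ;;
  esac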


You can also opt to use API keys from App Store Connect in CI, which is what I do - less of a headache with Keychain.app.


While you only have to update your build process once, you now have a noticeable delay for every single release build, and worse, a few times per year, your build will break because you haven’t signed Apple’s latest contract.

And for what? There is no way Apple can make it impossible to get malware notarized.


But your release build is not done constantly, and so you have to sign in and accept an agreement - neither of which is a big deal.


> But your release build is not done constantly

Some people (are they the minority? probably?) cut release builds frequently enough for it to be a big pain point.

> and so you have to sign in and accept an agreement - [which isn't] a big deal

It probably is though. Have _any_ agreements you've accepted in the last, say, 5yrs not had blatantly overbearing or malicious terms?


If you're cutting release builds often enough that the extra minute or two is hampering your productivity, you may have bigger issues with software development.


For me, it is generally > 2 minutes, and “release build” covers any build that leaves your machine, e.g. nightly builds, beta builds, test builds to a specific user to track down some issue, etc.

Signing the agreement is also not just clicking “Agree”; you have to hunt for the contract online first. Last time I did this, their 2FA was down, so just logging in took me about a minute.

It feels like a lot of friction that serves absolutely no purpose, which makes it so extremely infuriating.

This is from the same company whose former CEO pressured a developer to reduce the boot time of the Macintosh (and got a 28 second speedup).

Yes, this stuff really matters to a lot of us!


I got all the Airwindows audio unit plugins notarized in about a week (that's several hundred distinct plugins), and used third party apps to do it (DropDMG and SD Notary).

The trouble I ran into was this: Apple wants the process to be a little mystifying as a barrier to people trying to find exploits within it, I think. I disagree: for instance, using a shell script and the Apple terminal tools is very much the Apple-intended approach, but integrating the third party tool got me sending code to Apple's servers quicker, and it's those servers that matter. I wasn't considered a significant developer to Apple, nor will I ever be, but I code open source software that's being adopted by other projects and used as an on-ramp for would-be DSP coders: my choices and attitudes matter.

Apple in the form of a key Gatekeeper dev had very specific intentions for me: I was strongly advised to use automatic code signing in Xcode and use Terminal and a separate workflow to remove the incorrect cert that Xcode assigns, and put in the correct one (Developer ID Application, in this case).

Apple's defaults for an Xcode application (written in Swift, because of course it is) and automatic signing work perfectly the first time for something like their example 'hello world' app. For any Audio Unit, this fails every time. You can set it correctly inside Xcode from the start, using manual settings. I chose to do this.

Near as I can tell, I am expected to have a manager to whom I will turn over code, and who is the only one with a code signing ID, because as a lowly DSP coder I'm not expected to be allowed to put code out into the wild without supervision. There's a clear expectation that if I mattered, I'd either be the boss of (not necessarily trustworthy) coders, or I'd have a boss whose job it is to be more trustworthy and oversee my code in case I have a wild hair some day and code a bomb into things. I feel there's a resistance at Apple to putting the keys to distribution into the hands of untrustworthy people. Perhaps a belt and suspenders approach? I'm not convinced this is in any way a good thing, though.

Doing the code signing correctly, which is not the same thing as having an Apple-specified process, means every bit of code I generate gets checked for malware by an Apple process. This could save my butt if I got owned by some extremely clever second-level malware that tried to commandeer my Xcode and build malware into everything I make. I get that there are also possible risks with having every executable sent to the mothership to be studied: if I competed with them, that's wildly anticompetitive and lets them decompile and pirate anything I do (given sufficient effort). I'm literally sending them all my work before anybody else ever runs it.

I feel that this type of risk (which I'm not convinced is a currently active threat) is better handled by government, regulation, and the law, than by forcing the software ecosystem to normalize running any old code from anywhere.

Doing Apple notarization the way Apple wants it done should be a nontrivial factor in keeping Apple products from being a giant pile of malware, spyware, and user-manipulation in future. If you're able to do it properly (as in, get the code AUDITED, not 'do it only the way Apple says'), the result is distributed plugins and applications that 'just work' the way they used to, but without the same level of risk to the end user.

I think it's worth the trouble to do this. The benefit is clear, and possible dangers of the approach belong to the legal sphere rather than being a technical reason to avoid code signing, or normalize having everybody avoid code signing on your behalf.


Avoiding the type of malware that’s self encrypting and mutating to avoid detection seems like a significant goal. I don’t know if Gatekeeper achieves it though.


The initial setup is a bit of a pain (and the documentation is a bit lacking - though much better than most Apple documentation - particularly for edge cases like mine, distributing a screensaver), but to Apple's credit the process is pretty solid and is now consistently quick. I've notarized dozens of builds of Aerial since the requirement was announced and I think only once did I have to wait to release because their service was stuck.

Minor tip: Stapling, while optional, should be recommended (and might as well be mandatory) to everyone who notarizes (you staple a ticket signed by Apple that avoids the call home when the user tries to open your software).

The only thing that slightly irks me is the contract situation: if you have a "paid" developer account, you absolutely need to sign any update to the "paid app" contract from the App Store even when you only want to notarize an "out of store", open source app.

Plus it breaks my script every time...


Yup, "You must first sign the relevant contracts online." And large company, so I've gotta track down the guy that can actually sign the contracts.


And, not mentioned in the article... you have to have an Apple Developer ID which costs £79/year ($99). Presumably if your subscription lapses any previously released software will stop working?

That is the part I find most offensive; if it were just difficult and buggy, I would suck it up and work around it. But having to pay for the privilege is too painful, particularly if you're offering free software.

For my case (non GUI app) I can at least distribute via Homebrew and have the user build from source in a more or less automated way.

Another notarization helper tool is here https://github.com/mitchellh/gon


No, if your subscription lapses, previously released software won't stop working. If you are offering free software you can sign with an ad-hoc certificate and instruct the user on how to bypass Gatekeeper, which isn't great at all, but it doesn't cost any $$.
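
For reference, ad-hoc signing needs no Apple account or certificate at all; a minimal sketch (the app path is illustrative):

  # "-" as the signing identity means ad-hoc: no certificate required
  codesign --force --sign - MyApp.app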


Looks like this explains how: https://www.digicert.com/kb/code-signing/mac-os-codesign-too... but... "only Apple Developer code signing certificates are compatible with GateKeeper"

Does code-signing with an ad hoc certificate and no notarization provide any better experience than just unsigned code?

Do you get a friendlier message (c/f "malicious software: Move to Trash") when Gatekeeper blocks it?


Unsigned (arm64) binaries don't run at all on M1 Macs, so yes, an ad-hoc certificate provides a better experience ;)


I just tried an unsigned bin on M1 Big Sur and the experience is the same:

it's initially blocked with a "Move to Trash" dialog

but you can go to security prefs and click "allow anyway"

Then try again, click "open" rather than "move to trash" on another warning dialog and the file does get run.

I haven't tried a signed+un-notarized one but it sounds like it'd be similar?


I suspect that the code you're trying to run is ad-hoc signed.


Not by me... and it's my own code built from src in a GitHub action.


When targeting ARM macOS, the linker automatically ad-hoc signs everything it outputs. You can check this by running `codesign -dvv` on the binary. Alternately, if your binary is an Intel binary running under Rosetta, those can be unsigned.
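
For example, something like this shows whether a binary is ad-hoc signed (the path is illustrative):

  # An ad-hoc signature is reported as "Signature=adhoc";
  # a real Developer ID signature shows "Authority=..." lines instead.
  codesign -dvv ./mybinary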


Hmm, it was built on Intel though (GitHub Actions macos runners are only Intel)

But maybe some other part of the toolchain (Gradle, GraalVM native-image) was implicitly ad-hoc signing it


https://eclecticlight.co/2020/08/22/apple-silicon-macs-will-...

"This new policy doesn’t apply to translated x86 binaries running under Rosetta"

...I guess that's why.

The whole situation is so confusing. The article talks about how unsigned code won't run on ARM macs, but an ad hoc cert is fine.

I suppose this fits what others have said in the thread - unsigned native-ARM binaries will be completely blocked. Unsigned x86 binaries can run on ARM macs under Rosetta (what I tried... or possibly my bin was signed by the build tool).

But all these will still get blocks/warnings from Gatekeeper if un-notarized, which is the part you have to pay for.

This https://github.com/Homebrew/homebrew-core/issues/47129 suggests there is yet another factor to consider - the "quarantine" flag. Presumably downloading a tar.gz from github releases via Chrome gets a quarantine flag triggering the Gatekeeper warnings. That Homebrew issue (from 2019 though...) seems to say that "bottles" installed via Homebrew (which is basically the same thing - a precompiled bin downloaded from internet) won't have the quarantine flag set and they just need to be ad-hoc code-signed.
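
A quick way to confirm this, assuming an illustrative file name, is to inspect and strip the attribute directly:

  # Show the quarantine attribute Gatekeeper keys off (errors if absent)
  xattr -p com.apple.quarantine MyApp.tar.gz
  # Remove it, so Gatekeeper no longer intervenes on first launch
  xattr -d com.apple.quarantine MyApp.tar.gz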


Instructing users on how to bypass gatekeeper is a nonstarter, as explained here:

https://lapcatsoftware.com/articles/unsigned.html

This simply is not a viable distribution method for the mass market. Apple has positioned apps from devs that pay Apple so far above apps from devs that don't that you cannot compete outside of their subscription revenue model.


[flagged]


In this case I think it is accurate given forum post evidence, but possibly dated to 2019 given other dates on the page and the fact that Big Sur is not mentioned.


This puts it well, albeit with (enjoyable) snark.

So often these criticisms read like it’s expected that Apple conform to the Linux “model” of doing things, and anything else is like the plague.


Code signing isn’t linked to an active developer subscription, if it lapses all existing signatures are still valid - you just can’t sign more.


What happens when the cert root expires? Does it not expire, or does Apple grant an eternally valid signature to apps that were signed before the root expired?


For apps I’ve downloaded on my iPhone, there are sometimes updates that denote Apple updated the key. Assuming that’s App Store only behavior though.


Apple just did this recently for iOS 14.5, actually.


I don't get the whole panic around notarization. I maintain a big open source project, and it is quite complex. It is a game engine with downloadable plugins and lots of system integration. Notarization was easy. I didn't use the Xcode GUI for it, because there is a one-line command to do it, which is more comfortable for me: https://github.com/coronalabs/corona/blob/53eeb3e31ac09f7a46... Not a biggie.


The panic is that it is an anticompetitive moat that they are setting up that, if used in such a way, would allow them insane preference for their own App Store on all Apple computers.

Imagine what sort of system-level settings you'd have to change on macOS today if you wanted to ship a competing macOS App Store on Apple devices with UX similar to Apple's own, but without Apple signing keys.

You'd basically have to write some malware-style code to get inside of Gatekeeper and privilege your downloaded/purchased apps the same way Apple does for apps from their own App Store.

How long do you think they would let this stand before trying to whack your installer daemon with XProtect for posing "danger to profit integrity"? (They'd spell "profit" as "system", though.)


Could we get a year added to the headline?

The blog post talks about waiting to upgrade to macOS 10.15, but the current macOS is version 11, so I'm thinking this is fairly old. Because at first I thought this might have been related to a recent info.plist vulnerability. [0]

[0] https://www.wired.com/story/macos-malware-shlayer-gatekeeper...


I've just barely got my app working with code signing (using a desperate amount of self-checks and fixups for when Xcode messes up the build or signing decides to use xattrs, which don't survive in normal archive formats).

Now I'm battling with Notarization which is exactly this hell that either pretends to work and doesn't, or spits inscrutable errors and sends me in circles between multiple tools and services.

And these days all the documentation that Apple produces is in the form of brief mentions in WWDC videos. Aaaarrggh!

I'm seriously considering switching to WASM or just abandoning my apps.


> And these days all the documentation that Apple produces is in the form of brief mentions in WWDC videos. Aaaarrggh!

There's actually a very detailed guide that explains both how to do it from the Xcode UI, and from the command line: https://developer.apple.com/documentation/security/notarizin...


From personal experience, notarization hasn't really caused any friction in my dev process for Mimestream. The upload + server response usually only takes about a minute. Yeah, it's another thing to learn, but the process is pretty well-integrated into Xcode, and if you're building via script then it seems well-supported?

On the other hand, code signing is perennially confusing, and I wish the documentation was better.


The notarization process is super painful, no doubt. I had originally written shell scripts to automate the process for my company, but recently switched to the excellent command line tool 'xcnotary' (https://github.com/akeru-inc/xcnotary). It's available through Homebrew.


Thanks for sharing this, I've also got a fairly stable shell script to do the same but I'll strongly consider moving to this next time I have some work to do on the related build since some of its features seem nice.


I wrote a little thing about signing and notarization recently. I don't harp on the details of the actual platform processes, because the argument is that even if the process was smooth, it's completely unacceptable.

https://nixpulvis.com/ramblings/2021-02-02-signing-and-notar...

There are issues with the way we develop and distribute applications and software in general, but none of the major platforms are doing anything but extracting $$$ for themselves and tricking users into a false sense of security.


The administration around code signing and notarization for both Apple and Windows was huge (1.5 developer months from start to kinks-worked-out for an electron app). Startup lessons learned: don't build desktop apps.


Then again, as a user I’m not sure that I want to trust an organization that can’t afford 50k of overhead to execute unchecked code on my personal machine.


What a fantastic write up. So many of us are familiar with the "quirky" parts of the apple development ecosystem and the self-gaslighting effect of trying to solve problems that are non-existent and yet persistent.

This really captured the constant "wtf" of building against the sloppy moving target.

But... it's still better than a lot of toolchains/ecosystems, and when it all does work, for 1 month a year lol, it's great!


Maybe Apple should hire Steve Ballmer, I hear he's available: https://www.youtube.com/watch?v=OHVhiybBb1U


And if someone wants to phone him the ringtone sounds like this https://www.youtube.com/watch?v=8zEQhhaJsU4

(bad joke I know, it's Friday anyway)


He might be available to chair the Democratic National Committee.


10.15 was released 18 months ago so I assume this article is from 2020.

The process has improved since then so not sure how much of this still applies.


At least the title of this post needs a "(2020)" added.


All this code signing stuff is an admission of defeat. "Our OSes are insecure and we can't secure them, so fuck it."

Unix and VMS/NT, the two most popular kernel lineages, were both designed when computers were either isolated or connected to an Internet that was effectively an academic/government walled garden. They absolutely were not designed to deal with the present information war zone where everything is trying to spy/hack/ransomware you and every piece of code is guilty until proven innocent.

Since the Internet went mainstream we've been constantly stuffing wads of chewing gum into their many cracks, adding hack after hack to try to secure that which is not secure. Address layout randomization, pointer authentication hacks, stack canaries, clunky system call whitelisting solutions, trying to shoehorn BPF into a system call filtering role, leaky containers and sandboxes, and so on.

Code signing is an admission that none of those measures have worked.

A secure OS would be built from the ground up with security as a primary concern. It would be written in a safe language like Rust or perhaps even in a system that permits strong proofs of correctness. Every process would run with minimal required permissions. Everything everywhere would be authenticated. The user would have visibility into all this and would be able (if they desired) to control it, or they could rely on sets of sane defaults.

There'd be no need for code signing on such an OS. You could safely run anything and know it would not be able to access things you didn't grant it permission to access. The web JavaScript sandbox is the closest thing we have to that but it's extremely limited. By providing a Turing-complete sandbox that can be generally trusted to run code from anywhere, it does show that such a thing is possible.

(Mobile OSes look like they've kind of done this, but they haven't. They've just stuffed more chewing gum into the cracks in Unix and put a UI on top that hides it. They also "solve" the problem by nerfing everything.)


An admission of defeat is one way to look at it, or you could describe it as acceptance of reality.

As you point out, security engineers have been working for decades on a vast array of techniques to mitigate classes of vulnerabilities. There's no reason to believe this is something that can ever be finished. There will always be bugs, always. Code signing embraces that reality by making it much easier to contain bad programs after they get out into the wild. It is just another tool in the toolbox, as with all security mitigations.

It's silly to suggest that you can solve security by simply rewriting the entire OS in Rust; and in a modern OS, every process already does run with minimal required permissions, and authentication is generally enforced, and users do have visibility and control, at least by design. Sometimes things slip through, of course. That will still happen even in the shiny new world you're proposing.

The existence of JavaScript does not imply that a completely secure OS is possible. There's a rich history of JS bugs that have led to total compromise of the OS -- in fact, earlier in your comment, you listed several vulnerability classes that have disproportionately affected JavaScript VMs.


I didn’t say you’d just rewrite it in Rust and that’s it, just that the use of safe languages would be one thing that would help. We really do need to get away from C with its endless footguns.

Apps absolutely do not run with least privilege on any current popular OS. If I install an app on Windows, Linux, or Mac it can see tons of my data out of the box. In some cases it can see the whole system except for specifically locked directories and files. Then there’s the huge pile of local exploits afforded by unsafe languages and cruft.

Perfection may not be possible but if OS app isolation were as good as popular browser JS environments that would go a long, long way toward making it safer to run stuff locally.


Anyone think that users will eventually become desensitized to the "malicious software" popup? If the process is this complex and buggy I imagine a lot of developers simply won't bother with notarization. Eventually if enough legitimate apps don't bother the popup will become common, and users may be annoyed more by Apple than any particular app. Like how the "run as admin" prompt just became an extra automatic click to many users in Windows.

It's not like the big development shops that do take the time to get the notarization process working get a special green checkmark by their app. After the app has been launched the first time, it's back to an even playing field with the apps that didn't notarize.


If you're going to rant about details, it helps to actually get the details right.

For example, showing a screenshot that doesn't contain the word "malware" and then saying:

> Using my application name and the word "malware" in one sentence is suggestive and extremely offensive by Apple.

Does not fill me with much hope that the author is detail-oriented. I'll keep reading, and I know already that the notarizing process isn't smooth, but my "snowflake meter" is already in the yellow zone, and I've yet to reach the part of the essay labeled "Part 1"


Is the difference between "malware" and "malicious software" really that meaningful that it undermines the overall point of the piece?


For me, yes - it did undermine the case that the author was making. It was sloppy. If he got this one simple thing wrong, can I trust his other complaints?


FWIW, the author seems to be German speaking and in the German version of macOS, Apple actually uses "malware" in their notarization popup instead of the German equivalent of "malicious software" (de: bösartige software).

I think it's very possible that the author changed his OS language to take screenshots suitable for an international audience and did not immediately notice the slight difference in wording.


I had not considered this possibility! Thanks for the call-out. The author's English is excellent, and I hadn't realized until I saw your comment that NeoFinder is indeed from a German site.

https://www.cdfinder.de/en/downloads.html


I had the same initial reaction about misquoting. However, Apple also uses dialog boxes that use the word "malware", specifically: "macOS cannot verify that this app is free from malware." See screenshots at https://support.apple.com/en-us/HT202491


There is also a dialog that claims an app will harm your Mac and you should drag it to the trash.


technically not the exact word but malware is just a portmanteau for "malicious software", which is in the screenshot


The shown dialog box quite heavily implies that "NeoFinder" might be malware, and that the fault is with the application developer and not with Apple.


It’s a fair point, though the dialog I usually see says “macOS cannot verify that this app is free of malware”. Presumably that’s because Apple scans the uploaded executable for malware during notarization.


Unless your dialog box is quite different from the one in the article, it says that the app "can't be opened because Apple cannot check it for malicious software."

Which means the same thing, and if there weren't quotes around the words, a paraphrase is fine. But if you put quotes around a word or phrase, then it should be accurate, or I will start to wonder if your attention to detail is adequate to things like, um, notarizing software.


Exactly what do you think "malware" is short for?


Yep. This focuses on the command line’s terrible error messages but the Xcode UI is bad too. Clicking through multiple steps, and for some reason non-standard “transparent buttons in list items” are used to reveal the “Export App” action you need to finally obtain a local notarized copy. Except those buttons will not appear after it is “done”; you have to switch to another target and then switch back to get the buttons to be clickable. I mean the whole process just screams “does anyone at Apple ever have to use this?”.


Ha! Even though Notarizing was released in 2018, it's still too soon to discuss for me....

Try packaging a python interpreter with a ton of .so's and .dylibs with your .app and see how much hair you have left!


I ship my app as universal2 with latest Python (including modules and complex shared objects preinstalled to site packages such as Pytorch, numpy, opencv), xpc services, a launch daemon, a handful of frameworks, and ~137 shared objects in deeply nested subfolders. Fully notarized + stapled + hardened runtime + library validation. I don't build in Xcode, as this is a cross platform app. It's caused little stress. AMA.

(I also ship the same app to Windows with EV signing and I think that is more of a pain, due to the physical HSM requirement)


I’ve packaged and signed/notarized a Dolphin (fork) build and it was relatively painless all things considered. Dylibs have no real impact on this process so I’m curious what you ran into.


In python there are a handful of .so files in subdirectories you need to sign. You can basically `find . -name \*.dylib -o -name \*.so` and codesign those.
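
A rough sketch of that approach (the identity string, app path, and hardened-runtime flag are illustrative assumptions; paths containing spaces would need -print0/xargs -0 handling):

  find MyApp.app -name '*.dylib' -o -name '*.so' | while read lib; do
    codesign --force --options runtime \
      --sign "Developer ID Application: Example Corp (TEAMID123)" "$lib"
  done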


I'm fairly certain that you just need to codesign --deep here, as that's all I've ever done.


Don’t use codesign --deep, it’s mostly broken. Sign manually from the inside out.
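
A sketch of the inside-out order, with illustrative identity and bundle paths: sign the most deeply nested code first and the outer .app last.

  ID="Developer ID Application: Example Corp (TEAMID123)"
  # Embedded code first...
  codesign --force --options runtime --timestamp --sign "$ID" \
    MyApp.app/Contents/Frameworks/Helper.framework
  codesign --force --options runtime --timestamp --sign "$ID" \
    MyApp.app/Contents/XPCServices/Updater.xpc
  # ...and the enclosing bundle last, so its seal covers the nested signatures
  codesign --force --options runtime --timestamp --sign "$ID" MyApp.app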


You're gonna need to expound on --deep being broken, considering I've not run into a single issue with it, and judging by the majority of blog posts/docs that cover this, others have the same experience.


I'm assuming it doesn't work well with nested bundle signing. As per my other thread it also seems to be picky about which subdirectories it signs, and there are lots of weird paths (LaunchDaemons, XPCServices, LoginItems, etc) you can put stuff in that needs to be signed. Not to mention if you put anything needing a sig in Resources.


Hmmm, well I'm willing to believe it then, yeah (although I definitely have a nested bundle setup in a project where --deep works fine... odd).

This is good to know though, and hopefully this exchange helps someone in the future too (actually, this would make for a good blog post - this kind of nuance is lost in most of the docs/existing posts).


If you're curious, Quinn (the Eskimo) has more details: https://developer.apple.com/forums/thread/129980


Perfect, exactly what I was looking for. Thanks!


That seems to work if you put the entire Python stdlib under Frameworks but not if it's somewhere else.

I do the `find` thing because I pre-sign my libraries when I build them to save a bit of time during app build.


Hmmm, interesting - if nothing else hopefully this comment exchange helps some wayward developer down the road!


I can't remember, because I did this a few years ago, but I think there was some other code signing benefit to not putting all of Python in Frameworks as well.


My advice from years of notarizing my apps is to make sure you do it at least once per day for each of your apps. If you only notarize once every release (say, every month or so), you are almost guaranteed to encounter some new cryptic error that you've never seen before, either due to some glitch in signing your app or frameworks, or else some server-side error such as new terms & conditions that you are being "encouraged" to agree to. It will take you hours to research and resolve them if they aren't spotted right away.

As others pointed out, https://github.com/mitchellh/gon is a great tool for doing this on your local machine (e.g., with a cron job). In addition, if you are building your app using a GitHub action (which I highly recommend if it is open-source), you can use my https://github.com/hubomatic/hubomat action to package, notarize, and staple a release build in one shot. The sample/template app does this automatically on every commit as well as once per day: https://github.com/hubomatic/MicroVector/actions.

So when this fails from a scheduled job, you at least know that something has changed on the Apple side and can investigate that right away. And if it fails as a result of a commit, then at least you can start looking at what changes you may have made to your entitlements or code signing settings or embedded frameworks or any of the other million things that can cause it to fail.


I auto notarize my app when I push to a release-candidate branch even if I don't deploy it. I also have a general code signing CI test that catches stuff before notarization would. I believe I've never, in thousands of pushes to this branch, hit a non-obvious notarization issue.

The main annoying thing so far for me using notarization long term is the terms and conditions signing step, which is silly because they're only updating the paid apps contract and we're notarizing explicitly so we can distribute outside the app store.


Smoke-testing your code signing is a good idea, and would probably catch most notarization issues. Aside from those, though, I've encountered numerous issues with embedded frameworks and app extensions whose error reporting wouldn't be described as obvious. Catching those right away rather than right before you are trying to deploy a release is critical.


`spctl -v --assess -t execute` is crucial.
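
For example (the app path is illustrative):

  # Ask the same assessment subsystem Gatekeeper uses whether it
  # would allow this app to launch; prints "accepted" or "rejected"
  spctl -v --assess -t execute MyApp.app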

My app layout is fairly complicated, so I'm sure I'm exercising a lot of the corner cases: https://news.ycombinator.com/item?id=26996223

I check that executables don't depend on libraries from outside the app, I check that I successfully shipped everything as universal2, and I check for stuff like .DS_Store and vim .swp files.

Here's my final stage check script, which staples notarization and checks the stapled dmg at the end as well: https://gist.github.com/lunixbochs/3d5eaf04e789932f8a19ca0fc...

I shared notary.sh in another comment: https://news.ycombinator.com/item?id=26996457


I agree about notarisation; I think it's the wrong solution. It gives Apple too much insight into what applications are used on Macs. This is my business and mine alone. I don't want my Mac calling home with everything I open. Despite there being a way to turn it off.

I think simply spreading signatures of known malware for a local check would be a much better option.

However, as a Mac enterprise admin I don't think the process is particularly difficult. When it came in I scripted it all once and that worked fine. The only issue is that sometimes it doesn't like it if I make a PKG with a package from another supplier embedded in it. The problem is that I have to do that because some solutions have several packages that need to be installed in a particular order, and my MDM (MS Intune) does not provide a means by which to specify installation order. It just blasts all packages in a random order at the machines. So I re-package those. But anyway, even that is not all that tough to get around.


> Despite there being a way to turn it off.

There isn't; the OCSP checks happen on launch automatically.

I got Apple to commit to encrypting it next year and to delete their logs, though, thanks in part to the publicity afforded by HN to my yelling about it. They also committed to adding an off switch.

Hopefully they'll do it in a clever, privacy-preserving way using a bloom filter or something, instead of just sending the developer cert hash up to Apple as soon as you double-click an app.


Well by turning it off I mean blocking ocsp.apple.com in my firewall. I do this personally, not at work by the way. But yes they should really provide a way to properly turn it off in the OS itself.
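
If you don't control a firewall, a commonly used equivalent (my own illustration, not something from the comment above) is a hosts-file entry:

  # Point the OCSP host at a null address so the check can't complete
  echo "0.0.0.0 ocsp.apple.com" | sudo tee -a /etc/hosts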

By the way, another issue I have with the developer cert thing is that this way they will block all your apps if they have an issue with just one thing you've uploaded. And we all know Apple tends to blur the line between plain old malware and "against our T&C/commercial interests". They already have a say in what apps I can use on my iPad, like the ban on emulators, etc. It's my device; it should be a recommendation at most. This is why I fear they are moving the Mac in this direction as well.

PS: I didn't realise you were the one who raised this issue a couple months ago. Thanks for your work!!


There are a lot of other host names that need blocking, too - pancake.apple.com and xp.apple.com and *.push.apple.com among them.

The amount of spyware in macOS these days is absolutely astounding:

https://sneak.berlin/20210202/macos-11.2-network-privacy/


Thanks, again something I wasn't aware of. The problem with the push one is that blocking it will also block some legitimate stuff unfortunately :(

I'll keep an eye on your blog! Excellent info.


Like what?


Well, push messages :)

The problem is, if the Mac can't reach APNS, it won't get informed when there's an update to things like MDM profiles. If I push a new MDM profile, it happens immediately on a Mac that receives push notifications. On a Mac that doesn't, it can take more than a day!

This is something I'm fighting with our network team about because they're not allowing that traffic right now. Understandable, but for proper management it's necessary to make changes quickly sometimes when a user needs to get an exception applied. It's also necessary for things like iMessage but we don't allow that in work anyway (at least not for work purposes)

We're running an internal proxy but APNS doesn't work through a proxy, they need to make an exception for it so it can go out direct.


For our small software company, notarization took at least 40 hours of additional work, and slows down releases.

Anyone know if "stapling" the distributed bundle files (.app, .pkg, executable files, etc.) is useful in any way?


I rearranged my CI graph to run notarization in parallel with my unit tests, and turned on GitLab's DAG to allow jobs from different stages to run at the same time as long as their dependencies are satisfied, so the impact of the notarization step has been small.

My CI pipeline is build -> test -> deploy. The Mac "build" job uploads the app for notarization as a side effect. There is an additional "mac archive" job during the test stage. This job runs general tests on the DMG (checks code signing is valid, makes sure I'm not depending on system libraries), then waits for notarization to finish and staples the DMG. By the time I'm done mounting and checking the DMG, notarization is almost done anyway.
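
The "wait for notarization" step can be a simple polling loop; a sketch under the altool workflow of the time (the UUID, credentials, and file name are illustrative, and a real script would also bail out on "Status: invalid"):

  until xcrun altool --notarization-info "$REQUEST_UUID" \
          --username "developer@example.com" \
          --password "@keychain:AC_PASSWORD" 2>&1 \
        | grep -q "Status: success"; do
    sleep 30   # notarization usually completes within a few minutes
  done
  xcrun stapler staple MyApp.dmg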

My typical release time right now (from git push to having a fresh app available to install for windows/linux/mac) is 7 minutes. I think I could get it down to around 3 minutes with optimizations.

My primary bottleneck right now is building / xz-compressing a windows installer (which means my windows tests finish last).


Oh, I do notarization after building my .zip containing my .pkg. Are you saying I could notarize each bundle separately and I don't need to notarize the final .zip?


My final artifact is a dmg. I emit it as part of the app build, in my very first CI stage on Mac, and immediately upload for notarization before the build job ends. Then, in parallel to tests, I have a separate job that checks the dmg for common issues, then waits for notarization and staples it.


The stapled information is used the first time someone tries to open the distributed bundle on computers that are not connected to the Internet.
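
Concretely, stapling and verifying the result looks something like this (the file name is illustrative):

  xcrun stapler staple MyApp.dmg     # attach the notarization ticket
  xcrun stapler validate MyApp.dmg   # confirm the ticket is present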


Notarization is nothing except Apple making sure they have visibility into the tail end of their ecosystem.

Remember the days of Windows 95 when you could make an application, sell it to a person in your own town and nobody in the world knew?! Not anymore!

Now Apple has to know that you made an app and get an exact copy of it, just for safe-keeping.


You can still do this with a webapp though (although I guess in some sense Apple looks at Safari traffic logs probably). Apple just wants the revenue.


FYI: The original title of the post is "The Gates to Hell: Apples Notarizing", evidencing the frustration involved with the notarization process; it has since been softened here to just "Apple’s Notarizing".


Apple devices are becoming increasingly unusable for developers.

Fantastic opportunity for Linux apps to gain more dev resources, as anyone with a bit of foresight sees little future in macOS, iOS, Windows, or Android as development platforms.


This comment could have been written a decade ago and it was as laughable back then as it is today.

Developers do not dictate the success of a platform. Users do. And they don’t want Linux on the desktop.


> Apple devices are becoming increasingly unusable for developers.

Nothing laughable about that. It's absolutely 100% true. That's why only 25% of developers are on a Mac. Outside of the Silicon Valley bubble that number goes way down.

> Fantastic opportunity for Linux apps to gain more dev resources, as anyone with a bit of foresight sees little future in macOS, iOS, Windows, or Android as development platforms.

Linux apps have steadily been gaining more dev resources. That's why we have big companies like Microsoft bending over backwards to make things like VS Code run on Linux or in Linux container tech.

> Developers do not dictate the success of a platform. Users do. And they don’t want Linux on the desktop.

Linux is on more systems than any other OS. Was that because of Users? Nope.

If Macs and iPhones disappeared tomorrow, the world would largely continue on without much hassle. If Linux or Windows disappeared, we'd have a worldwide catastrophe on our hands. Users never chose Windows either. Developers and the businesses that they worked for did.

The number of normies using Linux on the desktop isn't a good metric. There are as many devs on Linux as there are on a Mac. Of those, I'd say more than half are not even targeting iOS but rather the web. So, Apple is always just a few bad moves away from losing those web devs to a Linux desktop.


I'm not so sure. It's a chicken-and-egg problem: if developers get so frustrated that they suddenly start building great experiences on Linux, the users will flock there. It won't be a fast exodus, and it won't be clear cut, but it will set in motion a transition. Developers follow demand, but users follow supply. If enough developers create supply elsewhere, and that supply gets interesting, users will come.


One can already see it in some areas with the popularity of containers, with many developers choosing Linux as other operating systems have poor to no container support (often nothing more than running them in a Linux VM).

Stupid anti-developer practices of proprietary OS vendors will only result in more developers migrating to Linux distros.


Microsoft has pretty much solved this with WSL/WSL2. I hate the obsessive stalking and much of the redesign of Windows 10, but at its core, it's better for many general consumers.

As a developer, I run into Windows' superior handling of low-memory situations quite often. My work dev machine has 16GiB of RAM, but my full-stack development work is easily pushing my system past the 16GiB mark now. Coworkers using Windows have similar problems, but their systems slow down, whereas mine completely freezes for 30 seconds at a time while the system struggles to find some free memory.

For the people who would tell me to "just get more RAM": it's out of my hands, and 16GiB should be more than enough for this type of work anyway. Most software written these days, especially tools aimed at developers, seems to think everyone has 128GiB of RAM and that bad memory handling can best be solved by buying more hardware.

With Gnome 40 and systemd 248, the Linux experience will become just a tad more friendly for both general users and developers, but there's a lot of improvement that can still be made to the Linux experience.


Try out zram: it keeps compressed swap pages in RAM and is a real life-changer. On <=8GiB systems it is mandatory for regular use.
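
For reference, here's a minimal sketch of setting one up by hand through the kernel's sysfs interface (run as root; the 4G size and the zstd algorithm are illustrative assumptions, and whether zstd is available depends on your kernel build):

    # Minimal sketch: create a compressed swap device with zram via sysfs.
    # Assumes the zram kernel module is available and this runs as root.
    import subprocess
    from pathlib import Path

    subprocess.run(["modprobe", "zram"], check=True)

    zram = Path("/sys/block/zram0")
    # The compression algorithm has to be chosen before the size is set.
    (zram / "comp_algorithm").write_text("zstd")  # lz4/lzo are more widely built in
    (zram / "disksize").write_text("4G")          # uncompressed capacity of the device

    subprocess.run(["mkswap", "/dev/zram0"], check=True)
    # Give zram a higher priority than any disk-backed swap.
    subprocess.run(["swapon", "--priority", "100", "/dev/zram0"], check=True)

Most distros also ship a packaged way to do this (zram-generator, zram-config, etc.), which is usually a better route than a hand-rolled script.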


Many OEMs including large ones like Dell have built Linux desktops and laptops over the years. No one buys them.

It’s not because of the lack of apps but because the basics are so poor e.g. broken sleep mode, driver instability, poor battery life, changing UI etc


> No one buys them.

Pfffft. OK! Clearly this is incorrect... We have whole companies like System 76 built around selling Linux desktop systems. We have whole divisions of PC manufacturers for selling Linux desktop systems. If nobody was buying them, they'd cease to exist.

Non-developers largely don't buy them, that's all.

Also, many developers are aware that Linux runs on anything, so a large portion of Linux users just install it on whatever hardware they already have. The rest buy systems from Dell, System 76, etc. or build their own.

> because the basics are so poor e.g. broken sleep mode, driver instability, poor battery life, changing UI etc.

Developers obviously buy pre-built Linux laptops that have none of these sleep/driver/battery problems. I work with some of them. If one makes a sensible choice and uses a desktop environment like XFCE, they would notice that there's much, much, much less instability in the UI when compared to Windows, macOS, Gnome or KDE.

Personally, I prefer desktop systems, and I have never had to spend more than 40 minutes getting Linux running on any desktop PC that I've tried it on, outside of my Mac Pro from 2012, which was only problematic because it's not a real PC with a normal BIOS or boot procedure; it's a locked-down Apple terminal.

I've seen way, way, way more problems with macOS and Windows than I have with people running Linux in our office. Certain mice and keyboards won't even work on a Mac!


Been using the dev models for almost ten years, none of those are a problem. Ubuntu Mate is a good choice for a stable gui.


Technically, Chromebooks are Chrome on top of Linux. It's not a full desktop, but it's a desktop and it's pretty popular. Especially in schools.


Why do users choose a platform? Apps. Who makes the apps?

Developers embraced Windows over Mac, users followed. (Until iOS development made Macs the default dev machine.) Developers embraced iOS and Android over Symbian and webOS.

Windows and Mac were great development platforms ten years ago, and iOS and Android were way better than the now-dead competition.


> Until iOS development made Macs the default dev machine.

I'd argue Macs were the default developer machine before iOS. E.g., Paul Graham from 2005 (http://www.paulgraham.com/mac.html):

> All the best hackers I know are gradually switching to Macs. My friend Robert said his whole research group at MIT recently bought themselves Powerbooks. These guys are not the graphic designers and grandmas who were buying Macs at Apple's low point in the mid 1990s. They're about as hardcore OS hackers as you can get.

From my anecdotal experience, the rise of the Mac among creators, including developers, was meteoric, and immediate, once OS X was stable. Not to say it was everybody, but I'd say by the mid-2000s it was the default choice, as in I rarely ran into anyone using any other platform in the web startup circles I was in at the time.


HN is a bubble. There are millions of developers writing software for embedded controllers and other hardware, which is done on Windows and Linux. Plus a plethora of B2B software HN has never heard of because it runs in corporate environments powered by Windows.


Yeah, I was in college and startups through the mid-'00s, adopting the Mac at Tiger. Anecdotally, it seemed like maybe Leopard was the breakthrough, where Boot Camp took away the "but-it-can't-run..." objection that was the last factor keeping people away, even though no one ever seemed to actually end up running Windows after all.

I'm not sure what IT purchasing looked like in enterprises and SMBs through the '00s, but I wasn't able to get an Apple machine at a non-tech mid-size employer in the late '00s and it seemed like developers at big employers still had PCs.

I've always been issued (and supplied) Macs since the early '10s, and I'd say at this point, given how bad macOS has become, it ultimately does boil down to the fact that "being able to build an iOS app" is a valuable feature, and no other company can offer it.


With due respect, that's a rather elitist and myopic view that considers Mac usage within bubbles only, albeit popular on HN. Especially considering that surveys report Macs being used by only a third of the developers who answered. That's a far cry from "most".

See StackOverflow survey.


I didn't say "most", and even the framing I used of "default machine" came from the comment I was responding to:

> Until iOS development made Macs the default dev machine.

I was just pointing out that based on my personal experience, it started earlier than that.

I don't know what to say about the elitist comment. I'm only pointing out my anecdotal experience. I'm well aware of Stackoverflow's developer statistics, but I just don't personally run into developer machines that aren't Macs very often. But then, I transitioned to iOS development myself around 2010, which is obviously going to skew things. Frankly I'm super curious where all the non-Mac using developers are, because they aren't on the web teams, or mobile teams (Android/iOS) that I usually work with. I know Windows is by far the most popular choice for game development, but that's far from my career.

(I guess actually, for the elitist point, I only really care about the hardware being used by people doing work I admire, because I want to do work like that too. I suppose if that's elitist so be it, but to me, that's just being practical.)


It depends on what you work on. Of course if you work with iOS development your surroundings will be mostly macOS machines since it's a hard requirement, no surprises here hence my comment about bubbles. Otherwise statistically speaking, macs are not the default developer machine as per surveys.

It also depends on who you admire. For example Linus, someone I admire, uses an AMD Threadripper 3970x. And the best engineer I personally know uses a Thinkpad with Debian.

On my team, currently responsible for heavy backend engineering, we have been replacing Macs with Dell XPS + Linux due to the Mac's horrible support for Docker, which is a hard requirement for us.


> It depends on what you work on. Of course if you work with iOS development your surroundings will be mostly macOS machines since it's a hard requirement, no surprises here hence my comment about bubbles. Otherwise statistically speaking, macs are not the default developer machine as per surveys.

Again, I didn't frame it as the default machine; that framing comes from the comment I was responding to. Personally, I probably would have said something like "default machine for developers working on products that target non-developers" (I'd have to think really carefully about how I'd word this actually, because I'm well aware of the statistics).

Actually, I'd love to hear your framing of this. E.g., major tech companies usually default to a MacBook for developers. They're usually the most common machine at tech conferences. Both of those are based on anecdotal experience again, unfortunately; maybe you disagree with them too? But if you agree, how would you describe that, if not as the default machine for developers? Not being rhetorical, I honestly struggle to figure out the best way to describe it.

(Also regarding this "Of course if you work with iOS development your surroundings will be mostly macOS machines since it's a hard requirement, no surprises here hence my comment about bubbles." I was specifically drawing on my experience before iOS development existed, when I worked in web development.)

> It also depends on who you admire. For example Linus, someone I admire, uses a AMD Threadripper 3970x. And the best engineer I personally know, uses a Thinkpad with Debian.

Clearly, but why is my following the work of people I admire (mainly product-centric apps and websites) elitist, while your following Linus, etc. is not? That was my question here.


"Default machine for developers" is a very broad term. It is heavily biased by what developers work on and where. Sometimes it's not even a choice. Perhaps we agree on that and are talking past each other.


> Until iOS development made Macs the default dev machine.

Bash, and a lack of a good command line story on Windows, made the Mac the "default dev machine" for certain cultures/industries.


It’s not just apps, drivers and OS basics are key too. I tried half a dozen times to switch to Linux as a daily driver.

I gave up the last attempt because two things happened: someone out there pushed a bad update that crashed my GUI for no readily apparent rhyme or reason, and I could not for the life of me get my scanner to work.

Now if I wanted to have 17 CPUs or hook a HAM radio into it or do something truly weird, Linux was the way to go.

But I just wanted my computer to (a) not break and (b) do the basics smoothly.


Well yeah, "software", including drivers and OS integration, not just "apps".

Linux phones no longer seem limited by hardware or cost (at least judging by my PinePhone), just by stability and support. Once stability is there in terms of reliability of the core "Minimum Viable Phone" apps, which seems relatively close, there will be a great opportunity for design-oriented founders/brands to craft highly polished experiences without needing to pay off the Duopoly or, worse, trying to compete with them for talent.

Yeah, there will be new innovative startup phones, but what'll happen when every brand can offer their own phone just by hiring a few designers and engineers?

NikePhones, GucciPhones, DunkinPhones, TeslaPhones, McPhones...


Branded phones have been a thing for a very long time, and not enough people are interested in them to make any of them successful.

I'm glad that you're happy with your PinePhone, and I hope that one day in the future it achieves perhaps 1% of the global market for smartphones. I doubt it, but that would be nice.


Where's a good branded phone?

Linux is already running on the majority of smartphones, individual manufacturers and distros seem less important. Any Android user would switch to another Linux phone with the right features, price, etc.

iOS seems poised for a slow ride into irrelevance as developers start suing them and begin leaving (Basecamp, Spotify, Epic...), before long they'll be the dusty old devices in schools like 25 years ago. Apple (legacy Mac OS) and Microsoft (Windows Mobile) have both lost before because they lost the developers.

Microsoft has been very smart in their recent plays in this regard, regaining a huge amount of developer trust over the past 5 years or so.


BMW, Facebook, KFC, and Yahoo! are branded phones I remember off the top of my head, but even Microsoft gave up on the phone business because they couldn't gain traction.

"iOS seems poised for a slow ride into irrelevance" is quite a thing to say the same week Apple announced all-time record Q2 revenue on the back of their iPhone business. It's up there with "this is the year of Linux on the desktop" as a perennial statement that will eventually have to be right*, sometime between now and the heat death of the universe.

You're focused on where the developers are, as if that were the leading indicator, but I think history tells us that developers go where the users are more often than the other way around. Neither users nor developers seem to be much discouraged by Android or iOS, and neither are stampeding toward PinePhone, either.

In any case, you're making a lot of future claims that I find ludicrous, but I'm not a betting man, or I'd take your money.

* Not really


Developers are also users, usually some of the more advanced users, important early adopters and power users. Remember non-developers saying they would never get Facebook, or an iPhone, or bitcoin, or Snapchat, or TikTok... and then Road to Damascusing into evangelists? Developers tried all of those out first, with many seeing the problematic social mechanics and rejecting them early, instead of running the hamster wheel to enrich others and centralize power in furtherance of one's own greed.

If enough developers give up on the proprietary OSs and just run Linux, which is basically actually doable now, the tides will turn. Certainly unclear if developers will unify and go all in on Linux, of course, but given historical trends it looks quite likely.

Anecdotally, I've always been fine enough with Win and Mac since the '90s, fine with iOS and Android since the '00s/'10s. But something shifted last year, and while I will remain a user, I've mostly given up on actively developing/maintaining native or web apps targeting those platforms.


That’s certainly possible, and you’re right about the leading edge early adopters dictating the success or failure of a system.

I’d be delighted if I could get Windows 10/iOS/macOS-level reliability and functionality out of Linux and would switch to it as a daily driver. But the delta in user experience right now is massive, so I just use Linux for the things that only work on Linux.


> Why do users choose a platform?

In my family, some of the users chose their platform after asking me what to buy. I'll normally recommend Apple because the hardware is generally pretty good and their service and support through their stores is decent.

Applications usually don't matter, with gaming being the big exception.

If applications really were the biggest factor, Apple probably still wins. Through virtualization, a Mac can legally run more software than any other computer.

> Developers embraced iOS and Android over Symbian and webOS.

Users embraced iOS before developers. The day the iPhone was released, there were far more apps on Windows CE-based smartphones, but none of that mattered. So I don't think you can say it's all about applications.


> Why do users choose a platform? Apps.

They also choose brands. Apple has one of the biggest ones, and the fact that people bought MacBooks during the butterfly keyboard years should tell you just how powerful that brand is.


Yeah, grandparents and computer labs kept their old Apple computers around through the '90s when Apple wasn't cool. Before getting a Mac became at first ironically cool in the early '00s (because they looked funny but couldn't run any good games). Apple is clearly just a default and not a preference with kids nowadays in my experience, like they were for kids in schools in the '90s. They have good games now though.

Brands don't seem particularly defensible, and Apple doesn't seem to have any network effects outside iMessage/FaceTime and their accessory ecosystem.

James Currier on brands: "I don't think brand is a worthy defense at all. I've seen companies with phenomenal brands get crushed in a matter of years. If you go back to the early age of PC software, the best brands in the industry were Lotus 1-2-3 and WordPerfect. Microsoft crushed them in a matter of years, and they had the brands. I don't think brand buys you very much. The best brand in search was AltaVista or maybe Yahoo! and now they're roadkill."

https://www.nfx.com/post/the-four-types-of-defensibility/


> Brands don't seem particularly defensible, and Apple doesn't seem to have any network effects outside iMessages/FaceTime and their accessory ecosystem.

Perhaps that's why Apple advertises so much with product placements in movies. In a lot of movies the main characters have iPhones - it's not overt, just a jingle or them looking at a call (which looks like the iOS incoming-call UI). Same with Macs or AirPods. I honestly wonder what their placement spend is like.

They aren't the most valuable brand for no reason [1]. That placement is earned, but reinforced with lots of spend.

[1] https://www.businessinsider.com/apple-surpasses-amazon-as-wo...


Yes but it means nothing if consumers are not buying the hardware with Linux. The average consumer is not going to put Linux on their computer if it’s not as simple as downloading a browser or upgrading an operating system.


Consumers will go where the apps are: if developers increasingly exit the Apple and Google ecosystems, which seem beset with nearly daily anti-developer actions, the only other option is Linux app development for mobile. The hardware is ready, cheap, and good enough.

"Average consumers" are becoming increasingly technically sophisticated as demographics shift, and of course the two leading mobile OSs are already Unix-based. While they're not marketed as "*nix" to end users, they absolutely were marketed as such to developers.


Agree.

Probably not so related, but your comment reminded me of a friend's saying, something like: "end-users don't care about the technical aspects, they just want something that works".

This is an ad-hoc claim and not necessarily true, I know. But that saying feels increasingly outdated nowadays, given how big an impact technology has on people's lives. Users are not foolish; they are becoming more aware of software in general every day. They know what they want and can give your software the value it deserves, so let's start telling them more about Linux.


Oh, customers are definitely buying hardware with Linux. The problem is it's in the form of insecure yet often locked-down Android phones loaded with adware spying on them. :P


There are practically no stores which sell hardware with Linux.


That doesn't prevent it from becoming so mainstream at work that even a salesman(!) uses it.

Yes, you have to jump through hoops, and not everyone in IT is always thrilled, but even some of them prefer it.

To me it feels kind of like when Mac broke through in developer circles.

First it was weird and IT department laughed. Then more and more people including bosses demanded it and here we are: if a job demands all devs use Windows many devs will go somewhere else.


> That doesn't prevent it from becoming so mainstream at work

I would say that it does. If your boss is not aware of this system, they will not allow you to use it or consider it secure. At work, I am typically not allowed to freely choose my OS.


You misunderstood me. The point is it is already happening:

Linux is already so mainstream at work that I've seen a sales guy(!) using Ubuntu.

And I see people sharing screen on Teams and it is Linux!


Yeah, the majority of phones and servers run Linux. It's "mainstream infrastructure" instead of a consumer brand, but not out of the ordinary for consumers or businesses.


When your entire desktop is mostly Electron, no, apps don’t dictate what people buy. You can run the same stuff on almost any OS now.

People buy Apple because the hardware is great, the software is more integrated than Linux and Windows due to that tight hardware control, and the Apple Store model which “just works” for the average person.


LOL you know who coined the term "app" right?


Coined? We talked about web apps, desktop apps, mobile apps back in the early '00s, before Jobs unveiled the iPhone, which initially supported only web apps, if you missed those days.


okay so you don't know who coined the term "app", got it


"App" is shorthand for "application"; no one person coined it, because the term goes back to the eighties or earlier. Apple had MacApp (an application framework) in 1985, and by that time "app" was already a well-known shorthand; for example, Atari had the file extension .app.

Unrelated, but the TOPS-10 operating system had executables named with the extension .exe, sometime around 1967-1970. TOPS-10 had lots of interesting stuff that newer OSes later did similarly. Probably my favorite is ctrl+t, which sends SIGINFO in FreeBSD, giving you status from the currently running program. And what do we see in the TOPS-10 manual: "When you type CTRL/T (control-T), the monitor prints status information pertaining to your job on your terminal."

:)


Who first shortened the word "application" when referring to software? Maybe someone in the '80s, or earlier?

Looks like it's attested by '92.


I personally gave up, I treat all of their platforms as "legacy" and only port stuff as "best effort" similar to what I was doing with IE in the IE days (yes, that also includes Safari).


Having recently gotten into web development, I've found getting things to look right on Safari to be such a pain... things really just work on Firefox and Chrome, but every time we deploy a new version, the one coworker with a Mac will message us about some arcane difference in how Safari handles certain CSS we're using.

I wish we could drop it, but our Analytics disagree, sadly.


Yeah, it's a clear abuse of market power and it's holding back the web. They have the resources to fix these issues, and seemingly choose not to, effectively taxing developers, who pass the costs on to consumers. So even well within the current bullshit "consumer welfare" standard: options go down, costs go up, with Apple's apparent "embrace / hold up / extinguish" strategy.

I'd love it if you could write up your experiences, ideally with those figures on Safari usage and citations to any relevant WebKit bugs, and send them over to Rep. Cicilline's office in Rhode Island.

I thought his office did a good job putting together these questions on anticompetitive behavior by Apple (who has since actually been fixing the protectionist issues that they gave obviously flimsy answers on):

https://docs.house.gov/meetings/JU/JU05/20190716/109793/HHRG...


What's worse, Apple will insist that the "open web" is the alternative to the app store on iOS. Apple controls the portal to that open web too! And either through intention or incompetence, it's a non-starter for developers! Even alternate browsers on iOS are forced to use the Safari/WebKit engine; it's not even close to "open".


People almost only talk about the missing features in Safari, and while it's true that they are blockers in some areas (like PWAs, for instance, and that's not a coincidence...), the main issue for me is that Safari is a very buggy browser engine. It's hard to overstate how many small details aren't right at all; you will encounter Safari bugs even on a basic website.

Safari feels like it's in "maintenance mode" at best.


I hear you... I'm almost to the point of creating a second, simplified version for legacy browsers (similar to Gmail's HTML view) and putting a banner at the top asking users to upgrade to a more modern browser. I just cannot guarantee that everything will work on their browser anyway.


> Fantastic opportunity for Linux apps to gain more dev resources

The problem is that they’d rather complain about systemd or reinvent the wheel in 10 different ways which in the end means their resources are spread thin and nothing gets done.


I think Linux needs a better desktop GUI development story before its app ecosystem can really bloom.

The most complete it has now are the Qt-based KDE frameworks, but those are natively C++, and many if not most developers are not going to want to have to deal with C++ no matter how many quality of life affordances are offered by Qt. Bindings exist, but are limited to a handful of languages and come with their own quirks.

GTK is much better from a bindings standpoint, but isn't as complete as the KDE frameworks, meaning devs have to bring a lot of their own stuff; plus GTK devs are subjected to new releases pulling the rug out from under them.

What I'd really like to see is an equivalent to AppKit, which includes practically everything needed for most apps (reducing dependencies to a minimum) and is C-accessible, making it reasonable to write bindings for other languages.


Not Linux, maybe a new OS. All the current OSes have settled into consolidate-the-empire mode, so we're just waiting for the barbarians at the gates to free us from their inevitable tyranny.


Windows seems to be actively developing its Linux development environment recently. They implemented WSL2 a few years ago, brought GPU acceleration to WSL2 last year, and recently implemented Wayland on WSL2 for full graphics support. Although people here probably don't like MS very much, I think they will be able to attract some *nix developers.


Is Apple hostile towards developers?

Probably not, but it sometimes feels like it.

This is weird.


They put up a ton of artificial barriers and often leave developers to fend for themselves.

I'd say they're probably one of the most developer-hostile companies. The only way they look friendly is in comparison with Nintendo.


Maybe not outright hostile, but they don't care about developer experience. It's more like: "You want your app on our OS? Go figure it out."

Currently I'm annoyed by a long-known bug that requires manually confirming 50+ popups with username and password whenever signing an iOS app...
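
For command-line builds at least, a workaround that has sometimes helped me is pre-unlocking the keychain and setting the key partition list so codesign stops prompting. A rough sketch, assuming the signing identity lives in the login keychain and the keychain password arrives via a hypothetical KEYCHAIN_PASSWORD environment variable (e.g. from CI):

    # Rough sketch: stop codesign from prompting for the keychain password on
    # every signature by unlocking the keychain and granting the Apple dev
    # tools access to the signing keys up front.
    # Assumes the identity is in the login keychain and KEYCHAIN_PASSWORD is set.
    import os
    import subprocess

    password = os.environ["KEYCHAIN_PASSWORD"]
    keychain = os.path.expanduser("~/Library/Keychains/login.keychain-db")

    subprocess.run(["security", "unlock-keychain", "-p", password, keychain], check=True)
    subprocess.run(
        ["security", "set-key-partition-list",
         "-S", "apple-tool:,apple:,codesign:",  # tools allowed to use the keys without a prompt
         "-s", "-k", password, keychain],
        check=True,
    )

It doesn't fix the underlying Xcode bug, but it makes scripted signing bearable.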


I'm polishing up a macOS app for the store.

How much time should I expect to budget for the initial signing/notarizing/submission process?


What kind of app is it? If it’s a new app that you wrote in Xcode, all Objective-C and Swift, and not much customization, then not all that long.
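
Note that App Store submissions go through App Review rather than notarization, but if you also ship a build outside the store, the notarization round-trip itself is scriptable once the app is signed. A rough sketch using xcrun altool and stapler, where the Apple ID, bundle ID, and file names are placeholders and an app-specific password is assumed to already be stored in the keychain under the name AC_PASSWORD:

    # Rough sketch: notarize a zip of the signed app and staple the resulting
    # ticket so the app launches cleanly offline. Apple ID, bundle ID, and file
    # names are illustrative placeholders.
    import subprocess

    APPLE_ID = "dev@example.com"      # placeholder: your developer Apple ID
    BUNDLE_ID = "com.example.myapp"   # placeholder: your app's bundle identifier

    # Upload the archive for notarization (Apple emails you when it finishes,
    # or you can poll with `xcrun altool --notarization-info`).
    subprocess.run(
        ["xcrun", "altool", "--notarize-app",
         "--primary-bundle-id", BUNDLE_ID,
         "--username", APPLE_ID,
         "--password", "@keychain:AC_PASSWORD",
         "--file", "MyApp.zip"],
        check=True,
    )

    # After notarization succeeds, staple the ticket to the app bundle.
    subprocess.run(["xcrun", "stapler", "staple", "MyApp.app"], check=True)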


Yep. All Swift, started in most recent Xcode. Good news then, thank you.


It does so because it can. Never understood the fans who never get fed up.


The tighter the grip the state has, the more bureaucracy it produces.



