This makes me wonder how open source is supposed to work on macOS. People seem to be increasingly aware of it, and even enterprises that insisted on support contracts can see that they can't get around open source completely anymore. Meanwhile, Apple is removing my ability to have a pet project without paying an Apple tax.
If the message were completely transparent, something like "The developer didn't pay $99 for us to do a cursory check on them (or whatever it is that Apple does with that money), are you sure you want to run their software? [Move to trash] [No] [?]", then that would give the user the relevant information to make this decision, but as it is, virtually no mac user will understand what is really going on.
I also can't imagine $100 is easy to come up with in countries below level 4[1]. The OpenStreetMap Foundation recently introduced a way to waive the yearly £15 fee for OSMF membership if you have a certain number of map edits or otherwise contributed to the project. The OSM community seems to be quite diverse, but I can't imagine that Apple computers are less widespread than OpenStreetMap.
I remember RMS saying that the GPL was updated because one "obvious" freedom was not so obvious -- you could meet all the requirements of the GPL without granting the right to RUN the software.
GPL3 addresses that. And looking back, Apple shipped software that was GPL2 but would not ship software that was GPL3. As one example, bash and make both quietly stopped getting updates from Apple when the GPL3 versions came out. (Although Apple sort of broke the GPL with bash, as it never shipped all the source for it -- it's missing the header file rootless.h.)
One other point about these dialogs plus the help message: you are required to contact Apple to even see this online help. Apple deals itself into the equation no matter what.
As an ex-Apple employee and long-time developer, I'm glad people are starting to see that Apple is an open source foe, creating prisons of software to lure the obsessive brand fanatics and collecting a fee by selling them as users to the other parties.
- In Apple tech, the users are volunteer products; servicing them brings in a myriad of monetization, notarization fees, etc.
- I hope open source starts to ignore Apple platforms as a target at some point. Being "*nixy" and the presence of "brew" etc. gave the false impression that Apple is in the open source camp.
I would never develop against an OS like that. Far too many security risks. MacOS still has < 10% market share and I believe that number won't go up too much in the near future at least.
The level of rationalization of these lock-in practices is just sad, to be honest; it fully neglects what makes software more accessible.
Even signed apps have been victims of malware attacks, and I do think the check is primarily to ensure the developer has paid their Apple tax. It isn't that high, but I don't think I want to spend it. If it becomes normalized, Apple will surely increase it, and developers would have absolutely no leverage to protest.
> I do think the check is primarily to ensure the developer has paid their Apple tax
I think the true reason for the check is not money, but control. They want to control what software runs on "their" platform. For historical reasons, they don't have that control on desktops/laptops yet (but they already have it on phones).
If you're developing in the right sector, MacOS has a market that's selected to be lucrative.
I don't think that's a particularly good thing, but it does explain how <10% market share can make such a big splash. Especially for more casual projects, IMO.
> glad people start to see apple is an open source foe creating prisons of software
People have been saying it since day 1 of the Apple App Store. It's called a walled garden and it should be attacked as the abuse of a dominant position it is.
Near monopoly power. We are talking about Macs still, right?
Oh I'm against this latest erosion of the ability to run whatever code you want on a Mac. This is one of the reasons I just got one of the new Intel iMacs, because I can see this coming on the ARM side. It's their product though, they legitimately get a monopoly on what features it has, and I don't have any right to tell them how to design it. That's histrionics.
There is a legitimate case to be made though as customers as to how we would like to see the product develop. I'm behind that effort 100%.
> Near monopoly power. We are talking about Macs still, right?
The problem is that there's only one Apple Developer Program for both iOS and macOS.
If you get kicked out of the developer program for reasons related to the iOS App Store, you're also kicked out of independent Mac distribution outside the App Store. You no longer have true independence on the Mac either.
Well, you're using someone else's products (dev tools, compilers, OS libraries) you buy and license from them under certain commercial terms. If you don't like the terms, don't buy them.
And to be crystal clear, that's the approach I am personally going to take. I carved off a TB partition and installed Windows 10 and WSL 2 on my new iMac and it runs like a dream. I still need MacOS, and I'll be installing Virtualbox for some stuff. If the Mac gets to the point where I can't run all the applications and tools I need, I'll miss the hardware and the OS and some apps, but I'll jump ship. I hope they listen to us, but I intend to ask and argue, not tell or coerce through legal action.
> Well, you're using someone else's products (dev tools, compilers, OS libraries) you buy and license from them under certain commercial terms. If you don't like the terms, don't buy them.
This doesn't tell the whole story, because terms change. Even open source licenses change. Apple added Gatekeeper to Mac OS X in 2012. Before then, it was a pretty open platform. And other companies such as Microsoft and Google have been known to follow Apple in some respects, so just because one platform has better terms than another at the moment doesn't mean the platform owners can't change their terms on a whim. Apple/Google/Microsoft have close to all of the OS market share on both mobile and desktop, so it's not like there are a lot of choices, especially in the consumer space.
They can't change the terms on a product they have already sold to you, but new versions of the OS and dev tools are new products with new features. If you want the new features, you can choose to accept the terms, but you don't have to.
On the user side, there are security updates. Yes, you can refuse to install OS updates that patch vulnerabilities, but obviously that's a big problem for the user. And eventually the vendor stops providing security updates altogether for the hardware.
On the developer side, you can't really refuse to use the new versions, because you need them to support your software on the latest OS versions, which is where your customers will be. So if you don't, you lose your customers and go out of business, which is not much of a choice.
It's untrue that updates consist of nothing but new features.
Do that. I do so as well. But as a rule for society it seldom works. Too few people are willing or knowledgeable enough to withstand the lure of their individual short-term benefit as opposed to the collective cost of their action. I mean, this very feature we are talking about is itself a protection of users against their short-term desire: "Let me run this application, I want to see the dancing bunnies" [1] And people fail to do so, even though the downsides are personal and sometimes very immediate. They could research who distributes the file, calculate the trade-off between the remaining uncertainty and the expected reward, and come to a rational decision. Or they could just click! Just accept those terms and conditions. Just enter their credit card number on the Apple developer product page to get to what they want. And that's what most people do most of the time.
It's for this coordination and collective bargaining problem that we need to regulate the shit out of anything that reaches a certain size.
Well, the other hypercapitalists sell a product that sucks, and sure, it lets you run other stuff, but it absolutely tramples on your privacy and is still, for the most part, worse in all aspects. (And yes, the keyboard issues were very close to tipping the scales.)
If only we had an operating system that we could install on our computers freely, right?
It is a shame that people are forced to buy a computer and not reformat the disk to install their OS of choice. It is a shame that we can not take the money that we could be saving and investing in open alternatives...
But it is not a panacea and it is not for everybody. And it has two main issues:
- developers don't care about stability and/or polish (just see the discussions on the trackpad ITT) "Oh but if you change library X to Y and reroute libinput and etc it might maybe work and maybe it will not break anything else"
- because of the former reason, not all (important) applications are available to the platform. I'm really glad that a lot of things are online now, but that doesn't solve all problems
I've lost count of how many times wifi was supposed to work "out of the box" in Linux and it didn't. (And no, it wasn't an issue with drivers or wpa; it was the stupid Gnome NM widget -- if I configured it manually it worked.) Or some other stuff. And sure, there is stuff that works better even than macOS.
- windows 10 - no privacy, hello telemetry, cortana, etc.
- mac os - no freedom to do anything not allowed by apple
- Linux - polish / ui issues?
At least with Linux once I configure it right it works without issue and does everything I want.
Currently that means kubuntu 20.04, AMD GPU (or intel integrated) and laptops that say they support it (Dell/Lenovo) or self built desktop. (I used gnome until I hit your NM issue too and it did not allow me to move top bar to the right... switched to KDE)
I no longer have a fear of upgrading distributions/packages causing problems, nvidia drivers causing black screen after upgrade...
I know it is important, that and exterior looks of the hardware.
Still, I do not have UI issues, and the polish is fine on Linux; I was explaining how to get there.
The problem is people usually expect a $300 Linux laptop to work like a Mac... when you would need a similarly priced Dell XPS or Lenovo Carbon X1, plus a manufacturer that supports Linux, like the Dell Developer Edition.
I agree. There are several distros (especially something like Elementary OS) that are pretty darn easy to pick up as a Windows user. I set my very non-technical grandma up with Elementary OS and she loved it.
I don't think the obstacles to adoption are based on the merits of Linux (or lack thereof). The obstacles are institutional. Businesses don't want to adopt Linux because that's a risk, and most people know Windows/Microsoft Office. Average people don't want to take a risk (installing Linux/buying a Linux box) with a device that is a decent-sized investment for most people.
I have made multiple attempts to switch to Linux, spending days each time trying to customize it and get it how I wanted. And never did it ever approach the productivity and polish of macOS.
Certainly I have issues with Apple, but it's a simple cost/benefit calculation. Right now the benefits of macOS vastly outweigh the downsides for me.
Unless Apple's problems increase to the point of being unbearable (very likely to happen at some point) or the quality of desktop Linux increases significantly (unlikely to ever happen), I just can't justify switching. And I expect many, many other people are in the same boat.
My computer is a tool. Idealistic notions about free software are nice, but they don't mean anything if that software is worse than a nonfree alternative. Free software needs to be _better_ to win, and I just don't see that ever happening in the consumer OS space.
But you’ve only got that choice on an ‘old fashioned’ PC. We could reach a point where PC hardware is unavailable, because the majority of people have switched to shiny but terribly locked-down devices that have far surpassed anything a ‘legacy open platform’ can do in terms of performance.
Major version upgrades (ubuntu 18 to 20) - here I just re-install and it's expected, I wouldn't upgrade windows 7 to 10 either...
Why not? I've upgraded a few Windows machines from 7 to 10, and the upgrade has gone just fine, assuming there's enough disk space for the OS to store the upgrade files before it starts the upgrade. Similarly, I've upgraded Linux boxes (both Ubuntu and Fedora) across major versions. MacOS as well.
I don't know where you're getting this notion that an OS upgrade is a scary thing to do. In my experience, it's been a routine, if somewhat long process.
My parents messed that up (clicked accept by mistake on a Microsoft upgrade pop-up when that was a thing); the system no longer booted and had to be reinstalled...
Also, I'm old; maybe things have improved, but I've had upgrades wipe my hard drive due to a CentOS Anaconda bug once (CentOS 5 to 6). Other times it just did not boot (yay for using an encrypted boot partition, but that's on me, and updating GRUB fixes it).
An added benefit is that it also forces me to check/update my backups.
> NVIDIA driver updates (or kernel updates while using nvidia) - caused black screen... I dumped nvidia... these are due to crappy nvidia.
This is a legitimate dispute and I'm not really counting it because as much as I think Linux should have a stable driver ABI, NVidia are being needlessly obtuse.
> Ubuntu deciding to remove old libraries/apps that are not maintained. That's fixed via docker or just keeping an old version.
Which is not a simple task. Why can't keeping old software be simple? It is in sane operating systems. Hell, even Linux can do it right, as AppImage proves, but the Linux Desktop community is so hell bent on making everything as complicated as possible that they pretty much ignore AppImage.
> Major version upgrades (ubuntu 18 to 20) - here I just re-install and it's expected, I wouldn't upgrade windows 7 to 10 either...
Ubuntu LTS receives 5 years of support, but most new software will not be backported to the repository for anywhere close to that long in my experience and instead you're getting about 2 years. Windows 7 was supported for nearly 11 years and it was rare new software didn't support it for that entire time.
> you only get annoyed by those if you are a power user anyway
Precisely. Linux Desktop people seem to think that targeting people who only need a web kiosk is somehow going to make them popular, but if people who actually know about and need the features of an actual desktop computer don't like it why would they ever recommend it to anyone?
> Which is not a simple task. Why can't keeping old software be simple? It is in sane operating systems. Hell, even Linux can do it right, as AppImage proves, but the Linux Desktop community is so hell bent on making everything as complicated as possible that they pretty much ignore AppImage.
Resources make it complicated (time/money/...). I wouldn't maintain another person's library that they don't bother with themselves.
> Windows 7 was supported for nearly 11 years and it was rare new software didn't support it for that entire time.
You are comparing a paid product with something free. For better or worse, new software works on older Ubuntu versions as well, but you need to compile it or do some work to get it there. Or just upgrade.
I assume you can also switch to Red Hat, which has paid support.
> Precisely. Linux Desktop people seem to think that targeting people who only need a web kiosk is somehow going to make them popular, but if people who actually know about and need the features of an actual desktop computer don't like it why would they ever recommend it to anyone?
My point there was if you are a power user you should be able to get it working, it's a skill that's very good to have. Other less skilled people don't hit it by virtue of not playing around.
The 'Linux Desktop' people that you say are targeting things for better or worse put in time to build free products, if you don't like some switch to others or contribute.
> The 'Linux Desktop' people that you say are targeting things for better or worse put in time to build free products, if you don't like some switch to others or contribute.
I did. I used to run Linux on 4/5 of my desktops, and now that is down to 1/5, and only because I haven't turned that one on in 6 months. My complaints are no less valid for that.
Contributing to Linux Desktop is, in my considered opinion, a waste of time. The community is so dead set on doing things in the most convoluted and complicated ways possible that there is no hope for reasonable ideas.
what do you use now then and how happy are you with that?
I for one am the reverse: I recently tried using Windows and it just got in the way, plus I felt like I was being spied on, like old times under communism...
Last year I tried macOS on a MacBook, but I can't even move the titlebar to the right... Plus Apple restricting everything I can do... Plus I couldn't install Linux on the MacBook; crappy keyboard, overheating. Easiest return I ever did.
I used Lubuntu. I tried many other distros, probably several you've never heard of, but Lubuntu was consistently the most tolerable.
I'm pretty much Windows-only at this point. It definitely has its flaws, and it is definitely getting worse as the new "let's make everything suck as bad as the web" culture takes hold, but I still find that it works with me much more often than against me, which is more than I can say for the way Linux desktops work.
It is not a matter of recommending Linux or *BSD or anything else. It is just a matter of refusing to give in to closed software on the grounds of "convenience".
I don't go around telling people what type of software they should use, but I do expect technical people and the common developer to understand what a terrible trade-off they are making when they choose a proprietary desktop. I find it hard to sympathize with those who complain about the abuse and developer hostility from Apple. They sold their souls to the devil for cheap and are now trying to bargain their way out of it?
Maybe you could give them the benefit of the doubt that they know exactly the trade-off they were making, and perhaps even wish they didn't have to go the route they did, but the alternative just isn't there yet?
If the alternative is not there yet and you are not helping build it, it is even worse!
I don't mind people that tell me they need, e.g, Photoshop to do their work. I do mind the fact that they don't contribute to any alternative. Just paying the subscription to Adobe and shrugging it off, instead of hedging and contributing to the alternatives? Shame on them.
Imagine 10% of Adobe customers donating 10% of what they pay Adobe annually to the development of an open alternative; we'd have hundreds of millions of dollars. How long would it take until Adobe was no longer needed, or at least playing on a more level field?
Even more so in the case of the stereotypical web developer who uses a MacBook when every other tool they use is FOSS. Spends $2k on a laptop that will only cripple them and work against them, and still thinks this is somehow good "User Experience"? To me this is like failing an IQ test.
> If the alternative is not there yet and you are not helping build it, it is even worse!
I have seen what happens when people try to help. At best they are ignored. As I've said before, it is my considered opinion that the community is simply not interested in making things better. I would be totally OK with that if they weren't also evangelical.
And also, there's only so much time in the day, some of us have higher priorities than building replacement software for stuff that already exists.
"interested in making things better" != "interested in making things the way I'd like them to be"
> there's only so much time in the day
Then contribute some other way instead of just expecting the "community" to accommodate you and your opinions. I'm pretty sure that you won't be ignored if you find the developers responsible for the projects you care about and send 10-20 bucks their way alongside a list of the issues and proposed improvements.
> "interested in making things better" != "interested in making things the way I'd like them to be"
Same difference really if our opinions of what constitutes "better" are so drastically opposed.
> Then contribute some other way instead of just expecting the "community" to accommodate you and your opinions.
I have contributed both code and money to projects I think are doing good work. Sadly there are very few of them.
> I'm pretty sure that you won't be ignored if you find the developers responsible for the projects you care about and spare 10-20 bucks their way alongside a list of the issues and proposed improvements.
I can say with confidence that most of the projects I've donated to have given me absolutely no special treatment just because I contribute money. I wouldn't have it any other way, really; issues are issues regardless, and they should be fixed with regard to severity, not who has deep pockets.
Hell, that's probably one of the reasons things in Linux land are so ungodly complicated right now: FAANGs are calling the shots because they have the deep pockets.
> our opinions of what constitutes "better" are so drastically opposed.
I am not sure I follow. You mentioned somewhere else that Lubuntu was the one that gave you the least problems and that you are now using Windows. Coincidentally, Lubuntu is the flavor that looks the most like older versions of Windows.
To me it looks like your assumption is that anything that does not look like Windows 2000/XP is "worse". If you are starting from this point, don't be surprised if others disagree and ignore you.
(Myself, I've been using Xubuntu for the past 8+ years, but I am really not liking the direction Canonical is taking with snap. Perhaps I will switch to Debian + XFCE when I get a slow weekend, but this has nothing to do with desktop issues. It's not perfect, but the worst problem I can remember was getting a blank screen after resuming from sleep, which I solved by changing the screen lock program.)
> FAANGs are calling the shots
What the big companies are doing is related to the infrastructure side of things and has nothing to do with the desktop, except perhaps Google and ChromeOS, but Google's ChromeOS approach is looking each day more and more like turn-of-the-century MS and their "embrace, extend, extinguish".
Anyway, perhaps the issue is that you are conflating "Linux" with "Open Source Desktop" and expecting a central place to solve all problems?
> I am not sure I follow. You mentioned somewhere else that Lubuntu was the one that gave you the least problems and that you are now using windows. Coincidentally, Lubuntu is the flavor that looks like the most with older versions of Windows.
> To me it looks like your assumption is that anything that does not look like Windows 2000/XP is "worse"
That's a very condescending conclusion to draw. I found LXDE less complicated and significantly snappier than alternatives that had their own Ubuntu derivative. I chose an Ubuntu derivative because Ubuntu has the widest range of supported software.
But hey, it all has to do with how it looks right? Thinking like that by the Linux Desktop community is why you guys still aren't taken seriously.
> What the big companies are doing are related to the infrastructure side of things and have nothing to do with the desktop
The desktop experience is not wholly separated from the infrastructure beneath it. The init system, the event subsystem, hardware management, network management, sound system, display server etc. are only abstracted in the leakiest of ways.
> Anyway, perhaps the issue is that you are conflating "Linux" with "Open Source Desktop" and expecting a central place to solve all solutions?
Unfortunately it pretty much is the only option that is even remotely viable. But mostly I focus on problems with Linux because it has by far the most evangelical community.
I am not going to debate what exact problems you had, but I must be extremely lucky if in all those years I never had any kind of showstopper critical issue that made me think "OK, I can't deal with this and I have to go back to a proprietary desktop".
The last time I installed Linux and couldn't connect a printer or scanner was back in 2012 at the latest. Meanwhile, my wife's laptop on Windows asked to reinstall drivers every time she wanted to print something. Webcams? No problem. Wi-Fi? No problem, as long as I didn't try to use a chipset that was either too obscure or too new and unsupported.
The one thing that I gave up on having on my laptop is low-latency audio to connect a guitar and use software audio effect processors. But the way I solved this was by using a separate old laptop with a custom kernel dedicated to be my "guitar effect box". I still didn't have to give up my freedoms and I did not have to give up any functionality/comfort.
> But hey, it all has to do with how it looks right?
I believe you when you say that LXDE was snappier than the other Ubuntu alternatives, but were the alternatives slower than whatever version of Windows you have now? That would be very hard to believe.
So forgive me for sounding condescending, but you went with probably the most obscure and least popular Ubuntu flavor, the one with probably almost no funding from Canonical and maybe a handful of developers interested in it. What were you expecting, exactly?
If Ubuntu was bad for you, maybe try Fedora? If you wanted a more knowledgeable community, maybe try Arch? Why not, instead of sticking with your preconceptions of how things should work, ask what others are doing that lets them be productive on a FOSS desktop? Why is it that upon hitting difficulties your reaction is to go back to the comfort zone of a proprietary and familiar system?
"Is FreeBSD ready for the desktop? Yes and no. Yes, in that I have a very nice FreeBSD laptop where everything works the way I want. But no, in that it took me two months worth of fiddling with this in my spare time to fix some of the "glitches" which arose; while there wasn't anything particularly challenging, I expect that most people would give up long before they fixed all of the issues I ran into.
On the other hand, can FreeBSD be ready for the desktop? Absolutely. I've fixed the issues I ran into — and once we have FreeBSD 12.2-RELEASE with packages built for that release the process of bringing up a GUI will be much easier, as well. The biggest thing FreeBSD needs is to have developers acquiring laptops and carefully working their way through the issues which arise; the FreeBSD Foundation has already started doing this, and I hope in the months to come they — and other FreeBSD users — will publish reports telling us which laptops work and what configuration they need."
There are projects and projects. On some I got ignored as well, even with a patch and bug info provided; on others (mozilla/rust) my work was reviewed/integrated in a few days, or I was told I was wrong and the bugs I reported were fixed another way.
> > Ubuntu deciding to remove old libraries/apps that are not maintained. That's fixed via docker or just keeping an old version.
> Which is not a simple task. Why can't keeping old software be simple? It is in sane operating systems. Hell, even Linux can do it right,
I've wondered about these things, and I think the true reason is that Linux is a source-compatible operating system.
Other OSes solve this with the boring and painstaking task of ensuring binary interfaces are stable and remain working. They usually do this by hiring and paying people to do it.
Linux does all compatibility at the source level, and binary compatibility is a little hit or miss. The common way to fix it is to recompile a lot of stuff.
As one example, I installed Ubuntu 18.04, which is supposed to be Long Term Support... but I did an
apt-get update && apt-get upgrade
and got upgraded from a 4.x kernel to a 5.x kernel. I recall all the kernel dump stuff broke.
Linux has really come a long way in terms of polish, support and stability. Give it a try again!
I use Linux Mint and love it.
MS Teams, Skype and a surprisingly good list of software runs on it natively.
A Hackintosh inside VirtualBox IS a pain to set up, but pretty cool when it works. Windoz inside VirtualBox works better than ever, thanks to MS's new attitude of embracing Linux... which is still hard to wrap my head around.
You are comparing circumstantial issues with fundamental freedoms being denied.
My point is that if you are willing to sacrifice your freedom for the convenience provided by the hypercapitalistic (sic, and lol at how pathetic this term is) companies, then don't complain about the lack of choices available.
> developers don't care about stability and/or polish
Try paying them just a fraction of whatever premium you paid for your iDevice. That might help.
In my experience, people who fight against things like NM do not know what they are doing: they think it's still 1990 and network configuration is still a static /etc/network/interfaces, and then wonder why their wifi/LTE modem/DNS/whatever isn't working.
Does the fact that you think alternatives suck make apple have a monopoly? Sure they have a monopoly on your desires, but that isn't a big enough scale for laws to get involved.
There are alternatives out there. So it becomes really hard to claim anti-trust.
I think it’s really easy to claim anti-trust violations actually.
Apple already owns the customer because they’ve invested in the platform, but they’re not providing equitable access to software that other platforms are. This isn’t revealed to the customer though, so it’s not clear as a user that choice is being restricted in this way.
A lot of Apple’s practices have been legal up until now due to their minority market share in all markets they operate in - but what we see those markets as is changing. The App Store is a massive multi-billion dollar industry in itself that Apple holds and exploits 100% control over.
Whether or not a violation has or is occurring is for lawmakers to decide based on whether or not the App Store (or Google Play for that matter) constitute markets within the definitions provided by local laws.
Thankfully, hypercapitalism has provided alternatives: Windows, Android, or Linux and Pine.
I suppose if you're a socialist you probably don't understand how it all works without a government edict giving you instructions and making sure you and your neighbor have the same marginal product.
However, even without this edict, rest assured that you can make the change without the government allowing you to... Feel free to switch.
I get that you're just venting, and that's fine. But were you to run any kind of open software movement and express public opinions, calling people "obsessive brand fanatics" would just antagonize them and not get you taken seriously.
To be fair, there are people out there who do indeed fit that description. They are a minority, but tend to be pretty vocal, so it's easy to overestimate their numbers.
I don’t think anyone really had that impression, especially since most of the macOS Forge projects got spun off. But you can’t deny they made (and in many ways, still do) good UNIX machines.
> But you can’t deny they made (and in many ways, still do) good UNIX machines.
If you mean the hardware, it's OK I guess. It still lacks basic computer features like PXE booting (unless you count the proprietary "netboot"). You can't really install much on it or use it for anything but MacOS, which really isn't that great, IMHO. For the same cost as a MacBook, I got a really nice PC laptop with double the specs that runs linux flawlessly. I can also update the CPU, GPU, and RAM, which I can't do with a macbook.
Which model is that? It used to be that way up until 5 years ago or so in my experience, but it changed to be “comparable specs with comparable price” - except that the options from Apple are very limited.
> For the same cost as a MacBook, I got a really nice PC laptop with double the specs that runs linux flawlessly. I can also update the CPU, GPU, and RAM, which I can't do with a macbook.
My go-to laptop these days is the Lenovo ThinkPad X1 (either Carbon or Yoga). Very nicely built, with a great keyboard; I hardly (if ever) hear the fan, and except on a couple of models where the fingerprint driver isn't present, it works flawlessly out of the box with Linux.
I'm still wondering what Lenovo was thinking when they came out with the Gen 8 X1.
They used Comet Lake instead of Ice Lake, which results in things like the HDMI port supporting only 1.4 (i.e. no 4k@60 there). It makes their current lineup uncompetitive with the 2020 XPS 13 or the 2020 MBP 13, which do come with Ice Lake.
No commercial UNIX was ever in the open source camp; in fact, the vendors are the very reason why GCC, after being ignored for several years, got a bunch of helping hands once Sun started the trend of splitting UNIX into user and developer editions.
Also, given the NeXTSTEP heritage, UNIX on NeXTSTEP was always a means to get a foot in the door on DoD UNIX requirements; there was nothing open source about RenderMan, Lotus Improv, and many other NeXTSTEP-based tools.
Nitpick: Not really. What you have to do is provide an offer for source code; accompanying the program, not after the fact. If anyone has not provided such an offer, they have already broken the GPL.
Also, the offer is open to any person. This is so that other people with copies of the program can fulfil their obligation by passing on the offer too. So maybe you make one GPL program specifically for Bill, you give it to Bill, and you write Bill the offer, never expecting him to care about the source code.
Six months later a teenager from a country you didn't know existed sends you an email - and the teenager would like source code please. They are legally entitled to that source code because of Bill's offer.
The written offer rule is deliberately the worst case. You should never choose GPL "written offer" with the expectation that this is reducing your work load or whatever, if you want least work just ship the source code with your program and fulfil the purpose of the GPL up front.
I believe the offer is for anyone you’ve distributed the program to. So if it was Bill who shared a copy of the program with the random teenager it would fall upon Bill to provide him with the source and not you.
“[…] a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.”
— GPL 3, Section 6, alternative (b).
Yes, in GPL 3, a link would be enough, but the link must already be provided with the distributed program; you don’t get to give the link only to those people who ask for it.
In GPL 2, a link is not OK, you must be prepared to send people the source code as “machine-readable copy”, “on a medium customarily used for software interchange”.
rootless.h is a missing system header, not a missing part of the Bash sources (the function it declares is part of libSystem.dylib; it is not part of bash). So leaving it out falls under the system library exception and does not violate the GPL.
Apple does have a handful of engineers working with/on open source projects; whether it is part of their official job duties I don’t know. But it is much appreciated. However, to the person doing the open source tarballs: please respond to emails and be quicker about uploading, thankyouverymuch
> whether is a part of their official job duties I don’t know
Certainly. From what I've read here on HN, you basically can't publish a line of code (or even star things on GitHub) on your own, presumably for secrecy reasons.
By official job duties I mean "my manager has told me to spend half my time contributing to MacPorts" versus "my manager (and my manager's manager, and their superior, all the way up the chain to the top, plus legal) has allowed me to send commits to MacPorts even though I work on embedded platforms".
So ... rootless.h does nothing? I can compile a bash without it that does exactly the same thing? That seems like it would contradict previous comments. What am I not getting?
It contains the declaration of a single function from macOS's libc. You cannot compile Apple's provided bash sources at all without it, but it's trivial to work around the missing header.
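Since the header only declares a single function, the usual workaround is to drop in a stub header. A hedged sketch (the function name `rootless_check_trusted` and its signature are assumptions on my part; check the actual compiler error for the real declaration):

```shell
# Hypothetical workaround for building Apple's bash sources without the
# missing rootless.h. The declared function's name and signature are
# assumed here; adjust them to whatever the compiler error reports.
mkdir -p stub-include
cat > stub-include/rootless.h <<'EOF'
/* Stub: declare the single libSystem symbol the sources expect. */
int rootless_check_trusted(const char *path);
EOF
# Then point the build at the stub directory, e.g.:
#   ./configure && make CFLAGS="-I$PWD/stub-include"
echo "wrote stub-include/rootless.h"
```

Since the real implementation lives in libSystem.dylib, the linker resolves the symbol at link time; only the declaration was missing.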
OpenStreetMap treasurer here to jump on the tangent. Hi!
Apple's focus is on maximising profit, and ours is on maximising mapping and the breadth of our membership, especially after the entryism attempt last year ( https://news.ycombinator.com/item?id=19008792 ).
I'm happy for OSM, that's great for you and in turn the community. But what about small open source projects? One-person projects where the idea of having several people working on securing legal non-profit status and then acquiring non-profit signing certificates from Apple is for all intents and purposes impossible? And certainly not worth just coughing up the annual $99 fee (+tax) to Apple?
$99+ every year is a lot of money to an independent open source developer who's in most cases losing money for their work. The fact that a company worth $2 trillion is demanding it - it's really beyond outrageous.
Outrageous is the entitlement of the current generations expecting not to pay for their tools.
Many of us used to pay for every single piece of software we ran, on top of an already expensive computer (around 2000 euros in today's money).
This is a nice example of what a logician calls a false cause: your conclusion (definition of outrageous) isn't supported by your premise (how much you used to pay for kit).
It's also a straw man, since I was talking about an OS developer wanting to publish their software, and you're attempting to sink it by portraying it as referring to a consumer wanting free stuff.
The subject of your conclusion, 'current generations', is also so vague as to be redundant. Current generations who are alive? Generations of 21st century? Of modernity? Of the West?
Like 1000 euros per year for a MSDN Professional license, or the required certification from several vendors that have to be renewed every couple years.
You don't need to spend €1000 to develop for Windows. Visual Studio Community Edition [1] is free to use for individuals, even for developing paid applications. Even if you're running a multi-developer business, Visual Studio Professional can be had for far less than €1000.
What you're referring to is the top-tier MSDN subscription, which is something that very few organizations will require.
Yeah, but let's not forget that Community is also relatively recent, having replaced the Express editions, which were worthless beyond learning purposes, as per their license.
Recently I tried to compile Scintilla in Visual Studio, but gave up on figuring out how to tune all the settings in the IDE and compiled with nmake instead; the makefile was very transparent and hackable, with everything in plain sight.
Or having to pay Red Hat for support to get access to their KB, get updates, and pay separate licenses if you want any of their premium software offerings. Or having to shell out for console dev kits and game engines. And the embedded software world is even worse.
Being able to realistically develop for a platform with little to no expense is the exception. People really are spoiled by FOSS tooling.
That's the exact purpose of all the FOSS tooling: to make tools free, so more things can get created. Make all FOSS paid and enforce the licensing, and you will bankrupt a lot of small commercial companies that rely on it too. Then only a few giants will remain.
This is not about paying for a tool. This USD 99 is a tax paid to Apple to be allowed to distribute the software you wrote, regardless of which tool you used to write it. It could be written with a free tool, it could be written with an expensive tool, it doesn't matter; everyone who wants to distribute software to run on macOS has to pay that tax.
You sound like the people against student loan reform because “I had to pay mine back, why shouldn’t they”. That’s peak entitlement, demanding others suffer because you did.
Your perspective is limited, IMO. For example, you have small tools like a save-cleaner utility for The Sims 3: some person made this tool in Java in their free time and shared it for free with the community. Why should this dev pay Apple?
There are people who make visual novels, text-based games, and other indie stuff (without using Apple tools; most use Python, Java, or web tech) for free or a few bucks. I think they would not pay the Apple tax, and would either not support the Mac at all or link to instructions for working around these limits while that is still possible.
Before Java was a thing, that utility would have been written with Turbo Pascal, QuickBASIC, Visual Basic, or a C/C++ compiler, none of which were available for free.
At minimum one would have needed to pay the shareware or PD disk tax to get hold of a similar compiler.
Hobbyists either used open source or some freeware compilers. The reason I mentioned Java and C# is that they're easy to use for simple tools and you can support every OS. Around 15 years ago I bought a book about games and C++; it had a CD with a free/gratis version of a C++ compiler (probably from Microsoft). I made some small games and shared them with my friends (I did not have to pay Microsoft a tax or ask for approval).
Or use a free (libre or gratis) compiler. The GNU Compiler Collection was released over three decades ago, and the BSD-licensed Portable C Compiler was initially released over forty years ago. As for BASIC, many operating systems had a BASIC interpreter or compiler built in.
I learned programming on MS-DOS with DJGPP, which is basically the GNU Compiler Collection (and lots of other GNU software) for MS-DOS. It certainly was usable (and included two free IDEs: RHIDE and Emacs), it was as good as GCC on Unix except for the lack of multitasking (which is what led me to Linux). This was long before Windows 98; the earliest DJGPP I can find is from 1994, and I had already migrated from DJGPP to GCC on Linux before 1998.
> Outrageous is the entitlement of the current generations for not paying for their tools.
Sure, but in the case of an open-source developer working on macOS, he has already paid for his operating system; if he is using GCC, he has already paid everything the GCC developers require; why then must he pay extra money to Apple in order for other people to run his software in a straightforward manner on their machines (or, in the future, at all)? How is Apple even a party when two people wish to transact, when one writes and compiles free software on his hardware (paid for) and software (paid for) and the other runs it on his hardware (paid for) and software (paid for)?
Historically, free software follows proprietary software because proprietary vendors can't contain their greed, which makes free software necessary. It happened with GNU, git, and Nextcloud, and it keeps happening over and over.
This isn’t about not wanting to pay. It’s about being forced to use a tool in spite of much better possible alternatives. If Apple allowed SSL-style certification, I’m sure cheaper and better alternatives (similar to Let's Encrypt) would pop up.
Interjecting a side note while you're here: a building near me was a pub 15 years ago, then it was empty for 5 years, until it became a convenience store. It's still listed as a pub, 2 years after I, and others, sent corrections. I'd love to love OSM, but...
OSM works like Wikipedia. There's no company looking at notes, it's just volunteers. You can also do the modification yourself, it's quite easy, you just need an account.
For that particular case, if you post the URL or location I can take a look if you want.
Actually doing mapping ideally requires a bit more understanding than a map user might want to acquire. So it may make sense to provide a correction and then let people with more expertise apply their knowledge to the problem, rather than stumble about and maybe make more work for somebody else.
Perhaps you notice that (as a gross example too large to be likely) the big field a few kilometres away from you that's used to fly aeroplanes isn't labelled on OSM. You don't know much about maps or aeroplanes, but it's not on there.
If you go into an OSM editor and tell it that's an airport you're probably unintentionally adding false information. Because it probably isn't an airport, there's a good chance OSM cares exactly what it is, like maybe it distinguishes controlled and uncontrolled airfields, maybe it would prefer you label the area one way, and then also label any marked runway (perhaps there isn't one) separately. There's a Wiki full of instructions about the best way to label things. Sometimes there are also local conventions, maybe the Wiki says not to distinguish uncontrolled airfields, but in your area a convention has arisen to add a specific marker for them. All this is stuff that an editor ideally should know, but a random person who thinks "Hey why isn't this on the map" doesn't know.
This is all correct but I think the default map editor does a good job of guiding newcomers for simple edits, and also lets you tag your commit for review if you're in doubt.
For small corrections (such as changing a business from a pub to a store, adding a road, naming a street…) it's perfectly accessible to anyone interested.
For sure for complex edits (like touching important objects such as airports) it's better to make a note if you're not familiar with it.
In some regions you have active mappers looking more at notes, in other areas less. Also many mappers would want to verify before applying the note ... in the end you mostly have many volunteers with their individual intrinsic motivation.
It is useful to report problems as map notes on the website. Less useful than fixing it yourself obviously, but many regions have regular mappers that look through the notes from time to time. So it helps if there are regular contributors caring about the area.
OSM Notes are still useful. But the goal of OSM is a common owned geodatabase, ie a map. I hope eventually every person feels empowered and able to make simple map changes like this.
Many (often cross-platform) apps are no longer signed, so they throw up this warning–I assume that users of these have long since learned that the warning is just something they need to bypass. macOS-native apps have largely adopted notarization and the fee that comes with it. Open source command line tools do not need to be notarized.
Interestingly enough, it seems to be possible to notarize someone else's app, so perhaps it might be a worthwhile use of my developer ID to provide this service to people I trust but don't want to shell out money…
It's important to distinguish between Developer ID and notarization. Signing an app is done by the developer. Notarizing the app is done by Apple.
If you check the code signature of a Developer ID signed app, you'll see the developer's name and Team ID from the signing certificate. This guarantees the app was signed by that developer, as long as the developer has kept their private key secure.
First you sign the app, then you upload it to App Store Connect for notarization. It's an "open secret" that Apple has allowed any Apple Developer Program member to submit any app for notarization, even if the app wasn't signed by them. Apple really wanted all apps notarized. Whether Apple will crack down on this practice in the future, who knows.
The notarization "ticket" is signed by Apple, not by the developer. I've heard of developers who discovered that someone else notarized their app. But nobody else can put their "name" on the app except the owner of the Developer ID certificate. If you Developer ID sign someone else's unsigned binary, you're presenting it to the world as your own. But that's not the case with notarization. Nobody except Apple knows who submitted an app for notarization.
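You can inspect both layers yourself on a Mac. A sketch (the app path is a placeholder): `codesign` shows the Developer ID identity that signed the app, while `spctl` reports the Gatekeeper assessment, which takes notarization into account.

```shell
# macOS-only sketch; "/Applications/Example.app" is a placeholder path.
APP="/Applications/Example.app"
if command -v codesign >/dev/null 2>&1; then
  codesign -dv --verbose=2 "$APP"   # prints Authority=... and TeamIdentifier=...
  spctl --assess --verbose "$APP"   # Gatekeeper verdict, notarization included
else
  echo "codesign/spctl are macOS-only tools"
fi
```

The `Authority` chain tells you which Developer ID certificate signed the binary; the `spctl` verdict ("accepted, source=Notarized Developer ID" or similar) reflects what Apple's servers saw.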
> learned that the warning is just something they need to bypass
Note that I'm not necessarily arguing that training people to click "yes, yes, continue..." is a good idea. Digital security is my day job and I totally see why Apple wants digital signatures for software. However, the message is opaque about what is really going on and just tries to scare people into buying "trusted" software rather than using free software: that developer fee doesn't pay itself.
> perhaps it might be a worthwhile use of my developer ID to provide this service to people I trust
I was thinking the same, we could pool the money, but figured Apple almost certainly prohibits that "for security".
Not only is the message opaque, but it is intentionally misleading. I know the security team at Apple occasionally has trouble coming up with good explanations of what is going on, but this message really can't be looked at in any way other than being misleading, sorry. And you are absolutely right that misleading messages like these train users to click through warnings.
"application cannot be opened" is a false statement. It can be opened, and the user can open it, but they won't tell you how because they didn't get their bribe.
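For the record, the undisclosed "how" is either right-clicking the app and choosing Open (which adds an "Open anyway" option), or removing the quarantine attribute on the command line. A sketch with a placeholder path:

```shell
# macOS-only sketch; the app path is a placeholder.
APP="/Applications/Example.app"
if [ -e "$APP" ] && command -v xattr >/dev/null 2>&1; then
  xattr -d com.apple.quarantine "$APP"   # Gatekeeper no longer intercepts it
else
  echo "usage (macOS): xattr -d com.apple.quarantine /path/to/App.app"
fi
```

The quarantine attribute is set by the browser on download; once it's gone, the dialog doesn't appear.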
If the signed software is notarized, and the signature checks out, then you can be sure that Apple did some malware-scan-like process to the app on their server at some point(1) and that the app you’re seeing is the same one they saw.
(1) and probably a manual review if the App under analysis was found to call into any but a whitelist of “safe” system APIs.
Without the code signing, you can’t be sure that the app you’re seeing is the same one Apple‘s servers saw. It might be a copy of the app that has had a virus injected into it (which has happened quite a few times recently in pirated macOS software.)
I think we all agree on what the security benefits are, because we know what’s going on. But Apple is telling users that they can’t verify it’s free from malware, implying that all notarized code is free from malware, which is a ridiculous claim to make, and discourages people from using excellent software that Apple, for whatever arbitrary reason they like, have decided not to notarize.
> implying that all notarized code is free from malware, which is a ridiculous claim to make
How so? Even if they don’t catch malware during notarization, Apple also reacts pretty quickly to invalidate a developer’s code-signing certificate if they use it to sign apps that contain malware (as soon as Apple is made aware of that malware-app, for which they maintain relationships with both major antivirus vendors and independent security researchers.) Your computer then receives the new Apple code-signing CRL in a silent update, and won’t run the app (or any app by that developer) any more. Even if you’re offline at the moment, and so can’t contact the notarization servers to find out the app has been denotarized, as long as you’ve been online at any point since the CRL was updated, you’ll be protected. (And where does malware come from? These days, 99% of the time, the network. So if you stay offline, you’re extremely unlikely to run into novel malware anyway. And if you’re online to receive the malware, you’re almost certainly going to have received the CRL update first.)
And sure, there’s a small period of vulnerability before Apple is made aware of new malware; but most malware infections are not from zero-day malware, but rather from malware that’s been going around for a long time already. (And I believe they also push ‘disinfectant’ logic in those same silent updates that update the code-signing CRLs, same as Microsoft does with Windows Defender. So the usual “join a botnet, hijack your browser” kind of malware can simply be reverted.)
Plus, there’s the whole System Integrity Protection thing, meaning that macOS malware can’t really do anything to permanently subvert the Gatekeeper infrastructure, since it lives in the “untouchable” root partition. (It could do something clever with a system extension, but as of Catalina you have to explicitly activate those in the Security preference pane; and probably, as of Big Sur, you won’t be able to activate them at all.) So it’s only people with SIP off (i.e. system extension developers; Hackintosh owners) who would even feel any sort of “deep impact” from any of this malware. Meaning that macOS malware authors basically don’t bother to try to “deeply embed” their malware into the OS, given that the process will only actually work on a tiny fraction of systems.
Anyway, all that being said: it’s not like Apple said they can’t “guarantee” that the app is free from malware, implying that signed+notarized apps would be guaranteed free from malware. They just say they can’t “validate” that the app is free from malware, implying that the apps that don’t show this warning have been “validated” by Apple—i.e. audited, to the best of their own abilities and current knowledge. Signed off on, like a home inspector signs off on a house. And that’s exactly the case. Apple has “validated” those apps. That doesn’t translate to some technical guarantee of safety, like running the app in a VM would give. It only translates to “you can trust this app to the degree that you trust Apple’s validation process.”
It’s exactly the same claim that Chrome and Edge are implicitly making when you download software through them on Windows: the software gets “validated” by Google/Microsoft as not containing malware to the best of their knowledge. It’s an antivirus signature scan, combined with a trustworthiness heuristic based on whether the developer was willing to sign their software. The only difference is that, in Apple’s case, the “antivirus scan” part happens on a server somewhere, asynchronously, rather than on the client. But it’s the same level of effective security.
I think an important corollary is that if a binary is signed and does turn out the be malicious, there's a path to comeback on whoever submitted it. The signing/notarisation process creates a chain of responsibility.
It doesn't say that. It says that it can't verify the developer, and can't verify that the software is free of malware. It's just some arbitrary piece of software, could be written by anyone, and/or could be software that purports to be Word or Photoshop or whatever, but has been modified.
Granted, you could quibble with the details (does pointing out that you can't verify that it's free from malware imply that you could verify that it's free from malware if there were a certificate?). But calling the message "intentionally" (!) misleading?
I... don't think misleading means what you think it means. Misleading statements (pretty much by definition) don't imply falsehoods. They "merely" "suggest" falsehoods to those who don't already know better. If they intentionally "implied" falsehoods then they would be called "lies", not "misleading".
One of the possible warnings you can get literally has "[App name] will damage your computer. You should move it to the trash" in the dialog that shows up. There's a bunch of these, all of them pop up for various GateKeeper/Notarization shortcomings, and none of them actually seem to ever really tell you what the problem was.
1) I searched the article for "damage" and "should move" and didn't find it, so either it was in a screen cap (but I didn't find it there, either) or you meant "literally" in the new sense of "not literally".
2) Apple documentation [1] says (my highlight) "The Apple notary service is an automated system that scans your software for malicious content, checks for code-signing issues, and returns the results to you quickly."
Is the claim that Apple is not actually scanning notarised software for malicious content?
3) Random unsigned apps presumably have not been scanned, and might contain malware. I still fail to see the problem, or what's misleading (and "intentionally" so!).
I put quotes around it because that is the exact wording it uses: https://www.google.com/search?q=will+damage+your+computer.+y.... You may note that among the apps shown there is LibreOffice and somebody’s issue on GitHub saying they were getting it when creating their Electron app.
> Is the claim that Apple is not actually scanning notarised software for malicious content?
No, the claim is that just because Apple _hasn't_ scanned some particular piece of software for malicious content, that doesn't necessarily mean it _does_ contain such.
> 3) Random unsigned apps presumably have not been scanned, and might contain malware.
Exactly: they _might_. But popping up big hysterical warnings about it strongly implies, particularly to less technically well-versed users, that they _do_.
> what's misleading (and "intentionally" so!).
Strongly implying something that is obviously not true, that's what's misleading. In fact, AFAICT, that is the very definition thereof. And unless they're putting stuff they didn't intend to say into the dialogs they pop up, then yes, it is obviously intentional. Is the claim that their dialog text is un-intentional?
> I still fail to see the problem
Two hoary old quotes (or is the first a proverb? Maybe literally, from Proverbs) come to mind:
1: Nobody is as blind as he who does not want to see.
2: It's hard to make a man see something he doesn't want to see, particularly if his salary depends on him not seeing it.
(Personally, I do data warehousing / ETL programming for a living; currently at the Finnish Social Security Agency.)
It doesn't seem like they verify every app to ensure it is free from malware. Since they respond in the affirmative if the app is signed (by not warning), it seems reasonable for a lay person to believe that an app that doesn't throw this warning is free of malware.
"The Apple notary service is an automated system that scans your software for malicious content, checks for code-signing issues, and returns the results to you quickly."
They couldn’t verify it’s free of malware no matter how much scanning they do. That’s not quibbling with details, it is the fundamental claim that Apple is making.
In my day job, I work on a relatively large open-source non-GUI application.
macOS is becoming an increasingly difficult platform on which to release software. We're going down the notarization rabbit hole (which is a nightmare), but given that we don't fit on the App Store, it's very obvious that Apple doesn't want us on the platform.
My suspicion is that they will eventually charge $$$ for a "developer unlock" on Apple Silicon, a move that I think will make both Windows and Linux look increasingly attractive to developers.
Yes, I've found code signing on Windows is a lot more hassle than on macOS. You still need to pay for the certs as well. The only distinction is that the scary warnings on Windows come slightly earlier.
Unfortunately, it hasn't fixed the problem with running VirtualBox using the Hypervisor Platform/Virtual Machine Platform (I forget which) while Hyper-V is enabled: SHA sums (and other hashes like Whirlpool, MD5, etc.) don't work properly, meaning I can't use WSL2 and VirtualBox, or the Android emulator and VirtualBox. And yes, this seems to be an obscure issue that's hard to research (`Intel SHA extension` and `/etc/gcrypt/hwf.deny` may help); I've had to look this up three times now because I keep doing it in private browsing mode or on another device/browser.
Don't mind me, I'm just annoyed that Microsoft won't add support for ssh-copy-id.
Please note that Apple's linker will automatically ad-hoc sign binaries if you aren't using a signing certificate so there is no impact to package managers or any other forms of building software to run locally. Xcode already automatically opts such software out of GateKeeper checks when built from the UI. Similarly adding Terminal to the Developer Tool category in System Preferences will do the same for anything you run or build there.
I should also note directly launching the binary inside the App bundle from Terminal bypasses the UI dialog. The assumption is you know what you are doing in that case.
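A sketch of both points (the binary and app names are placeholders): identity "-" requests an ad-hoc signature with no certificate, and launching the bundle's inner executable directly skips the Gatekeeper dialog that `open` would trigger.

```shell
# macOS-only sketch; "mytool" and "Example.app" are placeholder names.
BIN=./mytool
if command -v codesign >/dev/null 2>&1 && [ -f "$BIN" ]; then
  codesign -s - "$BIN"    # ad-hoc sign: identity "-" needs no certificate
fi
# Direct launch skips the dialog that `open Example.app` would show:
#   ./Example.app/Contents/MacOS/Example
echo "ad-hoc signing sketch done"
```

An ad-hoc signature carries no identity, so it satisfies the "must be signed" requirement for locally built software but doesn't identify a developer.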
The comment you're replying to is confusing because it's been copied out of context from where it was originally posted, which was a discussion of a mandatory signing requirement on Apple silicon.
There's a segment of open source users who understand these messages and aren't necessarily dissuaded by them. I make an open source app for merging audio files in iTunes/the Music app. The userbase are people who are technical enough to install Homebrew, (generally) use iTunes scripts, and manage Gatekeeper warnings, but not so determined as to replicate the app's functionality with their own set of shell scripts. https://www.davidschlachter.com/misc/trackconcat
In addition to paying the Apple Developer Program fee, you as an open source or hobbyist developer are required to sign a legal contract with Apple in order to be able to code sign your software. This can be even more problematic than the fee.
>This makes me wonder how open source is supposed to work on macOS.
It isn't. Apple's view of the world is that computer users are non-technical consumers who need to be protected from others and themselves, and that Apple are the ones to offer that protection. Open source is antithetical to this view because it puts the responsibility on the user, which is the last thing Apple wants.
I can sympathize to some extent with this view. There's obviously a large (perhaps majority) share of computer users who it describes - just not small/independent developers/hackers. Those users are better served elsewhere.
I've got to say, I think your proposed message is considerably less clear than the actual one.
E.g., a reader would have to understand the perspective of the developer to even start to guess what that might mean. (Why would a developer pay or not pay $99 to Apple for verification? How do the implications of that affect my decision to run this program?) It would be pretty much meaningless to the average non-developer user.
I agree the price of notarization should be a nominal incremental cost. I don't know if there are many level 3 people doing macOS development, but if so, there needs to be a cheaper price for them. (The number of level 1 and 2 macOS developers must be practically nothing.)
>>I also can't imagine $100 is easy to come up with in countries below level 4[1]
Someone who is developing for the Apple platform specifically has already spent ~$1000 on devices. Say they couldn't afford to build explicitly for Apple[1]; they instead develop for the web using a Raspberry Pi and try to leverage smartphone capabilities using PWAs. Alas, Apple throws in hurdles there as well, so that your PWA doesn't function properly on Apple devices[2].
I get it, perhaps this is part of Apple's aspiration i.e. 'You should deserve to be part of the Apple ecosystem' which is enticing to its customers.
But what's overwhelming to me is Apple's blatant hypocrisy.
Exhibit 1: Data
Apple calls out Google by name, questioning its business model around data, and proudly claims they 'chose not to do business with data'.
Then why does Apple advertise its products using Google Ads?
So it's like 'I will call out dirty work, but I will use the results of that dirty work for my own advantage'.
Exihibit -2 : Values
Apple claims itself to be the beacon of human rights.
a. We know Apple was included in a list of companies supplying user data for snooping, in documents widely regarded as genuine.
b. We know Apple actively cooperates with an autocratic regime, and its highly publicised 'privacy features' aren't applicable there. But Apple never includes 'USA only' when it advertises its 'privacy'. Moreover, when confronted with proof of Apple's platform being actively used for the exploitation of minorities, it outright downplayed/dismissed the impact.
> Someone who is developing specifically for the Apple platform has already spent ~$1000 on devices.
Ignoring the fact someone might have been given a Mac by someone else, or bought one second-hand for less, or that they might be working on a computer they don't own themselves, why is it that someone who can afford $1000 for a Mac can automatically afford another $100? Surely there has to be an amount that you assume they can't afford, right? If they can afford 10% more then why not 15%? Or 20% or 100%?
You're applying a sort of reverse of Zeno's Arrow[1] to affordability, and I think it shows a distinct lack of understanding of how money works when you don't have all that much of it.
I have two macs. Main is a 2013 macbook air which cost £999 (work bought it), so £140 a year. Second is my own mac mini from 2012 which cost £500, so £60 a year.
My main machine is a linux one. Costs nothing to write software on that of course.
I get the feeling that developers who 'came of age' in the last 10-15 years will slowly discover RMS was right all along.
>why is it that someone who can afford $1000 for a Mac can automatically afford another $100
Someone who has invested ~$1000 specifically to develop applications for the Apple ecosystem has to invest $100 more to release the application; that is the overall context of my statement in that sentence.
>You're applying a sort of reverse of Zeno's Arrow[1] to affordability, and I think shows a distinct lack of understanding of how money works when you don't have all that much of it.
Cherry-picking part of my sentence to make a statement, then claiming insight into my understanding of how money works based on how much of it I have, seems like intentionally using a logical fallacy of your own to make an ad hominem argument.
Interesting to see PWAs being proposed as the ultimate alternative on HN all the time. Questions of quality etc. aside it's essentially Google's platform that they're pushing for via standards that benefit them and their goal of the web as a platform they control. While Apple wants dominance over their walled garden, Google wants total dominance over the web.
You know, I grew up in USSR.
There were no Western-made cars, only Soviet ones: Ladas (based on a 40-year-old FIAT), Volgas, the dreaded Moskvich, or Zaporozhets. Volgas were for the elite and not really accessible to ordinary citizens, so the Lada was it. And it seemed to be a fine car—because you did not know any better. Yes, you had to reassemble it yourself after you bought one to make sure it didn't fall apart on the road, but otherwise they seemed fine.
That is, until the USSR collapsed and Western used-car markets became accessible. Almost all Soviet cars were replaced by old, mostly German, ones. Why? Because people saw the difference. And they saw that even a 15-20 year old Audi, BMW or Opel was still waaay better than brand-new Soviet crap.
So yeah, there are things that may seem fine till you have a chance to compare them to the truly fine.
Google has a functional monopoly on online advertising, so your options for online advertising are Google and Facebook, or ad companies that primarily serve scams. Seriously, it's just Google, Facebook, and then things like Taboola; and the most popular sites on the web are 100% Google ads only.
Apple’s alternative to google ads is essentially no online advertising.
It mostly doesn't. That is 20% because of bullshit like this and 80% because Apple deprecates, removes, changes and otherwise encumbers their operating system so much that keeping track of it all is a full-time job.
I've seen a couple projects do this by publishing a Mac App Store version. Completely identical to the open source release, but it pays the Apple tax to run without warnings, and it gets App Store-powered auto updates.
The developer who signs the software is thereby taking legal responsibility for the software, and taking the blame if anything is wrong with it. That's not a good risk unless you're signing for someone you trust completely.
It also might be tough to run as a business because it's quite possible the first time you sign someone else's malware, Apple's going to revoke the notarization of all the apps you've signed (which would be for other paying customers).
Not to mention it undermines the purpose of notarization, so if it became popular enough they'd probably just squash it.
That would be true for rubberstamping-as-a-service, a weaker version of running a rogue CA. But project maintainers getting themselves an Apple ID and recouping that cost + x (hopefully) via non-gratis signed binaries wouldn't have that problem at all. Or if they did (malware sneaking into their artifacts), a lost Apple ID should be the least of their concerns.
There'd even be a conceivable but unlikely scenario where some automated scan deep inside the Apple publishing pipeline detects an otherwise undetected malware intrusion in some upstream dependency or badly vetted commit, thereby indirectly protecting the users of the unsigned copy by acting as a canary.
How much does that actually change liability vs building and distributing unsigned? Signing has no legal implication other than lowering deniability. What's added is the contract with Apple. Is that such a minefield?
I publish a CLI app for many platforms, including osx, on GitHub. Using the cross-compile feature of Go it just spits out a binary that Macs can run from a Linux build host. I don't own a Mac so don't see what my users do.
Does this change affect running unsigned binaries from the terminal?
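For reference, the cross-compile setup described above really is just a couple of environment variables; a sketch, assuming a pure-Go program (the output names and package path here are made up):

```shell
# Cross-compiling a Go program for macOS from a Linux build host.
# CGO_ENABLED=0 keeps the build pure-Go, so no Apple toolchain is needed.
CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build -o hello-darwin-amd64 .  # Intel Macs
CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 go build -o hello-darwin-arm64 .  # Apple Silicon
```

The resulting binaries are unsigned and un-notarized, which is exactly the situation the rest of this thread is about.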
> I also can't imagine $100 is easy to come up with in countries below level 4[1].
Apple has as low as 0% penetration in those countries. The market has solved this problem. They still use technology; there are alternative platforms. Android, Windows, ChromeOS, KaiOS, and desktop Linux (which has as high as 5% market share in India) are cheap to use and develop for. It was always going to end up this way. There's the brand for the haves and the brand for the have-nots. Although even people on welfare in the United States have iPhones, consider that they're still the elite in global terms.
Aren't the new ARM Macs going to support Docker in a nice way? Wouldn't that solve it? You can have what you want in your Docker container, and Apple can do whatever it wants (short of blocking Docker's essential capabilities, of course) to isolate it.
But then you have to use Docker. It'd be nice to have something like that on iOS, where it'd be an improvement, but on macOS it's a step down in a sense.
Many POSIX-y applications will mostly work on macOS, to the point where you might ship in a package manager or something and can probably help with a segfault or two but have never touched a Mac.
Yes, but details can get in the way. While every compiler is a cross compiler to any platform, that doesn't mean that the platform libraries are available or that there's a linker. For example, the MSVC target uses link.exe, and that only runs on Windows, so cross compiling to the MSVC target doesn't work in practice even if it could in theory. You can cross compile to the GNU target for Windows, though.
Windows 10 also does its best to stop users from running unsigned code, by making the UI complicated.
When Windows 10 finds an unsigned installer it shows a dialog with a Don't Run button, and as the name suggests, clicking that button does not run the installer.
To run the installer the user needs to first click on the More Info link, which will then present the user with an option to Run the installer.
Certificate Authorities trusted by Microsoft for the purpose of Code Signing would need to issue you a certificate with the appropriate EKU (Extended Key Usage, saying this is for Code Signing). Technically a user could add some CA you span up for this purpose to their Windows install, but if you're going to all this bother you could just get them to click past the warning of course...
The CCADB can tell you which CA roots are trusted by Microsoft for this purpose:
You're looking for a CA which has Microsoft Trust Bits including Code Signing, and Microsoft Status of "Included"
Price: a couple of hundred bucks per year. Vendors with very well known brands like DigiCert's "Symantec" brand (famous despite the fact that Symantec actually ran their CA so terribly they ended up selling the brand to DigiCert... the CA they'd operated was distrusted) run maybe $500 a year and higher. But your users don't care about the brand, so pick a cheaper product like Sectigo's; they work just the same.
It's a little more expensive if you want "Extended Validation" aka "EV Code Signing". If you write Windows kernel drivers you need this, otherwise it might only make the UI shown to inquisitive users nicer so don't bother unless you hate money.
NB. Yes ISRG (the people behind Let's Encrypt) are trusted by Microsoft but no they aren't trusted to provide Code Signing certificates, even if they wanted to, which they do not.
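Once you have such a certificate, the signing step itself is a single command with Microsoft's SignTool; a sketch, assuming the certificate is installed in your Windows certificate store (the timestamp URL is Sectigo's public RFC 3161 service):

```shell
:: Sign with SHA-256 and a trusted timestamp, so the signature stays
:: valid after the certificate itself expires. /a auto-selects the best
:: available code-signing certificate from the store.
signtool sign /tr http://timestamp.sectigo.com /td SHA256 /fd SHA256 /a installer.exe

:: Verify using the same policy Windows applies when running the file.
signtool verify /pa installer.exe
```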
My boss doesn't want to pay for a certificate so now all my Windows 10 users (we still have some Win 7 installs out there and even a couple XP) never update my ClickOnce apps.
And this is why, out of spite, I developed "ClickTwice". It's certainly not as good as ClickOnce, but at least I ensure they use the latest version of the apps I dev.
Like all other legitimate software on MacOS; they get a developer account and distribute it via the app store.
For years, Windows got laughed at by EVERYONE because there was so much malware on it - in part because of its laissez-faire approach to letting the user install anything from anywhere.
Mac went for the closed garden approach and there's hardly any malware, adware, scareware or whatever -ware you can think of on the platform, which is one of the reasons why Mac is safer and considered to have a better user experience.
Curation is not a bad thing. And if an open source application wants to become popular for the masses - not the HN power user crowd, which represents only a small percentage of potential customers - they have to conform to its rules.
Likewise, they will want to be available through the Windows store as well.
Using the tools and platforms offered by the OS developers is the lowest friction option for installing software.
As for poorer people and countries, isn't this where the open source charities come in? Isn't this where the big FAANGs - including Apple - and the investors and everyone that earned billions off of software should come in? I mean come on, it's only $99.
The substantial lack of malware in the Mac world pre-dates the Mac AppStore, and numbers have not changed significantly since the introduction of that and/or Gatekeeper.
Gatekeeper is a commercial boiling-frog lock-in strategy sold as a security feature nobody asked for.
> open-source charities
Open-source, as a term, was invented in order to sell what was then called Free Software. It has nothing to do with charity.
> Curation is not a bad thing
Apple does little or no curation on the Mac AppStore, because the amount of developers using it is still relatively low.
> ... in part because of its laissez-faire approach to letting the user install anything from anywhere.
This comment makes it seem like installing software outside of a curated store is responsible for security issues, but this is exactly what Linux and other like OSes do. You can install apps from anywhere and I'll wager you'll find less malware, adware etc. for them in the wild, than the Mac. Granted usage of these platforms as a Desktop is way lower making it a less attractive target for bad actors, but much of it owes to inherent OS design.
> And if an open source application wants to become popular for the masses - not the HN power user crowd, which represents only a small percentage of potential customers - they have to conform to its rules.
Open source applications have been popular with the masses way before the curated store app store model came into place. Publishing on an app store has a good chance for increasing outreach, but it should not make distribution and installation of applications in the classical way more cumbersome, should the user so desire.
> As for poorer people and countries, isn't this where the open source charities come in? Isn't this where the big FAANGs - including Apple - and the investors and everyone that earned billions off of software should come in?
It would be hilarious if Facebook, Apple, Google, Microsoft, Amazon, Netflix, etc. decide to start a charitable foundation which just deposits $99 checks into Apple's bank account. They should do it. I wouldn't be able to stop laughing.
Curation, in the sense that Apple uses the term, is a bad thing because it creates a false sense of security. It blurs the line between protecting users from security threats and protecting Apple's business interests.
If Apple was truly interested in protecting users, they would keep these things separate as much as possible.
But they're doing the exact opposite. They keep mixing these things up as much as they can in order to shield their questionable business practices from scrutiny.
On top of that, the iOS side-loading ban is clearly aiding and abetting human rights violations.
> Like all other legitimate software on MacOS; they get a developer account and distribute it via the app store (...) I mean come on, it's only $99.
So small utils, smaller open source projects, etc. are not legitimate? Or should they shell out a $99 extortion fee for the pleasure of giving stuff away for free? This is just one of the thousand cuts that will kill traction for Mac software.
It seems like the right balance. As the author says:
> As a Mac developer, it's nearly impossible to run a viable software business when this is the first-run experience of new customers. You'll never get any new customers! This is why every Mac developer I know signs up for Developer ID and ships only signed, notarized apps. It would be financial suicide to do otherwise.
If you have hung your shingle out to make a profit, then the developer account, signing, notarizing, etc. is a cost of doing business, and you can easily justify it. The more customers you get, the more money you get, so you are motivated to reduce the first-run friction.
If you are not in it for profit, you probably have a lot more tolerance for a little first-run friction, and having users drop out of the funnel. Fewer users does not affect you financially. As a hobbyist programmer, I wouldn't care. I'm just releasing a program--not looking to dominate a market.
I appreciate this view except for the last point. As a hobbyist programmer, I am not "just releasing a program". I am usually "helping solve users' problems". And if my solution requires users to go through even more problems before they can use my tools, that's problematic.
I don't need to make money on my side projects. I do, however, want to help people. If I can't help people on a Mac because of the install friction, then it isn't worth my effort to create a macOS port of my software at all.
> the developer account, signing, notarizing, etc. is a cost of doing business, and you can easily justify it.
But what if I'm not doing this for profit? Can Nirsoft or Mozilla apply for a waiver? Can I? We may not be looking to dominate the market, but it would be a shame if our work just went to waste because people would rather pay for something crappier that is closed source rather than our free (as in freedom & beer) software.
(Yes, Mozilla is a huge project where it isn't worth employee's time to apply for a waiver, I just needed at least one name that people know is a non-profit software developer as an example.)
> Is there any automatic way to tell your software apart from malware?
There is no universally agreed-upon definition of malware. One man's operating system is another man's malware. For me, an operating system that "calls home" for each new executable you compile is a crystal clear case of malware. In the case of this article, then, the only malware in question is macOS.
If an Apple engineer were to compile a variant of Apple’s notarization algorithm where ok means no and no means ok, would the resulting binary notarize its own source fed into it?
Well, if the mechanism is e.g. a blacklist of APIs that shouldn't be used, and a blacklist of known malware hashes (as is the case), then Apple's "is this malware" routine could trivially print "no" for itself.
Sorry to the grandparent, but this is nothing like the halting problem...
This already exists and it is called XProtect. My question through these threads has been "why does notarization exist" and I am still trying to understand why it does, because every answer I have been given simplifies down to "here is a reason that it should exist…wait, that's just what code signing or Apple's built-in MRT does already".
Nothing is calling home when you run a new executable. You're not understanding how gatekeeper works. It works entirely offline without network access.
Close your browser and monitor your network traffic. Compile a hello world with a unique text string. Run it. It calls home the first time you run it. Then it doesn't.
If you are not connected to the internet, it does not call home indeed.
>There is no universally agreed-upon definition of malware.
Doesn't have to be. Just the common user's definition is OK.
>For me, an operating system that "calls home" for each new executable you compile is a crystal clear case of malware. In the case of this article, then, the only malware in question is macOS.
I'd rather also have a party in whose interest it is to not get malware on their operating system confirm your claim that your software doesn't contain malware.
It would be nice if the OS automatically verified these checksums. That would have been a nice OS X feature, but instead Apple ignores the verification process that already exists and invents their own, with themselves in control.
The thing is, your friendly scammer could also publish checksums on their website.
It is clear to you that you're writing fine open source software, not malware. But how is the consumer supposed to tell?
If people trust you, why bother with the checksums? (Over HTTPS, the downloaded content cannot be tampered with. If someone tampered with the content on your website, or performs a MITM, they can also replace the checksums.)
The checksums are there in case they happen to grab the binary in some way that is not "using HTTPS directly from my website" and they'd like to check. How do they know I'm not writing malware? Trust in my software, mostly? It is unclear that notarization actually stops malware–Apple has failed to explain how it helps, but enforces it by decree.
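For what it's worth, the verification step itself is trivial; a minimal Python sketch (the file name and its contents are made up stand-ins for a real release artifact):

```python
import hashlib

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a downloaded release artifact.
with open("release.bin", "wb") as f:
    f.write(b"pretend this is a compiled binary")

# The digest the project would publish next to the download link.
published = sha256_of("release.bin")

# A user re-checks a copy they grabbed from a mirror or torrent.
assert sha256_of("release.bin") == published, "checksum mismatch: do not run"
```

Of course, as noted above, this only proves the bytes match what the publisher posted, not that the publisher is trustworthy.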
1. You submit your app bundle and your credentials to Apple for notarization.
2. Apple records your information and goes through each library, framework, and your code, checking the code signing info and "fingerprint" of each for known malware.
3. Apple issues the ticket for stapling to the app bundle.
Now say, for example, that libffmpeg-0.1.2-beta2.dylib is found to mine cryptocurrency:
1. Apple goes through their database and finds the app where the malware was reported.
2. Apple marks that fingerprint as malicious.
3. Apple now flags any other apps that use libffmpeg-0.1.2-beta2.dylib (by checking the fingerprint) and disables any versions of any app running that version. Additionally, any other attempts to notarize apps with the malicious dylib are rejected.
Notarization provides 2 major benefits for devs that I can see:
1. Apple doesn't need to revoke your entire certificate just to block one version of an app.
2. Apple's audit trail of who notarized the app (and from where) prevents cases where stolen credentials result in a DoS of the victim (e.g. your account being locked, your name and address permabanned, and funds frozen).
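The revocation flow described above can be sketched as a toy model; this is purely illustrative (Apple's actual service, database, and matching logic are not public):

```python
# Toy model of fingerprint-based notarization revocation.
malicious_fingerprints = set()

# Pretend database: app name -> fingerprints of its bundled libraries.
notarized_apps = {
    "AppA": {"libffmpeg-0.1.2-beta2.dylib", "libjpeg.dylib"},
    "AppB": {"libpng.dylib"},
}

def report_malware(fingerprint):
    """Mark one library fingerprint as known-bad."""
    malicious_fingerprints.add(fingerprint)

def submit_for_notarization(libs):
    """New submissions bundling a known-bad fingerprint are rejected."""
    return not (set(libs) & malicious_fingerprints)

def flag_existing_apps():
    """Already-notarized apps containing a bad fingerprint get flagged."""
    return sorted(name for name, libs in notarized_apps.items()
                  if libs & malicious_fingerprints)

report_malware("libffmpeg-0.1.2-beta2.dylib")
print(flag_existing_apps())  # ['AppA'] — only the affected app, not the dev's cert
print(submit_for_notarization(["libffmpeg-0.1.2-beta2.dylib"]))  # False
print(submit_for_notarization(["libpng.dylib"]))                 # True
```

The point of the model is benefit 1 above: revocation targets a fingerprint, not the developer's entire certificate.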
I'd be interested to see the track record since they implemented notarization. How often has it caught malware, both before and after the fact? I haven't seen any headlines about e.g. a popular application failing to launch one day because Apple found a miner in it some time after initial notarization.
They can do that if they’ve obtained a copy of the specific binary with malware. But with notarization they can proactively scan for things that look like they might be malware, and follow up with either automatic rejection, or approval followed by manual inspection.
OK, I guess that does make sense. Still, it has the drawback that all distribution must go through Apple, and you now need to pay to develop software for the platform :(
Because notarization has the very concrete downside of costing money to do, plus the fuzzier steps of it being a fairly complicated and often picky/opaque process to have to deal with when shipping your software.
It is still convenient enough for me to run software I want that isn't signed, but sufficiently obtuse that neither of my parents has figured it out. Given they are both prone to running any executable that any website tells them to download and run, this feature has probably saved me several dozen hours of fixing their computers.
You stop using Apple and go to a reasonable OS. You and I are not going to help Apple keep their $2T valuation; we are no longer the market for Apple, and haven't been for 5+ years.
Apple moved their focus from the professional market, who were willing to pay a premium for productivity with better UX/hardware/software, to people paying just the premium.
It is a lifestyle brand now; people pay money because it is Apple, not because there is real value they gain for the higher cost.
Depending on what you work with, Windows or Linux is by far the better option today. MS listens to developers (eventually); WSL, professional support, and a lot of graphical professional-grade applications all make it attractive.
Linux is a lot more flexible and works with less friction, making it easy to work with containers, servers, etc.
If you really need Apple hardware, dual boot or flash a sensible OS.
>It is a lifestyle brand now; people pay money because it is Apple, not because there is real value they gain for the higher cost.
Oh come now, there are millions and millions of professionals using macOS to do work every single day.
This is basically a slippery slope fallacy. Today they're making it marginally hard, what might they do tomorrow?!?
There are many reasons to prefer macOS to Linux or Windows. The Apple ecosystem works extraordinarily well together. There is nothing that you can buy, not with all the money in the world, that matches the seamlessness of owning an Apple Watch, iPhone, iPad, AirPods, and a macOS device. You can't do it with Linux, Windows, Android, or anything else. It doesn't exist.
I prefer macOS for development work, and for basically any kind of work. I loathe the Windows UI/UX and Linux is far too often a second (or third) class citizen. There is nothing that I have needed to do that I have been unable to do because of macOS. Not once.
Not to mention... if you want to write software for macOS, iPadOS, WatchOS, or iOS you must own at least one Mac.
Arguing that an established pattern of behavior will continue into the future is not a slippery-slope fallacy. There is nothing fallacious about arguing that Apple will continue to make the OS more restrictive, based on a profit motive.
"The core of the slippery slope argument is that a specific decision under debate is likely to result in unintended consequences." [1]
I don't see any claim of unintended consequences here.
manquer is right, though. Since about 2008, Apple has moved towards being a lifestyle/fashion brand rather than the semi-pro brand they used to be. For me it started with Final Cut Pro, and the slashing of the quite friendly web server experience. They've even tried to make things hard for Adobe at times, forcing people to pay for patches for months if they were doing it "the old way" after an OS X upgrade. They've simply been terribly rude to their own professional users. By circa 2016 I started actively advising professionals to shop for other hardware producers, both within creative media and IT. Incidentally, that's also when the last usable / hardware-upgradeable Mac came out. Since then they've glued every part of the thing shut. Apple might look pretty, but you're better off using something else. To me this thing is just yet another nail in the coffin. Love, a former Mac fanboy.
> there are millions and millions of professionals using macOS to do work every single day.
Speaking for myself, every time I've used a Mac for work it's because I had to, being that it was what the employer issued.
Could I have made a stink to be issued something else? Sure, but that is trading problems for problems. Have I used worse OSs? Yes, but that doesn't mean I should be grateful for what I find negative about the Mac OS.
I'm not going to be buying myself a Mac as I'm more productive using a different OS, but it's fine if millions of others do. Just factor in that there's employers with IT decisions out there too.
There was one short term gig [small start up needed short term help, I was available] and they wanted//needed me to bring my own laptop for work. "Fine by me, is it OK that my laptop runs off of Linux?" It was fine. Everyone was happy. If similar would happen more often that'd be great, but I'm going to cooperate otherwise within reason.
> Oh come now, there are millions and millions of professionals using macOS to do work every single day.
Right, and how many of those are using macOS because they're "forced" to, as a sibling suggests, or simply by force of habit?
Moving from macOS to Linux or Windows (or from any one OS to any other, really) does require some time. It's not necessarily difficult, but it still takes time. Time that the same professionals might find is better spent on their actual professional work.
It could be argued it's a boil the frog situation, but for me that means the incentives to change the OS aren't quite there yet. Maybe if MacOS 10.17 goes too far with restricting what people can run on it, there could be more people switching. But remember, many of those "professionals" are not programmers, they use standard "enterprise" software, like Adobe or whatever. I bet most of those people have no idea what a terminal is. They would probably not even notice such restrictions are in place.
Is it really that hard for you to believe that people like and prefer macOS??
I don’t want to move to Windows or Linux. The time sink is irrelevant. I am a software engineer and I PREFER macOS. So do my coworkers. So do many, but not all, of my friends. So does my spouse.
Many of you replying here just seem completely unable to grasp that other people exist, with other thoughts and opinions.
So you would stay loyal to the product no matter what it evolves into? I imagine there's some kind of limit to what you would put up with. Or is your point that these restrictions don't bother you at all? That's fine of course but I'm sure not everyone feels the same way.
As someone who never bought any Apple products, I find it harder and harder to see why I would buy one. I have friends who are quite invested Apple owners who say they will most likely not buy Apple again because of the way the product has changed for the worse (in their eyes).
>So you would stay loyal to the product no matter what it evolves into?
No. My line in the sand is Apple locking down macOS so that I cannot install applications outside of the Mac App Store. These "restrictions" that seem to annoy other people do not bother me. At that point I would just migrate over to Linux and maintain a Mac for any remaining Apple specific dev.
I'm not sure exactly where the line is on mobile. I don't feel restricted by the iOS ecosystem, but I do understand that others might. If there became a time where I wanted to abandon iOS then I would likely just go without a phone or get a flip phone. I refuse to use Android or any product developed by Google.
> Right, and how many of those are using macOS because they're "forced" to, as a sibling suggests, or simply by force of habit?
Right, because you know why each and every one of the billions of users do what they do, and of course, none of them would be doing something by choice, because how could anyone possibly like something you do not?
I'm not talking about "each and every one". My point is that many people just use the tools they are given. For example my client has one department where everyone gets a Mac. I'm pretty sure most of them don't have a very strong opinion on the matter, as they do mostly administrative stuff (so basically MS Office). They could probably get a PC if they wanted, but again, they just don't care.
Of course people may choose MacOS. I've been using it for a very long time and hopefully that won't change anytime soon. But that doesn't mean that everybody who uses MacOS does so by choice.
Nobody can deny that macOS has the most integrated ecosystem, especially if you own an iPhone/iPad/Apple Watch. However, for development, it depends. Excluding iOS/macOS development, macOS is becoming less and less attractive, even for web developers.
I'm a web developer using macOS on my home machine for personal projects. I use Linux at work, so I'm familiar with both, and I have no intention of dropping either in the near future. Both fill a particular niche for me while not requiring much cognitive overhead to switch between them. As long as they don't do anything spectacularly moronic like insisting on the Mac App Store as the only source of software or preventing non-Apple OSes from being run, I'm unlikely to get pissed off enough to leave the Apple ecosystem.
"To each their own" is the key principle we're missing I think, personally my encounters with Windows have been by far the most frustrating and unproductive interactions with an OS, but some people wouldn't ever step their toes out of Microsoft land. I'm not sure how much of my dislike for Windows is because I'm simply too set in my *nix ways and how much is genuine bugginess and UI obtuseness as well.
Running Docker on Linux is absurdly faster than on a Mac or Windows. My experience was comparing a company-purchased $3500 new MBP with an old Windows gaming laptop running Linux Mint.
On top of that, my old laptop originally cost 1/3 of the price of the MacBook Pro, and didn't require an overloaded dongle because it actually has USB and HDMI ports.
Just a note about Docker performance on Linux vs. the others: this is likely because Docker on Linux runs without virtualization, as more or less a normal subprocess in separate namespaces and cgroups. Windows and macOS must run any Docker process through a virtualization layer (VirtualBox in the old docker-machine days; a lightweight hypervisor such as HyperKit or Hyper-V with Docker Desktop), which for obvious reasons is going to be much less performant.
Comparing to Linux: tiling window managers, a package manager, far fewer apps/processes running on your system, native Docker integration. macOS is also moving more toward an iOS-like OS.
On the downside, Linux still doesn't have an advanced GUI Git app and other cool stuff like Paw.
Don't know about the others, but I have an iphone and a mbp. I used to have android phones before this, and I bought the iphone just to check it out, since I got a good deal on it.
For me, the "seamlessness" opened new possibilities of use. What I really love is being able to use the phone from my computer. I don't have to have it handy.
Examples:
1. Call and take calls directly from the computer. Extremely practical with corded headphones I use for listening to music. Could I just get my headphones off and grab the phone? Of course. If I noticed the phone ringing.
2. Handle messages from the computer. I don't send and receive messages that often, but for some random 2FA over SMS this is just so practical. Could I grab the phone and read the code? Yeah, but if the computer does it automatically it's much nicer.
3. The Handoff feature: for example, when I'm on the bus I start reading some article on some website, and when I get home or to the office I can open the page on the computer. Couldn't I copy the URL and type it in? Of course, but it's clearly a pain.
4. Internet sharing: just activate it from the computer, don't have to copy the password, etc by handling the phone. As above, I could of course do this by hand, but it's much more practical.
Maybe all this could be set up somehow with Linux / Windows and maybe with android too. I honestly don't know - my windows PC can't even get internet sharing via usb out of the box (didn't care enough about this to look into it more). It works fine on Linux though.
It's not a question of having a short attention span or of doing something very difficult. It's just having tools that work for you and get out of your way. After all, that's why we use machines (which computers are): to automate things.
I get that, but I still don't understand why you need it. Actually, now I'm noticing that _none_ of the replies answer my question: _why_ do you need it?
1. If I don't notice my phone ringing it's because I don't want to notice it ringing. If I'm waiting for a phone call I will check my phone often or not use headphones.
2. Is being aware of where your phone is and typing the code off it really so challenging for you? I don't know about you, but I spend maybe 1-2 minutes per day punching in 2FA codes, and most of the time I don't even realize I'm doing it.
3. DM the link to yourself on Twitter or any other platform. There is also sync for Chrome, so you can sync stuff as you want, but to be honest, whenever I'm reading something on the bus the link is still where I found it when I get home. Somebody sent it to me? Open messages on the computer. Found it after searching Google? Make the same search on the PC.
4. I don't understand what is complicated about this. I swipe down on my phone and press icon for hotspot. My computer connects.
It's really interesting that Apple users talk about how everything is so smooth, yet they don't want to "fumble around with the phone"; that seems like a contradiction to me.
Seamless means I can call or text from any of my devices. Switch a phone call audio from my laptop, to my phone, to my AirPods. Go for a run and stream music from my watch directly to my AirPods, without my phone. Unlock my laptop with my watch. Confirm a password from the keychain with my watch. Pull up a browser tab from my phone. Share a clipboard between all my devices with zero setup. On and on and on.
While I also find these changes to macOS alarming and disappointing, I still vastly prefer macOS to Windows.
My 2013 Macbook Pro is also still the best laptop I have ever used (I also have a 2017 Macbook Pro for work and I have to admit that the newer models rightfully earn their bad reputation).
If Apple continues to head in this direction, I think the right move is to migrate to Linux. Windows is a huge step back, as far as I'm concerned.
There are plenty of comparisons online showing that for the same price as a Mac you can get a PC that will edit your photos, compress your videos, and render your 3D architecture 3x to 4x as fast as the fastest Mac you can buy. If you're a professional, you might be throwing money down the toilet waiting for your Mac, depending on your use case.
Note: I'm not a mac hater. I have 2, one I use every day and love it. I also have a PC I use every day. I still found the comparison videos enlightening.
While I haven't seen those comparisons, I did do a similar one myself. Like GP, I have a 2013 mbp. It still works perfectly and handles whatever I throw at it. As it's getting long in the tooth and it could end up not working anymore, and also because I got curious with the release of the 16" mbp, I looked a bit at the market, and also at similar offerings from Lenovo, HP and Dell.
You are right, for the price of a 16" MBP with 64 GB RAM and 1 TB SSD, I could buy a maxed out Lenovo X1E with 2 SSDs, so I could dual boot windows and Linux (I use Photoshop / Lightroom for personal photography but can't stand Windows otherwise). Oh, and I'd probably have more than enough left to go on a trip somewhere and actually use the mobile part of the laptop.
So yes, on pure performance numbers, I completely agree. However, that's really not the whole story. Not everybody needs absolute maximum performance. I wouldn't still be happy with my 7 yo mac if I did.
I use my laptop every day for (hopefully) years and years, so I care about other things. First and most important of all, the touchpad. I've tried Lenovo (T series), Dell XPS, and HP EliteBook models; they all absolutely suck compared to an MBP. A friend has a 2-3 year old Dell XPS: the palmrest plastic is all sticky, the screen is wobbly, and the touchpad is a nightmare (sometimes it clicks on its own). I have an HP from work I sometimes use; the junction between the palm rest and the body isn't level, so after a few hours of working on it my wrists are raw from rubbing on the body. The same laptop has a blinding power light right in front of the screen; I had to cover it with a piece of aluminum foil to be able to work in the evening. These kinds of annoyances make it worth it for me to spend the extra cash on an MBP that will just get out of my way and let me work.
Actually, this is the wrong comparison (at least for me). I went from a 2017 15" MBP that was around 3.5k EUR to a ThinkPad T495 (with AMD hardware) for around 1k EUR, and I maxed out the RAM to 32 GB for another 150 EUR. I couldn't be happier.
You mention the worse touchpad, and I agree; it took me a week to adjust... but on the other hand, the keyboard of the ThinkPad is so much better. In fact, it might be the best keyboard I have ever used (not only on a laptop, and I have used mechanical keyboards).
Also, KDE/Kubuntu seems to have progressed so far over the past 10 years (since I used it last) that it feels better than Mac OS to me currently.
The only pain point remains my photography workflow, which I still haven't migrated and for that I still use the MBP... If I manage to migrate that, I don't think I'm going back to the Apple ecosystem soon.
Currently I feel much more productive at my programming workflow than with the MBP. Doing mostly TypeScript, OCaml and Rust development. And that for about 1/3 of the cost.
A comparable PC workstation laptop:
Intel Core i7-10875H
Windows 10 Pro
15.6" 60Hz OLED 4K Touch
Quadro RTX 5000
32GB RAM, 1TB SSD
US$4,299.99
16-inch MacBook Pro - Space Gray
2.4GHz 8‑core 9th‑generation Intel Core i9 processor, Turbo Boost up to 5.0GHz
64GB 2666MHz DDR4 memory
AMD Radeon Pro 5500M with 8GB of GDDR6 memory
1TB SSD storage
I've been checking the last few years; all laptops with 32 GB (which is my main requirement) are about the same price (around 2,000 GBP), except that, of course, Macs have superior build quality and software.
I bought one of these last year; the specs still look about the same AFAICS. Paid just a smidgen over 2K EUR with 64 GB RAM and 2 TB SSD. A little better than 2K GBP for 32 GB.
I’m very curious to see whether this is still true once Apple starts shipping Pro machines with Apple Silicon processors. What if it’s not? How would that change the balance?
You may be surprised how much equivalent laptops from other OEMs have caught up. The Dell XPS is very similar to the Macbook Pro, and Dell supports Ubuntu on it.
The XPS line had some of the worst speakers I’ve ever heard on a laptop up until the 2020 revision- and some of the worst build quality issues I’ve ever experienced with a laptop.
My 2019 XPS 15 went through eight at-home repair visits for hardware failures within four months. I can honestly say I've never used a less reliable machine, and every unit I've ever touched has had some of the worst coil whine issues I've ever encountered in any computer.
As someone who's returned their 2020 XPS 15, I can say that it hasn't gotten better. I can't speak for how well the hardware endures but it does have various flaws. Also didn't find the speakers to be good. Sounded worse than the SB2 I was replacing.
Running an MBP now, but if sticking with Windows was important I'd have chosen a ThinkPad.
If we're sharing anecdotes: I also have an XPS13 9650, which is now over 4 years old, and I've never had any issues whatsoever with it. It has run Linux since day one and I could not be happier.
I got a MacBook Air-sized XPS, and the hardware is fantastic, better than any MacBook IMO; the bezel alone makes it so.
But I have never had a laptop that gave me as much trouble with Linux. I bought two of them over the course of a couple of years, and both the one that shipped with Windows and the one that shipped with Ubuntu had CONSTANT hardware compat issues.
This was a while ago (after the reboot of the line), so the build quality and compatibility may each have gotten better/worse.
I switched to a Dell XPS last year, then sold it on eBay 6 months later and bought a new Mac. WSL is a cool idea but is still a bolt-on solution; I was constantly fighting with it on Windows, trying to get apps and packages to work correctly with my Unix-oriented workflow.
The trackpad feels like it's from 2012. The speakers are very bad. The fans are constantly whirring. It misses on all the details, while also making it harder to do my job.
>The trackpad feels like it's from 2012. The speakers are very bad. The fans are constantly whirring.
It’s sad because the XPS is beautiful, but these same issues plague every Dell, HP, and basically non-Apple laptop. Even the most premium lineup (Envy, XPS, etc.)
How many years and fancy designs will it take before PC OEMs actually develop an excellent trackpad, not use tinfoil speakers, and use silent fans?
Battery life and hardware support. You'd get better battery life and thermals running Linux in Virtualbox on a Windows/Mac laptop than on bare metal. I'm a Linux enthusiast but every time I use it on a laptop is pain. Laptops are like phones in that they need deep integration with the OS to be usable long-term. The best way to use Linux is on a desktop that you build and can control what hardware is used. (The desktop with multiple monitors is a superior form factor IMO for getting work done.)
Battery on my recent Dell is over 10 hours with Firefox usage.
Leaps and bounds ahead of where things were at 4 years ago.
Libinput, while less fully featured than synaptics, seems to have a better physics model for the trackpad pointer translation. It still doesn't have fractional scrolling though, every scroll is still a discrete increment.
If you use Wayland (which... has its issues, but maybe they're better now?) on an XPS 13 Developer Edition... or even my junk Chromebook with MrChromebox's firmware installed so I can install Linux, you can get all sorts of gestures that are more than just "two gestures." Heck, even Windows supports gestures just fine. So...
I have yet to encounter any non-Mac trackpad gestures that don't feel cheap, laggy, and/or too unreliable to bother committing to muscle memory. Apple's trackpad haptics and OS-level gesture support are second to none.
Then use keyboard combinations; they are faster and much more reliable. I am using a Wayland window manager called sway, and while I'm pretty sure you can do everything with a mouse, you can configure it to do a lot of things with the keyboard. For example, I found the three-finger tap used for copying and pasting too difficult and slow, so I just bound those actions to mod+c and mod+v.
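For reference, this kind of rebinding is a one-liner per action in sway's config file. A minimal sketch using sway's actual bindsym syntax (the bound commands are illustrative examples, not the poster's actual config):

```
# ~/.config/sway/config
set $mod Mod4
bindsym $mod+Return exec alacritty     # open a terminal
bindsym $mod+Shift+q kill              # close the focused window
bindsym $mod+1 workspace number 1      # jump straight to workspace 1
```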
I was gonna reply with this as well, but you just about nailed it. When my hands are on the keyboard almost 99% of the time (especially when programming, or even while browsing the internet), why should I bother moving my hand back to the mouse when I can just use my comfortable and reliable keyboard?
Try fusuma (gem install fusuma; you may need to add your user to the input group to run it, or run it with sudo).
Gestures on the trackpad for recent (libinput) Linux. It runs xdotool to stuff the keyboard buffer with keystrokes when it sees certain gestures. E.g. three-finger swipes left and right send Alt-Left and Alt-Right, which act as Back and Forward in the browser and some other apps (e.g. IntelliJ). A four-finger swipe left or right switches desktops on my machine. That's one more finger than on a Mac, but probably required to avoid conflicting with two-finger scroll.
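For anyone curious, fusuma's configuration is a small YAML file mapping finger count and swipe direction to a command. A sketch of the gestures described above (the Alt-Left/Alt-Right bindings are the ones mentioned; the desktop-switching keystrokes are an assumption and depend on your window manager):

```yaml
# ~/.config/fusuma/config.yml
swipe:
  3:
    left:
      command: 'xdotool key alt+Left'        # Back
    right:
      command: 'xdotool key alt+Right'       # Forward
  4:
    left:
      command: 'xdotool key ctrl+alt+Left'   # previous desktop
    right:
      command: 'xdotool key ctrl+alt+Right'  # next desktop
```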
Trackpad gestures are not the same as a keyboard shortcut. A shortcut runs one command as a discrete event, whereas a gesture is continuous e.g. what % zoom did your pinch gesture represent.
Eh. That's not what I understand by gestures. Mouse gestures - they're also a thing - generally use a sequence of orthogonal mouse movements for a single command. Most Mac gestures are discrete, not continuous. Pinch to zoom is in the minority, and is more like a multitouch input than a gesture per se. Phone gestures too are generally discrete; the exceptions are zooming and rotating, and rotating is sufficiently fiddly it isn't seen much outside map apps.
Continuous: pinch zoom, rotate, swipe between pages, swipe between virtual desktops, swipe between fullscreen apps, show notification center, show all windows on current desktop ("Mission Control"), show all application windows ("App Exposé"), show Launchpad, show desktop
Care to enlighten? I've been using all three, and I'd rate Linux to be miles ahead of MacOs, which is a few steps ahead of Windows.
For what an OS should do, Linux does it far better, and Windows does it terribly with its ancient NT kernel.
Then it comes down to usability for the end user: GNOME gets out of the way and allows pretty good customization. macOS is pretty mediocre and restrictive. And Windows is again pretty bad.
Then it comes to software availability, which is a bit a chicken and egg issue that I don't find too much value in. If the software availability restricts you, then you got to do what you got to do, with the options available.
Then it comes to value for the hardware. And here, oh my, how things have changed in the past 5-10 years. If you don't buy a Mac for the software availability, or because of some other restriction, you are literally making a terrible purchase. You get a far worse computer for more than twice the cost. This is no exaggeration: upgrading components at purchase time costs somewhere between two and ten times more than the equivalent components elsewhere.
>If you don't buy a mac for the software availability, or for some other restriction, you are literally making a terrible purchase.
I laughed. A lot. The whole thread is full of people noting the level of hardware and software polish on offer from Apple but unavailable elsewhere, and the seamlessness of working in a MacOS + iOS + WatchOS world and how nice that is, and your takeaway is that we're all dumbasses making the wrong choice.
You laughed a lot at that? Well, you do you. Also, there is no need to call yourself a dumbass. It's OK to value things in different ways. Apparently you value the walled garden and the benefits therein; that is OK. I personally do not see that value, and I would call it a terrible purchase. For me, computing power and flexibility are more important. It's also not like build quality is exclusive to Apple. As a final takeaway: calling strangers 'boyo' does make you look like a dumbass.
For more than 10 years I loved using Apple. My first introduction to Apple came during the G5 era, in edit studios with Final Cut Pro. I would rent those at an hourly rate even when it was 30-35% costlier than Avid and Premiere Pro; the project always ended up cheaper because Apple was that good. My first computer was an Apple; the Airs were a great product. I loved the MagSafe port, and I loved the build quality when they moved to aluminium. Half my company uses Apple because I evangelised them so hard. There are a lot of people whose first Apple laptop was because of me.
Today I use a System76 machine, and I have used various ThinkPads over the last few years; I would recommend a ThinkPad any day over a similarly priced Apple.
So you're fine with a less polished / higher maintenance desktop OS.
Not everyone is. I have absolutely no interest in going to an environment where I can't run native Office, or a decent mail client, etc. And I have less interest in putting up with the garbage fire that is Windows.
I find Linux considerably less effort to work with. On macOS I need to install the Xcode command line tools to get even git; I need a third-party package manager; the decades-old GNU utils, the BSD-style argument ordering, the lack of procfs, and the poor container support all mean I constantly have to context-switch between my local environment and dev/prod.
I don't need native MS Office; with SharePoint and Google there is really limited value in running the desktop version.
Perhaps if you are doing professional publishing there may be a need, but for the vast majority of regular office users, who are just doing documents, presentations, and spreadsheets for limited consumption, there is no real need for native Office. The collaborative features online are a lot more valuable.
One of the reasons I originally chose OS X was exactly Office. It was valid 5-10 years back: I needed to be able to talk to developers and to management/customers, and OS X fit that perfectly; it gave the best of both worlds. Today, using Linux, if I download a file into LibreOffice, sure, the formatting still gets messed up; however, not once has that happened to me in SharePoint, and only rarely in Google Docs.
Pre-Office 2016, the Apple versions of Office had all kinds of kinks which made them just as annoying to work with; sometimes you were forced to keep mentioning you were on the Apple version of Office.
Everyone's needs are different; if your workflow does not have suitable alternatives, sure, you need it.
My point is that with average hardware and an increasingly locked-down OS, the alternatives keep becoming more attractive.
Don't get me wrong, OS X was/is still great. The first time I did an OS X upgrade 10-12 years back I was amazed: it did everything on its own, and the files I had open came back exactly as they were. It felt out of this world compared to Windows back then, where you had to format your disk, reinstall all your apps, and copy your files back.
The application install/uninstall story is great: no messy, over-abused Registry forcing you to reinstall the OS every 6 months. The UX is still great.
However, the other OSes have caught up, and Apple is no longer far enough ahead of the curve to warrant the effort.
> You stop using Apple and go to a reasonable OS. You and I are not going to help Apple keep their $2T valuation; we are no longer the market, and haven't been one for Apple for 5+ years.
Or you could just buy Apple stock and keep using Apple? That way you are at least helping yourself.
>Doesn't take a genius to see that their end goal is to make mac os as restrictive as ios.
I've been hearing that for the past decade. With the exception of some reasonably deprecated technologies (32-bit support, etc...), to this day I'm able to run essentially all of the software on my Mac that I did back in 2010.
If and when Apple does prohibit the running of third party apps, I'll switch to another platform. Until then, this is all just FUD.
While you are able to, many others are not. So it's a hassle for those developing independently to distribute their software to the masses.
So yes, you can run the same software (mostly) as in 2010 knowing the workarounds. But as a consumer you are probably missing out on new, great software because developers wont pay the apple tax to make their stuff available. So you're already affected. It's not FUD, you're just not seeing the issue.
What are these many others, and what issues do they run into? And what great software are they missing out on? You're making a bunch of statements without providing examples or sources.
"These many others" I'm not providing examples of are exactly what the article is about. Didn't think I would have to rehash that...: Most people don't know that you can run untrusted programs with right click, or will even dare doing it.
What "great software they're missing out of"? Who knows? That's my point! It increases the burden to distribute software for Macs, so as a consumer one's missing out on what could have been. My guess is lots of small open source software won't make easy to install distributions for Mac.
They've already effectively prohibited it because, as you can see from the article, installing software that doesn't go through Apple is made incredibly painful and essentially nobody will succeed in doing it.
Apple doesn't really care that you can run it by using a bunch of workarounds if they have 99.9% control over all software distribution.
There's no way in hell that a casual user like my parents would be able to figure out which 5 mouse clicks to do (without direct guidance). Even the documentation doesn't tell you to right click.
They're competent enough to install software, they just don't explore a lot in the UI. If they can't figure out how to do something with what's in front of them, they have a tendency to assume it can't be done. In other words, if they click on an app and the only options are "Move to Trash" and "Cancel", they're going to assume that their only options are moving the app to the trash or cancelling.
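For what it's worth, the dialog is driven by a quarantine extended attribute that the downloading app attaches to the file; Gatekeeper only intervenes when it's present. Terminal-comfortable users can inspect or clear it themselves (macOS-only commands; the app path is a hypothetical example):

```shell
# List extended attributes; a freshly downloaded app shows com.apple.quarantine
xattr -l /Applications/SomeApp.app
# Remove the quarantine flag recursively; afterwards the app opens normally
xattr -dr com.apple.quarantine /Applications/SomeApp.app
```

Of course, the whole point of the article is that no casual user will ever discover this.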
Not too relevant here, but I think the main thing keeping regular people from installing open source software is that most OSS is an unpolished usability nightmare compared to commercial counterparts.
I've never used Krita, it looks nice though and I like seeing well polished OSS, but a specific example says nothing about the general trend.
> Why does it even matter if it's a usability nightmare if the user wants it they should be able to install it if it's open source
Sure. And users are able to install it... it's just a little hidden. My point is if you can't even figure that out then you probably can't figure out most OSS either. Most.
That OSS usually has poor usability is like the most common criticism ever. And could you explain how specifically am I dismissing users' wants? I'm just saying an extra 5 clicks is not that big of a barrier, when people in this thread are throwing phrases like "incredibly painful" around.
I will agree that many OSS desktop applications don't have the most modern UI and are not as visually appealing as some paid-for apps. But in terms of usability, "beautiful" apps can be just as if not more frustrating than dated-looking ones.
There's especially often a lazy route to perceived usability that consists in just not providing many features. Of course, it's easy to create an easy to use application if it doesn't do all that much (like $20 mail clients that can't even do GPG - no thanks, I'll switch back to ugly, but perfectly usable Thunderbird).
I mean why let anybody run any software at all? After all, it’s only their hardware that they paid thousands of dollars for. Even the OS itself can have security vulnerabilities. Best if we just ban software, it’s too dangerous.
GP is basically arguing it's impossible to run arbitrary software which is BS. It's not as simple as it could be, but Apple didn't prohibit it, they just hid it behind a few menus. And if you are too incompetent to click through these then you probably shouldn't be running arbitrary software. That's why the comic does not really apply here IMO.
Not sure how reasonable that really was; a good majority of my Steam library went unusable in one upgrade. There should have been some virtualization option available.
Still, the real concern many have is: with the switch to ARM-based processors, will macOS effectively inherit the restrictions of iOS?
>Not sure how reasonable that really was, a good majority of my Steam library went unusable in one upgrade. There should have been some virtualization option available
Apple had been transitioning to 64-bit for over a decade when they finally dropped 32-bit support. That transition period is more than reasonable.
What makes me sad is there is nowhere to switch to seamlessly.
The writing has been on the wall since gatekeeper was introduced, but we stayed because there is just no better option. Some options are viable, but require sacrificing a decent chunk of the comfort we have now. It's the eternal waiting for the "year of desktop linux" except we moved the goalpost to laptop linux, battery performance, commercial third party support etc.
The funny thing being that people who didn't have these frictions already switched and will say "just do it, it's easy". But we won't have a choice anyway.
I still really don't understand why people get SO ANGRY about Gatekeeper. It's an excellent idea! Enforcing signed binaries by default is the correct option on a consumer OS. It's saved many, many folks many, many hours of hassle in fixing Aunt Millie's computer at Thanksgiving.
And people like you and me can turn it off.
It's not at all clear that Apple will remove the ability to run unsigned binaries. As of now, it's FUD. That's all.
And, moreover, it's mostly FUD in places like this from people who hate Apple anyway.
Most people (me included) are ok on the principle, and I actually don’t disable it. I want more security.
The issues are with the implementation: for instance, the network calls on every command run if for any reason the CLI binary isn't whitelisted.
Also I am not against iOS either, I just need an alternative for the tools that don’t fit the model, and that role is currently filled by macos. The more macos is iOSified, the less it fills that role.
As long as there's software there will be devs who need machines with which they can build it. Those machines will always need mechanisms for circumventing safety constraints, almost by definition. I guess it's possible Apple could decide, one day, "macOS isn't for developers anymore, it's just like iOS, devs will just have to go elsewhere". But that seems like it would be a profoundly stupid thing to do.
I do wish they would just bundle up all of the safety-guards under a single toggle - put it in the recovery UI for maximum obscurity - to make this stuff as convenient for developers as possible. But I take the fact that they don't to be a shortcoming of product development, not malice.
If they remove the ability to install software that I want to install, they lose me as a customer forever. I don't have a problem with the warning; Windows has something similar, but it's super easy to turn off completely. I know you can do it on Apple as well, but it's a bit more hidden. Mostly I use Linux, but I also have a small Mac Pro that I like to travel with and that has 90% of my development software on it.
Look at iOS, you need to pay for a developer account and the signed applications can't be distributed (otherwise they'll revoke your account) and have short-lived certs.
Those people already pay for that account and thus could sign whatever they need to work. For them it wouldn't change a thing. It's just everyone else.
It could. But still, someone has to develop iOS itself. And someone has to write the server back-ends that all those apps depend on. And the cloud infrastructure that those back-ends run within.
By definition there will always be something out there that can't be developed on a locked-down platform. So there will always be an open platform, even if Apple decided one day (again, unlikely I think) to recuse themselves from that market.
Apple always has the option of fusing their Macs and selling blown ones externally, and then writing their OS to turn off parts of the OS that they use but think you don't need. They haven't done that, but it is always possible that they could.
Sure they could. They could basically fork macOS and maintain an entirely separate version just for developing iOS, iPadOS, WatchOS, and macOS. I don't know why in the world they would do that.
This whole "issue" is just Apple trying to get people to put their software in the Mac App Store. Maybe a day will come where Apple try and lock down macOS to the level that they have locked down iOS... but I just don't see it happening. Apple will always need developers to develop software for Apple products.
Maybe they will drop a nuclear bomb on themselves. Anything is theoretically possible.
IIRC you don’t need to pay for an Apple dev membership to self-sign and run the app on your own device. They made this change a few years ago I believe.
To be fair, the world itself has changed a lot in a decade. Risks are much higher now than before, as malicious actors have become much more sophisticated, and we're running much more of our lives through software. The downside of insecure systems is orders of magnitude greater - life savings wiped, every personal detail leaked, etc.
> Doesn't take a genius to see that their end goal is to make mac os as restrictive as ios.
I don’t see that. I see them trying to protect average users from malware, something they have done a pretty decent job of.
It’s the job of a gui shell (finder/explorer) to discern user intent and do the thing it thinks the user wants. This is that happening. Users don’t want to run malware.
I’ve not seen anything that suggests that Apple wants to remove the ability of people to run arbitrary programs. Indeed, it is essential for developers to run novel binaries.
I'd really be interested to hear someone who defended Apple's 30% cut in the battle against Epic explain why Macs shouldn't be locked down with everything forced through the store.
People like Gruber coined the term "App Console" to try and justify locked down iOS yet we're also told how powerful iPads are and how they're computers.
Why is it OK for iPads to be locked-down "App Consoles"? If that's OK, shouldn't it also be fine to lock down Macs, force everything through the App Store, and make sure Apple gets a cut from, say, Creative Cloud subscriptions and Ableton Live installations?
I personally don't agree with any of it but I'd like to try and understand why people think one is ok but also not the other.
The iPad is by definition a more limited environment in that it IS locked down and heavily sandboxed. What you get from that is a hyperportable tool that pretty much Just Works All The Time without the need to worry about security or administration.
There are absolutely things you can't do there. For any advanced user, it's unlikely to provide a sufficient computing environment in and of itself -- but there are lots of folks for whom a nice iPad is entirely enough. My 80 year old mother only rarely touches her laptop nowadays (realistically, it exists for the version of Quicken she likes + the need to write embroidery patterns to flash drives for use with her INSANELY complicated sewing machine).
This is good.
But you trade some things away by setting the dials that way (high portability and battery life, high security): obviously, there's less direct user control and less ability to customize, and outside environments like Pythonista I don't think there's any way to write code on the iPad that you could then run on it.
But that's fine, because if the iPad doesn't meet your need as a primary, you probably have a regular computer on your desk, too. A Mac works just like computers have worked for 40 years -- you own the machine, down to the metal, and can do anything you want with it. This freedom comes at a cost (increased risk of crashes or malware infection), and may not represent the dominant mode of computing for regular humans going forward. That's fine. Not everybody needs that.
So the distinction is actually really simple: They're different products, positioned to address different user stories. Why shouldn't they have different approaches to lockdown/security/customization?
I mean, car makers have several models available, too, with often very very different capabilities and functions. That's also ok.
For 99% of customers that is just fine. I don't know how old you are, but think back to the early/mid 2000s, when everyone and their dog got computers and went on the internet while the main operating system of the day, Windows XP, wasn't ready yet, in that it couldn't protect its users.
Corporations routinely run unsigned apps on company-owned computers. This isn't going away unless Apple wants to dump its very valuable business customers. You don't need to worry.
Well, yes, until you get to the point where (as the article shows) you cannot run unsigned applications on Apple Silicon. How much more slippery does this slope need to get?
It is important to understand what this requirement is and isn't. The requirement on Apple Silicon is only that the binary be codesigned. That ensures the binary is unmodified when it is encountered again later. You do not need an Apple-approved signing key for this. Ad-hoc signing works just fine.
Apple's new linker automatically adds an ad-hoc signature to ensure people building outside Xcode don't experience any disruptions as part of this new requirement.
This requirement does not change Gatekeeper or notarization.
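To make "ad-hoc" concrete: the signature records what the bytes were, not who produced them. Here is a toy sketch of that idea in Python (page-wise hashing, loosely modeled on a code directory; not Apple's actual format):

```python
import hashlib

PAGE_SIZE = 4096

def adhoc_sign(binary: bytes) -> list:
    """Hash the binary page by page. There is no identity or certificate
    here -- just a record of the bytes as they were at signing time."""
    return [hashlib.sha256(binary[i:i + PAGE_SIZE]).digest()
            for i in range(0, len(binary), PAGE_SIZE)]

def verify(binary: bytes, signature: list) -> bool:
    """Re-hash and compare. Any modification flips at least one page hash."""
    return adhoc_sign(binary) == signature

program = b"\x90" * 10000          # stand-in for a compiled binary
sig = adhoc_sign(program)
assert verify(program, sig)        # unmodified: passes

tampered = program[:5000] + b"\xcc" + program[5001:]
assert not verify(tampered, sig)   # a single changed byte: fails
```

This is roughly what the kernel gets out of the requirement: it can detect that a binary changed since it was last signed, without anyone having paid Apple anything.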
This cascade of comments is missing the point. It's like someone is pointing out the darker shade of clouds at the horizon and saying, "I think a storm is coming," and everyone here is just going, "Well, it could also be nightfall. I mean, the night has to roll on in at some point too."
Yes, it's just an initial check. But is it necessary? What exactly is the use case for Apple transmitting and logging data on every application you run, on an operating system you bought with a consumer guarantee of zero tampering post-sale?
So, let's work this out: how easy is it to not upgrade macOS, retain consistent performance, and not lose support if the userbase remains unsatisfied with Apple's change to a purchased good? By my understanding, as with Windows 10, Apple will eventually require you to upgrade. If you're upgrading to a system that maintains the same performance and does not introduce express limitations to the product post-sale, that's great! Go for it, live merry. In this case, however, the userbase has zero clarification on either the data transmission or the remote check that appears to constrain workflows. There's no use case that makes sense for doing this, because for decades prior to this new process Apple established guarantees claiming macOS is not susceptible to malware. So it raises the question: was Apple violating consumer protections by making false guarantees, or is Apple violating consumer protections by limiting the function and utility of the product post-sale?
That's what people are asking right now. We don't care about the nuance of the check. We care about the basic characteristics of, and more importantly the legitimacy of any use case for this check, given promises made to consumers at a prior time of purchase.
macOS is, in fact, susceptible to malware. (A notable example hit HN just the other day [1].) I don't think Apple has ever literally claimed that it isn't susceptible, though they may have sort of hinted at it (especially at the height of the "Get a Mac" campaign). To be fair, there has not been very much macOS malware then or now, though it's questionable how much that has to do with macOS's design as opposed to factors like the size of the target userbase.
As drawfloat pointed out, Apple doesn't have to explicitly guarantee no susceptibility to malware. The FTC Act considers anything from an Elon Musk tweet about flamethrowers to a casual joke at Apple's Keynote, and weighs whether a reasonable consumer would expect the product to reflect that claim. That doesn't mean implied guarantees are as easily prosecuted. But it does mean that when Tim Cook or Steve Jobs, or another named executive, is on stage and says something along the lines of, "We don't have the same problems as Windows," and a reasonable audience member understands he's referring to malware risk on macOS vs Windows, that's enough to say Apple has made a legal guarantee to the consumer. The law is open-ended like this because promises can look like anything, from outright printing FREE SAME-DAY SHIPPING to printing in an FAQ that most orders arrive within 7 days. If it wasn't cost prohibitive, you could actually file a small claim against, say, Amazon, for a two-day Prime delivery not arriving within two days.
The broader point though, is that Apple has established the belief that macOS is not susceptible to malware. That's why people don't "need" a virus scanner running in the background.
And this belief is widespread enough that it warrants questioning the basis of a use case for this check: Why does macOS need to send my data to a remote server upon initial load of each application to verify it with Apple's whitelist (approve-list? what's the right term these days?), if the operating system's existing protection has to date fulfilled the implied guarantee by CEO, Tim Cook, and former CEO, Steve Jobs, of zero or limited, but otherwise insignificant, exposure risk to malware?
For code itself, which is what this change affects, code signing is verified by the kernel at page-in time. Combined with W^X, there's no way to bypass it. Verifying associated resources is done separately by userland and is a somewhat messier process.
To be clear, this just means all binaries need codesign --sign - run on them before they will run. You don't need to pay to do this, so it's a minor annoyance rather than Apple centralizing control of app distribution.
That's exactly what I'm confused about, where is the slippery slope? It's an extra step in your build process–an annoyance, not a slope. It's like if there was a requirement that macOS would not run binaries unless you stuck "APPLEISCOOL" at the front of all your binaries.
The slippery slope is at the point where a few years ago you could run any application you wanted, then you started getting some scary messages for unsigned applications but still could run applications, then you had to work around the scary messages and now you cannot run unsigned applications.
As far as I understand, there are (surprisingly many) different code paths in the operating system based on whether a binary is signed or not and I would guess that Apple no longer wants to keep around the one where unsigned binaries are a thing.
I believe one of the main security benefits of these requirements is ensuring a binary hasn't been altered, maliciously or otherwise. Not preventing outright malicious applications.
Also so the OS doesn't have to repeatedly rescan apps, etc.
sha256sum generates a sum, you still need to store that sum somewhere that isn't controlled by the malware creator or they can just change the sum too.
All major Linux distros for example still have no viable way of creating signed programs or anything like Gatekeeper.
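The sha256sum point above is the trust-anchor problem in miniature: an integrity check is only as good as the place where the reference value lives. A toy sketch (the dicts are stand-ins for an attacker-writable disk and a store the attacker can't touch; not any real API):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A 'filesystem' the malware can write to freely.
disk = {"app": b"legit code"}
disk["app.sha256"] = sha256(disk["app"])

# A store the malware cannot modify (e.g. signed with a key it lacks).
trusted_store = {"app": sha256(disk["app"])}

# Malware replaces the binary AND the checksum sitting next to it.
disk["app"] = b"evil code"
disk["app.sha256"] = sha256(disk["app"])

# Checking against the attacker-writable sum: tampering goes unnoticed.
assert sha256(disk["app"]) == disk["app.sha256"]

# Checking against the trusted store: tampering is caught.
assert sha256(disk["app"]) != trusted_store["app"]
```

This is why a bare checksum on the same filesystem buys nothing; the reference has to be anchored somewhere the malware author can't reach, which is what a signature chain provides.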
Malware authors frequently sign their applications with valid Developer ID certificates. This change is not something that would keep them out and I strongly suspect that was not the goal when it was made.
The key here is that Apple can also remove and retroactively revoke signatures on signed applications, which is why you rarely see Mac malware stick around after it makes the headlines these days.
When a piece of (signed) malware becomes even moderately successful, Apple shuts down all its installation vectors by banning every dev account that's ever signed it. If nothing else, this makes it harder to develop malware that spreads rapidly through the App Store, forcing bad actors to invest a lot more work into finding 0-days.
If you have a vulnerable app, malware might modify another already-approved app that is already installed, thus bypassing the scary message about untrusted code before it is run.
To the average user, yes, but Apple can see the chain of trust and can do the data science required to figure out who signed what. You better believe that in the event of a large-scale malware distribution via signed binaries, they're going to be revoking those certs and tracking down the accounts responsible (like after the VLC website hack).
> The notary service maintains an audit trail of the software distributed using your signing key. If you discover unauthorized versions of your software, you can work with Apple to revoke the tickets associated with those versions.
Yes but "ad-hoc" code signing is almost useless on its own on macOS, it almost always goes with notarization, which does require a revokable certificate with a chain of trust attached to a Developer ID for the initial code sign.
> You can only notarize apps that you sign with a Developer ID certificate. If you use any other certificate — like a Mac App Distribution certificate, or a self-signed certificate — notarization fails with the following message: "The binary is not signed with a valid Developer ID certificate."
Out of curiosity, what's the point then? Why not just phone home with the hash of the binary, instead of forcing developers to assign a UUID/pseudohash to their binary?
I think you've misunderstood the article. Ad-hoc signing is absolutely supported on Apple Silicon Macs. You don't need Apple's permission to build your own software on your machine, nor to distribute it to others.
Disingenuous. "Signed software" is generally understood to mean signed with an Apple-issued certificate. The fact that you can ad-hoc sign with no certificate at all makes that kind of signing little more than a linker step. You might as well have said "you cannot run uncompiled software" for all the relevance it has.
I never understood "signed software" to mean anything other than "signed software", which includes "self-signed software", since that software is also signed. And "self-signed software" does run on Apple Silicon, at least for now.
From the article, it seems like things have gradually become more & more restrictive though. My interpretation is that Apple is gradually shifting macOS to the same application distribution model as iOS: Complete walled garden, attempts to bypass are against ToS, actively closed off with every update, and warranty-breaking.
However, my impression is also that it's mostly non-Mac users who are concerned with this, so it seems unlikely that Apple will have much incentive to reverse course unless they screw up, go a few steps too far too soon, and create a bigger outcry. Even that would probably just delay the process.
> From the article, it seems like things have gradually become more & more restrictive though. My interpretation is that Apple is gradually shifting macOS to the same application distribution model as iOS: Complete walled garden, attempts to bypass are against ToS, actively closed off with every update, and warranty-breaking.
Nothing has really changed in this article though. You can still run software just as before.
It's being made more difficult. Increasing technical savvy is required, at the same time that Apple increasingly pushes its App Store on macOS. The fact that, at this point in time, savvy users can still run arbitrary software doesn't negate a general trend that may be moving away from that.
I believe this is bubbling up because Apple recently revoked Epic's certificate (sorry if that isn't quite the right term) and these are now the steps required to run Fortnite on a Mac.
Technically, Apple has terminated (or will terminate) Epic's developer program access, which means they will no longer be able to sign new builds. Apple has not revoked the certificate at this time, so those binaries should continue to run until the certificates expire.
AFAIK this is false, though I could be wrong. I believe cancelling the account is separate from revoking the apps. I think Apple has only deleted installed apps off of user’s phones in rare, exceptional cases, such as actively malicious apps.
Can anyone confirm if I am recalling this correctly?
Apple probably won’t delete the app from user phones, but it won’t be available to new users anymore (those who have never downloaded the app using their Apple account) and Epic will be unable to submit new updates to the App Store, which in practice kills the app
I see this argument repeated whenever there is a conversation about issues like this.
But I almost never see anyone pointing out this is the equivalent of think of the children applied to software distribution.
While this kind of control does reduce the chance that less informed users run random code downloaded from sketchy sites, I think it does little to actually curb malware, since all you have to do to pass the bar is pay 100 dollars to Apple.
I believe this is Apple's attempt to fully control the software distribution business for Macs the way they do for iPhones: eventually mandating that all software comes through their store, uses their payment provider, etc.
As someone whose elderly parents run Windows and have never run 'any executable that any website tells them to download' (and I myself haven't seen a website offer me a random executable in over a decade):
Out of curiosity, how often do your parents encounter the dialog in the article?
I really don't know how they end up with this, but my grandparents run Windows and regularly have some kind of malicious advertisement installed. It's not easy to keep their computers clean.
Try using Safari on a MacBook. Last time I spent 10 minutes on my wife's laptop, every website was filled with garbage ads saying your Mac is infected with a virus and you must download an antivirus quick (#malware). The shadier (streaming) websites sent a .dmg executable as soon as they were opened.
It's shocking; it's worse than Windows (sad fact: Macs have seen more malware/adware than Windows for the last 3 years).
Of course you can't see that if you're not on Mac or if you have an adblocker. Ads detect the browser and specifically target Safari. Safari removed support for extensions last year, dropping all adblockers, it's wide open to targeting.
Safari's ad-blockers are not really comparable[0] to what's currently possible on Firefox, Chrome, or other Chromium-based browsers.
They rely on declarative blocking and/or system-wide interception (ie, you run them as an entirely separate app outside of Safari). The majority of them are not going to be able to handle things like CNAME unmasking or page source rewrites[1]. The declarative blocking API in Safari isn't built to support those kinds of features.
[1] The exception to this being maybe AdGuard when you run it system-wide? I vaguely remember reading that it sets itself as a MITM for all of your web traffic at an OS level, which allows it to do page-source rewrites. I could be wrong about that though.
It's very partial support, but it is a step in the right direction. If they keep going in this direction they could eventually add the webextension request-blocking/modification APIs.
I use Safari on a Mac without an ad blocker as my main browser, and I don’t see anything like what you describe. What kind of websites are you visiting where you get that kind of result?
> Safari removed support for extensions last year, dropping all adblockers, it's wide open to targeting.
This is not true. There’s a dedicated section in the Mac App Store for Safari extensions, including ad blocking extensions:
The issue is not whether expert users can run unsigned software, the issue is whether unsigned software will even be written and distributed by developers. Perhaps some free software, but it's not a viable way to distribute paid software.
To be honest, I hoped Microsoft and their Windows RT experiment would be the 'alternative', so end users would have three choices (ChromeOS, iPadOS, or Windows RT). To get 'stuff done', a user really has no business having access to the system or circumventing controls.
Currently most desktop management structures try to do it backwards by taking an OS, i.e. macOS or Windows, and then trying to glue restrictions on top of it which always leaves holes or undesired side-effects.
The best 'middle of the road' so far seems to be active monitoring with Apple's or Microsoft's native management tools or with stuff like OSQuery; it also tends to be less intrusive for people that are convinced they are 'better than average' (nobody is) and don't need to have an MDM babysitting them.
But even then, there will always be people thinking their personal preference definitely means the big bad corp did it wrong, or that their personal need is more important than trying to do something about the constant barrage of crap any machine faces during its lifetime.
What about Windows "S Mode"? That's sort of the latest iteration on the Windows RT experiment, but you can turn it on/off on any Windows PC now.
I know the complaints remain that not enough apps are in the Microsoft Store, especially compared to iPadOS and Chrome OS if you include Android apps, but it is comparable to Chrome OS without Android support and still seems like an "alright, not terrible" third choice.
I kind of agree. I think the right click option is actually kind of genius since it gives people who somewhat know what they're doing a very easy to way to open unsigned software, while protecting people who don't know what's going on on their computers.
You don't need admin rights to simply run software on macOS. And even then, installation is (or should be) nothing more than extracting the app to a directory and running it. Installers are an anti-pattern on macOS, AFAIC.
What if the app is required to "do stuff" during the installation? Ex: informing a related app of its availability, writing an installation date for trial period expiration, etc. You need to do that with a script or auxiliary program, which is invoked at installation.
It would be much better if the application took care of this on first run (or on a subsequent run if it is copied from a backup or another device). Restricting these things to the installation phase makes apps less portable, because you always need the installation file as well. On macOS, applications are bundled directories, so they are a single 'file' containing everything needed to get the app running (binaries, metadata, assets, etc.). Using an installer on them is a cumbersome anti-pattern adding useless steps to a process that could be a simple drag and drop.
This is not possible in a "chicken and egg" situation: My app is a plugin for a main app. The main app can't find me to launch me until I "seed" its directory with a pointer to me.
To make my app visible to the main app, I place a "cookie" in the main app's directory (not my design), which is done by the installer script. Without this, the main app will never find my app. And the user can't launch me until he sees me in the list of available plugins. Vicious circle.
The fact that the standard model of computing is that applications are opaque machine code blobs that can access everything in your user permission space is the core problem in privacy and malware. Applications should see nothing but their executable jail, and whatever was intentionally allowed to them by the user (eg, Open file dialog giving the application an opaque file handle, etc, not carte blanche access to the entire filesystem). Ideally, the notion of machine code blobs should be done away with as well.
Mobile OSes got to rethink everything in an era of constant adversarial connectivity and started off on a better foot in this regard.
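The "opaque file handle" idea above can be sketched in a few lines (all names here are hypothetical, chosen for illustration): the Open dialog acts as a trusted broker, and the app receives only a handle to the file the user picked, never a path or filesystem access.

```python
class FileHandle:
    """Opaque capability: the app can read what it was handed, nothing
    else. Notably, it carries no path."""
    def __init__(self, data):
        self._data = data
    def read(self):
        return self._data

class OpenDialog:
    """Trusted broker: only the dialog sees the filesystem; the user's
    choice is converted into a single handle for the app."""
    def __init__(self, filesystem):
        self._fs = filesystem
    def ask_user(self, chosen_path):  # in reality, the OS draws the picker
        return FileHandle(self._fs[chosen_path])

fs = {"/home/me/notes.txt": b"hello", "/home/me/.ssh/id_rsa": b"SECRET"}
handle = OpenDialog(fs).ask_user("/home/me/notes.txt")
assert handle.read() == b"hello"
# The app holds no path and no reference to `fs`, so the SSH key is
# simply unreachable through anything it was given.
```

The design choice is that authority flows through the user's gesture: picking a file in the dialog is what grants access to exactly that file, with no separate permission dialog needed.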
This works for some types of software, but not all. For example, a file server or a file manager won't work. A VCS client won't work. A game engine that needs to keep track of imported resources won't work (especially when you want automatic imports when a file is saved via a 3rd-party tool, e.g. saving a model in Blender or a texture in Krita causes an automatic reimport/convert to the engine's format). Basically, any sort of content management software that doesn't provide everything itself but relies on 3rd-party tools already installed on the user's machine won't work.
Software like clipboard managers also won't work. Screen sharing and remote desktop software similarly won't work. Screencast software won't work. Hotkey software won't work. Most desktop automation software won't work.
I could go on and start looking at what I have installed to extend this list (I'm sure most of the software on my PC won't work), but I guess you get the idea. Almost everything that doesn't fit the media consumption model you'll often find on a phone or tablet won't work (and amusingly enough, at least on my Android, stuff like a file server does work, though I've heard Google wants to remove that functionality).
It is possible to still make all of these work by having frameworks that hold the permission to access the functionality, or by having entitlements to more tightly restrict which applications have access to a feature.
For example, some backgrounding modes on iOS require an app to get an entitlement to act as a VOIP client, or a mapping directions app. The system access is limited to only give access to the things which a VOIP client or map should need in the background.
On iOS, ReplayKit allows apps to participate in screen casting - both with first party support for an app being cast, and for an app which wants to share a video stream out.
Screen sharing on macOS also will likely move to ReplayKit, but it currently requires the user to approve a request to share the screen.
You can open a 'folder' rather than individual files to gain access to a full project structure.
Clipboard managers are difficult in a sandbox model where the clipboard manager has no permission to stay running in the background. Similar applications like custom keyboards on iOS solved this by having a smaller 'extension' stay resident, and having that extension run with a very restricted set of permissions. For instance, no access to shared storage, IPC, or to the network. A keyboard must work without these permissions to get in the App Store, but they may prompt the user to elevate permissions.
> It is possible to still make all of these work by having frameworks that hold the permission to access the functionality, or by having entitlements to more tightly restrict which applications have access to a feature.
I agree with this, but I can't fully get on board with it.
I've worried about the lax security model of desktop software too. Apps these days need to ask permission before accessing my documents, or my desktop, or my downloads folder, but they can still access all my company's code and SSH private keys. It feels wrong for this to be open-by-default. Same with accessing the clipboard, or drawing on the screen, or changing system settings, or injecting code into the Finder: these all seem like things I want to opt in to doing.
But my problem with this — and it's a stretch, I know — is that if the only extra entitlements a program can have are the ones that Apple explicitly allows, then no developer can have any ideas outside of what's already been thought up. As we increase security, we also dry up the innovation well.
For example, there's a Finder API that allows you to add badges to files in a directory to reflect their syncing status. This is an API that Apple allows apps to use — but it was originally implemented by Dropbox, which injected code into the Finder. They had the idea, and the idea turned into a general API.[1] (Now that I think about it, there's almost certainly prior art to this, but Dropbox was where I first saw it)
Similarly, I use a whole bunch of background utilities that currently use the "Screen Recording" or "Accessibility" Privacy permission on macOS, even if they aren't screen recorders or accessibility helpers: things like letting me move windows by holding Option, or switching apps by using the keyboard, or a Spotlight replacement, or adding a delay to ⌘Q, or the snippet manager I just used to insert the '⌘' character there. The entitlements that allow these applications to continue existing were developed after the applications themselves, and to use them, I have to opt-in to functionality that was previously allowed. If the entitlements didn't exist, we'd have to rely on Apple to think it up, and the way they're behaving, I doubt they'd be as generous as allowing one like "can read any pixel and draw anywhere on the screen".
Clipboard managers are another thing that wouldn't exist if OSes started out locked down. Not only do you need to access the keyboard, you need to do so from the background! It was the openness of desktop OSes that gave developers this idea — on iOS, nobody would have thought it up.
Way back when, in iOS 4, Apple gave apps the ability to run in the background, but only certain types of app — music playing, voice calling, and location-based apps received dedicated APIs to do that. Again, Apple waited to see which types of program were popular on other platforms, and based their APIs around allowing those specific kinds of app. If all platforms started life locked-down, maybe one or two of these categories wouldn't be available today.
f.lux, the program that runs in the background and dims your screen red in the evening, is another example. I used to love using it on my desktop computer. But in iOS, they asked for the API, and weren't given it. And honestly, I'm kind of on Apple's side with this one: tinting the screen is the sort of system property I want only done by the OS itself, not an app, so now I use Apple's implementation of the same idea, Night Shift. But again, if the ability to tint your screen did not exist on desktop OSes, I'm willing to bet that Apple would not have come up with the idea.
And even if the idea is there for an entitlement to exist, that doesn't mean that Apple is going to allow it: adding one costs developer time, testing time, and documentation time. In a recent episode of Accidental Tech Podcast, I discovered that Switchglass (a Dock replacement I use) isn't allowed to quit other apps, a feature I've sorely missed, because it's disallowed by the macOS sandbox. The idea for the entitlement is there, and the need is there, but Apple isn't bothered about apps opting in to such fine-grained functionality as quitting others. Ask me through a prompt! That would be great!
I hope you see what I'm trying to say here. It used to be the case that macOS "allowed" iOS to exist, because it soaked up all the complexity — and slowly, features that proved popular on the desktop were allowed to exist, in a secure fashion, on mobile. I'm worried about how when I run random programs on my Mac, they have access to all my important files. But I'm also worried about the future. The tighter the grip OS manufacturers have on what sort of code can run on users' machines, the less innovation we'll see, and the worse computers will become.
Sure it does! We just need sufficiently granular permissions for all of this stuff. “Do you want to give ClipboardManager access to your clipboard?” Yes. “Do you want to give TikTok access to your clipboard?” No. I agree that clicking a million permission boxes is annoying but ideally it should only be something needed for apps that don’t fit the media consumption model.
The real problem is that the desktop security model is outdated - it was designed for a world where software developers are trusted by default and users need to protect their data from each other. Today we can’t trust that developers will respect my data. I mean, the fact that any application I run or any npm module I transitively install could upload or delete any of my personal documents is insane. We absolutely need to preserve my ability to run software I write, and run screencast software, file servers, etc. But permission to read my data should not be given by default to any software I happen to run. The Epic thing makes me nervous but generally I think Apple’s direction here is the right one.
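The prompt-gated model described here can be sketched minimally (all names hypothetical; a real system would show an interactive prompt instead of consulting a decision table):

```python
class Clipboard:
    """The sensitive resource itself."""
    def __init__(self, content):
        self._content = content
    def read(self):
        return self._content

class PermissionBroker:
    """Apps never touch the clipboard directly: they ask the broker,
    and the user's (simulated) decision determines what they get."""
    def __init__(self, clipboard, user_decisions):
        self._clipboard = clipboard
        self._decisions = user_decisions  # stand-in for interactive prompts
    def request_clipboard(self, app_name):
        if self._decisions.get(app_name, False):  # default: deny
            return self._clipboard
        return None

broker = PermissionBroker(Clipboard("hunter2"),
                          {"ClipboardManager": True, "TikTok": False})
assert broker.request_clipboard("ClipboardManager").read() == "hunter2"
assert broker.request_clipboard("TikTok") is None
```

The key property is deny-by-default: an app the user never approved gets nothing back, rather than the resource minus a scolding.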
> I agree that clicking a million permission boxes is annoying but ideally it should only be something needed for apps that don’t fit the media consumption model.
Media consumption apps are nice and good but I think pretty much all innovation is dependent on media production apps. A permissions model that treats media consumption as the most important use case will necessarily inhibit artistic expression and utility.
> The real problem is that the desktop security model is outdated - it was designed for a world where software developers are trusted by default and users need to protect their data from each other. Today we can’t trust that developers will respect my data.
Why is it any different today? You can always only install applications you trust. It would be useful to have sandboxing for untrusted applications (especially when said sandboxing would also allow you to monitor what the application is doing), but not all applications are untrusted.
The UNIX permission system was designed when computers cost millions and had lots of users through timesharing (many of whom were programmers themselves), and comparatively little software. Most of the software on those computers was installed by the system operators, who could be trusted not to install software from disreputable developers. The threat model was malicious users accessing each other's files, so user accounts with limited permissions kept us safe.
Today I have several computers. Each computer only has 1 user. And yet my /etc/passwd file still has 110 entries somehow. And it doesn't really help: the thing I need to protect the most on my computer is my data, and most programs on my computer could read and modify all my data with impunity if they wanted to. The permission model does nothing to protect my own files from the programs I run.
Using tools like homebrew I install new software very frequently, and I don't have time to vet the code I run. There is a staggering number of software developers who have contributed code that runs on my computer. Some of them work at companies in direct competition with each other. Some of those companies I don't really trust. (Hi Facebook). So I rely on sandboxing in the browser and on my phone to keep my data safe.
The UNIX user permission model just doesn't meet modern needs.
> A game engine that needs to keep track of imported resources (especially when you want automatic imports when the file is saved via a 3rd party tool - e.g. saving a model on Blender or a texture on Krita causes an automatic reimport/convert to the engine's format)
A game engine isn't likely to do that in a production build. Even if it wanted to though, these sorts of "file ticket" sandboxes still have support for "directory tickets" and "file watchers". Even if storage specifics like "drive" or "path" are opaque to the application, it can still ask the user for permission to an entire directory (either explicitly via an "Open Directory" dialog or implicitly in directories it naturally owns, such as "app data" and "resources" directories). Figuring out "where" that directory is for the user in Blender or Krita might not be straightforward, but just because those "tickets" are designed to be opaque to applications doesn't mean they have to be opaque to users, and the operating system has lots of interesting possibilities for answering user questions about where things are, such as smarter Save File dialogs that are "ticket aware". ("Open Tickets > Game X has an Active File Watcher on this Resource Directory")
Almost all the same applies to other similar tools like file servers, file managers, VCS clients. Opaque/transparent is a "cone" in "ticket" based systems. It probably should be opaque where exactly my "file share" folder is stored, and all of my folders that are not my file share folder to a file server, so long as the contents inside that file share are transparent enough. The hard thing is defining those "cones", but the past default of "everything is transparent" is a problem and the over-correction in some systems to "nothing is transparent" sometimes blinds us to finding better ways to define these visibility cones rather than complain that they exist at all.
(Fwiw, all of the above is possible in the strict UWP Windows sandbox today: you can ask for directory tickets, and you can ask for file watchers with those tickets. This isn't entirely theoretical; there have been practical applications, if not enough.)
> A game engine isn't likely to do that in a production build.
I actually have worked on two AAA game engines that did exactly that. The entire idea was to make importing stuff easier and minimize the time between the artists editing content in their content editor and having it imported and visible in the engine.
> Even if it wanted to though, these sorts of "file ticket" sandboxes still has support for "directory ticket" and "file watchers".
How are those granted, exactly? Would you need to ask for permission for every directory? If the program cannot monitor the directories, how would it know that a file changed before asking for permission? Or would the permissions be requested at startup (since it is very common to edit existing content instead of only importing new content, the editor remembers the original paths for all content and can start monitoring them at startup)?
TBH this all sounds like a UX nightmare, especially when the entire goal is to make the process smoother.
macOS ships with a quite strong and granular capability-based security model with its sandboxing mechanism (at least, when it works and is applied correctly). The feature is there, advanced applications already make use of it, but it is difficult to get arbitrary applications to adopt it (its inner workings are declared SPI after all) and it is not really exposed to the user at all except via App Sandbox, which is fairly limiting.
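For the curious, the shape of that mechanism is visible in the long-deprecated but still-shipping sandbox-exec tool and its profile language. A minimal sketch, with an illustrative profile that is nowhere near a complete policy:

```shell
# Write an illustrative sandbox profile (SBPL, a Scheme-like DSL).
# These rules are a sketch, not a complete or recommended policy.
cat > /tmp/readonly.sb <<'EOF'
(version 1)
(deny default)        ; start from deny-all
(allow process-exec)  ; let the target binary execute
(allow file-read*)    ; read-only view of the filesystem
(deny file-write*)    ; no writes anywhere
EOF

# sandbox-exec only exists on macOS, so guard the invocation; even
# there, this minimal profile may deny more than a real tool needs.
if command -v sandbox-exec >/dev/null 2>&1; then
  sandbox-exec -f /tmp/readonly.sb /bin/ls /tmp || true
fi
```

App Sandbox entitlements are essentially a curated, supported layer over this kind of profile.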
The permission model works pretty similarly here to a restricted user account. It sees system files as read-only, cannot see the contents of certain files and directories, etc. The root of the filesystem might be that of the parent system or of the sandbox itself.
On macOS, reading certain directories (like the user's Desktop) will result in a user consent prompt due to the sensitive contents. There is, after all, no guarantee that the request was directly initiated by the user, and not some curl-pipe shell script. There is however a "Developer Tools" entitlement that encompasses all of these sorts of prompts - but this is meant to be enabled through a developer action and not any user-presented prompt.
On macOS, there are basically higher-than-root system integrity protections, such as against modifying files that the OS maintains. You can disable this system protection by booting into recovery mode and running a command-line tool (csrutil).
Generally, the goal with System Integrity Protection is not to restrict professional work but to restrict the ability to publish software that tells the user to disable system-wide protections. There was recently an issue with the Google installer where a Chrome update wiped out part of the system when these protections were disabled, primarily noticed on hackintoshes and video production Macs where tools require SIP to be disabled to run unsigned kernel extensions, etc. Oops.
Apple has been trying, across macOS releases, to reduce the need for applications to be installed at all, with recent releases making items like browser, system, and kernel extensions bundled inside the application itself rather than being distributed across the filesystem. The goal is likely to eliminate app installation on macOS as a privileged operation.
The question was more about _should_, than _does_, and there's all sorts of places it currently breaks down.
For example, say you use pass [0], it's just a Bash script. You want it to be able to access pbcopy/pbpaste, for basic functionality to work. You don't want to give it permission every time.
However, you probably don't want sh to be able to access pbcopy/pbpaste without permission.
Scripts themselves should hold certain permissions, but those permissions are currently held by their host application, because sh /usr/local/bin/pass isn't identified by the system as an application. The app is still considered to be sh.
The average user would be terribly confused by all this and would hardly bother to analyze the permissions any app requests. They will simply allow everything.
More importantly, if they see something like a permissions screen, they will have a false sense of security.
And severe restrictions on the app would hamper the user experience. Code signing and developer ID are the most practical means to ensure quality software.
Users of iOS seem capable of forming nuanced views of the permissions each app should have. I don't see why this shouldn't be done for macOS. The problem is 'everything' apps like browsers -- you need to be able to enforce permissions for individual sites.
With weak sandboxing we're still left with the problem of LPE -- once you're on the box (with, by default, unlimited network access!) you can gain root quite quickly, because the kernel interface is rather large. So you still need code signing and developer IDs. However dev IDs could be a federated system -- users shouldn't have to accept Apple as the total and sole authority for accepting or rejecting apps/developers.
That should probably be an option, but I'll take ownership of my hardware and data not you or Apple. I don't mind having the checks in there as long as I can overrule them. Otherwise you just get the Epic treatment and you don't really own your hardware, you're just loaning it from Apple and you can only do what they let you do. That, to me, is a losing proposition in the long haul.
It's hard to believe that this would be a cash grab. Even if there are 1M developers in the world, that's only $100M, which should not be worth the friction and cost to implement and maintain this scheme. Consider that Apple's most recent quarterly revenue was just under $60B. Apple made $260B in all of 2019; $100M is not even four hundredths of a percent.
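The back-of-the-envelope arithmetic, using the revenue figure above (the 1M developer count is of course a rough guess):

```shell
# 1M developers at $100/yr, compared against ~$260B of 2019 revenue.
fees=$(awk 'BEGIN { printf "%d", 1000000 * 100 }')               # $100M total
share=$(awk 'BEGIN { printf "%.4f", 100000000 / 260e9 * 100 }')  # percent of revenue
echo "fees: \$${fees}; share of 2019 revenue: ${share}%"
```

That works out to about 0.0385% of revenue, indeed under four hundredths of a percent.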
Developer time to build out the signing and notarization features is not free, and running the notarization servers in a highly-available manner is also not free. As much as we all like to call out Apple sometimes for how they don't take good care of their developers, adding hoops for your developers to jump through is not a great idea.
So I think in some ways Apple really does it for the security aspects, and also probably just because Apple likes to maintain rigid control over their experience.
> As much as we all like to call out Apple sometimes for how they don't take good care of their developers, adding hoops for your developers to jump through is not a great idea.
Why not?
Apple has been pushing very hard lately to increase their 'services' revenue, by getting users to sign up for Apple Music, iCloud, etc. It makes perfect sense for Apple to force developers to jump through hoops, not because it makes the developers' lives worse, but because it gives Apple a leg up! Some examples:
• When launching Chrome for the first time, you have to opt in to Chrome notifications, but macOS pops up its own notification telling you to use Safari instead.
• Similarly, Apple News notifications are allowed by default, but you have to explicitly opt-in to notifications from other news websites.
• Again, similarly, you'll get Apple Music adverts through push notifications, something that's explicitly disallowed in the App Store guidelines.
• System Preferences gets an obnoxious (1) badge next to it, because I haven't signed in to iCloud.
I use SyncThing, rather than iCloud Drive. It definitely benefits Apple to waste the SyncThing developers' time and money keeping up with things the iCloud Drive team doesn't need to deal with — do you really think the sort of company that pulls shit like the above would do anything different?
> • When launching Chrome for the first time, you have to opt in to Chrome notifications, but macOS pops up its own notification telling you to use Safari instead.
Link to a screenshot of this Safari notification when Chrome is run?
> Similarly, Apple News notifications are allowed by default, but you have to explicitly opt-in to notifications from other news websites.
Reference to an article about this?
> • System Preferences gets an obnoxious (1) badge next to it, because I haven't signed in to iCloud.
You're right, I should have been more specific: it shows the badge even when I don't have System Preferences open, while other apps (such as my chat client, or mail client) only show badges when they're open. It shouldn't show the badge at all, though.
Though don't underestimate the number of people who get a developer account just for macOS/iOS beta versions. There are public betas now, but the developer portal gets them faster (and more frequently).
The money is barely a drop in the bucket for Apple. I was more hinting that this inflates the number of developers, since a chunk of them are actually people who pay to get early beta access.
Not going to argue about the profit-based motivations; I'm sure someone else would do it. How about the power it gives Apple over developers: is that worth their time?
Not exactly how it works, though; in a corporation as big as Apple, each business unit has its own budget and profit/loss margins. Apple is simply too big and complex for someone, or a group of someones, to look at it in the way you are describing.
So while $100M might not look like a ton to Apple as a whole, it might look like a lot to the business unit responsible for apps on macOS.
It's not the $100/yr that benefits Apple the most, in my judgement. It's the ability for Apple to control what software its users can use, e.g. to promote App Store sales. If they don't like your company, they can switch off your software remotely on all your end users' computers. If you have a competing product, they can just switch it off and there's nothing you can do about it except develop for Windows and Linux.
It absolutely is a cash grab, and part of a series of unethical behaviour from Apple.
Between the latest awful hardware in MacBooks (terrible keyboard, control strip thing that breaks, no escape key) and a great improvement in using Linux via Purism and System76, I've managed to move away from Apple.
The newest iteration of MacBooks has a very nice new keyboard (no butterfly mechanism anymore) that reminds me of the really old models. It's better than my 2013 model IMO, and one of the best in a laptop that I've used, though it doesn't beat the really old ThinkPad ones; those were incredible. Of course, this sort of thing is a matter of taste to some degree, but my private 2020 13" and my work 2018 13" are several worlds apart; they've really fixed that issue for me.
There's also a physical ESC key now, and for me, personally, the touchbar is a pretty neat feature after a bit of getting used to it, and some tweaking.
No idea if it is prone to fail, but I believe the rather sizeable fleet of touchbar MacBooks at work hasn't had a broken touchbar yet (there have been other failures).
I've yet to see anyone get a Linux machine and not struggle with something; maybe that'll happen someday soon, but so far everyone seems to have issues that I'd find unacceptable.
Don't forget that after you've paid them your money, they can revoke your certificate and spread lies about your software, and there's nothing you can do!
Well, you can voice your complaint on the Internet and hope that Apple notices this problem and fixes it in less than 24 hours. I wouldn’t advise betting one’s business on this, but it did work in this case.
(Hopefully Apple comes up with a more formal process going forward...)
It might be different if it went to a charity of your choice. I've been to conferences where the entrance fee was just a token amount to make sure you didn't register, reserve a spot, and then not show up. Or first-class train tickets: you don't pay for more expensive seats, you pay for it being expensive and keeping other people away.
People from rich countries that have the spare time and skills to write reasonably decent software are likely to have $100 of disposable money per year and would probably be donating anyway, so a donation might be reasonable, and I'd understand if Apple takes a few dollars to do <whatever they claim to currently use this money for, I imagine there is some sort of vetting going on even if it's only to check that you're not on their malware authors blacklist> but it really won't cost a full $100 per developer per year. For that money they can fly in from the nearest English-speaking country to verify my scowl.
I think this is part of a longer term and much larger cash grab of eventually forcing all Mac software to go through the app store. Apple will of course require a modest 30% cut of all sales for providing their benevolent oversight.
It's more of a choke hold than a cash grab. Your app broke our store rules, we removed it and you're suing us? We'll terminate your dev account so your apps won't run.
Assuming you mean Epic, Apple has only given a deadline to cease breaking their developer agreement before their developer account is terminated.
Epic is empowered to get their app back in the store tomorrow if they want to release an updated build without the contract-infringing functionality.
They probably have lost their chance to be featured by Apple or to be invited back on stage in future keynotes, but at least to me it appears Apple is trying not to be punitive in its reactions. Epic has consistently been given a no-nonsense way to recover.
Do you want the manufacturer of your computer or its operating system - and only them - to be able to specify exactly who can and can't develop software for it, and whose software you can and can't run? I don't, and I don't see any situation where I'd want a single entity to have that power, even if it comes with security benefits (which are not mutually exclusive to this setup either).
Exactly. I'm sure they do basic vetting of tools in their store, and I suspect they will remove the most egregious offenders. But that doesn't make all of the tools in their store safe, and all of the tools OUTSIDE of their store unsafe.
Especially when it comes to privacy protections, where every user's standard for privacy is going to be unique.
I deal with this every day, because we only notarize our Electron app in CI if we're building the master branch or an RC branch. I don't see it mentioned in the article, but what gets me every time is that the "right-click" trick only works the second time you try to launch the app. The first time, right-click or not, macOS won't let you launch the app.
I do wish Apple had a free tier for open source projects, just like many other tools on the web.
As an alternative, I wish there was an easy way to "sponsor" open source projects for this sort of thing. (I guess there is in some cases, but it's pretty hit or miss)
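For what it's worth, that kind of branch gate can be sketched like this; the branch names, archive path, and keychain profile are all placeholders, and notarytool is the current submission tool (older pipelines used altool):

```shell
# Hypothetical CI step: only notarize builds of master or an RC branch.
notarize_if_release() {
  case "$1" in
    master|rc-*)
      # Submit the zipped app and wait for Apple's verdict, then staple
      # the resulting ticket so the app can be verified offline.
      xcrun notarytool submit MyApp.zip --keychain-profile ci-profile --wait
      xcrun stapler staple MyApp.app
      ;;
    *)
      echo "skipping notarization for branch $1"
      ;;
  esac
}

notarize_if_release "feature-x"   # non-release branch: just prints a skip message
```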
On Windows, although it's pretty easy to run unsigned applications, it's a huge pain to install unsigned 64 bit drivers, even if it's just the inf file that's custom. I've ended up signing open-source drivers several times with my own code signing certificate (a few hundred bucks every few years) although I haven't distributed the result. Drivers for things like USB SDRs.
> I don't see it mentioned in the article, but what gets me every time is that the "right-click" trick only works the second time you try to launch the app. The first time, right-click or not, macOS won't let you launch the app.
Thanks for the comment! I've referenced this in a new addendum to the article.
I thought that developer status would automatically save me either from malware or from being babysat, but then [1] happened. No matter how hard I tried to start that binary, OS X didn't allow me to. Damn OS which knows better; who do you think you are? Did you see the checksums, the site certs, my competence, my willpower? I thought it must be something with the build process that Transmission uses, some signature that didn't get into the bundle, etc., and went to their forum for help, while trying to self-sign the app and reduce the system protection level in a console. As I found out later, that was yet another snafu that happens with Transmission every few years, and it's not that it is a particularly small or inactive project.
Moral of the story is, if you want to protect your users, you have to bring some level of inconvenience and frustration to them. Or be sure that I will run that malware no matter what you say.
Since you mentioned self-signing, I always laugh whenever I see people mention this on threads related to iOS apps and "being able to use your own software". These self-signed apps last for only 7 days before they expire, and then the app needs to be sideloaded again with another self-signing, otherwise you can't run it.
Being required to re-sign and sideload your app every 7 days is such a major pain in the ass. Does anyone actually think that sideloading/self-signing is a realistic and viable way to build your own software stack (that can also be shared with other people) on an iOS device? It's obvious that this mechanism is only for development environments and the intention is that the app will eventually be signed with the $99/year version.
Yeah I read the forum expecting to find a false positive but then it turns out the build included malware and the OS worked as expected to identify it?
There seems to be a trend in US society today to err more on the side of safety than liberty, more than I've ever seen before. In particular, this is a change in the tech community, which has been a bastion in the fight for individual freedoms for as long as I've been alive.
In the end, when you make that bargain at the levels we are making it today, the safety is only temporary but the damage to liberty is unrecoverable without starting over.
It's a bad bargain.
To be clear of straw men, I'm not saying that individuals (not groups) who have personally demonstrated bad behavior should have complete freedom to repeat such acts. This argument is, and always has been, about pre-emptive actions against the innocent in the name of safety.
Most people would hypothetically eagerly trade a minuscule amount of freedom for a massive amount of security, because it'd be a good deal.
You're welcome to be an ideologue and paint the world in black and white of course, but more people will try to assess the tradeoffs and have some appreciation for nuance.
Obligatory disclaimer: I'm a staunch supporter of open source software and don't own any Apple devices because of it, but I do appreciate and understand why a company or an individual would want to have a device with a more locked down ecosystem.
The majority of people are fine living their lives in ways that ensure the destruction of all organized human life on the planet; just look at the climate. Thinking of history, all its darkest hours involve a majority thinking they're right automatically, by virtue of being the majority, and persecuting minorities, or doing all sorts of stuff that in hindsight is just evil, embarrassing, and gross.
> Free thought requires free media. Free media requires free technology. We require ethical treatment when we go to read, to write, to listen and to watch. Those are the hallmarks of our politics. We need to keep those politics until we die. Because if we don’t, something else will die. Something so precious that many, many of our fathers and mothers gave their life for it. Something so precious, that we understood it to define what it meant to be human; it will die.
-- Eben Moglen
If people don't understand that, it takes away from them, not from the importance of the issue.
But they had a choice. You can always build a PC with Linux/uac=0 or buy one from those who know how to do that. Aren't you conflating freedom with everyone living the way you want? And with uninformed choices, which nobody is obliged to inform? And with ecosystems that do [not] fit your particular views? As a conscious, informed user myself, and after experimenting, I tend to choose some of Apple's products. Not Apple as a company, but as an idea of how things should work. You wouldn't take away my freedom to do that, would you? Remember that all Apple has is what their customers chose to give them. You cannot just go and blame them for getting what they want, based on some freedom morals that weren't even trampled for them. Hardware and software that is free in many regards is still there. Don't you think you're just blaming people for their choice?
Hold on a minute -- macOS phones home every single time you launch an application? As a non-user of macOS, this strikes me as utterly bonkers. You'd have to place a massive level of trust in the developers of your OS to accept this. And furthermore, surely the constant attempts to phone home have a negative effect on the user experience when the computer's network connection is missing or slow!
Perhaps the fine article has mischaracterized this behavior?
This is correct: it attempts to check for notarization each time an app is launched. For the case where there is no network connection, most apps have the notarization "stapled" to the app, meaning the binary actually carries the successful notarization authorization. This may short-circuit checking for notarization with Apple's servers, but I don't know for sure whether that happens.
You assume correctly. Although I do use Windows for my day job, my expectations for that OS are so low that I ascribed its performance shortcomings to incompetence. I'm not tremendously surprised to learn that self-sabotage was involved. Sometimes after I type my password at the Windows 10 lock screen, I'm treated to a 5 minute wait during which I get to speculate about what terrible decisions have got us to this point. I could cold-boot Arch on a ten-year-old netbook, use it to check the weather, and shut it back down in that amount of time.
As I understand it: If the application has been notarized before distribution, it should contain a signed “ticket” which allows the app to run immediately. If not, it will hit a server, but it caches the result on success, so it should not have to repeat the check.
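Both layers can be inspected from the shell. A sketch, where /Applications/MyApp.app is a placeholder path and both tools are macOS-only:

```shell
# Check an app against Gatekeeper and validate its stapled ticket.
check_notarization() {
  # Gatekeeper's overall verdict; a notarized app reports
  # "source=Notarized Developer ID" in the output.
  spctl --assess --type execute --verbose=2 "$1"
  # Validate the stapled ticket itself; this succeeds offline when a
  # ticket is attached to the bundle. (Requires the Xcode tools.)
  xcrun stapler validate "$1"
}

# Guarded so the snippet is a no-op on non-Mac systems:
if command -v spctl >/dev/null 2>&1 && [ -e /Applications/MyApp.app ]; then
  check_notarization /Applications/MyApp.app
fi
```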
A couple of other ways to deal with it (at least for some instances; not sure this applies to every kind of executable).
1.1 Hit "Cancel" in the warning dialog.
1.2 Open "System Preferences" / "Security & Privacy" and select the "General" tab.
1.3 It should have a notice about the unverified app being blocked, and offer the chance to approve it. Do so.
1.4 Try to launch the app again. You'll get the dialog again, but this time it should have a button to tell it to go ahead and launch it. That will also remember that you have approved the app, so you should be OK from then on (or at least until the app updates, and you have to redo this).
Another way is to fix it from the command line.
2.1 Locate the executable.
2.2 Do "xattr -d com.apple.quarantine /path/to/executable"
I just hit this today when doing some web testing with Selenium, and it could not use chromedriver because the developer was not verified. My chromedriver is installed via Homebrew and evidently it had been updated since I last used it. A search for how to deal with that turned up both of the above solutions as part of this Stackoverflow question [1].
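Before deleting the quarantine attribute (step 2.2 above), it can be worth looking at what's actually in it: roughly a "flags;timestamp;agent;UUID" string that Gatekeeper keys off. ./chromedriver is a placeholder path here:

```shell
# Inspect a file's quarantine state before stripping it.
inspect_quarantine() {
  xattr -l "$1"                        # list all extended attributes
  xattr -p com.apple.quarantine "$1"   # print just the quarantine value
}

# xattr with these semantics is macOS-specific, so guard the call:
if command -v xattr >/dev/null 2>&1 && [ -e ./chromedriver ]; then
  inspect_quarantine ./chromedriver
fi
```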
I have to look up this fucking procedure every time I update our internal executable tools. And for whatever reason the security setting loads up some sub tab for me and I always forget you have to go back to general to find the little thing at the bottom to allow the app.
This is so far beyond reasonable from a UX standpoint, and they have no reason to improve because what am I going to do? Not use macOS to work on iOS stuff? It pisses me off so much.
If you have an automated process to update internal tools, you will likely have a much better UX with updating that process to appropriately deal with quarantine.
Homebrew Cask specifically opts into Quarantine, interestingly; I believe this behavior was to match the general behavior of the system to add this at most places where you could download an app. Very few other third parties opt into this.
> There isn’t a specific identity requirement for this signature: a simple ad-hoc signature issued locally is sufficient, which includes signatures which are now generated automatically by the linker. This new behavior doesn’t change the long-established policy that our users and developers can run arbitrary code on their Macs
So, the sky isn't falling yet, but it is reasonable to be concerned.
> This new behavior doesn’t change the long-established policy that our users and developers can run arbitrary code on their Macs
This irks me, and I don't know why. Maybe because calling it an Apple policy implies it is something that can be changed. I consider it more of a right that I can run arbitrary code on my computer.
So yes, I would agree, quite reasonable to be concerned.
Now we are literally only one step away from not being able to run our own software on Macs without paying Apple $100/year (or whatever amount they want).
They just need to stop accepting ad-hoc signatures and require a registered developer identity. All the systems are in place for this. The only thing they need to justify such a change is a "catastrophe". For example, next year, some macOS malware might pop up that could have been prevented with this signing, and Apple could just use that as an excuse to fully lock down their platform.
Should be possible to get around, no? I guess technically they could lock us out, but my naïve self is thinking that surely there would always be a way to disable it, like how we can disable SIP now through Recovery mode?
I know exactly why this requirement exists (parts of the OS have entirely different code paths depending on whether a binary is signed or not) but it is still annoying to have to ad-hoc sign everything.
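For reference, this is what ad-hoc signing a binary looks like; ./mytool is a placeholder, and the "-" identity means no certificate or Apple account is involved:

```shell
# Apply and then display an ad-hoc code signature.
adhoc_sign() {
  codesign --force --sign - "$1"        # "-" selects the ad-hoc identity
  codesign --display --verbose=2 "$1"   # shows the signature details
}

# codesign only exists on macOS, so guard the call:
if command -v codesign >/dev/null 2>&1 && [ -e ./mytool ]; then
  adhoc_sign ./mytool
fi
```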
The power that tech companies accumulate with tactics like this, and the justifications for that power, are strangely reminiscent of autocratic governments: we decide which programs you can develop and run, and we can levy an arbitrary 30% income tax (on top of regular VAT). But don't worry, it's all for your safety and security!
While there is some truth to the security argument - security after all is sometimes at odds with freedom - good computer security can certainly be achieved without this degree of centralization of power. Maybe you can't protect a determined user from hurting themselves, but that seems like an acceptable price for freedom.
Frankly, I’m afraid your premise is a bit of a straw man argument.
I’m constantly running npm, mvn, sbt, docker, and other tools that download hundreds of megabytes from unknown organizations, hosted on unknown servers, written by unknown developers.
Next to that, I’m running desktop applications downloaded under roughly the same circumstances. And was the update one of them just installed when I opened it genuine? Transmission was 0wned, as was Handbrake. Any others I was never aware of? Perhaps one that I’m currently using?
I have several GB of irreplaceable (to me) photos and financial documents on this laptop. When was the last time I tested my cold-storage restore procedure? (Hint: never.) What if I get hit by a ransomware? What if they grab my GAccount cookie and run away with my identity?
All this makes me fret, and aware of how vulnerable my information persona has become.
I can run Linux, and trust Ubuntu or Debian or whatever to thoroughly audit and verify every line before PGP-signing any package released for distribution (riiight; it’s already a gift out of free will, am I going to make demands now?). I could manage, begrudgingly though, because I’m more interested in using the tool than in constantly grinding against its sharp edges.
But what about normal users? Not necessarily idiots. Just people who haven’t explored the dense thicket of Linux on the desktop, ACPI, and kernel drivers (oh, by the way... what about those drivers?). Don’t they have a right to some trust and an expectation of privacy? (One they can immediately forgo by uploading everything to Facebook.)
Why must everyone constantly have to risk their own neck to defend someone else’s perception of freedom? Why should they all pay (in terms of risk, and time spent mitigating it) for something that someone else presumes would benefit them?
Apple can abuse their grip on their integrated platform. Apple can turn this infrastructure into a rent-seeking scheme, into extortion.
But for the time being, they can’t deliver cryptographic app control soon enough.
Heavily sandboxing apps by default is fine, and some Linux distros are, slowly, moving to do this - see e.g. AppArmor and Snap.
Even giving warnings by default about unsigned apps requesting high privileges would be fine if the implied message weren't basically "everything we haven't checked is malware". Something like "Neither we nor any other provider you've chosen to trust has any idea who made this, and we haven't checked whether it contains malware. This program may steal and delete all your files. Be really sure you trust the author before running this." would be much more honest.
Good security does not require a single entity becoming the sole gatekeeper and taxman for a huge fraction of users.
> Why must everyone constantly have to risk their own neck to defend someone else’s perception of freedom. Why should they all pay (in terms of risk and time mitigating against it) for something that benefits someone else alone?
I'm not advocating for Windows-levels of "install anything with access to everything with barely any warnings". And I wouldn't say you're "constantly risking your own neck" if you deliberately ignore warnings.
In computing as in society, I don't see how we can remove all possibility of getting cheated into hurting yourself (by installing malware in this case) without essentially submitting to some form of autocracy. And I think freedom benefits almost everyone, at least indirectly. As a concrete example, in the Apple/Epic case, an alternative game store would likely result in healthier competition i.e. lower prices. As another example, Hong Kong protesters with iPhones would have had an alternative way to coordinate: https://www.bbc.com/news/technology-49919459
> Even giving warnings by default about unsigned apps requesting high privileges would be fine if the implied message weren't basically "everything we haven't checked is malware". Something like "Neither we nor any other provider you've chosen to trust has any idea who made this, and we haven't checked whether it contains malware. This program may steal and delete all your files. Be really sure you trust the author before running this." would be much more honest.
Well, what you ask is what's written in the very first prompt screenshotted in the blog post; it says "the developer cannot be verified", "macOS cannot verify that this app is free from malware." I don't see how this choice of words is much different from your proposal.
I don't want to go too deep into the "alternative store" discussion, it's much broader than this, but let me just say Adobe Flash. I don't think Apple will ever relinquish the strategic power to force developers to adopt APIs and track their lifecycle, and never again have to deal with the Flash scenario.
If they leave the door open to "alternative stores", good luck explaining to the general public how it's not Apple's fault if <insert major app> works like shite and kills hardware performance. As an example, to this day people still rant about Apple's "proprietary music file formats" when really it's just bog-standard MP4 (it's even unencrypted: you can copy it over to any industry-standard decoder and you're good to go). Good luck with WMA (if that's still around) or whatever madness Sony came up with.
The moment they decided on a major overhaul, you'd see "alternative app stores" advertising "backward compatibility" and "freedom from Apple's treadmill", fragmenting the user experience in an endless passing of blame about whose fault the rot is.
> Well, what you ask is what's written in the very first prompt screenshotted in the blog post; [..] I don't see how this choice of words is much different from your proposal.
There are nuances about the UI and wording as discussed elsewhere in this thread, but my main objection is about Apple positioning themselves as the only one who decides which apps don't get that warning.
> [..] I don't think Apple will ever relinquish the strategic power to force developers to adopt APIs and track their lifecycle, and never again have to deal with the Flash scenario.
I don't see how alternative stores would prevent Apple from breaking backwards-compatibility on an OS they would still control. Even open source projects do BC breaks as they see fit. And I think Microsoft demonstrates that proper BC is something a company the size of Apple could well afford to do if they cared to.
The Flash case could be seen to support my position as well. Wasn't it a case of Adobe getting into a dominant position (for their particular niche) and then "abusing" it by letting Flash stagnate with awful security? It's good that we eventually got rid of Flash, but wouldn't it have all been much easier if Adobe had never become that dominant in the first place?
You can of course say Apple would never let something stagnate in that way, but all companies have their (sometimes shifting) priorities and interests. Often they'll align with you as the user - that's the nice thing about capitalism - but there's no guarantee that they always will (e.g. that Hong Kong example), and a dominant player in the absence of healthy competition is always incentivized to charge as much as the market will bear.
> If they let the door open to "alternative stores" good luck explaining to the general public how it's not their fault if <insert major app> works like shite and kills hardware performance.
Is this really that big of a problem? Seems like something platforms already deal with by surfacing and by default restricting apps' energy use etc, though this too can be a double-edged sword. I have a few apps on Android that need to constantly show a pointless notification just so they can run in the background, and they have legit reasons to do so, and I'm OK with the battery drain.
Again I'm compelled to draw an analogy to society: freedom indeed requires some degree of responsibility and understanding from everyone. Benevolent dictators are a great place to "outsource" all that. The trouble is that they (or their successors) rarely stay benevolent for long, especially if you're not in their ingroup. I've yet to see power accumulation have good long-term consequences in history.
"Y'know, it sure would be a shame if our OS went around telling users your software's a virus. Now we c'n make sure this little problem doesn't happen to you, all you gotta do is fork over the $300 (yearly of course) to join our developer program."
Nothing like a good old protection racket. No wonder Apple's worth trillions of dollars.
This behavior frustrates me as a seasoned (=old) Mac user, but I am simultaneously quite grateful that it exists on my parents' Macs.
It would be nice if there was a Sys Prefs option to add a "run anyway" button to the initial prompt. It wouldn't even need to be on by default. Just give me the option.
If you want your computer back, you can disable the protections: in System Preferences, a new option becomes available under the Security section, "Allow apps downloaded from: Anywhere".
But then you’ll have websites that walk you through changing the setting. At least this way you have to make a decision every time, even if it costs you a few clicks each time you do it.
They can do the same for the right-click technique. Omitting a setting does not change the user's understanding of the decision, it just makes this completely undiscoverable and tedious for users who know what they're doing.
A system setting risks allowing you to make more mistakes.
Imagine I install some app from a trusted third party and am walked through the steps to toggle the system setting to allow installs. Then a year later when I am installing some untrustworthy tool, I am no longer warned (at least not to the same severity) that this tool is unsigned. It leaves me more likely to install that software and end up putting myself at risk in the future.
Take for example the Android settings for installing third party apps. I can enable it on a per-app basis, but that permission persists for the lifetime of my device. If I allow Chrome to install apps for me, that enables apps from ANY site from now until the EOL of my device, to more easily make their way onto my phone.
If I am asked every time (or even periodically) I am given a moment to consider if I know what I am doing.
But the setting doesn't have to get rid of a warning, we're discussing requiring the right-click to even show the option of running the software.
Gatekeeper right now won't even allow you to run an application unless you somehow know and remember to right-click. This is sadistic. Many well-informed users won't even know about it and even more will forget to right-click on the first try. This is far from "forcing the user to make a choice about each binary". It's clear Apple doesn't want users to even be aware that there is a choice.
Well... there happens to be a way to disable that part of gatekeeper functionality if you want to. It’s just not a checkbox in settings, if I recall correctly it’s a terminal command that requires sudo. https://www.imore.com/how-open-apps-anywhere-macos-catalina-... But really, really only do this if you know what you’re doing. You can leave gatekeeper running but in a mode where it is much less restrictive. It may still prompt you if you just downloaded an app from Safari, that sort of thing.
It would be a command-line option. They used to have a system preference to disable signing, but a lot of software (including Minecraft, for a while) walked users through disabling gatekeeper security for the whole system rather than sign their individual software or try and explain right-clicking.
I was surprised to find out that something even worse is happening on default installations of Windows 10: you cannot install non-Microsoft software at all unless you go to the system settings and disable "S mode".
It's impossible for someone who's not technically oriented to know how to disable S mode or even what it is, and trying to get my mum to install Google Chrome on her new computer was harder than it has any right to be.
When did the ability to run software get this bad?
Fellow devs, I have to take the minority view here. How is $99/yr a number that any business should even care about? Even for OSS.
The reality is that the HN audience are complete outliers. Just look at the junk your friends and family install on their machines.
On a related note, the equivalent in Windows is SmartScreen. It prompts similarly to Mac for unsigned downloads as well as signed ones where there isn't yet sufficient reputation on the signing key. That last part is frustrating - we have a downloadable software component for our SaaS. It's not that frequently used and every time we renew the cert (third party BTW, not with MS), it takes a few weeks for SmartScreen to start trusting it.
> How is $99/yr a number that any business should even care about? Even for OSS.
If you’re including solo non-business programmers in that, I’ll say I find it unreasonable to ask of an open-source developer that in addition to giving their time to develop and support their software, they should also give their money.
It’s great that $100 is chump change for you. That’s not the case for everyone, certainly not all open-source developers.
When an application turns out to be malicious, Apple can remotely disable it by invalidating the developer certificate globally.
This is how some Mac viruses were stopped: the developer license was retracted, which quietly made the executables hard to open, slowing the spread or even killing the virus mid-infection.
It also maintains Apple as the administrator of your computer. You may have paid Apple for your laptop, but it is Apple who decides what you can and cannot execute. Options to work around the blocks Apple throws up are reduced and made harder with every new release. I predict that eventually all binaries will need to have Apple's blessing or be signed with a corporate certificate, just like on iOS. It's still years away, but the direction Apple is taking this is obvious.
I don't see any problem with this. If you don't feel comfortable doing this then you definitely shouldn't be running random code from the internet. I would take it a step further and force it to be run from the command line.
Also, what kind of "viable software business" has trouble paying $100 a year?
> Also, what kind of "viable software business" has trouble paying $100 a year?
The issue isn't the money. The blog post was written in the context of the widely reported story of Apple threatening to terminate the developer account of Epic Games, which would prevent them from signing and notarizing their Mac software.
> The blog post was written in the context of the widely reported story of Apple threatening to terminate the developer account of Epic Games, which would prevent them from signing and notarizing their Mac software.
This part is key. It shows that Apple's signing and notarising requirement isn't about money, nor is it about security. Epic broke the rules of the iOS store, and now they'll be forbidden from developing on a completely different platform, just because Apple doesn't like them.
I suspect they will fix their app before this deadline. I very much doubt a judge will rule in Epic's favor for an injunction to removal when they have so obviously orchestrated this problem themselves.
Apple has a single developer program for all platforms. Epic has been given a deadline to stop breaking the contract terms - submit a new version of the application which reverts the remotely activated payment logic.
> Apple has a single developer program for all platforms.
This is the fundamental problem, in my opinion.
When Developer ID was introduced, it was supposed to be a mere formality, a cryptographic guarantee that an app was signed by a particular developer, to be used only for security purposes. At the time, nobody knew it would be used as leverage in an App Store dispute.
Epic didn't simply implement their own in-app purchase plan; they also remotely updated Fortnite to include it after it had passed app review (because Apple wouldn't have approved it, obviously). The whole point of app review is that Apple can assure its customers that apps are safe to run. Having apps that bypass the review by updating themselves subverts the whole process. You may not agree with the app review process, but it's clearly a feature that Apple regards as an important selling point.
Given that Epic has demonstrated a willingness to subvert the app review process, why should Apple allow such an untrustworthy developer to place its apps on any Apple platform?
(To be clear, Apple has said that they will gladly allow Epic back onto the App Store if they remove the private in-app purchase mechanism, so apparently Apple has not fully adopted this viewpoint -- at least, not yet...)
I would request that we please not rehash this debate here. It's been debated at length already elsewhere on HN. This is why I didn't mention Epic in my blog post, even though it's clearly the subtext.
The point of my blog post wasn't to defend Epic, but rather to explain how difficult it would be for a developer to continue distributing software on the Mac without a code signature approved by macOS. The frequent suggestion is that users could "just right click", but it's not that simple.
The issue isn't from the user side, but from the developer side. Most potential users will see the dialog, not even know they can work around it, and trash the app.
Whether or not this is a good thing for most users is certainly arguable, but it still doesn't sit right that you have to pay Apple and get their permission to distribute macOS software that isn't presented to the user as likely malware.
The minute you admit you are in Iran, American companies aren't supposed to sell to you, generally speaking. So, yeah, that seems like a weird comment.
There was no intent at all to be disrespectful. The comment caught my eye in part because I have something of a personal interest in Iran.
I used to speak regularly with an Iranian who was, among other things, a software developer. My general impression is that money was not nearly as big a problem as other things, thanks to the embargo.
Your comments don't quite fit with my understanding of things, which could be just my lack of knowledge about a lot of things. If you are Iranian and, thus, your primary language is Farsi, perhaps it's due to a language barrier.
I'm trying to bow out of this discussion and already deleted one of my comments. It seems like a rather lot of negativity over a minor observation on my part. So it doesn't seem like a good place to try to start a meaty discussion of open source in Iran, or I think that would be interesting to me personally, in spite of my limited knowledge of code and so forth.
Iran's economy has changed significantly in the past few years. University professors that used to make around ~60K USD a year just a decade ago are now making 8-12K USD a year today.
Money is definitely a problem, and even if you have the money dealing with American companies completely closing off everything is a pain.
Thankfully Android, being a little bit more open than Apple has allowed a ton of small software startups to become successful in Iran, while Apple has stifled them and has actively removed Iranian apps from the AppStore.
This brings back me to my original point that if we keep wanting Apple to keep control of what can be published and used on the iPhone then what happens when a questionable US government decides to start banning apps that help people organize? Or apps that help people combat and fight injustices?
Apple, keeping the strict control they have on the iPhone, is making the future of computing under a questionable state a dangerous thing.
It's been a few years since I've had any contacts in Iran. I wasn't aware the economy had gotten so much worse, so my apologies. My information is out of date, apparently.
I did look up some things about the embargo and open source platforms at one time. My knowledge of programming is limited. I know a little CSS and HTML and I run some websites, but I don't really think of myself as a "programmer." So I probably can't meaningfully add anything here.
I'm generally on your side. Thank you for the interesting comment.
I spent years homeless. There are sometimes homeless developers in the US. You don't have to actually go asking snarky questions about wage rates in other countries to make the point that, for some people, that's a meaningful barrier, even for the software industry.
If that's the point you want to make, there are much better ways to make it.
I can't speak for @aaomidi who made the original comment. I can tell you that my own reply wasn't snarky. I simply pointed out that Iran isn't the only country in the world where the value of $100 is much different from its value in the United States. It seemed counterproductive to the discussion to get hung up on Iran's special status, so I mentioned a country I happen to know quite well.
I don't think there's anything wrong -- or snarky -- with pointing out that there are whole countries out there where each and every developer would have as much trouble coughing up the necessary $100 as you would have had during your years of homelessness.
1. As part of the US's sanctions against Iran, Apple does not do business with Iran, period. Neither does Google. That's why people install Cafe Bazaar (https://cafebazaar.ir/), an alternative app store, on their Android phones the moment they get them—they cannot install apps from the official Play Store, nor can Iranian developers upload their apps there. The same is true of the App Store of course, and there is little recourse there.
2. Your information regarding salary seems terribly outdated. There have been several sudden and dramatic losses of purchasing power and reductions in the exchange rate of the Iranian Rial, cumulatively by a factor of more than 7, during the past year. At the current exchange rate of ~220,000 IRR to 1 USD, the monthly salary of an Iranian developer would be around ~160 USD. Breaking the 100MM IRR/~454 USD barrier would be difficult for most developers. So even if the sanctions were lifted tomorrow, only large corporations would be willing to spend 100 USD at the current exchange rate, not independent developers.
$100 can be an hour of a developer's salary here, or even less. The relative cost is quite large. (Oh, and guess who isn't making a developer's salary in Iran? Someone who is just starting out, or an open source developer, or someone who is currently unemployed…)
But if you're just starting out or only releasing your code for free, then you're not a "viable software business", which is what the article was complaining about.
With a laptop you're getting something worthwhile out of it, a physical object that can last you for years.
With a gate like the developer account that's far less possible.
It's also easy to buy a mac laptop from the blackmarket in Iran (since Apple can't legally export their laptops there), but you can't really do that with a developer account.
The actual java binary (JVM) can be (is?) signed and used for many different apps/programs. But the .jar file that is executed probably can't be signed.
(Note: I have never done any "native" Mac programming)
As a data point: I am indeed a new Mac user, and I would never have guessed how to override gatekeeper and run the app if it weren't for (I think) stackexchange or a similar site providing detailed step-by-step instructions on how to do it. I'm a Linux power user so googling is no strange thing to me, but still, macOS really goes out of the way to hide this choice!
In my opinion, it's simply not possible to learn how to override it by following macOS UI "hints". Every step of the way seems designed to hide this possibility, instead of giving users a warning and a clear choice.
Disabling Gatekeeper
1. From the Apple menu, open the "System Preferences" application.
2. Click on Security & Privacy > General tab.
3. If the lock in the left-hand corner is locked, click on it, then enter your Mac's username and password. This may not be required.
4. Click "Anywhere" under "Allow applications downloaded from:".
5. If you followed Step 3, please click the lock in the left-hand corner to return it to its locked state.
6. Close "System Preferences".
Since macOS Sierra the "Anywhere" option is not visible by default in System Preferences. To set Gatekeeper to "Anywhere" you'll need to do it via the terminal:
$ sudo spctl --master-disable
Then the “Anywhere” option is visible in the System Preferences UI. But only while active.
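For reference, you can also check the current state and revert the change the same way. A sketch (note: on the most recent macOS versions the flag has reportedly been renamed, so verify with `man spctl` on your system):

```shell
# Check whether Gatekeeper assessments are currently enforced
spctl --status

# Disable Gatekeeper (reveals the "Anywhere" option in System Preferences)
sudo spctl --master-disable

# Re-enable it when you're done experimenting
sudo spctl --master-enable
```

`spctl --status` prints "assessments enabled" or "assessments disabled", so you can confirm the toggle took effect before reopening System Preferences.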
Sounds like a security disaster for non-technical users. It is much better to trust a specific app from now on than to trust the entire internet from now on.
I've seen a few apps get around this by packaging them in a standard .dmg and giving the user clear instructions ("just right-click") as the background image of the opened container, or as the name of the app, or on the download website, etc. Although generally these have been fairly tech-oriented apps where the users would be likely to know the shortcut anyway!
I would say this is an example of how the user interface - and some of back-end plumbing - of a desktop environment [1] is not designed to serve users, their needs and interests. Instead, it's designed to serve Apple's needs and interests at the expense of users.
For me, this is a much stronger reason to avoid Apple software than whether it's FOSS or not. I can live with a company which makes some proprietary app. I mean, I am annoyed that access to it is restricted, but at least the "deal" is upfront.
When what I see and what I can run and how it runs is the object of direct partisan manipulation, and trade between commercial companies for epistemic access to a captive user-base, that's a whole other story and it just makes me sick.
---
[1] : I'm letting Apple enjoy the benefit of the doubt and assuming the "signing" business is not enforced at kernel-level, only by the graphical desktop environment's application launching mechanism.
I'm the main contributor of an app that emulates Windows's alt-tab feature on macOS (https://github.com/lwouis/alt-tab-macos). I researched this topic extensively, and eventually decided that I would bite the bullet and pay from my own pocket, so that users get a good UX when launching the app.
It is displeasing to see that Apple doesn't have an open-source program in which they give free certificates to popular open-source projects after a review. They are a large beneficiary of OSS, after all.
Furthermore, smaller companies do it frequently these days: Jetbrains gave me free IDE licenses, poeditor gave me a free account, github hosts the project code, ticketing system, and is the distribution channel for first downloads and updates, appcenter hosts crash reports for free, travis does the CI for free, etc.
I came across this problem when I wrote a small program for a humanities professor to help him draw some diagrams. He didn't want a web app ("it'll disappear once you graduate!") so I wrote him a mac app and emailed it... it was such an adventure getting him to ignore the security warnings to run it...
I mean, you're right, but I feel like the approach we're taking is that we have to cater to the least tech-savvy person that might ever use the computer, even if that damages the experience for millions of folks who are tech-savvy. I really disagree with this one-size-fits-all approach. It seems like a local optimum, at best.
Of course, more tech-savvy users can always use a system that doesn't impose this on them, but I really wish Apple would provide different experiences to different users. If they hadn't diluted the "Pro" moniker, I'd advocate for "Pro" editions of MacOS that remove all this stuff. As it is, every amateur user thinks they should get a Macbook Pro, and my user experience on any distro of Linux is miles ahead of any other platform available. That's fine for me (I'll probably never leave Linux unless something drastic changes), but what about folks that love Apple and are tech-savvy? Why should they have to jump through increasingly arcane hoops to do very reasonable things (like run unsigned code) on a machine they bought? It seems like a massive missed opportunity.
Apple needs to finish migrating their casual userbase off MacOS and onto iPadOS. Macs should be reserved for engineers and other technician oriented jobs in the various industries. That should be the delineation.
The industry tried to morph desktop computing into the world of Windows XP playschool friendly computing. Teach grandma for the 6th time how to click the start menu, etc.
That was a stopgap solution, and that era is over. We need to move all of those people off of computers and onto consumer-safe devices. Then restore desktop computing back to its originally intended audience, take off the bumpers and give unregulated access back to serious computer users who want to build important stuff, not get constantly nagged and prohibited from doing so.
I developed a pretty cool piece of freeware to control a consumer hardware device and I ran into this "very scary Windows dialog" problem. I specifically don't want to charge for this application as then it's a business and it's just a hobby for me.
Thankfully one of my users is also a developer and now he signs all my releases.
What does this have to do with users being savvy or not? Unsurprisingly nothing at all.
Apply that thinking to all the other "bad things people could be tricked into doing" and that's how you end up with an authoritarian government controlling every aspect of our lives...
Why don't operating systems provide a sandbox for running unsigned apps by default?
If your app is signed and secured, it runs in the current environment. Otherwise it runs in a copy of the current environment, with restrictions on what it can send to and receive from the world.
I ran into this signing issue a few years ago when trying to distribute a binary for applications (tips here if you need to do the same: https://henvic.dev/posts/cs-security/), and while it's a hassle for developers, I really appreciate that this is making computer applications safer to use.
The next step is naturally adding boundaries about what applications can do just akin to containers, sandboxes, or permissions found on smartphones operating systems. I just hope some sort of standard emerges so we don't have each major vendor implementing their own incompatible system (okay, too late).
Is this only for GUI applications? Admittedly I have not done much development on Macs, but I have done some exclusively command-line work post-Gatekeeper, and I have never seen that dialog (or a textual equivalent) appear when running binaries I compiled, and even copied across machines, from the Terminal.
Incidentally I have never run those binaries via the Finder either, so it makes one wonder where exactly this check is --- is it something Finder does when you open apps, or when GUI libraries are loaded, or something much lower-level in the kernel, like on an exec() call? I don't have a suitable machine around at the moment to check, but the need to right-click suggests Finder is doing this?
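As I understand it, the trigger is the com.apple.quarantine extended attribute that browsers (and other apps that opt in) set on downloaded files; Gatekeeper's check fires when you launch something carrying that flag, which is why locally compiled binaries sail through. You can inspect it yourself (the path below is just an example):

```shell
# List all extended attributes on a freshly downloaded app (path is hypothetical)
xattr -l ~/Downloads/SomeApp.app

# Print just the quarantine attribute, if present
xattr -p com.apple.quarantine ~/Downloads/SomeApp.app
```

A binary you compiled yourself, or copied over with scp/rsync, never gets the attribute, so no dialog appears regardless of whether you launch it from Finder or Terminal.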
I think it's fine to have more checks for "normal" users, but I'd like to see a better UI for experienced users.
There could be a pro-mode app/setting that lets us tone down the warnings a bit and gives the extra "allow" option already in the first dialog.
And/or let app developers add an extra dialog to ask for permissions on install (e.g. in Homebrew).
I haven't tested this but apparently you can disable Gatekeeper completely using "sudo spctl --master-disable".
> Can you distribute Mac software over the internet without signing it, thereby avoiding Developer ID and notarization entirely? Technically, currently, yes, although Apple has indicated that a future version of macOS may not allow unsigned code to run at all.
To my knowledge, this depends on what APIs you wanna use. Using certain capabilities like Network Extensions (the on-device low-level networking APIs) requires paying $99/year for the Apple Developer Program. See https://developer.apple.com/support/app-capabilities/ for more details.
Man, I really love my Macbook pro. Such a nice piece of hardware, and the OS is a joy to use. But the more and more I learn about about their business practices, the more I wonder if they are really in line with my own values, and the more I think about how much benefit I really gain from the OS.
Probably not as much as I used to think.
I'm starting to realize something. I'm almost certain I would be just as happy with a Linux capable laptop, loaded up with Ubuntu 20.04. I'm pretty sure my next laptop purchase will not be another Macbook pro. It will surely save me a ton of money, and I'm just not convinced the walled garden is adding enough to my life to make it worth it.
While I second the Kubuntu/KDE recommendation, I would suggest not using a Mac theme. Because that might get you in the mentality of trying to make a Mac out of Linux, which it is not. I recommended learning to use Linux like Linux with all its unique ideas and it's a joy to use.
I've been interested to try it out, but I've never hated Ubuntu desktop as much as some seem to, so I've never gotten up the motivation up to give it a go. My boss is very adamant about Linux Mint, and I tried one if his installations before, but I guess I just don't care enough about the desktop experience to make it worth it. Give me a terminal, a text editor, and a browser and I'm happy :)
This is just a cash grab from Apple, and part of a series of unethical behaviour by a monopoly (eg app store dictatorship).
This would be totally unnecessary if apps could run in a sandbox. It wouldn't matter if I ran some random game I downloaded off the internet if it couldn't do anything on my computer outside of its own sandbox.
And to the people that say it's not possible, just look at web browsers. Each website runs in a complete sandbox.
For apps that do need to use operating system functions, you can escalate privileges with user consent. For example, mobile apps ask:
"This app wants access to your camera"
macOS ships with some of that already; users immediately complained that it turned their OS into Windows Vista. Perhaps the underlying problem is that Apple's sandbox is not under the control of the user, it's mostly controlled by Apple and (to a lesser extent) by the developer of the software itself. It's really a strange model if you think about it…
That's actually an interesting way of looking at the problem of untrusted code. We might still need a way to signal to people what pop-ups really mean so we avoid the blindly yes-yes-continue clicking, but in general if things are as sandboxed as they are on mobile (sans the sensor and network access that still allows you to do a million invasive things, so perhaps more like websites where the software is closed when you close it) then we might not need this sort of scheme.
I (a savvy and sophisticated dev who knows all about right-clicking) got stuck with this today with an unsigned binary of a command-line program. Right-click open didn't work since the OS didn't know which program to use to open it, so there was no "open anyway" option. Launching from the Terminal gave me the familiar MALWARE! PRIVACY! pop-up. Solution was to open System Preferences -> Security, click the lock, then below the radio button for "allow apps from App Store/identified developers" there was a button allowing a manual override.
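For unsigned command-line tools there's also a Terminal-side route, assuming the usual quarantine mechanism is what's blocking it (the tool name here is made up):

```shell
# Ask Gatekeeper why it rejects the binary
spctl --assess --verbose ./mytool

# Strip the quarantine attribute so the check no longer applies to this file
xattr -d com.apple.quarantine ./mytool
```

This only affects the one file, so it avoids the "trust the entire internet" problem of flipping the global setting.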
Just thought I'd point out this is bad for Unity and Unreal. Tons of student devs making small games putting their apps on itch.io to share, no longer allowed unless they pony up $99 to Apple.
This is my biggest concern too, cross-platform game dev is just so hard to do and you even have to deal with stuff like gatekeeper. I was just thinking about giving up cross-platform entirely now and just do the web which is plain simple
Apple doesn't provide good documentation or tools for manually creating app bundles, signing, notarization, and stapling. When you cannot use Xcode, or don't want to, you're in Apple hell.
I've ported an already running and shipped desktop application from Linux and Windows to Mac. First and foremost, you save some time on the network stuff because there is still some POSIX in macOS, and you also get a usable shell.
We cannot rely on Xcode, because it doesn't reliably support Meson or anything else outside of Apple's own world. Compiling code with Homebrew works, yes.
Creating the app bundle itself is the first big burden, because outside of Xcode you have to create it fully manually, using 'otool' to adjust every library path that is used internally. Even the icons hurt you: you cannot provide one big PNG or SVG (Linux) or embed one into the executable (Windows); you need an icon for every size someone can imagine. ImageMagick is your friend. What year is it? 1995?
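For what it's worth, the icon dance can at least be scripted with the stock sips and iconutil tools instead of ImageMagick; a sketch starting from one large PNG (the source filename is an example, the iconset filenames follow Apple's required naming):

```shell
# Build an .icns from a single 1024x1024 PNG
mkdir MyIcon.iconset
for size in 16 32 128 256 512; do
  # standard-resolution variant
  sips -z $size $size icon1024.png --out MyIcon.iconset/icon_${size}x${size}.png
  # retina (@2x) variant at double the pixel size
  sips -z $((size*2)) $((size*2)) icon1024.png --out MyIcon.iconset/icon_${size}x${size}@2x.png
done
iconutil -c icns MyIcon.iconset   # produces MyIcon.icns for the bundle's Resources
```

The resulting MyIcon.icns goes into Contents/Resources and is referenced from Info.plist via CFBundleIconFile.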
Because we cannot use Xcode and have to build things repeatably, we also stick with 'codesign'. It is horrible to use, because its recursive option is not reliable and you have to sign nearly every bit in an app bundle. On Windows you sign the executable or the whole installer; here you sign nearly everything.
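In my experience the reliable pattern is to sign inside-out yourself rather than trust the recursive --deep flag; roughly like this (the identity string is a placeholder, and the dylib path is just an example of nested code):

```shell
# Placeholder signing identity; substitute your own Developer ID certificate
IDENTITY="Developer ID Application: Example Corp (TEAMID1234)"

# Sign nested code first: every dylib, framework, and helper binary...
codesign --force --options runtime --timestamp \
  --sign "$IDENTITY" MyApp.app/Contents/Frameworks/libfoo.dylib

# ...then the outer bundle last
codesign --force --options runtime --timestamp --sign "$IDENTITY" MyApp.app

# Verify the whole thing strictly before shipping
codesign --verify --deep --strict --verbose=2 MyApp.app
```

The --options runtime part (hardened runtime) is what notarization later requires, so it's worth baking into the script from the start.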
But watch out: the certificates included by default in your Apple account are likely all false friends. They're badly named and described, and if you aren't the account owner you cannot even see that you're not getting the needed type of certificate. So be careful with company accounts! Always ensure that multiple people have permanent owner access and can immediately accept new terms, or you cannot ship anything anymore. This happened to me this spring/summer :(
Signing done? Fine. Now hope that your third-party code doesn't show nasty bugs after signing, like the code from our partner did. Also, don't expect macOS to tell you via the console what is going wrong and what you need, though it could maybe help a little.
Then uploading for "notarization" follows. I hope you didn't do anything wrong, like placing the right file in the wrong directory of the bundle, or it will be declined.
Finally, attach (staple) the notarization ticket to your bundle! Otherwise every Mac out there will ask the Apple servers whether it is "okay to execute this code now". Without stapling, privacy is lost, and startup is slowed down too.
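For completeness, the notarize-and-staple round trip boils down to roughly this. notarytool needs a stored keychain profile; 'notary-profile' and the bundle name are placeholders, and since these tools exist only on macOS with the Xcode command line tools, the whole thing is guarded so it no-ops elsewhere:

```shell
#!/bin/sh
# Sketch of the notarize-then-staple round trip (macOS-only tools).
APP=MyGame.app
if command -v xcrun >/dev/null 2>&1; then
  ditto -c -k --keepParent "$APP" "$APP.zip"        # zip the bundle for upload
  xcrun notarytool submit "$APP.zip" \
    --keychain-profile notary-profile --wait        # upload and wait for verdict
  xcrun stapler staple "$APP"                       # attach the ticket
  spctl --assess --type execute --verbose "$APP"    # ask Gatekeeper's opinion
else
  echo "macOS-only: xcrun not found, skipping"
fi
```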
If you're going to support macOS, don't assume that portable code is portable. The shipping is actually the hard part.
To me, this is a lazy solution by Apple. It's easy to keep a whitelist of apps that were manually reviewed (for money!) and just block everything else, leaving anyone who isn't willing to join the club (or cannot afford it) out in the rain.
A more appropriate approach would be to use an actual malware detection system, similar to what AVs do when checking binaries (checking fingerprints etc.). AFAIK this is what Windows does, and it's way more inclusive.
As macOS market share increases, Apple definitely doesn't want to become another attractive platform for malware developers, the way Windows XP was.
I guess if they made it more intuitive for users to run unsigned apps when they want to, everyone would still be happy with those restrictions. It's just about adding a clear message and additional buttons. Then I would be happy having the option, and my grandmother would also be happy with some level of protection.
I'm actually a big fan of this. Since macOS has become a lot more prevalent and most of my family and friends use it, I was asked A LOT less for IT support.
> Finally, we see the instructions to right click. Err, control click.
Just recently I was on my iMac playing the "free guess" minesweeper where guessing is safe whenever you're forced to guess. The game needed a right-click to mark mines, and I got frustrated by Apple's magic mouse being prone to misinterpret right-clicks as left-clicks.
Now I know how to be in control with my right-clicks!
You can ensure it doesn't interpret a right-click as a left click by lifting the fingers on the left side before right-clicking, then it'll work every time. Or do like me and ditch the magic mouse, since it's insane that a mouse would force you to do that just to be able to right click with any consistency.
The approach may be to start educating Apple users to understand that these messages do not necessarily mean that the app itself is malicious or from a malicious source.
Yes, I know that ideally Apple should allow open-source apps, or at least have a mechanism for open-source apps to be verified by Gatekeeper (and that is something we can campaign for), but in the absence of that, we can at least educate the users.
How do you ad-hoc sign from Windows? I ran into this just yesterday and had to explain to people how to Ctrl+click. I'm not planning to give Apple anything, and the users already know that. What's funny is that on Windows you can also get an app signed via a third party (not sure if this still applies) and avoid this kind of scary dialog... just because you paid someone.
> Can you distribute Mac software over the internet without signing it, thereby avoiding Developer ID and notarization entirely? Technically, currently, yes, although Apple has indicated that a future version of macOS may not allow unsigned code to run at all.
I don't mean to keep beating a dead horse on this subject, but why are we acting like this is only a possibility? Apple is almost certainly going to remove the ability to run unsigned code in the future.
A day or two ago I wrote[0] about the timeline that took Facebook from guaranteeing that you'd never need to sign into a Facebook account on Oculus to requiring a Facebook account on Oculus. Different company, same story.
We spent a long time having concerns dismissed, and then once everyone was used to the idea and the uproar had been reduced to a manageable level, Facebook did it. People get told that they're paranoid when they express concerns about the future. Then those concerns turn out to be correct, but by then the concerns seem less dystopian, and we've moved on to dismissing other concerns even farther down the road.
I'm not going to find the other threads and articles, but:
Voice assistants: same story, different companies. Concerns about recordings leaking and being distributed outside the companies to third-party contractors were all paranoia until they weren't.
Facebook, again: same story different company. Facebook would need to be stupid to use 2-factor phone numbers for advertising and promotion services, the people worrying about that scenario were paranoid. Until they were proven right.
Browsers: same story, different companies. You cannot run unsigned extensions in Chrome. You cannot run unsigned extensions in Firefox unless you are on the beta or developer branch. In both cases, even though Firefox technically has an escape hatch, the effect is the same: normal users no longer have the unrestricted ability to write software for their own devices.
I'm not going to argue that Mozilla's worries about malware aren't real, I'm not even going to argue about whether or not they made the right decision overall. BUT, anyone who thinks for one second that Apple isn't in a position to bring up the exact same security justifications for removing unsigned code from the Mac is fooling themselves.
We keep on taking these companies at face value, assuming the most permissive, conservative version of their policies, and then using that assumption to avoid talking about the real dangers of a corporate war on user-controlled general-purpose computing.
When Mac signing came out, so many people were telling me that it was stupid to object, because this was just about stopping specific malware. It would never be used to enforce a ToS or directly punish another company. So when we have conversations about Apple's dominance in the space, about what walled gardens mean for Apple, we need to have those conversations under the assumption that the most likely future is one where those same exact policies apply to both Apple phones and Apple desktop computers.
Also importantly, Apple considers it legitimate to refuse to notarize apps if you break unrelated rules that have nothing to do with distributing malware and give no reason to think you're shipping anything dangerous. This was part of their threat to Epic Games.
I have a 2015 Macbook, and for the last year or so I've known it'll be my last Apple product. Perhaps in fifteen years when they've been displaced and have made a semi-contrite return a la Microsoft, I might look at buying their stuff again.
What about CLI applications and scripts? Why doesn't macOS seem to prevent users from running those when downloaded from the internet, given that they can just as likely be malicious by Apple's logic?
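As far as I understand it, the answer is the quarantine flag: Gatekeeper only kicks in for files carrying the com.apple.quarantine extended attribute, which browsers set on downloads but curl, scp, and git do not. A quick way to see (and remove) it; macOS-only, so guarded here, and the paths are placeholders:

```shell
#!/bin/sh
# Inspect and strip the quarantine xattr that triggers Gatekeeper checks.
if command -v xattr >/dev/null 2>&1; then
  xattr -p com.apple.quarantine ~/Downloads/SomeApp.app 2>/dev/null \
    || echo "no quarantine attribute set"
  # Stripping it (recursively) makes Gatekeeper skip the bundle entirely:
  xattr -dr com.apple.quarantine ~/Downloads/SomeApp.app 2>/dev/null || true
else
  echo "macOS-only: xattr not found, skipping"
fi
```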
As I understand it, the article was referring to applications built with Apple development tools. How does Gatekeeper treat applications developed in some other way, say an Electron app?
It's hard for me to understand how so many people in their right mind can spend thousands of dollars on Apple stuff that is made to screw them and transform them into toddlers!
Software security 101: if there is an "accept the obvious risk" button, the user WILL DEFINITELY CLICK IT. No matter how clear you make the danger, they always choose potential malware over inconvenience. It's better hidden, because otherwise that button would be the center of any social engineering attack.
> On macOS Catalina, Gatekeeper not only checks whether the software was signed by a valid Developer ID certificate, it also "phones home" to check whether Apple has notarized the software, again refusing to run it if the check fails.
Nit: I believe stapling is supposed to fix this issue.
> Nit: I believe stapling is supposed to fix this issue.
No, even stapled apps phone home. The difference is that stapled apps can still run if Catalina can't contact Apple (e.g., no internet), whereas unstapled apps can't.
Look closely at the Gatekeeper dialog with and without your internet connected.
It's good that at least one company these days still cares about protecting its users from viruses and malicious software. I feel like most other companies are just not doing enough.
All of these people saying: switch to Windows, it's better than Mac!
Have you really done that? REALLY? Because I work on Mac, Win10 and Linux every day. And Win10 doesn't even come close to the other two in terms of reliability, stability, and lack of unnecessary bullshit.
I just don't understand how anyone can claim with a straight face that Windows is better than macOS. Maybe for gaming. Maybe. The cost/fps is clearly better for Windows machines, but Apple puts so much more thought into their OS than Microsoft does. Hell, Windows still pops up a Win95-era dialog for drivers. Come on, man. Seriously? macOS doesn't have built-in ads on the home screen. macOS has no REAL viruses, and clicking on malware is even harder now than on Windows.
Downvote me all you want, but I think a lot of people on this thread have never even used a mac for more than five minutes, let alone developed on one. HN monoculture is real.
> I think a lot of people on this thread have never even used a mac for more than five minutes, let alone developed on one.
I have when I started a new job and they gave me a macbook and said it's the only option. After 2 weeks I told them I'll need a linux box asap or I'll look for a different job.
It's in my opinion the absolute worst garbage of an operating system I've ever had the displeasure of using and their window manager is a completely unusable pile of shit. Their hardware doesn't have proper cooling and burns your hands if you actually use your CPU.
People have different tastes and I really don't think you should assume people haven't tried.
Yes, I too think Windows is way better than OS X, though I also think Windows is pretty shit. Still, I wouldn't quit my job over having to use Windows; I would have for OS X.
What Linux distro are you running that makes the experience worth it over macOS? I've grown up using Macs and am quite used to the UI. I have a work machine (T580) and have used Ubuntu and Fedora. Both have been quite challenging to use, with lots of graphics issues. Sure, the thing runs everything I throw at it in terms of docker/system-services/etc., but the UI stuff is really ugly.
I have the same opinion as well; to each their own. In my case macOS is dead last on my personal list when it comes to developing; I would even put it behind Windows, despite all the Windows issues I hate.
I use both daily. I love a lot of things about macOS, but I also love how insanely powerful my PC is for the price. OS preferences stop mattering when one machine can render a 4K image in 40 seconds using CUDA GPUs and the other would take hours.
Windows 10 is slowly getting better every year and honestly there are only a handful of things left that I think really lag behind Mac, Explorer being the main one.
> reliability, stability,
They're basically identical at this point; my PC runs for months on end just fine. One of my Macs does the same; my USB-C MBP is the only standout, as it kernel panics in its sleep most nights.
If your Windows machine is having stability issues you probably have bad hardware, like my USB-C MacBook.
> and lack of unnecessary bullshit.
I'd say they're both about the same. Windows has its issues, but so does macOS: my Mac asks me every single day if I want to update the OS and there is no "No" option, only "Restart" and "Later". It also warns me 10 times a day that disk space is running low. Yep, I know it's low, because it's a 128GB machine; nothing I can do to fix that.
The days of Apple being against this sort of stuff are over: my iPhone asks me once a month to subscribe to Apple Music when I open the Music app, iPhones have Apple News notifications out of the box, and macOS requests an OS upgrade daily.
>but I think a lot of people on this thread have never even used a mac for more than five minutes, let alone developed on one. HN monoculture is real.
I for one, use mac and develop on it for my job, and I like windows significantly more than mac. Dunno how many people here are like me, but I don't think this kind of assumption is fair.
I used exclusively Mac on my personal machines from 2005 to 2018. Then I switched to a Windows 10 laptop mostly because I was curious and it was cheap.
Windows 10 + WSL is amazing. It's definitely getting better all the time while macOS is getting worse each year. Plus more and more I just seem to only need Chrome, VSCode, and a Terminal.
I did that after 13 years as an Apple/Mac customer. I was enough of an Apple fanboy that I queued on Day 1 for iPhone 4, and did various other things that only diehard Mac users who obsessively read Daring Fireball and MacRumors do.
I use a ThinkPad X1 running Windows 10 as my daily driver now.
The last Mac I bought was so unreliable that it had to be repaired six times by the Apple Store. Every six months the internal SATA cable failed. (Mid-2012 MBP, it's a known design flaw.) On one of those occasions, I had to help the Apple "Genius" understand what a SATA flex cable was, how it plugs into a drive, and provide him with part numbers because he couldn't find the parts on his Genius iPad. Apple refused to give me a replacement machine when I saw the manager after the second incident. It wasn't until the 5th time (!) that they finally offered a replacement Mac laptop... however, it would have been a 1st Gen butterfly keyboard model. Y'know, the model known to break so often that Apple introduced a separate "warranty" just for the keyboards.
It honestly took me until the 6th repair to understand that being an Apple customer (nevermind an Apple developer) is being in an abusive relationship. I had to get out, because frankly I needed to get work done instead of all this Apple Store repair downtime.
And that's not counting the $100s of App Store software I've lost when Apple forgot to renew their App Store Developer certificates & now the OS thinks those apps are corrupt, and I can't redownload them because the App Store doesn't keep old versions. Or how Apple mistakenly revoked the developer certificate for Charlie Munroe Software, so now the Eon business timer app I use has been remotely disabled by Apple, even though there's nothing wrong with it and Apple admits it was a false positive. Much like the false positive that won't let me launch my old REALbasic compiler anymore because Apple has flagged it as malware too. At least I stayed on High Sierra, so the 32-bit software that interfaces with my digital hardware guitar amp still works.
I've had my ThinkPad a couple of years now. I don't "love" it, but it hasn't broken on me in all that time, making it more reliable than all 4 Mac laptops I've owned. The keyboard is better than any Mac keyboard I've used, except maybe the 2000-era Pismo G3. Sometimes reliability and stability are more important - just get work done with the least downtime. I can run more audio software on Windows than was ever available for my Mac. I've never had a virus on Windows and I only run the built-in firewall & anti-virus, and I don't need to deal with that Xcode worm that's going around right now. And while there's a ton of apps that don't use it yet, the Windows approach to HiDPI works better for me than the Mac 2x / 4x system... being able to run a crisp 1.5x on my external monitor instead of jumbo 2x icons is bliss.
Honestly, I do understand where you're coming from, there's a lot Mac OS X got right and a ton of areas for Windows still to improve, but this is not the Snow Leopard era Apple anymore. Apple just doesn't care, unless it's an iPhone. And I switched to Android years ago....
Windows 10 is fine in terms of reliability and stability. In fact I find it better than Mac.
However, in terms of the actual topic, code signing, Windows is almost as bad. If you run an unsigned program now you get a SmartScreen prompt, and you have to click a small "More info" link and then "Run anyway".
It's a bit more obvious than Mac, and I believe there is some system to recognise common but unsigned binaries, but they're still clearly trying to head in the same direction, Apple are just a bit further down the slope.
I switched to Win10 last November after 7 years on OSX (and 9 years before that on Linux). I've had no issues with it. Are you sure you use Win10 everyday?
The problem is that we didn't buy into a walled garden. Mac OS X was an open platform from 2000 until 2012, when Gatekeeper was added, ostensibly for security. It's very difficult if you've invested in a platform for many years, and then the platform slowly transforms into something different.
Has Apple become increasingly hostile? Yes. Since inception? No. For example, my MacBook Pro from 2007 had a user-replaceable battery. I just popped the old one out, popped the new one in, 15 seconds, it was brilliant.
I'm not sure what the point of that page or posting it here is. This system (and others like it) is neither new nor special. It's also not a bad thing, and I haven't seen anyone come up with a better alternative.
Generally you see all major distribution options have signatures with a CA-type trust structure no matter what you use, be it open-source, free or commercial paid software. On Windows, macOS and at least all Linux distros based on dpkg or rpm you have signatures and circumventing that requires a bunch of steps that will prevent most users from shooting themselves in the foot.
Question for you: what exactly does Notarization protect against? I have watched all the videos about it, I read the developer documentation, I notarize my apps because it is required by the OS…but I still have not gotten a single good explanation as to why it's useful. Apple claims that the process is extremely tolerant…so does it try to accept everything but blatant malware? Does it let malware through? What happens if malware does get through? Why does having build-specific certificates help security–is there any reason why Apple would disable a single build of an app from a malware publisher?
> Apple claims that the process is extremely tolerant…so does it try to accept everything but blatant malware?
From what I understand, the most common use case here is to match against known malware inserted into an otherwise normal release, either from an infected dev machine or by way of an attacker coopting stolen credentials. It's not going to guard against truly novel malware for obvious reasons, but the vast majority of malware is a repackaging of stuff that's already out there.
As far as I know it is intended to prevent identity abuse in both the IRL and the cryptographic sense.
It means that it is harder for an attacker to abuse your systems or key material to sign something in your name.
Perhaps the best analogy I can come up with is the dns-01 verification with ACME and a lower TTL. You need to compromise more pieces of the puzzle on a shorter timeline to attack that specific part of the system.
Code signing was always a thing, and you can sign your code without notarizing it. And in fact notarization doesn't really seem to have much to do with the person who wrote the thing you are notarizing–you can notarize other people's software!
They would never admit it, but gatekeeping is another factor from their perspective. I don't judge them for that; it does filter out some crapware. But I don't use OS X or any OS that tries to pull this stuff.
I do. If I remember correctly, it was the website that was hacked, not the signing certificate that Handbrake used (in fact, I think it used none). Code signing would have worked here.
[1] https://www.gatesnotes.com/Books/Factfulness#incomegroups