If I'm reading this correctly: as soon as you drag a path into the Terminal window, anything run from Terminal has access to that path, apparently permanently, through a special hidden capability attribute, bypassing the usual Catalina permissions dialog.
Assuming it's true - I've no reason to think otherwise but I'm not using a Mac at the mo so can't try it - this is fascinating. It has the ring of a quick workaround for something: in desperation we bung in a whole new capability for this, let's hope nobody notices, we'll do it properly in a later release. Is there a better explanation? If it is a grubby workaround, what would it have been a workaround for?
It's a workaround for the new permissions system in Catalina being a complete mess from the get go. Previously, the Terminal in Catalina would pop up a dialogue asking for permission to access the file you'd just dragged in.
Good security or not, this entire system should not have been rolled out unless/until Apple had a sane UI for handling it. They don't.
> It's a workaround for the new permissions system in Catalina being a complete mess from the get go.
I fucking hate it. Programs can write to my user dir and folders within it freely, but god forbid they touch my sacred Downloads, Documents, and Desktop folders! Better ask me first!
It's like it was made by the iOS dev team but also the iOS dev team has never used a computer before.
I had to add ruby to full disk access to get emacs to work, because emacs is launched by a ruby script and it was insane that I couldn't ls my Downloads folder from it.
It’s not perfect, but I think there is sound logic to the way it works. My wife is a fairly average computer user, and she has never added any files outside the folders that are in the user directory by default (Downloads, Desktop, Documents, Pictures, etc.). I think it seems pretty logical that Documents or Pictures for most users will include more personal data that needs protecting than ~/.config or whatever.
Yep. But authorizing things on a file-by-file basis would be a terrible UX, while granting programs blanket access to all of ~/.local wouldn't be very secure, since lots of programs need something from there. It's the same situation as ~/Library for macOS-specific code. App Sandbox avoids this problem by giving each app its own virtualized home directory, so that data sharing between apps becomes opt-in instead of opt-out. It still exists... too bad nobody wants to adopt it.
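The App Sandbox model described here can be sketched as path virtualization. This is an illustrative toy only (the bundle ID and helper are invented, not Apple's API); the idea is just that the app's view of ~/ gets redirected into its own container, so sharing becomes opt-in:

```python
# Illustrative model only (hypothetical bundle ID and helper, not Apple's API):
# App Sandbox redirects an app's view of ~/ into its private container under
# ~/Library/Containers/<bundle-id>/Data, making inter-app data sharing opt-in.
from pathlib import PurePosixPath

HOME = PurePosixPath("/Users/alice")

def container_path(bundle_id: str, requested: PurePosixPath) -> PurePosixPath:
    """Map a sandboxed app's request under ~/ into its private container."""
    container = HOME / "Library" / "Containers" / bundle_id / "Data"
    try:
        relative = requested.relative_to(HOME)
    except ValueError:
        return requested  # outside ~/; left to other permission checks
    return container / relative

print(container_path("com.example.notes", HOME / "Documents" / "draft.txt"))
# /Users/alice/Library/Containers/com.example.notes/Data/Documents/draft.txt
```

Under this scheme the app never sees the real ~/Documents at all unless the user explicitly hands a file over, which is exactly why adoption was so painful for existing apps.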
Okay, so if I turn off iCloud Drive syncing—because, for instance, the free iCloud Drive storage space is a pitiful 5GB—why can't I also turn off these special permissions?
(You can add individual apps to full disk access but it needs to be done manually for every new app you install.)
>It's like it was made by the iOS dev team but also the iOS dev team has never used a computer before.
I have had the same thought for at least three years now.
Apple has managed to pull itself back in 2019 with the Qualcomm settlement (finally admitting they were dumb enough to start the fight in the first place), the iPhone 11, the Mac Pro, and the MacBook Pro 16". Not perfect, late, slow, but at least they reacted on the hardware side. (And the Apple Store is getting some love too.)
But software is taking a beating, and has been for a few years in a row already, especially on macOS. Swift's adoption and future are also not very clear; it seems sticking to Objective-C for another year would be a safe bet. Catalyst and SwiftUI may be another blow to macOS as well[1]; they may be good, but I have doubts about whether they will be great.
There are a lot of hardcoded paths within ~/, including macOS-specific (~/Library/Application Support, ~/Library/Caches, etc.) and Unix-inherited (dotfiles). Essentially all applications need to access at least some of those, so you'd have to grant permission to every application you open. In contrast, hardcoding paths within Documents or Downloads is uncommon and considered an antipattern; thus, most applications only access Documents and Downloads when the user manually chooses a file within one of those directories. If the application is using a standard OS file open/save dialog, it automatically gets permission to only the files the user chose, without having to grant it blanket Documents and Downloads permission.
In theory you could make the permission correspond to "everything in ~/ except dotfiles and Library", but that would be more confusing.
That said, the whole thing is a hack. The "proper" solution is/was App Sandbox, where the whole home directory is virtualized, and apps get a whitelist of what they can access rather than a blacklist. Much more secure! Too bad nobody adopted it.
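The contrast drawn above — per-file grants flowing from the open/save panel versus blanket folder access — can be sketched as a toy permission store. Every name and structure here is invented for illustration; it is not Apple's actual TCC implementation:

```python
# Toy model of per-file vs. blanket grants; nothing here is Apple's real API.
class GrantStore:
    def __init__(self):
        self.file_grants = set()  # (app, path): granted via an open/save panel
        self.dir_grants = set()   # (app, folder): granted via a blanket prompt

    def user_chose_in_dialog(self, app, path):
        # A standard file dialog expresses user intent, so no extra prompt:
        # the app gets access to exactly this one file.
        self.file_grants.add((app, path))

    def allowed(self, app, path):
        if (app, path) in self.file_grants:
            return True
        return any(path.startswith(folder + "/")
                   for (a, folder) in self.dir_grants if a == app)

tcc = GrantStore()
tcc.user_chose_in_dialog("Word", "/Users/alice/Documents/report.docx")
tcc.dir_grants.add(("BackupTool", "/Users/alice/Documents"))  # blanket grant

assert tcc.allowed("Word", "/Users/alice/Documents/report.docx")
assert not tcc.allowed("Word", "/Users/alice/Documents/secrets.txt")
assert tcc.allowed("BackupTool", "/Users/alice/Documents/secrets.txt")
```

The per-file path is both less annoying (no prompt) and less permissive (one file, not the whole folder), which is why hardcoded paths into Documents are the case that breaks the model.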
Given how absolutely terrible "real" computers are at having a meaningful security model (see also https://xkcd.com/1200/), I am entirely in favor of letting the iOS dev team, who have implemented a meaningful security model, design things while making sure they never hear about how "real" computers work.
On the other hand, they're easy to open and repair, and it is the user who decides what software they are allowed to run on their computer, unlike anything that iOS dev team got involved with.
(Oh, and don't forget that basically all older iOS devices contain a local vulnerability which cannot be fixed.)
The problem with that is iOS is only a consumption device, not an actual computer, their security model makes no sense if you are using the device like a computer.
I use my iPhone like a computer. (I think. I mean, I don't spend a bunch of time cleaning up malware on it or reinstalling my Python environment because something got screwed up one day, does that mean I don't use it like a computer?)
You can, but it's not really designed for that usage: hard to access files, inability to create a folder hierarchy, a lot of use cases outright forbidden by Apple...
The "sane UI" in this case is one that has already existed for years, and it's the user intent of dropping a file into an application to give it access.
I'm not a fan of what Apple has done in Catalina, but I don't think your take is completely fair.
"For years", simply opening an app implicitly gave it access to most of the data on your hard drive. Regardless of whether you opened or dragged in a particular file.
In Catalina, Apple wanted to restrict this. As a consequence, in the initial release of Catalina, you could drag a file from your desktop into your Terminal, but your Terminal would be unable to access the file. Which of course is terrible UX!
So what Apple did in this update is give back the behavior you're describing: dragging a file into the Terminal implicitly grants the Terminal access. But unlike in old OS's, the Terminal doesn't suddenly get access to most of the other data on your hard drive as well.
Unfortunately, in order to walk that fine line, Apple had to add an extra, undocumented file attribute. And because Apple policy is that anything TCC-related should be SIP-protected, that attribute is protected by SIP and cannot be manually edited. And because a UI for revoking access to individual files would be a complete nightmare, there's no UI pathway either...
Apple's intentions with all of this absolutely make sense. The problem is that good intentions don't matter if the execution is flawed.
> And because the UI for revoking access to individual files would be a complete nightmare, there's no UI pathway either.
Why would this be a complete nightmare? There's no way to override a permission that's been granted to your ancestor - so why shouldn't the OS be able to remove the flag when you remove the permission in Settings? Why shouldn't the OS be able to maintain the invariant between the permissions dialog and the on-disk representation, given that the on-disk representation can only be set by the OS? That's just sad.
The OS can certainly remove the flag. I just can't imagine what the UI for that would look like in Settings, once you've dragged many hundreds of files into the Terminal over the course of using your computer for several months.
You could put it in the individual file's Get Info pane, but far more users open that pane than know what the Terminal is.
This is the kind of setting you'd want to control via either the Terminal (ironically) or a third party app. But Apple has decided only Apple-supplied software with a UI is allowed to control TCC.
MacOS has some very strange ideas on removing permissions. When you allow kexts in modern OS releases, the signature gets added to an SQLite database in /var/db (you have to consult this database to get the signatures if you want to whitelist kexts in an MDM). Now kexts are quite invasive, hence Apple's caution on allowing them in the first place.
What happens if you want to revoke a kext? Delete the entry from the SQLite DB? Nah. Guess what, SIP prevents any and all deletions from that DB. You have to disable SIP to revoke a kext's permissions. And because the signature is not a hash, but instead a two-part vendor/product, it's entirely possible for a malicious version of an existing kext to be released that is then permitted by the signature.
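That identity-matching pitfall can be sketched with an in-memory SQLite table. The column names below are a guess at the shape of the real kext policy database (which lives under /var/db and is SIP-protected), kept deliberately minimal:

```python
import sqlite3

# Sketch of the signature-matching pitfall: the schema is a simplification,
# not the real SIP-protected database under /var/db.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kext_policy (team_id TEXT, bundle_id TEXT, allowed INT)")
db.execute("INSERT INTO kext_policy VALUES ('TEAM123', 'com.vendor.driver', 1)")

def kext_allowed(team_id, bundle_id):
    # Approval keys on (team, bundle) identity, not on a hash of the binary...
    row = db.execute(
        "SELECT allowed FROM kext_policy WHERE team_id=? AND bundle_id=?",
        (team_id, bundle_id)).fetchone()
    return bool(row and row[0])

# ...so a later, possibly malicious build shipping the same identifiers is
# still permitted -- and SIP blocks deleting the row to revoke it.
assert kext_allowed("TEAM123", "com.vendor.driver")  # any build of this kext
assert not kext_allowed("TEAM999", "com.other.kext")
```

Keying on a content hash instead of the vendor/product pair would force re-approval of every build, which is presumably the UX trade-off Apple made.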
As an admin with security focus, this to me seems completely backwards. I get that Apple don't want to make the permitting operation to be too difficult in the first place, because these are end-users we're talking about, but the lengths they go to in order to prevent the permissions being revoked is downright strange.
This is pretty straightforward in a locked-down Settings UI. Checkboxes that add to staging for a bulk "remove" action, a "select all visible" meta-checkbox, a search box to filter the view. Showing full paths might be tough, but you can wrap lines or ellipsize/tooltip if necessary, and rely on search for power users with deep hierarchies. Gmail has had this exact pattern for eons.
Right, because only Apple-supplied software can possess the right entitlements. If there was any other way to do this, applications could bypass the protection (looking at you, Dropbox…)
This is normal behavior. In DaisyDisk, you must drag the volume to the DaisyDisk window to grant it permission to inspect the volume. This is exactly the same behavior for Terminal.app. I don't see anything wrong here.
What's wrong is there's no way to revoke the permission if you dragged the file by accident. That is a HUGE flaw. Every app has an "undo" function for a reason.
I am talking about a very specific feature of the Powerbox that gives you a security scoped bookmark to access a resource, even if you are sandboxed: https://news.ycombinator.com/item?id=21828138
I dunno. It makes me think of the Zen of Python's "Explicit is better than implicit." .. which is the point of the entire new Vista-style permission granting dialogues to begin with.
Those dialogs recognized the product had to change significantly for the time and were accompanied by many changes below the surface. I'm not sure that the same level of recognition exists in Apple circles.
Is this automatable with AppleScript, or similar? If so, then there are possibly some vectors that Apple are trying to block.
There are likely many better ways to handle these situations, but I'm not sure I really expect it from the recent Apple. The most recent Apple is recognizing issues (i.e. MacBook Pro keyboard), but I doubt the whole organization has caught up.
What, dragging a file and dropping it onto an application to grant it access? I am not sure. I'd like to think that doing this programmatically would be locked down if it allows an application to extend their access without user consent.
It's so darned important to skip the step of asking the user if they want to make the file world, group, or user readable. So darned! I bet the guy who invented this has a patent and is a vp by now ....
I don't think this is bypassing any permissions system. My understanding of how the new permissions system works is that dragging a file into an app grants access to that file. You drag an image into Microsoft Word, and permission is granted to access that file so it can be inserted into the document.
This situation is different because a SIP protected file attribute is set on the dragged file. There is no UI to manage this behavior and it persists forever, unless SIP is disabled.
As other commentators have noted, it reeks of last-minute-workaround to close what would otherwise be a high priority UI bug for the permissions dialogs.
Snow Leopard, 10.6, still is the best MacOS version (except for the fact that it's not supported anymore). Only minor improvements have been made since then, and many deteriorations.
...Who is the original commenter at this point? GP mentioned preferring Snow Leopard, which doesn't have TCC. Cannam and m463 didn't express a preference either way.
m463 (https://news.ycombinator.com/item?id=21830357) totally does: they dislike this feature because they edit the path after copying it, so it's granting access to the wrong thing. They're missing the point that earlier versions of macOS didn't have any such protection at all.
Before moving to catalina, I would like a way to disable it and revert to the old behavior.
Terminal is a trusted app to me.
Actually, I probably won't be moving to catalina. I've been a diehard apple user since I got a G3, but the way apple is going their computers are too nerfed. It's like trying to get excited about buying a car from a rental car trade-in lot.
Perhaps you do. I definitely don't run untrusted software.
In any case, the fact that the Terminal could be allowed to access all the disk has nothing to do with child processes being allowed to access all the disk.
Alas, this approach does not yield fruit for the traditional core Unix files, such as those in /etc; these are not exposed through the Finder interface.
However, there are a lot of potentially sensitive files that could be trivially replaced by modified versions if this tactic works everywhere, e.g. /System/Library/Security/authorization.plist
Learn something new every day! Should have thought of 'open' though.
So if I do this "Terminal drop" thing on a plain file (like sudoers) I gain no special powers, Unix perms still apply. So maybe this only does something on directories.
I don't know. I'm not addressing the underlying problem and its veracity; I'm addressing the semantic confusion of op's intent regarding editing the sudoers file.
The motivation seems laudable: obviously if I drag a file to the terminal I want it to be able to access it, and the fewer confirmation dialogs the better.
But it's the implementation that seems deeply flawed: that there's no obvious way to remove the permission afterwards, or even a record of it where you'd expect to look.
There seems to be a reasonable solution, however: as soon as the terminal window gets closed (and perhaps any background processes have ended), the permissions get revoked. And even if your terminal process gets killed without cleanup, permissions would be revoked when your terminal app relaunches or your computer restarts.
Not really. Consider the case where you are editing a configuration file for some service, need to type a file path, and drag in a folder from the Finder instead.
Even simpler, if you write a small shell script with a hard-coded path in it that you run occasionally, you expect that script to keep running even across reboots. That means that, _if Terminal.app automatically adds a permission_, that permission has to be fairly permanent.
In both cases, under your suggested solution, both scripts would work when you test them, and then silently break when you close the terminal window.
I do wonder whether Terminal.app should automatically add that permission, though. If, instead of using emacs/pico/nano/vim, you use a GUI app to edit that script, that doesn’t happen, either, does it?
Actually, that wouldn't work; AFAIK the binary that executes the script needs to have the permission, so for example if you're adding a cron script, cron has to have the permission, not Terminal.app, since the cron script running has nothing to do with Terminal.app.
You probably are right about that, but it can’t be fully correct. If I type “cat ”, drag in a file, and hit return, it isn’t Terminal.app that reads the file, but cat, launched by my shell (and that’s the simple case; nothing forbids me from nesting a few shells, a few sudo’s, etc. before accessing a file)
⇒ this special permission must either directly apply to all command-line tools or always be inherited on fork and/or execve (it cannot require a magic system call, as the command-line tool reading or writing data might not be Apple-supplied)
I can see how opening this door “just long enough” gets hairy very easily. So, as others suggest, this may have been implemented as a “OK, let’s do it this way for now, and try and figure out whether we can better later”.
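The fork/execve inheritance point above is ordinary Unix behavior: a descriptor opened by one process survives into whatever gets exec'ed next. A small sketch of that plain POSIX semantics (nothing Catalina-specific, and the temp file stands in for a dragged-in path):

```python
import os, subprocess, sys, tempfile

# Plain POSIX behavior: descriptors survive fork/exec, so any per-file
# permission a terminal hands out must be at least this permissive for
# pipelines like `cat <dragged file>` to keep working.
with tempfile.NamedTemporaryFile("w+", delete=False) as f:
    f.write("dragged-in contents")
    path = f.name

fd = os.open(path, os.O_RDONLY)           # the "terminal" opens the file...
out = subprocess.run(                     # ...then execs an unrelated tool,
    [sys.executable, "-c",
     f"import os; print(os.read({fd}, 100).decode())"],
    pass_fds=(fd,), capture_output=True, text=True).stdout.strip()
print(out)  # dragged-in contents

os.close(fd)
os.unlink(path)
```

The child here never opens the file itself; it inherits the already-open descriptor, which is why a permission model pinned to the process that did the opening can't cover the shell case.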
This is not always the case though - in order to run some applescripts, you need accessibility permissions... But you grant that to terminal.app, not applescript or not even bash that invokes applescript
Which process? Remember you are usually (but not always) at the command line prompt to pass an argument when you drag things to the Terminal. You can't trivially find out the process(es) that gets run from there so you are tracking the shell. Do you wait till the shell exits? Many developers spend months in a single shell.
What if the process forks? What if the process double-forks to become a daemon, and then Terminal quits? Who removes the xattr then when Terminal isn't even running? Does the kernel now remember that and need to do I/O when _exit(2) is called?
What if the process forks and itself exits? Does the permission get revoked upon the parent process's exit, and does the child process's access suddenly get revoked? If so, does the opened file descriptor still work, or does it become unusable? If the file descriptor still works even though the on-disk file no longer carries the permission xattr, would it continue to have special access when transmitted over a UNIX socket to an unrelated process? Do you instead track the original PGID instead of the PID? Then what if setpgid(2) is called?
What if the system loses power and no one cleans up the permissions? Do you keep a log of files with such xattrs and clean them at next boot? What if at next boot the file system isn't mounted or mounted at a different mount point?
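One of those questions has a concrete everyday analogue: an already-open descriptor keeps working even after its on-disk anchor disappears. Here unlink(2) stands in for stripping the permission xattr; the lesson is the same either way:

```python
import os, tempfile

# An open descriptor outlives changes to the on-disk state: here the file's
# directory entry is removed, but reads through the fd still succeed.
fd, path = tempfile.mkstemp()
os.write(fd, b"still readable")
os.unlink(path)                  # "revoke" the on-disk state
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 100)
print(data)  # b'still readable'
os.close(fd)
```

So even if the OS scrubbed the xattr the moment Terminal quit, any process holding an open descriptor would retain access until it closed the file, which is exactly the kind of edge case that makes "revoke on exit" semantics hairy.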
The problem is that whoever implemented this change did not care to consider the full workflow lifecycle/consequences of enabling that change. And whoever did the code review on it ALSO failed to consider, or push for, dealing with it holistically and thoughtfully instead of just GSD’ing it (“gettin’ shit done”)
I have a hard time buying that anyone deliberately shipped a security feature that permanently opens access short of rebooting the system into a special mode. A "full deliberation" would presumably include input from a dedicated security team, who would almost certainly say "better not to ship this at all and leave the old behavior in than ship this". So either the security team missed this entirely, or the security team was not consulted, or the security team is not empowered to stop this sort of half-baked feature from going in... I really can't come up with a pleasant scenario here.
"We're trying to ship a system-wide permission lockdown, but we ran into unforeseen complications that will require more than a single release cycle to properly address. Instead of holding up the entire rollout for a release cycle, we'll ship the lockdown today and most users will benefit from it, but users that would otherwise run into the complications get a permissive bypass. Once the infrastructure to address the complications is in place, we'll get the lockdown working for them as well."
The near-complete inability to revert the permission is what argues against this. You could get this past a security team if you had the ability to revert it and Apple could document it away. It is the fact that it is a non-revertible thing (in any reasonable manner) that is the key here. The odds that there's some way to chain this in an interesting way are just too high, even if the security team couldn't immediately come up with it.
Or the security team agrees that the benefit to the 99.9% of macOS users (that have no idea Terminal exists and accepts drag and drop) is worth the additional risk to the 0.1% who use this.
But that's a result of the Agile way of development. The MVP is "be able to drag a program to the terminal and run it", and the next feature, way down on the backlog is "allow the user to revoke permissions granted by dragged-in apps"
It's actually the result of product-owners deciding to release unfinished software, likely responding to incentives from management to ship (boolean) without clear standards about what is shippable (non-boolean).
I don't like this... but in fairness, this is not a regression.
The Documents folder was not protected from Terminal before Catalina. Now, items in Documents are protected unless you explicitly drag them into Terminal-- but there's no way to "re-protect" them.
That is, at worst, users are in the situation they were before Catalina.
The normal privilege UI of drag-n-drop indicating file access intent has been around for many years in macOS.
The tracking has been through signed bookmarks, with no specific place for apps to persist this info. Keeping it in an xattr for ACLs seems like a great solution, and the exact opposite of a “hack”.
Unless you want the OS to spy on the keyboard this is near-impossible to pull off. And even then it's very hard to do well (backspace, tab auto-completion of paths, copy+paste, etc).
I wasn't saying that to suggest a practical use of this "bug", just to point out that if it were as intentional as the above comment suggested, then it should have been expanded to other methods of supplying the file path, like typing it in.
That's what folks want, though (persist across reboots). Let's say you have a script that works now; you reboot, and all of a sudden Terminal can't call it until you re-drag it?
How about you configure your system so that you don't get those "million permission dialogues" (you must copy/paste a lot...), and then the rest of us will have to just confirm temporary changes like these?
It used to be you could trust your software—even most closed source—because only bad guys wanted to exfiltrate data from your system. The simple advice of “never run an unknown executable” was an effective security practice.
Now, the incentives of software vendors—even open source—have changed to include data exfiltration en masse. Off the top of my head:
* Games that scan and upload the file hierarchy of your system.
* Software sync packages that casually provide data to third parties for research (Dropbox).
* Keyloggers embedded in the default operating system (web search in Windows Start; Ubuntu web search; Siri web search).
* Pervasive practice of running untrusted code with JavaScript and the seemingly inexorable march of granting web pages greater system access. Copy something to your clipboard, open a web page, and it can grab it, along with so much more.
* The push toward ever greater “telemetry”.
* Automatic background file uploads, like Microsoft defender sample submission.
* Pervasive ingestion of our data by governments.
The greatest threat to our security has become the vendors and government—the very entities whose job it is to protect us.
Yes, searching for a local file or program 100% of the time because I would never intentionally search for anything else from there.
That's specifically for Windows Start. If Ubuntu or Siri "web searches" are clearly distinguished from OS searches, then of course I'd expect them to send data somewhere.
Wait, how do you know that it does a search per keypress? I've just tried in Windows, and it doesn't seem to do anything until I press enter while the loupe is selected.
I don’t know what use a terminal application is without full disk access: the first two things I do on a new Mac are disable SIP (so I can use dtrace effectively) and grant all my terminal applications full disk access. I’m getting really tired of Apple’s “security” policies that have no user-controllable ways to override the defaults.
What it has made me actually very grateful for is the sudden revelation - to me - that Dropbox, Google Drive and others are secretly reading files that I never explicitly asked - or wanted - them to read. macOS now asks and allows me to block by default.
No, Google, I DIDN'T intend for you to snoop in my Mac ~/Downloads folder - that area is PRIVATE. So in a way it's good for user Privacy, Apple Inc.-aside.
Yes. I was shocked at it, but on second thought I'm not surprised. Since Catalina, security prompts come up early on about Dropbox and Google Drive wanting to access your "Downloads" folder. Caught red-handed.
I know there's been third-party macOS apps providing this 'filesystem firewall' functionality for years now, but it's nice to have some of it come from Apple directly.
I think I will eventually move to Linux, I just don't feel private on my own damn computer anymore. I don't trust Apple either. Or I will harden macOS more and more because of this abuse.
Don’t get me wrong, I appreciate it as a default, it just doesn’t make sense for Terminal.app to not have full disk access from the start: I.e. when I first open the terminal, I should get a security pop up requesting full disk access
If that's what you want, it looks like you can get that by typing 4 characters. This seems like an awfully small price to pay to allow other people the possibility of much increased security.
I'm not sure what GP is referring to by "four characters", but in Mojave you can add Terminal to Full Disk Access in System Preferences, and I highly recommend it.
I assume Catalina would be the same, but I've never actually used Catalina. (And probably never will at this point.)
Remember when Windows Vista first came out, and people reviled the utterly useless intrusive UAC security dialogues that would pop up every 10 seconds? And the inexplicable confusing security model that just broke most things? And the fact that it was a waste of time because it just trained people to hit "yes" all the time?
Apparently Apple doesn't remember, because they seem intent to make all the exact same mistakes.
Personally I’m all for increased protection in an OS, the old model of users having permissions no longer works. You can’t trust your users, they don’t know what they are running when they install software. This new model of each “app” having permissions is far more sensible, more of that please!
The problem is the false alert - you HAVE to press yes all the time to get NORMAL work done.
Same with cookie acceptance popups - supposedly this is allowing the user to know the website uses a cookie blah blah, but you HAVE to press yes in most cases to go forward - so everyone has now been trained to auto click yes all the time. At this point those popups could say anything and at least 10% of users would still click yes.
I like the other approach. NO popup / warning unless something meaningfully unexpected.
The problem is there are a bunch of warriors on the net and elsewhere who claim that users hate cookies and won't accept them, and so need to be warned about them by every website (which can still technically set cookies regardless of any popup). If users don't want cookies they can block them at the browser level - that's how you get actual control, BTW - a scam site may not put up the cookie notice and may still be able to set the cookie.
Meanwhile - no notice required when your ISP tracks your every move AND is the monopoly provider. The internet folks have priorities totally backwards - so much focus on evil google it is ridiculous. The reason many people give google their entire search / email history is that they trust google more than the Chinese phone folks, the samsungs, the comcasts etc.
Some real actions to improve the net:
We need actual criminal prosecution of, and responsibility for, websites distributing malware, browser exploits, or malicious downloads.
Owner goes down even if it was their ad network, they can then sue the ad network and if they can recover from them great, if not they still pay for picking a crappy ad network. If their webhost was hacked they still pay for picking stupid web host, they can then sue webhost to recover.
We need to ban and criminally prosecute the sale of info by places like ISPs and DMVs that are monopoly providers and already have strong paid revenue streams.
> The problem is there are bunch of warriors on the net and elsewhere that do things like claim that users hate cookies and won't accept them, so need to be warned of them by every website (which can still technically set cookies regardless of any popup). If users don't want cookies they can block them at the browser level - that's how you have actual control BTW - a scam site may not put up the cookie notice and may still be able to set the cookie.
Tracking was being targeted, cookies were conflated with tracking (they are, but not really at the level that's being talked about), and some ineffectual legislation was passed to target cookies instead of tracking. It's almost as if the whole process was steered to that point to avoid any useful change with regard to online tracking...
The function of the website for the website owner is to make money. Setting cookies helps with that. The EU law says I have to disclose lots of stuff and get your consent before tracking you. So every website added a disclosure and consent button, and every user clicked on it.
If you think of the wasted power of a billion warnings/notifications a day, you realize how little attention folks will pay to these warnings; the only way to get on with your life is to tune them out.
SERIOUSLY - can't someone do a study in the EU showing that their constant warnings mean folks have totally tuned them out?
That "Analytics Advertising Feature" MUST be unchecked by default. Only users that actually want to be tracked are tracked.
Every "tracking feature" (cookies, fingerprinting, IP tracking, whatever) must be hard opt-in, and the website has to provide an option for the user to opt-out if they change their mind.
If a website only use functional cookies (colours, session, login, cart, language) they don't need consent, just disclosure (and it doesn't have to be an ugly cookie bar).
What would provide users some actual security is if their browser would block this unauthenticated insertion of javascript if it's in a secure page. In other words, DON'T trust the website to do the right thing; just take control at the browser level.
There's a very clear "I refuse cookies" button, which I can click and continue to the website. [1]
The point of those things is that I can refuse cookies or tracking without retaliation and without loss of functionality. Remember: functional cookies don't require consent.
They are doing it right.
It is all the non-compliant companies with only the "Accept" button that are training users to click on it. Those cookie bars are not compliant with GDPR at all.
I agree that this is an excellent approach, but the wording should be a lot clearer. When I see “This site uses cookies to offer you a better browsing experience” and a yes/no choice, I assume I’ll be missing out if I choose 'no'. Really, there should be an explicit reference to 'tracking', and a reassurance that everything will work perfectly if I choose 'yes'.
False, the EU website has an ugly cookie bar with a button called "I accept" that everyone has been trained to click yes on.
"That "Analytics Advertising Feature" MUST be unchecked by default. Only users that actually want to be tracked are tracked."
False, users can be presented with an accept / reject button on a standard cookie bar, clicking accept can opt them into tracking - please LOOK at the EU website example I provided.
"Every "tracking feature" (cookies, fingerprinting, IP tracking, whatever) must be hard opt-in."
This can be done through an accept button on a website that users have been trained to click yes on. My earlier suggestion also stands: someone should study how many users actually navigate into these policies on every website they visit to make fine-grained selections, if such options are even available.
"If a website only use functional cookies (colours, session, login, cart, language) they don't need consent, just disclosure (and it doesn't have to be an ugly cookie bar)."
I gave you an example of an ugly cookie bar on an EU website subject to GDPR - I can find many more.
This is the problem with these folks messing the net up. Everyone should do this / shouldn't do that, but no attention is paid to what is actually happening.
I want to be clear: billions of pages are showing "I accept" buttons. Some have no reject button because they are disclosure only, some have reject buttons that kick you off the site, and some have reject buttons that opt you out of tracking. Users have been / are being trained by the EU alert notices and disclosure-only notices (which generally DO have an "I accept" button) to waste their time clicking "I accept" everywhere.
This is bad for actual user choice, actual privacy.
> The function of the website for the website owner is to make money.
Making money is not functionality to me. (Also, note that your website doesn't have to make money: I run my personal blog at a monetary loss, just like many other people.)
> The EU law says I have to disclose lots of stuff and get your consent before tracking you. So every website added a disclosure and consent button, and every user clicked on it.
Right, what they didn't realize was that everyone would just make it super annoying in an attempt to lampoon the law, instead of actually changing their behavior, because they could make use of their website conditional on consent. Hence GDPR, where now users can actually click "no" and not be penalized for it.
I'm just saying that it's 90% visual and 10% effectual (and I think I'm being generous), and likely makes the average person think good strides have been taken on the problem of online tracking, which I don't think it really helps. It's the perfect level of highly visible and almost useless, such that it may in fact be counter-productive.
Implementations where there's only an "Accept" option are not following the law, though.
The user must opt-in of their own volition, must be able to reject the tracking cookies (or any kind of tracking) and must be able to opt-out later as well.
Btw: Functional cookies (colours, session, login, cart, language) don't need consent, just disclosure (and it doesn't have to be an ugly cookie bar).
It explicitly _doesn't_ target cookies, it explicitly targets any and all stored information that can be resolved to "a specific person", which simply includes things like session and tracking cookies.
Session cookies are allowed (without consent) if your site has something like login or stored settings. You don’t need consent if the cookie is required to provide a function to the user.
Only cookies that are used to track user behavior, and can be tracked back to that particular user are disallowed. So things like Facebook like buttons would require consent (since Facebook will use the info gained to target you with ads in another context) while basic Google Analytics or similar is fine as it only presents aggregated data to the site owner and does not leak data cross-site. Some GA features do require consent though (like demographics tracking) as it requires Google to cross-reference between sites. You can generally turn these off (I think they even are off by default?)
Note that implementation of the law also differs between EU countries. So a few are more strict. It is up to the national privacy agency to set exact rules.
What do you mean? If I have a website and ONLY use a single cookie to save a setting for the user for the background color - I still have to get the user to accept the cookie, right?
Users don't want cookies. It's just that websites have made their tracking apparatus dependent on them, so they force users to accept them if they'd like to access the site.
> Meanwhile - no notice required when your ISP tracks your every move AND is the monopoly provider.
Yeah, no. Nobody likes their ISP, it's just that it's a lot harder to enact change here.
That would be me. I was using cookies in the context of "cookies that cause popups to appear", because the other kind of cookies are useful and don't annoy people.
Indeed! Someone should make a browser API that lets services direct the user agent to set domain-scoped key value pairs that are passed back to the service on subsequent requests. That would let the client automatically prove that they're authenticated while maintaining the server-side statelessness.
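(For anyone who missed the sarcasm: the API described is exactly what the Set-Cookie / Cookie header pair already does. A minimal stdlib sketch of the round trip, from the server's point of view:)

```python
from http.cookies import SimpleCookie

# Server side: direct the user agent to store a domain-scoped key/value pair.
jar = SimpleCookie()
jar["session"] = "abc123"
jar["session"]["httponly"] = True   # keep it out of reach of page scripts
print(jar.output())                  # Set-Cookie: session=abc123; HttpOnly

# Next request: the user agent passes the pair back, proving the client is
# the same one we authenticated, while the server itself stays stateless.
incoming = SimpleCookie("session=abc123")
assert incoming["session"].value == "abc123"
```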
Cookies save me from having to log into every site every time I want to use a service. That is more important to me than not allowing cookies due to some conspiracy-theorist reason.
What I might not want is cookies from OTHER sites like Google being set and/or accessed by non-Google sites. But having worked at web startups, I know firsthand how important google analytics were, so I don’t know.
Exactly. This is why when you double-click on an arbitrary app you downloaded on macOS, there is no “yes” button. You have to bypass the protection by right-clicking on the app or by changing system settings.
This is also why if you access a website with a self-signed certificate, the browser will not give you a “yes” button to go through. You have to jump through a few hoops somewhere else (depending on browser) to accept the self-signed certificate, and then you can see the site.
I dislike the whole "Pop out the system settings dialog and enable things for the application" approach. It throws users into what is likely unfamiliar territory, which just obfuscates things further.
I think a list of permissions with detailed explanations as to what they mean, and a Yes button, is a better approach.
Although there needs to be an option to disable or even mock certain permissions.
Agreed. Especially since there is no explanation why the permission is needed and what it does in detail. I am a very experienced Windows dev but on my MacBook I have no idea what the exact implications are when an app requires access to “accessibility”. I am pretty sure the regular user doesn’t even have the foggiest idea but just does what he is told to do to make the app work.
There is also no log of how the app uses the permission. It would be nice if each use of a granted permission were logged.
The rule of thumb I would use for an “accessibility” permission is: if the application’s purpose is specifically for disabled people (like a screen reader or magnifier for blind people), and you are disabled, then you should choose Yes. In all other cases always click No. Accessibility tends to be a “back door” permission that allows apps to do all sorts of powerful things they shouldn’t, like read the contents of everything you type and scrape the screen. “Normal” apps should never ask for accessibility. If they are, their developer is likely up to no good.
Dropbox, for example, was recently in the news for silently granting itself accessibility permission, and Apple had to update the OS to prevent them from doing it. They still beg users to grant them this, even lying (EDIT: misleading users) on their web page about the purpose of the permission [1].
I have had several legitimate apps asking for accessibility permissions. I guess it comes down to the fact that permissions are not fine grained and specific enough. Same on android and iOS.
Accessibility permissions allow you to do a lot of really great stuff with Apple Events that is otherwise just completely impossible. I use it very heavily on my own machine.
It's certainly a powerful ability, and I can completely understand not wanting to grant it, but realize that the need is legitimate (and non-malicious) in a lot of cases.
This does not really work for things like tiling window managers on macOS—are you suggesting developers of such tools are up to no good, or are such system-control tools not fitting “normal”?
If it can’t be done with non-a11y APIs then it doesn’t fit the definition of “things Apple thinks a normal app should do,” whether we agree with that or not. The way I see it, and I know it’s a controversial view: if a developer is OK with flouting the platform’s rules about the purpose of an API[1], then what other behavior might they be OK with?
1: The first sentence in Apple’s accessibility API docs[2] is: “Accessible apps help users with disabilities access information and information technology.“ I think they are very clear about the purpose of these, and it’s not “to do cool innovative things missing from the regular API”.
Contrary to ryandrake's rule of thumb, window managers are accessibility tools. You could argue that there should be a distinction between accessibility-tools-for-disabled-people and accessibility-tools-that-are-useful-to-everyone, but I'm not convinced such a distinction is useful, especially since Apple would likely try to subvert and destroy the accessibility-tools-that-are-useful-to-everyone, since those conflict with their user-hostile UI design, as https://news.ycombinator.com/item?id=21828816 alludes to with:
> I think they are very clear about the purpose of these, and it's not "to do cool innovative things missing from the regular API".
> This is also why if you access a website with a self-signed certificate, the browser will not give you a “yes” button to go through. You have to jump through a few hoops somewhere else (depending on browser) to accept the self-signed certificate, and then you can see the site.
The button doesn't say "yes", but at least on FF there's a way to bypass the warning with one or two clicks from the error page itself. I think you might be confusing this with HSTS, where a certificate failure on an HSTS-enabled website will not let users bypass the warning (and for good reason).
I'm not confusing this with HSTS. I'm being a bit vague because the UI is different for each browser. It used to be that you could just press one or two buttons, but these days the necessary buttons are usually either hidden, or in a separate settings page.
I just checked and it's one or two clicks on the same page (i.e. not a different location) for FF and Chrome. Maybe it's different in Safari (as we are talking about macOS here), but I haven't used it in so long that I couldn't say.
At least recently, HSTS failures could still be bypassed if you really wanted to (if, say, you made the bad choice of using the .dev tld for private domains). It's just completely undiscoverable for good reason.
UAC is even more useless, it just tells you "Do you want to allow xyz to make changes to your computer", without telling you what any of those changes or permissions are. Android permission prompts are way better.
>UAC is even more useless, it just tells you "Do you want to allow xyz to make changes to your computer", without telling you what any of those changes or permissions are.
The UAC prompt is essentially "sudo" - it grants root to whatever program is running. You really can't implement a more granular permissions system without significant engineering effort, because you need to ensure that granting one permission doesn't result in privilege escalation. For instance, granting permission to modify the Program Files/Windows directory can be abused to steal permissions from other applications, by replacing existing executables with malicious ones.
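To make that escalation concrete, here's a sketch (not Windows-specific, just the same idea on any OS): if a directory holding executables is writable by other users, whoever can write there can swap in a binary that a privileged user will later run. This is the classic writable-PATH audit:

```python
import os
import stat

def writable_by_others(path: str) -> bool:
    """True if group or world can write to `path` (i.e. can replace binaries in it)."""
    st = os.stat(path)
    return bool(st.st_mode & (stat.S_IWGRP | stat.S_IWOTH))

# Audit the directories on PATH. Any hit is a privilege-escalation risk,
# which is why "permission to modify Program Files" effectively implies admin.
risky = [d for d in os.environ.get("PATH", "").split(os.pathsep)
         if d and os.path.isdir(d) and writable_by_others(d)]
print(risky)
```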
>Android permission prompts are way better.
Not really comparable, because "regular" Windows programs aren't sandboxed. So they kinda already have all the Android "permissions" (e.g. contacts - they can simply read them off disk). A better comparison would be root, which essentially has the same UX as Windows UAC.
The problem is that for everyone who says it, means it, and can live up to it, many more say the same and then go crying and blaming the OS when their system is compromised and their data damaged/stolen...
Maybe, but I don't see it as a solution for serious work or desktop applications and to be honest, I haven't seen a good mobile OS yet.
And even the casual users in my circle think that the permission jungle on phones or tablets becomes unworkable and opaque after a while. A flood of apps that cannot be trusted from a commercial store tends to be the larger security threat.
> You can’t trust your users, they don’t know what they are running when they install software.
Big Brother knows what's best?
For decades the Mac has trusted the user, what would be different now other than the company having the largest ever pro user base? Why assume pro users don't know what they are doing?
After updating to Catalina I was suddenly unable to edit my own ~/.ssh/config. I still can't figure out why. I could rename it and make a copy, though, and the old one shows this:
I had an Xcode project sitting in a folder on the desktop. A small utility that I modify frequently.
After the Catalina upgrade, compilation of that project stopped working.
Xcode started complaining "file is inaccessible" or something like that. The peculiar thing is that you can still open that file from the IDE through Open Containing Folder / Finder.
And Xcode provides absolutely no clue why it cannot access the project's files...
Where does a road seeded with such good intentions lead?
I learned about this option last week after an app I installed wouldn't open, but from a different zip it would, even though both had the same MD5 and SHA hashes. There was an xattr flag on it that prevented it from working. Just a hunch.
Not related, but my "/backup" folder was removed by Catalina upon upgrading. Of course I backed up "/backup" to an external drive daily, but still...not cool.
"Millennials are killing Unix" — well put — in what world does a system shell application need permissions to access files?! It's supposed to let you poke under the hood, shouldn't it have that permission to begin with?
Plus, if drag-and-drop somehow enables Terminal to get that file permission, then I guess it's one of two things: (a) Terminal is special-cased to be the only such app capable of gaining that kind of permission from a drag-and-drop, in which case, well, why didn't they just grant it permission from the start; or (b) it can be exploited by any other app to gain similar permissions on drag-and-drop of a file onto the app... ?
I don't think this is new? Applications usually get automatic access to files you drag or copy into them; this is a Powerbox feature that has been around for a while. This presumably just extends the feature to the new restricted directories, using a new xattr instead of the one used previously (com.apple.security.private.scoped-bookmark-key, I think?).
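Attributes like these can be inspected directly. Here's a stdlib sketch; os.listxattr works on both macOS and Linux, though the Apple-specific attribute names will of course only show up on a Mac:

```python
import os
import tempfile

def xattrs(path: str):
    """Extended attribute names on `path`, or [] if unsupported or none set."""
    try:
        return list(os.listxattr(path))
    except OSError:
        return []  # filesystem without xattr support

with tempfile.NamedTemporaryFile() as f:
    print(xattrs(f.name))  # usually [] for a freshly created file

# On Catalina, a file dragged into Terminal would presumably show the new
# capability attribute here; os.removexattr() is the programmatic counterpart
# of `xattr -d`, though SIP may refuse to remove protected attributes.
```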
Okay this is very strange. It seems like OSX is regressing in the file permissions experience and heading towards a more Windows like experience. Considering how incredibly annoying the Windows file permissions experience can be, I think that's a terrible direction to head towards. There are so many complaints of people being unable to access files after the Catalina upgrade. I hope they simplify their permissions and adopt a more sane/less user hostile model. Security needs to be implemented in such a way that it doesn't get in between using my device for work.
This was a joke, but I wouldn't be surprised if Unix usage started declining. I work at a University, and most incoming freshmen (for CS) don't understand what a filesystem is. Their primary computing device is their phone or a tablet, both of which don't require them to use files.
To be fair, I think most incoming freshmen at my university barely knew what a file was back in the mid 90s. None knew anything about UNIX or Linux. Things are at least mildly better now.
When you choose to ignore the wisdom of the generations before you, and re-invent "solutions" to problems people who understand the situation never had, you get blamed for screwing things up.
1) I granted Full Disk Access to iTerm2. Later I removed it in the UI, and I can still do everything in iTerm.
2) I have "Documents" granted to a Java app. I tried opening and saving files in various locations and didn't notice any restriction beyond my ordinary UNIX user's.
3) I created an app bundle containing a plain shell script that reads my firefox cookie file and curls the file somewhere. This app runs just fine with no prompt whatsoever.
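Experiment 3 takes only a few lines to reproduce. This sketch (hypothetical names, benign payload) builds the minimal bundle layout; on a real Mac you'd also add an Info.plist and launch it via Finder to test the prompt behavior:

```python
import stat
import subprocess
from pathlib import Path

# Minimal skeleton of a .app bundle whose "binary" is a plain shell script.
exe_dir = Path("Demo.app/Contents/MacOS")
exe_dir.mkdir(parents=True, exist_ok=True)
script = exe_dir / "Demo"
script.write_text(
    "#!/bin/sh\n"
    "# An unsandboxed script here runs with the user's full file access,\n"
    "# with no Catalina prompt, as the parent comment observes.\n"
    "echo unsandboxed\n"
)
script.chmod(script.stat().st_mode | stat.S_IXUSR)

out = subprocess.run([str(script)], capture_output=True, text=True)
print(out.stdout.strip())  # unsandboxed
```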
Tangentially, for pages like this that don’t have any sort of max-width set, you may want to use a bookmarklet like "Force Narrow Page" available here https://alanhogan.com/bookmarklets
— or of course one of the reading-mode browser extensions, etc
How is this user hostile? The behavior described is that the Terminal is granted access to the file automatically when the user specifically tries to access the file in Terminal.
The purpose of System Integrity Protection is to create a privilege level that not even superuser has access to.
Luckily, if you’re an experienced user you can boot into recovery mode and permanently disable it, so it will never bother you again. Takes five minutes. Then sudo will be able to do anything.
An accidental copy/paste means you cannot return the permissions to their default state without disabling SIP. Presumably the default permission is a good one, otherwise, if it’s so terrible that it’s a good idea to make it so difficult to go back to it, then why is it the default?
Exactly. It's user hostile to correct/undo the action. A 2-second drag-and-drop mistake requires the following to correct:
1. realize mistake / stop-everything-you're-doing
2. reboot to recovery mode
3. change SIP setting --> disabled
4. reboot to normal OS
5. delete attribute you never wanted
6. reboot to Recovery again
7. re-enable SIP
8. reboot to normal OS and get back to work if you can remember what you were even doing 20 minutes ago.
Why would you bother to do any of that? If you accidentally allow an unsandboxed app to access ~/Downloads without prompting, it's just like using a Windows or Linux or Mojave system. Who cares?
Creating a privilege level the user of the computer doesn't have access to sounds user-hostile to me. I'm all for security, but it's my computer. (I know you can reboot/turn it off etc., but come on, I shouldn't need to reboot my computer into some sort of recovery mode to remove an attribute on a file because I dragged a file. What if I was dragging it somewhere else and my hand slipped off the trackpad?).
> but come on, I shouldn't need to reboot my computer into some sort of recovery mode to remove an attribute on a file
To be clear: you need to reboot your computer once, and from then on, sudo will always have full permission to do anything. You won't ever have to boot into recovery mode again unless you buy a new computer.
You don't have to like the default, but when it's so darn easy to make your system work the way you'd like, I find it really hard to complain.
Some guides will recommend making SIP-protected changes from within recovery mode as an alternative to disabling SIP permanently. But you absolutely don't have to do that.
I know I can turn it off, but that's not the point. The point is that, if the standard advice for when it does something completely surprising and weird is "just turn it off", then it's a bad security feature.
Granted I'm biased, because my experience with SIP has entirely been that it just breaks things and makes it impossible to fix it. Here's an example of an awful thing it did: if you happen to be using the builtin system python in an older version of macOS, and then you upgrade macOS, certain modules become unusable because the .pyc files they generated on older versions become completely locked by SIP, and the only way to fix it? Yeah, you guessed it, restart it and disable it. You might say, well, don't use the system python. Sure, but python installation is a mess on macOS anyway, and it wasn't my machine this was causing issues with, it was everyone else in the department. I just had to fix it. Also, it's frustrating as hell because essentially I'm disabling it to prevent macOS from screwing with itself. Which is entirely antithetical to the purpose of the feature in the first place (protecting os files).
If the solution is always "turn it off", that's what people are going to do, and the entire feature becomes a frustrating waste of time.
Also, I would argue there's nothing "easy" about having to reboot into a recovery mode to do this. It may not be hard, but it's a pain in the ass and totally pointless.
My main issue with this is that it makes testing/debugging of certain classes of bugs very tedious.
The only way to reliably simulate what happens on a typical user machine (with SIP enabled) is to start with SIP enabled, run a test scenario that involves these permissions, then reboot and disable SIP, clear the permissions that changed, then reboot and enable SIP again before running the test scenario again.
Btw for the cowardly downvoters, are you really happy that Apple is reserving privileges on your computer for themselves? SIP isn't about giving you control, since you have no control over SIP, SIP is about Apple locking down what you can do with your own computer. You can have security without doing that (see: linux, bsd, etc.). There's no legitimate technical reason for the way SIP is implemented, it's a business decision by Apple. You should be irritated with Apple for this non-functional garbage.
I don't want to become a broken record, but how is it an example of Apple "locking down what you can do with your own computer" when they also give you the ability to remove the lock?
I truly don't understand why this appears to upset you so much. I realize that disabling SIP is a very minor inconvenience but you've spent considerably more time writing these comments than it would take to disable SIP.
Now, if we were talking about the T2 chip and replacing your SSD, I'd be singing a totally different tune here. There's also a certain other Apple operating system that is considerably more locked down, in very annoying and non-optional ways...
This isn't really user hostile. The opposite really, stuff just works.
It's possibly security hostile though, and admin hostile, especially since these out of band permissions are apparently very difficult to revoke. As an admin nothing gives me heartburn more than when the system does some sort of magic to make something just work, because I know whatever that process is will be poorly documented and difficult to manage.
It still doesn't make sense. If this is the better approach, then why isn't it the default?
Why is it enabled (in a way that is extremely difficult to reverse) by copy pasting something in the Terminal?
I'm pretty sure this is a bug that will likely be resolved in a couple of minor updates.
Edit: Also, this is extremely user hostile. And it is security hostile. The user- and security-hostile bit isn't giving access when the user pastes a location in the terminal; the hostile bit is completely ignoring the user's action when they subsequently try to disable access through the File Access dialog. And it's security hostile because the OS is making it difficult to remove access, not to enable it.
It's enabled by copy and pasting because the pasteboard carries capabilities.
I agree it's a bug that these cannot be removed, but it's not user hostile. You wouldn't say Linux is "extremely user hostile" because it allows Terminal to access ~/Downloads without prompting.
This is an extra layer, it alone does not give the app access. Apps which retain this permission are still governed by sandboxing and filesystem permissions, just like on Linux and Windows and Mojave.
It's hard to imagine how this could be accidental behavior. It is much too specific. The only way I could see it being a bug (or incomplete feature) is if maybe there's supposed to be a prompt asking if you want to grant permission first that isn't working. And maybe some sort of permission manager that lets you add/revoke said permissions without a laborious reboot into a diagnostic tool.