> Can I just give the same permission to iTerm? Nope. We are not worthy of that power, and must re-affirm permissions every 30 days for all non-Apple software.
Not sure what permission you're referring to or what your curl script is trying to do but `/opt/homebrew/opt/curl/bin/curl http://www.google.com` works just fine on Tahoe from both iTerm2 and ghostty. Looking through the various permission grants, the only one they both have in common is "App Management". They share some file permission grants, but whereas iTerm has full disk access, ghostty only has Downloads and removable media. In the past I've found I've needed to add terminals like iTerm to the Developer Tools permission, but ghostty isn't in there currently and curl is still working just fine. And in none of these cases have I ever needed to re-affirm the permission every 30 days.
Any chance you have the "disclaim ownership of children" setting enabled in iTerm? Maybe if iTerm is not allowing child processes to use its own permissions, you're having to re-authorize curl specifically (and it's getting updated about once every 30 days?)
> And if you don't accept them, they are silently denied.
This is IMO the correct behavior. If something asks for permission and it's not explicitly granted, then the default should always be denied.
> Not sure what permission you're referring to or what your curl script is trying to do but `/opt/homebrew/opt/curl/bin/curl http://www.google.com` works just fine on Tahoe from both iTerm2 and ghostty.
Mwwahahaha. Yep. Curling something neutral like google.com worked fine for me as well. That's how I was verifying that everything was OK.
So I thought that might be the dialog you're talking about which is why I thought it was weird that ghostty didn't have it and curl seemed to work just fine. I also could swear that it did show you rejected apps in the list just with the permission turned off.
After experimenting a bit, it seems like:
1) You're right that it doesn't show the rejected apps in the list. Seems like the only way to find that is to query the tcc sqlite db.
2) The permission does apply equally to the built in `curl` as it does to the homebrew installed curl.
3) What it doesn't apply to apparently is the gateway address on your network, regardless of which app you use.
4) It also doesn't apply to all "private" IP space addresses, just ones that are on your subnet. So for example, I have an IoT subnet on my network on its own VPN with a route in the gateway for accessing it from some specific devices on the primary LAN. Without the permission, I can ping and curl (with both the built in and homebrew versions) all of the devices on the IoT subnet. But I can't ping or curl (again with either version) any of the devices on the LAN subnet. Turn the permission on and I can hit everything on the local subnet fine from all the devices.
5) I also validated that the above rules hold even for an application (alacritty in this case) that had never been given the permission (in case setting and then removing the permission did something odd).
> The keyword is SILENTLY. The permission requests should be logged and made available in a central location, where they can be reviewed.
This I agree on: the rejected apps should show in the privacy permissions, even if in a collapsed tab/pane, so that you can review them later. I could swear it used to do this, but maybe I'm thinking of iOS, which does do that.
> 2) The permission does apply equally to the built in `curl` as it does to the homebrew installed curl.
I think this might have been fixed? `codesign -dvvv /usr/bin/curl` no longer prints anything about permissions. I definitely remember investigating this particular point.
> 3) What it doesn't apply to apparently is the gateway address on your network, regardless of which app you use.
Doesn't work for me. I can't ping or HTTP into my gateway from a terminal app that doesn't have this permission.
Edit: apparently pinging the gateway works if you're on WiFi. But not with wired Ethernet. Wow.
This permission is so weirdly named and scary, and the applications never tell you why they're requesting it... on iOS it would be against the developer guidelines...
> Humanity had no problem coordinating massive projects over IRC and mailing lists.
Humanity has plenty of problems coordinating over IRC and mailing lists. That we have succeeded some of the time does not imply we would succeed all the time or that there aren't significant downsides. These discussions often bring up Linux as an example and sure, the remote development of the Linux kernel is indeed a testament to what you can accomplish with remote teams and strong coordination. On the other hand, we could note that despite how successful that remote coordination has been, the (arguably) most successful Linux OS business (Red Hat) decided they needed offices and in person work long before they were owned by IBM. Likewise the SuSE Linux folks have offices around the world. There must be some benefit they're getting from that to have decided to take what was a fully remotely coordinated project and centralize some of it.
> even IF in office was better for the employer (even though all data says it’s not in terms of productivity) it is unequivocally better for the employees life to work remote as much as humanly possible.
That might be true for some people, but it is not true for all. If you'd asked me before COVID if I wanted a 100% remote job, I would have told you yes. I'd even applied for a number of (the far more limited at the time) remote jobs like Gitlab. And then COVID hit and I spent 2-3 years working from the single spare 10x10 space in my home. In that time I lost precious living and hobby space to having a dedicated working location (approximately 20% of my home). I increased my personal utility costs without compensation. My mental state deteriorated due to a lack of mental and physical separation from work and home. I found it far more difficult to accomplish my work due to a number of at home distractions. I struggled heavily to keep up with things happening across my team, as it was difficult to keep up with the async chat discussions without burning massive amounts of time and energy context switching. I found that I personally need some form of a "commute" in order to switch my mental state from home to work and back again. I had to allow corporate devices filled with corporate spyware on my personal network. I had to isolate parts of my house from my spouse at various times. I had to allow strangers and colleagues a video view into my personal and private home. Working 100% remote was unequivocally worse for me as an employee.
By contrast, now that I'm back in the office most days, I have an employer provided dedicated working space. I have free coffee, tea and fruit. I have a space where I can be focused on working on something and still keep an ear on other things happening within my team, allowing me to context switch when my attention is needed without needing to switch just to find out if my attention is needed. I have a free gym on site that allows me to exercise with equipment that I don't have at home and wouldn't have the space for even if I could afford it. I don't have to allow corporate devices on my home network anymore. I have a cafeteria which serves reasonable and healthy food at reasonable prices when I don't feel like making my own lunches. I have access to high quality and private video conferencing systems when I need to coordinate with other remote individuals and I no longer have to allow strangers visibility into my home in order to conduct interviews. I get to eat meals with co-workers and colleagues and have social engagement during my breaks. I can get away from home distractions and more easily focus on the work I have at hand. I have a reasonable commute that's just long enough to allow me a mental switch without being oppressively long, and takes me past a number of locations that I would have needed to go to anyway each week.
Which isn't to say it was 100% bad. To this day I have a hybrid situation which affords me benefits that I would not have with a 100% in office position, and for which I am eternally grateful and fortunate. I also recognize that I work for a very good company that provides a number of perks that aren't available to everyone who works in an office. But that's the point. In office work doesn't mean just one thing, and neither does remote work. Both are highly subjective experiences and to say that remote work is "unequivocally" better for everyone is just wrong.
And people not wanting others to indirectly force them to subsidize their employers by devoting unpaid portions of their limited living space, utility bills and personal equipment to their work is also a "critical fight". Remote work inherently blurs the line between "company property" and "personal property" in ways that can impose heavy burdens on employees. Confidentiality and privacy requirements might require employees to allow spyware laden devices on their personal home networks. It might require them to create secure, isolated parts of their house that lock out other family members. It might require them to allow surveillance devices into their homes. Even if your employer buys you a secure safe or locking cabinet to keep confidential materials in, you still have to devote some of your limited floor space to having that item in your space, and you become liable for ensuring that item is secured in ways that you don't have to worry about when confidential materials are stored at a central office. Everything in life is a trade off and remote work is no different in that respect.
That sounds like a "them" problem. The cost of gas and the time I don't have because of commuting is material. I used to lose 2-3 hours of every. Single. Day. To commuting. All because the places where I can find jobs were either too expensive for me to afford to live in or, get this, I didn't want to uproot my family every single time I got a new job.
The cost of my utilities? Listen, I don't know how much electricity costs where you are, but the cost of running an extra computer is pennies a day. The cost of internet is set for me. We might talk about the increased cost of heating and cooling, but I was never one of those people who turned their system off when gone, because that doesn't make literally any sense with my utility's time based pricing. It's literally cheaper to let it run as it is than to do that.
As for space and confidential items, I'm not sure what to say. I don't have thieves coming in and out of my house and I have a password good enough to deter the casual nosy child or relative. I have an office now because I have a house, but I have worked remotely in smaller spaces and it was never any problem. At least not compared to commuting 1 hour one way, although I have commuted up to 2 hours on bad traffic days, which were not particularly rare occurrences. And this is just how it is in all the cities I have lived. Perhaps not all cities, but the two metropolitan areas I have lived within and in the suburbs of. Living within the city didn't even guarantee me a reasonable commute.
If the trade off is a company getting a corner of a room I wasn't using anyway plus a few dollars of electricity subsidy, and I get back several hundred dollars of my time (measured by my pay rate) from not commuting plus a couple dollars not spent on gas, I am happy to make that trade. I'm also capable of putting my computer away safely, like literally anything else I own.
I also don't worry about the isolation that people mention (although not you here) because I have a vibrant social life. As someone who was never the typical demographic of the field, I have never depended on socializing with my coworkers in the office for social fulfillment. I still somehow maintain the correct level of social camaraderie via digital means. Remote work doesn't mean not interacting with your coworkers at all.
Why are the problems of people who don't own large houses with spare rooms they can afford to dedicate to their company for free, or living arrangements otherwise conducive to working remotely a "them" problem, but your failure to live within walking distance of your jobs not a "you" problem? Perhaps the truth here is that your experience isn't a universal experience, and just because remote working works out exceptionally well for you doesn't mean that applies universally. Maybe people who want to work in offices legitimately find that to be a better way to work.
Remote work is different than in-office. And it's better for some people and worse for others. For example, while I personally find it useful to be able to type out an example when talking to someone about things, I also find that in order to get things done effectively at home, I have to ignore the notifications (or will ignore even if they're not explicitly ignored) when I'm "in the zone" as it were. The problem with this is that someone might have asked a question that I have the answer to, or even asked me a question directly, or someone else on my team may have been going down a rabbit hole that I could have stopped them from going down. But because using a chat system inherently requires context switching and your full attention, the only way to be in the loop is to be continually breaking out of where you are to go look at chat.
By comparison, in the office with my team around me, I can keep one ear open to the conversation that's happening in the air around me. My screen, my literally single focus within the computer and my fingers can all be occupied working on something, and I can use my additional sense of hearing to keep up with other things going on. When someone needs me specifically, they can (with varying degrees of forcefulness) grab my attention, whereas online they have one and only one way, and it has the same priority as any other notification both in my conscious and unconscious mind unless I specifically read the notification (and again, context switch).
Video calls still to this day suffer from latency issues. We all, continually have the "What about - sorry - what if - sorry you go - do you want me to go?" conversation in video calls. That's objectively a worse experience than just having everyone in the same room. Even when people in the room start talking over each other, that can be resolved much faster in person than on the video call.
It's also really easy to get into the habit of not paying attention in conference calls/video calls. Because of the scheduling issues, remote work tends to include a lot more "just in case" invitations to meetings and discussions. Sometimes you really do need to be there, other times you don't. So you often get courtesy invites, and you might go, and while you're listening you might do that "identifying the ancillary information", or just keep trucking on whatever you were doing before the call started because you can just listen in. And slowly over time you and everyone else starts to build up the habit of not paying attention at all. It takes conscious effort and specific behaviors to not let yourself get distracted by the big distraction box sitting in front of you while you're having your meetings. There's a reason we generally consider it rude to be on your phone or computer in an in person meeting without specific need.
Perhaps more telling though is the fact that even remote work people acknowledge the importance of having dedicated working space. Even if you don't have other people in the space with you, almost everyone can agree that having dedicated space for working is important. But remote work puts the burden of paying for and subsidizing that space on each individual employee. For some of us, that's not a significant burden and for others, it's quite significant.
I'd also ask, if 100% remote work were objectively better for all people and all things, why do co-working spaces exist? Why do remote workers congregate in coffee shops? Why, even though the internet and online communities are "remote first" groups, do we still have conferences, meet-ups and conventions? Why do we bother with these expensive and difficult to coordinate in person gatherings if everything we would do with them we could do better remotely?
In the end, remote work isn't one thing, it's many different things for each individual person and how you experience it is highly subject to your personal circumstances and your work environment as a whole. It should be entirely unsurprising that people are different and experience remote work differently and that as a result, plenty of people will genuinely prefer working in office to working remotely.
As a "half way" point, modern (21+) java brings pattern matching switch statements to the language, but you'd probably construct the F# version in Java using a sealed "marker" interface. Something like
    sealed interface Result permits ValidationError, SearchQuery, UserProfile {}
Along with the specific implementations of those ValidationError, SearchQuery and UserProfile classes and then a switch statement like:
    Result res = db.query(input);
    return switch (res) {
        case ValidationError ve -> context.renderError(ve);
        case SearchQuery sq -> context.redirect("Search", Map.of("q", sq));
        case UserProfile up -> context.redirect("Profile", Map.of("user", up));
    };
The sealed interface gives you compile time checks that your switch statement is exhaustive wherever you use it.
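To make that concrete, here's a compilable, self-contained sketch of the same idea (Java 21+). The record types and the string-returning handler are made up for illustration and stand in for the Context-based methods above; I've kept just two of the three variants for brevity.

```java
// Sealed interface: the compiler knows the complete set of implementations.
sealed interface Result permits ValidationError, SearchQuery {}

record ValidationError(String message) implements Result {}

record SearchQuery(String query) implements Result {}

class SealedDemo {
    static String handle(Result res) {
        // No default branch needed: the compiler verifies every permitted
        // subtype is covered. Adding a new implementation of Result without
        // a matching case here becomes a compile-time error.
        return switch (res) {
            case ValidationError ve -> "error: " + ve.message();
            case SearchQuery sq -> "search: " + sq.query();
        };
    }

    public static void main(String[] args) {
        System.out.println(handle(new SearchQuery("cats")));     // search: cats
        System.out.println(handle(new ValidationError("oops"))); // error: oops
    }
}
```

Adding UserProfile back is mechanical: one more record and one more case line, and until you add that case line the compiler refuses to build.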
Before that pattern matching, I might have used a Function<Context, R> instead in the Result interface. This is off the top of my head without the benefit of an IDE telling me if I've done a stupid with the generics and type erasure but something like:
    interface Result<R> {
        R handleWithContext(Context c);
    }

    class ValidationError implements Result<RenderedError> {
        public RenderedError handleWithContext(Context c) {
            return c.renderError(this);
        }
    }

    class SearchQuery implements Result<Redirect> {
        public Redirect handleWithContext(Context c) {
            return c.redirect("Search", Map.of("q", this));
        }
    }
etc. In either case though I think you're right that an empty interface is something that should be examined closer.
If you download GPL source code and run `wc` on its files and distribute the output of that, is that a violation of copyright and the GPL? What if you do that for every GPL program on GitHub? What if you use python and numpy and generate a list of every word or symbol used in those programs and how frequently they appear? What if you generate the same frequency data, but also add a weighting by what the previous symbol or word was? What if you did that and also added a weighting by what the next symbol or word was? How many statistical analyses of the code files do you need to bundle together before it becomes copyright infringement?
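For concreteness, the "frequency of every word" and "frequency weighted by the previous word" analyses in those questions are just unigram and bigram tallies. A minimal sketch (in Java rather than python/numpy, with a toy input I made up):

```java
import java.util.*;

class NgramCounts {
    // Frequency of each word on its own ("a list of every word and how
    // frequently it appears").
    static Map<String, Integer> unigrams(String[] words) {
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words) counts.merge(w, 1, Integer::sum);
        return counts;
    }

    // Frequency of each word keyed by the word that preceded it
    // ("a weighting by what the previous word was").
    static Map<String, Integer> bigrams(String[] words) {
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 1; i < words.length; i++) {
            counts.merge(words[i - 1] + " " + words[i], 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] words = "the cat sat on the mat the cat ran".split(" ");
        System.out.println(unigrams(words).get("the"));    // 3
        System.out.println(bigrams(words).get("the cat")); // 2
    }
}
```

Stack enough of these statistics together, with longer and longer contexts, and you are on the road toward a language model; the question above is where along that road copyright attaches.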
The argument that GPL code is a tiny minority of what's in the model makes no sense to me. (To be clear, you're not making this argument.) One book is a tiny minority of an entire library, but that doesn't mean it's fine to copy that book word for word simply because you can point to a Large Library Model that contains it.
LLMs definitely store pretty high-fidelity representations of specific facts and procedures, so for me it makes more sense to start from the gzip end of the slope and slide the other way. If you took some GPL code and renamed all the variables, is that suddenly ok? What if you mapped the code to an AST and then stored a representation of that AST? What if it was a "fuzzy" or "probabilistic" AST that enabled the regeneration of a functionally equivalent program but the specific control flow and variable names and comments are different? It would be the analogue of (lossy) perceptual coding for audio compression, only instead of "perceptual" it's "functional".
This is starting to look more and more like what LLMs store, though they're actually dumber and closer to the literal text than something that maintains function.
It also feels a lot closer to 'gzip' than 'wc', imho.
> LLMs definitely store pretty high-fidelity representations of specific facts and procedures
Specific facts and procedures are explicitly NOT protected by copyright. That's what made cloning the IBM BIOS legal. It's what makes emulators legal. It's what makes the retro-clone RPG industry legal. It's what made Google cloning the Java API legal.
> If you took some GPL code and renamed all the variables, is that suddenly ok?
Generally no, not sufficiently transformative.
> What if you mapped the code to an AST and then stored a representation of that AST?
Generally no, binary distribution of software is considered a violation of copyright.
> What if it was a "fuzzy" or "probabilistic" AST that enabled the regeneration of a functionally equivalent program but the specific control flow and variable names and comments are different?
This starts to get a lot fuzzier. De-compilation is legal. Creating programs that are functionally identical to other programs is (generally) legal. Creating an emulator for a system is legal. Copyright protects a specific fixed expression of a creative idea, not the idea itself. We don't want to live in the world where Wine is a copyright violation.
> This is starting to look more and more like what LLMs store, though they're actually dumber and closer to the literal text than something that maintains function.
And yet, so far no one has brought a legal case against the AI companies for being able to extract their copyright protected material from the models. The few early examples of that happening are things that model makers explicitly attempt to train out of their models. It's unwanted behavior that is considered a bug, not a feature. Further the fact that a machine is able to violate copyright does not in and of itself make the machine itself a violation of copyright. See also Xerox machines, DeCSS, Handbrake, Plex/Jellyfin, CD-Rs, DVRs, VHS Recorders etc.
> Specific facts and procedures are explicitly NOT protected by copyright.
No argument there, and I'm grateful for the limits of copyright. That part was only for describing what LLM weights store -- just because the literal text is not explicitly encoded doesn't mean that facts and procedures aren't.
> Copyright protects a specific fixed expression of a creative idea, not the idea itself.
Right. Which is why it's weird to talk about the weights being derivative works. Weird but perhaps not wrong: if you look at the most clear-cut situation where the LLM is able to reproduce a big chunk of input bit-for-bit, then the fact that its basis of representation is completely different doesn't feel like it matters much. An image that is lossily compressed, converted to a bitstream, and encoded in DNA is very very different than the input, but if an image can be recovered that is indistinguishable or barely distinguishable from the original, I'd still call that copying and each intermediate step a significant but irrelevant transformation.
> This starts to get a lot fuzzier. De-compilation is legal.
I'm less interested in what the legal system is currently capable of concluding. I personally don't think the laws have caught up to the present reality, so present-day legality isn't the crucial determinant in figuring out how things "ought" to work.
If an LLM is completely incapable of reproducing input text verbatim, yet could become so through targeted ablation (that does not itself incorporate the text in question!), then does it store that text or not?
I'm not sure why I'm even debating this, other than for intellectual curiosity. My opinion isn't actually relevant to anyone. Namely: I think the general shape of how this ought to work is pretty straightforward and obvious, but (1) it does not match current legal reality, and more importantly, (2) it is highly inconvenient for many stakeholders (very much including LLM users). Not to mention that (3) although the general shape is pretty clear in my head, it involves many many judgement calls such as the ones we've been discussing here, and the general shape of how it ought to work isn't going to help make those calls.
> An image that is lossily compressed, converted to a bitstream, and encoded in DNA is very very different than the input, but if an image can be recovered that is indistinguishable or barely distinguishable from the original, I'd still call that copying and each intermediate step a significant but irrelevant transformation.
Sure, as a broad rule of thumb that works. But the ability of a machine to produce a copyright violation doesn't mean the machine itself, or distributing the machine, is a copyright violation. To take an extreme example, if we take a room full of infinite monkeys and put them on infinite typewriters and they generate a Harry Potter book, that doesn't mean Harry Potter is stored in the monkey room. If we have a random sound generator that produces random tones from the standard western musical note palette and it generates the bass line from "Under Pressure", that doesn't mean our random sound generator contains or is a copy of "Under Pressure", even if we encoded all the same information and procedures for generating those individual notes at those durations among the data and procedures we gave the machine.
> If an LLM is completely incapable of reproducing input text verbatim, yet could become so through targeted ablation (that does not itself incorporate the text in question!), then does it store that text or not?
I would argue not. Just like a Xerox machine doesn't contain the books you make copies of when you use it to make a copy, and Handbrake doesn't contain the DVDs you use when you make a copy there.
I would further argue that copyright infringement is inherently a "human" act. It's sort of encoded in the language we use to talk about it (e.g. "fair use") but it's also something of a "if a tree falls in the middle of the woods" situation. If an LLM runs in an isolated room in an isolated bunker with no one around and generates verbatim copies of the Linux kernel, that frankly doesn't matter. On the other hand, if a Microsoft employee induces an LLM to produce verbatim copies of the Linux kernel, that does, especially if they did so with the intent to incorporate Linux kernel code into Windows. Not because of the LLM, but because a person made the choice to produce a copy of something they didn't have the right to make a copy of. The method by which they accomplished that copy is less relevant than making the copy at all, and that in turn is less relevant than the intent of making that copy for a purpose which is not allowed by copyright law.
> I'm not sure why I'm even debating this, other than for intellectual curiosity.
Frankly, that's the only reason to debate anything. 99% of the time, you as an individual will never have the power to influence the actual legal decisions made. But an intellectually curious conversation is infinitely more useful, not just to you and me but to other readers, than another retread of "AI is slop" "you're just jealous you can't code your way out of a paper bag" arguments that pervade so much discussion around AI. Or worse, yet another "I used an LLM for a clearly stupid thing and it was stupid" or "I used an LLM to replace all my employees and I'm sure it's going to go great" blog post. For whatever acrimony there might have been in our interchange here, I'm sorry, because this sort of discussion is the only good way to exercise our thoughts on an issue and really test them out ourselves. It's easy to have a knee jerk opinion. It's harder to support that opinion with a philosophy and reasoning.
For what it's worth, I view the LLM/AI world as the best opportunity we've had in decades to really rethink and scale back/change how we deal with intellectual property. The ever expanding copyright terms, the sometimes bizarre protections of what seem to be blindingly obvious ideas. The technological age has demonstrated a number of weaknesses in the traditional systems and views. And frankly I think it's also demonstrated that many prior predictions of certain doom if copyright wasn't strictly enforced have been overwrought and even where they haven't, the actual result has been better for more people. Famously, IBM would have very much preferred to have won the BIOS copyright issue. But I think so much of the modern computer and tech industry owes their very careers to the effects of that decision. It might have been better for IBM if IBM had won, it's not clear at all that it would have been better for "[promoting] the Progress of Science and useful Arts".
We could live in a world where we recognize that LLMs and AIs are going to fundamentally change how we approach creative works. We could recognize that the intents of "[promoting] the Progress of Science and useful Arts" is still a relevant goal and something we can work to make compatible with the existence of LLMs and AI. To pitch my crazy idea again, we could:
1) Cut the terms of copyright substantially, back down to 10 or 15 years by default.
2) Offer a single extension that doubles that term, but only on the condition that the work is submitted to a central "library of congress" data set.
3) This could be used to produce known good and clean data sets for AI companies and organizations to train models from, with the protection that any model trained from this data set is protected from copyright infringement claims for works in the data set. Heck, we could even produce common models. This would save massive amounts of power and resources by cutting the need for everyone who wants to be in the AI space to go out and acquire, digitize and build their own library. The MNIST digits data set is effectively the "hello world" set for anyone learning computer vision AI stuff. Let's do that for all sorts of AI.
4) The data sets and models would be provided for a nominal fee; this fee would be used to pay royalties to people whose works are still under copyright and are in the data sets, proportional to the recency and quantity of work submitted. A cap would need to be put in place to prevent flooding the data set to game the royalties. These royalties would be part of recognizing the value the original works contributed to the data set, and act as a further incentive to contribute works to the system and to contribute them sooner.
We could build a system like this, or tweak it, or even build something else entirely. But only if we stop trying to cram how we treat AI and LLMs and the consequences of this new technology into a binary "allowed / not allowed" outcome as determined by an aging system that has long needed an overhaul.
So please, continue to debate for intellectual curiosity. I'd rather spend hours reading a truly curious exploration of this than another manifesto about "AI slop"
And distributing an AI model trained on that text is neither distributing the work nor a modification of the work, so the GPL (or other) license terms don't apply. As it stands, the courts have found training an AI model to be a sufficiently transformative action and fair use which means the resulting output of that training is not a "copy" for the terms of copyright law.
> And distributing an AI model trained on that text is neither distributing the work nor a modification of the work, so the GPL (or other) license terms don't apply.
If I print a Harry Potter book in red ink then I won't have any copyright issues?
I don't think changing how the information is stored removes copyright.
If it is sufficiently transformative, yes it does. That's why "information" per se is not eligible for copyright, no matter what the NFL wants you to think. No, printing the entire text of a Harry Potter book in red ink is not likely to be viewed as sufficiently transformative. But if you take the entirety of that book and publish a list of every word and its frequency, it's extremely unlikely to be found a violation of copyright. If you publish a count of every word with the frequency weighted by what word came before it, you're also very likely not to be found to have violated copyright. If you distribute the MD5 sum of the file that is a Harry Potter book, you're also not likely to be found to have violated copyright. All of these are "changing how the information is stored".
In light of the fact that the courts have found training an AI model to be fair use under US copyright law, it seems unlikely this condition will have any actual relevance to anyone. You're probably going to need to not publicly distribute your software at all, and make such a condition a term of the initial sale. Even there, it's probably going to be a long haul to get that to stick.
One of the craziest experiences in this "post AI" world is seeing how quickly a lot of people in the "information wants to be free" and "hell yes I would download a car" crowds pivoted to "stop downloading my car; just because it's on a public and openly available website doesn't make it free".
I've pitched this idea before, but my pie-in-the-sky hope is to settle most of this with a huge rollback of copyright terms, to something like 10 or 15 years initially. You could get one doubling of that term by submitting your work to an official "library of congress" data set, which would be used to produce common, clean, and open models available to anyone for a nominal fee, with any copyright claims against the output of those models precluded. The money from the model fees would be used to pay royalties over time to people with materials in the data set, with payouts based on recency and quantity of material, and an absolute cap to discourage flooding the data sets to game the payments.
This solution to me amounts to an "everybody wins" situation, where producers of material are compensated, model trainers and companies can get clean, reliable data sets without having to waste time and energy scraping and digitizing it themselves, and model users can have access to a number of known "safe" models. At the same time, people not interested in "allowing" their works to be used to train AIs and people not interested in only using the public data sets can each choose to not participate in this system, and then individually resolve their copyright disputes as normal.
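The payout mechanics described above can be sketched in a few lines. Everything here is invented for illustration (the half-life decay, the cap fraction, the redistribution choice are all assumptions, not part of any real proposal):

```python
def payout(pool, works, current_year, half_life=5, cap_share=0.05):
    """Split a fee pool across contributors, weighting recent submissions
    more heavily and capping any one contributor's share to deter flooding.

    pool: total fees collected for the period
    works: {contributor: [year_submitted, ...]} -- hypothetical ledger
    half_life: years for a work's weight to halve (assumed recency decay)
    cap_share: max fraction of the pool any one contributor can receive
    """
    weights = {
        who: sum(0.5 ** ((current_year - year) / half_life) for year in years)
        for who, years in works.items()
    }
    total = sum(weights.values())
    # Proportional split, then the absolute cap; capped surplus is simply
    # retained here rather than redistributed, to keep the sketch simple.
    return {
        who: min(pool * w / total, pool * cap_share)
        for who, w in weights.items()
    }

shares = payout(
    pool=1_000_000,
    works={"alice": [2024, 2023], "bob": [2010]},
    current_year=2025,
)
```

In this toy run, alice's two recent works would earn far more weight than bob's single 2010 work, but the cap limits how much of the pool any single contributor (or a flooder) can capture.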
Fighting the corporations seems like a losing move: the corporate winners from long copyright terms would deploy their income to politically block any disadvantageous change.
> You can get one doubling of that by submitting your work to an official "library of congress" data set
Needs to be done at day 0 and made available at day 0 for usage. Maybe with standardised availability for usage, e.g. licensing under X, Y, or Z, or non-standard call-us-for-pricing.
The world moves fast and time really matters; e.g., look at how the wait for patents to expire affects outcomes.