I wonder why the rest of the world doesn't just come together for a while and stop trading with / providing raw materials to the U.S. for, say, a year or so.
People don't realize the magnitude of the information they're giving away. They also don't read the TOS/privacy policies, so they don't realize they're often giving away ownership of their data.
I think it's important to remember that people can say one thing and intend to act accordingly, yet do otherwise out of ignorance. Of course, that's not always the case.
> what people say and what they do are usually 2 different things.
True. Progress in that area can be a factor. I know some people who understood the risk of submitting DNA right away, and some who took time to absorb it.
Alternatively, it can be two different people. I know people who dismissed the risk as inconsequential; the folks I'm thinking of were also eager to aid law enforcement (LEO).
They added custom instructions to Apple silicon to make it easier to emulate x86 behavior (e.g., https://developer.apple.com/documentation/virtualization/acc...). They may have now removed them because their analytics show that Rosetta 2 use on new devices is minimal.
Virtualizing older macOS on M4 hardware has nothing to do with Rosetta 2. And it would be ridiculous for Apple to remove hardware features that Rosetta 2 relies upon before they're actually ready to retire Rosetta 2—that would force Apple to invest more software engineering effort in updating Rosetta 2 to work without those features.
I guess there will always be a previous processor supporting something which the most recent one doesn't.
But when the bug report is about supporting a software version that they themselves no longer support, I personally don't think they will give it any priority.
Escape life, or the point of life (if there is one)?
I don't think there's a point to life. From a nature perspective it would be to reproduce and keep the ecosystem in balance, but as there are way too many humans, that ship has sailed.
From the economic perspective, you're a pawn contributing to a "healthy" economy so we can all enjoy our luxuries.
And then there's the individual perspective, where the point is probably whatever you want it to be.
After realising there's no point at all, for me it pretty much comes down to "do what you want and what you're comfortable with, and try not to worry too much".
In addition to what others posted here, sometimes it is nice to put generated files under version control with the source code that generated them. For example, simulation results, deep learning models, graphs that you need to include in your LaTeX documents (which can be considered partly source code, partly generated content).
Also, deep learning training data often consists of large image files; it can be considered "source code" too, and in any case it can be very useful to keep it under version control.
And finally, it can be useful to check external dependencies into your source tree as tar files.
I appreciate you laying this out because it's something I have struggled with and thought I just didn't know the right way to handle it.
For writing tests in a deep learning code base, rather than simply including a native data file (image, CSV, whatever), I've taken to writing a fake data creator class, something like the sketch below. It always feels like overkill when the alternative would be to include a native data file or two that already exist.
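A minimal sketch of what I mean, in Python; the class name, shapes, and columns are made up for illustration and aren't taken from any particular code base:

    import numpy as np
    import pandas as pd

    class FakeDataFactory:
        """Builds small, deterministic stand-ins for real data files in tests."""

        def __init__(self, seed: int = 0):
            self.rng = np.random.default_rng(seed)

        def image(self, height: int = 32, width: int = 32, channels: int = 3) -> np.ndarray:
            # Random uint8 array with the same shape/dtype contract as a decoded PNG/JPEG.
            return self.rng.integers(0, 256, size=(height, width, channels), dtype=np.uint8)

        def labels_csv(self, n_rows: int = 10) -> pd.DataFrame:
            # Tiny table mimicking an annotations CSV (filename plus integer label).
            return pd.DataFrame({
                "filename": [f"img_{i:04d}.png" for i in range(n_rows)],
                "label": self.rng.integers(0, 5, size=n_rows),
            })

    # Usage in a test:
    # factory = FakeDataFactory(seed=42)
    # batch = factory.image()            # (32, 32, 3) uint8 array
    # labels = factory.labels_csv(3)     # 3-row DataFrame

The fixed seed keeps the tests deterministic, and the generators only promise the shape/dtype contract the code under test cares about, not realistic content.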
I want to use it for files in general, not just source code. For instance graphics, the PDFs generated by LaTeX, or just about anything. Or storing lots of directories. Currently I use Dropbox for that, but if Git could do that...
Though I agree that there isn't a proper VCS out there for large files (Adobe Bridge was a nice attempt), Git wasn't designed for that, and one might wonder whether you want Git to be _that_ multi-purpose.
If you have a significant project, you want to store at least a reasonable amount of media with it: images, documentation. Git doesn't necessarily have to be the best system for handling multi-gigabyte binaries, but it should at least deal gracefully with small and medium-sized binary files. I am also not sure why more effort wasn't spent on making Git support large files well.
By the way, if you're storing large files under version control in Git, it is often useful to use the "--depth=1" flag when cloning or pulling repositories. That way you only download the stuff you really need and leave the rest of the history on the server until you need it.
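For example (the repository URL here is just a placeholder):

    # Shallow clone: fetch only the latest commit instead of the full history.
    git clone --depth=1 https://example.com/big-repo.git

    # If you later decide you need the complete history after all:
    git fetch --unshallow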
I recently had to deal with a PowerPoint document that is slightly larger than 50 megabytes, which in today's terms isn't very much. Before, I had kept it in SVN, which has no issues storing larger files. It is a bit shocking that Git has issues with files that aren't exactly tiny.
It doesn't actually matter. Tools should be easy to learn and as free of edge cases as possible. An example of this happening is constant propagation in Rust: it's the same feature, but with every release it can cover more of the code base.
Because it is broken. Merges and rebases rarely work without manual intervention. What's needed is a patch-oriented tool that is almost as fast as a snapshot-oriented tool like Git; something like Pijul looks promising.
It doesn't work because your changes conflict? How would anything else handle out-of-order modifications to files that cannot be resolved automatically? There will always be a requirement to do this manually - otherwise the VCS would be able to program for you (understand what is right/wrong).
I've been using git submodules for years but there are some real problems with them:
* Changing from a subdirectory to a submodule breaks lots of things like git reset and git bisect.
* Having to remember to run git submodule init, update, etc. I always have to look up the commands and never remember what the difference is; the usual incantations are listed below this list.
* I don't care that there are untracked files in a submodule; either don't bug me about it in git status, or integrate the commands in such a way that they work transparently across the main module and the submodule.
* Related to the previous point: coordinating a single logical change across submodules involves several manual steps and has plenty of scope to go wrong.
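For what it's worth, these are the commands I end up looking up every time (the URL is just a placeholder):

    # Clone a repository together with all of its submodules in one go:
    git clone --recurse-submodules https://example.com/main-repo.git

    # In an existing clone: register the submodules from .gitmodules and check them out.
    git submodule update --init --recursive

    # Pull in newer upstream commits for every submodule:
    git submodule update --remote --recursive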
Isn't this all just a huge pile of bluff poker?