Hacker News | wickedOne's comments

i wonder why the rest of the world doesn't just come together for a while and stop trading / providing raw materials to the u.s. for let's say a year or so.

isn't this all just a huge pile of bluff poker?


A lot of people have jobs dependent on business as usual. You don't want to just fire the workers for a year.


the u.s. is not the only country other countries trade with, so firing people shouldn't have to be the result as long as trade is diverted elsewhere?


this is what BRICS floated when trying to remove the USD as reserve currency. now they've gone all quiet after trump threatened 100% tariffs.

plus remember trump could threaten nuclear attack on the ringleader.


The USD will be removed as reserve currency at some point if trump keeps up his dumb antics.


Because their economies would suffer heavily? That's why they are upset about the tariffs in the first place.


they don't have TDS


The only people who suffer from TDS are those who voted for him.


> stop trading to the u.s.

That makes as much sense as a hunger strike over food rations.


because the u.s. is the only country the rest of the world can trade with?


it keeps surprising me that, in a time when everybody seems to be concerned about privacy, people willingly "give away" their dna profile...


If you don't want to keep being surprised, you should internalize that what people say and what they do are usually 2 different things.


People don't realize the magnitude of the information that they're giving away. They also don't read the TOS/privacy policies, so they don't realize they're often giving away ownership of their data.

I think it's important to remember that people can say one thing and intend to act in accordance with that, but do otherwise out of ignorance. Of course, however, that's not always the case.


> what people say and what they do are usually 2 different things.

True. Progress in that area can be a factor. I know some people who understood right away the risk from submitting DNA and some people who took time to absorb it.

Alternatively, it can be two different sets of people. I know people who dismissed the risk as inconsequential. The folks I'm thinking of were also eager to aid law enforcement.


official support for any macos version < ventura has been dropped (monterey support ended 2024-09-16)

so i wonder whether apple will consider this a bug...


Seems like a bug if it only affects certain processor cores.


They added custom instructions to Apple silicon to more easily emulate x86 behavior (e.g., https://developer.apple.com/documentation/virtualization/acc...). They may have now removed them because their analytics say that Rosetta2 use on new devices is minimal.


Virtualizing older macOS on M4 hardware has nothing to do with Rosetta 2. And it would be ridiculous for Apple to remove hardware features that Rosetta 2 relies upon before they're actually ready to retire Rosetta 2—that would force Apple to invest more software engineering effort in updating Rosetta 2 to work without those features.


i guess there will always be a previous processor supporting something which the most recent doesn't.

but when the bug report is about supporting a software version which they no longer support themselves, personally i don't think they will give it any priority


nicely done!

would be even better if modifications to the actual select part somehow persisted when building the query


Will think about improving the ui in this aspect...


done :)



escape life, or the point of life (if there's one)?

i don't think there's a point of life. from a nature perspective it should be to reproduce and ensure balance in the ecosystem, but as there are way too many humans that ship has sailed.

from the economic perspective you're a pawn to contribute to a "healthy" economy so we all can enjoy our luxuries.

and then there's the individual perspective, where the point probably is what you want it to be.

after realising there's no point at all, for me the point pretty much is "do what you want and what you're comfortable with and try not to worry too much"


git has a steep learning curve for sure, but eventually scenarios like the ones described at ohshitgit should be common knowledge for anyone using it.

personally i think putting out sites like that is more helpful than dumbing down git
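A couple of the classic recoveries that kind of site covers can be sketched on a throwaway repo. Everything below uses only stock git commands; the repo location and commit messages are made up for the demo:

```shell
#!/bin/sh
# Demo of two common "oh shit" recoveries, on a disposable repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "first"
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "oops"

# 1) Undo the last commit but keep its changes staged:
git reset --soft HEAD~1

# 2) Recover a commit you "lost" with a reset, via the reflog,
#    which records every place HEAD has been:
git reflog
git reset --hard 'HEAD@{1}'   # jump back to where HEAD was before the reset
```

The reflog is the safety net behind most of these recipes: even a "destructive" reset is usually reversible as long as the reflog entry hasn't expired.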


what would be a typical large file you want to have in your codebase? usually this shouldn't be a consideration


In addition to what others posted here, sometimes it is nice to put generated files under version control with the source code that generated them. For example, simulation results, deep learning models, graphs that you need to include in your LaTeX documents (which can be considered partly source code, partly generated content).

Also, deep learning training data often consists of large image files, and can also be considered "source code", and in any case it can be very useful to put these under version control.

And finally it can be useful to put external dependencies as tar-files into your source tree.


I appreciate you laying this out because it's something I have struggled with and thought I just didn't know the right way to handle it.

For writing tests in a deep learning code base, rather than simply including a native data file (image, CSV, whatever), I've taken to writing a fake data creator class. It always feels like overkill when an alternative solution is including a native data file or two that already exists.


I want to use it for files, not just source code. For instance graphics, the pdfs generated by latex, or just about anything. Or storing lots of directories. Currently I use dropbox for that, but if git could do that...


though i agree that there isn't a proper vcs out there for large files (adobe bridge was a nice attempt), git wasn't designed for that, and one might wonder whether you'd want git to be _that_ multi-purpose.


If you have a significant project, you want to store at least a reasonable amount of media with it: images, documentation. Git doesn't necessarily have to be the best system for handling multi-gigabyte binaries, but it should at least deal gracefully with small and medium-sized binary files. I am also not sure why more effort wasn't spent on making Git support large files well.


By the way, if you're storing large files under version control in Git it is often useful to use the "--depth=1" flag when cloning or pulling repositories. That way you only download the stuff you really need and leave the rest of the history on the server until you need it.
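A quick sketch of how that plays out, using a local throwaway repo instead of a real remote (the paths here are made up; the `file://` URL is just so the demo is self-contained):

```shell
#!/bin/sh
# Shallow-clone demo: --depth=1 fetches only the tip commit.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/origin"
cd "$tmp/origin"
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "c1"
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "c2"

# Only the most recent commit is downloaded, not the full history
git clone -q --depth=1 "file://$tmp/origin" "$tmp/shallow"
cd "$tmp/shallow"
git rev-list --count HEAD   # 1 commit present

# If you later need the history after all, deepen or unshallow:
git fetch -q --unshallow
git rev-list --count HEAD   # full history now present
```

`git fetch --deepen=<n>` is the incremental variant if you only need a few more commits rather than everything.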


Git LFS can do that.


Git LFS is basically the CVS of binary file version control.

It is a pile of garbage, but it's better than nothing.


I recently had to deal with a PowerPoint document slightly larger than 50 megabytes, which, in today's terms, isn't very much. Before, I had kept it in SVN, which has no issues storing larger files. It is a bit shocking that Git has issues with files that aren't so tiny.


It doesn't matter, actually. Tools should be easy to learn and as free of edge cases as possible. An example of this happening is constant propagation in Rust: it's the same feature, but with every release it can cover more of the code base.


don't fix what ain't broken?

i think as long as git keeps evolving and there are no major pitfalls for any programming language: why should it be replaced?


Because it is broken: merges and rebases rarely work without manual intervention. What's needed is a patch-oriented tool that is almost as fast as a snapshot-oriented one like git. Something like Pijul looks promising.


It doesn't work because your changes conflict? How would anything else handle out-of-order modifications to files that cannot be resolved automatically? There will always be a requirement to do this manually - otherwise the VCS would have to be able to program for you (understand what is right/wrong).


been using submodules for years without any issues, other than that the routine before "deinit" was introduced was somewhat tricky.

could you elaborate?


I've been using git submodules for years but there are some real problems with them:

* Changing from a subdirectory to a submodule breaks lots of things like git reset and git bisect.

* Having to remember git submodule init, update, etc. I always have to look up the commands and never remember what the difference is.

* I don't care that there are unlisted files in a submodule: either don't bug me about this in status, or integrate commands in such a way that they work transparently across the main module and submodule.

* Related to the previous: Coordinating a single logical change across submodules involves several manual steps and has plenty of scope to go wrong.
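The init/update incantations from the list above can be sketched end to end on throwaway repos. The `protocol.file.allow` override is only needed because the "remote" here is a local path standing in for a real URL:

```shell
#!/bin/sh
# Submodule workflow demo: add, then clone with submodules populated.
set -e
tmp=$(mktemp -d)
g() { git -c user.email=a@b -c user.name=t -c protocol.file.allow=always "$@"; }

# a library repo that will be embedded as a submodule
g init -q "$tmp/lib"
g -C "$tmp/lib" commit -q --allow-empty -m "lib v1"

# a superproject that embeds it
g init -q "$tmp/app"
g -C "$tmp/app" submodule --quiet add "$tmp/lib" vendor/lib
g -C "$tmp/app" commit -q -m "add submodule"

# cloning: --recurse-submodules does clone + init + update in one go
g clone -q --recurse-submodules "$tmp/app" "$tmp/app2"

# if you forgot that flag, this is the pair to remember:
#   git submodule update --init --recursive
```

`--recurse-submodules` at clone time sidesteps the "have to remember init/update" problem entirely; the commented `update --init --recursive` line is the fallback for a clone that's already on disk.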

