> Unfortunately I couldn't read the linked study because I was stuck in an endless CAPTCHA loop of trying to find an image of a refrigerator among a varying set of only helicopters, ships, and avocados
I got through on the first try, and I block third party JavaScript with a whitelist
The one I got told me to find refrigerators and then never showed me a refrigerator. I clicked "skip" many times and it just kept going until I gave up.
I contemplated this last night as I sat and stared at the wall while waiting for my family to wake for their morning routine. After much deliberation and calculation and running many simulations I arrived at the conclusion that I am most probably not a robot.
You seem to be projecting your own experience rather than giving an accurate representation of developer productivity; 15k is not the norm, or even close to it.
Even more precisely: it's a requirement for the default rustls backend on some platforms, for some features. aws-lc-rs is a pain, but fortunately ring is just a feature flag away.
Sorry, I know that's beside the point of what you were saying. You can tell I've been stung supporting users with this.
It won't be long until AI makes spelling, grammar and usage errors, if it's trained on things like message boards where people still don't know the difference between were, we're and where, lose vs. loose, affect vs. effect and wary vs. weary.
There’s probably a smart way to rule out a lot of cases so you only have to check a relatively small number of candidates. It would be good to know what it is.
for i in range(1, 10**10):
    for k in range(1, 5):
        # Three-argument pow is modular exponentiation, so this is just the
        # last 10**k decimal digits of 2**i: check the cheap suffixes first.
        s = str(pow(2, i, 10**(10**k)))
        if '1' in s or '3' in s or '5' in s or '7' in s or '9' in s:
            break
    else:
        # The inner loop never hit an odd digit in the last 10**4 digits,
        # so 2**i is a candidate: print the full power.
        print(2**i)
It's really easy to parallelize; I was able to run it up to 10**8 in about 15 minutes, so you'd be able to run it up to 10**10 in a few hours with parallelization.
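A minimal parallel sketch (mine, not vhcr's): split the exponent range into chunks and farm them out with Python's multiprocessing, reusing the last-10**k-digits filter from the snippet above. The chunk size and the 10**8 bound are arbitrary assumptions:

    from multiprocessing import Pool

    ODD = set('13579')

    def check_range(bounds):
        # Scan exponents in [lo, hi), keeping those whose last 10**4
        # decimal digits of 2**i contain no odd digit.
        lo, hi = bounds
        hits = []
        for i in range(lo, hi):
            for k in range(1, 5):
                s = str(pow(2, i, 10**(10**k)))
                if ODD & set(s):
                    break
            else:
                hits.append(i)
        return hits

    if __name__ == '__main__':
        chunk = 10**5
        ranges = [(lo, lo + chunk) for lo in range(1, 10**8, chunk)]
        with Pool() as pool:  # defaults to one worker per CPU
            for hits in pool.imap_unordered(check_range, ranges):
                for i in hits:
                    print(2**i)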
Checking about 10^10 of them is just about doable as vhcr correctly showed. (I mean it wasn't optimal, but 'leave this running for 400 hours' is far from impossible)
It is 10^10 cases, checking numbers up to 2^(10^10). The numbers themselves are pretty big (~9 gigabytes each if you write the full binary representation out as text), but nothing that modern computers can't handle.
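For what it's worth, a quick sanity check on those sizes (my arithmetic, assuming "full binary representation" means one '0'/'1' character per bit):

    import math

    n = 10**10                          # largest exponent considered
    print(n / 8 / 2**30)                # ~1.16 GiB for 2**n as raw bytes
    print(n / 2**30)                    # ~9.31 GiB as '0'/'1' text, one char per bit
    print(n * math.log10(2) / 2**30)    # ~2.80 GiB as decimal text

So the ~9 GB figure lines up with the bits-written-as-text reading; the raw binary is closer to 1.2 GiB.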
I don't have to worry that a third-party library with no dependencies suddenly grows 30 transitive dependencies, which can then conflict with other diamond dependencies.
I need my dependency tree to be small to avoid every single factor of friction.
Language-specific package managers are exactly what encourage the exponential explosion of packages, leading to dependency hell (and to major security concerns).
>Language-specific package managers are exactly what encourage the exponential explosion of packages, leading to dependency hell (and to major security concerns).
Package managers are such an odd thing from a social perspective.
You'll see cases like NPM, and to a lesser degree Cargo, where projects have hefty dependency graphs because it's so easy to just pull in one more dependency. On the other side you have C++, which has Conan and vcpkg, but opinions on them are so mixed that people rely on other methods like CMake's FetchContent instead.
I appreciate having tools that let me pull in what I need when I need it, but the dependency explosion is real and I dunno how to have one without the other.
If you require end users (and possibly libraries? IDK) to manually specify every transitive dependency of a dependency (but not hard-code/vendor it), this should act as a forcing function to reduce transitive dependency explosion in libraries, because it would degrade the user experience. I'm not sure users should have to update every dependency by hand: it discourages updates, which can let security bugs persist, but automatic updates make supply-chain attacks easier. AUR helpers generally diff PKGBUILDs before committing them, which partly protects against PKGBUILD attacks but not source attacks, and even distros did not protect against the xz attack.
Another factor is that updating C++ compilers/stdlib tends to break older libraries; I'm not sure if this is any less the case in Rust (unclear? I mostly get trouble with C dependencies) or Python (old Numpy does not supply wheels for newer Python, and ruamel.yaml has some errors on newer Python: https://sourceforge.net/p/ruamel-yaml/tickets/476/).
This is optimizing for the wrong metric, IMO. If I look at the dependency tree of a fairly hefty project in rust, mostly what I see is the same amount of code as an equivalent project in C/C++, just split into multiple packages instead of bundled up into one source tree. Which ironically means packages tend to be able to pull in the minimal amount of excess code through transitive dependencies. All that you'll do with this kind of incentive is push packages into effectively vendoring their dependencies again.
To the best of my knowledge (I only dabble in Rust) there aren't often many breaks, unless code accidentally relied on soundness bugs, which Rust makes no promise of preserving to keep code working.
For recreational programming purposes (and sometimes professional depending on the domain), they really are a distraction.
The existence of a package manager causes a social problem within the language community of excessive transitive dependencies. It makes it difficult to trust libraries and encourages bad habits.
Much like Rust has memory safety benefits as a result of choices that make it difficult to work with in some contexts, the lack of a package manager can have benefits that make a language difficult to work with in certain contexts.
These are all just tradeoffs and I'm glad "no package manager" languages are still being created because I personally enjoy using them more.
I'd rather have my languages focus on being a language and use something non-language-specific like nix or bazel to situate the dependencies.
Sure, the language maintainers will need to provide some kind of API which can be called by the more general-purpose tool, but why not have it be a first-class citizen instead of some kind of foo2nix adapter maintained by a totally separate group of people?
There's no need to have a cozy CLI and a bespoke lockfile format and a dedicated package server when I'll be using other tools to handle those things in a non-language-specific way anyhow.
Big feature for me. In frontend dev, 3k dependencies in a hello-world app is considered normal. In systems, a free-for-all dependency graph is a terrible plan, especially if it's an open ecosystem. NPM, Cargo, etc. are good examples.
This is also why systems people will typically push back if you ask for non-official repos added to apt sources, etc.
> That's just "don't create a non-static method if you don't use 'this'" in other languages with classes
Go doesn't have static methods. Regardless of whether you name the receiver, you have to provide an instance of the type to call any method. Maybe you should check your own knowledge before criticizing others'.