The best experience I’ve had with project management on a team used this method. Well, we used the back of a whiteboard, but the post-its were the real killer feature.
You could just pin those domains as you come across them. I’ve definitely done that for one or two sites where I hit this issue.
Funny you mention Datadog, I specifically struggle to keep their blog/promotion/etc official results out of the way when digging around for documentation and troubleshooting. If I’m trying to find a solution to something weird with Datadog I’m often facing the opposite of your problem and wish Datadog’s site would get out of the way.
Do you have any source or further reading on this topic? The only thing I can readily find is you making similar comments on HN for the last decade, and I’d like to learn more.
This seems like quite a lot of setup and hassle for what could be handled some other way with less fuss, like chamber[0] or Doppler[1]. Heck, even the classic .env seems like a better choice in every way.
What are the advantages of a configuration like this? It seems like the HTTP interface, unencrypted cache, and separate-agent setup wouldn’t be secure enough to satisfy most companies these days.
I think the audience for this is someone who is already using AWS Secrets Manager, but wants to reduce their API usage (perhaps due to cost).
Chamber uses SSM Parameter Store, which for many cases is similar, but some people might have a preference for Secrets Manager. For example, a team might like the automatic RDS password rotation for Secrets Manager and decide to put everything there for consistency.
For Doppler, well maybe someone doesn't want to pay for it, or they'd rather control access to their secrets via IAM instead of through a separate tool.
Normally Boto uses the current account context to get secrets, but if we run a lambda as a local build, it uses this library to pull secrets from the actual dev AWS account.
This makes it easier to onboard new developers, reduces the hassle of figuring out which secrets each lambda needs, etc.
Also if secrets are rotated in dev, local stacks get them automatically.
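Roughly, the local path looks like this (just a sketch, not our actual code; the profile name and secret ID are made up):

    import json
    import boto3

    # Locally, point boto3 at the shared dev account via a named profile;
    # deployed lambdas just use the default credential chain instead.
    session = boto3.session.Session(profile_name="dev")  # placeholder profile name
    client = session.client("secretsmanager")

    # Placeholder secret ID; each lambda declares which secrets it needs.
    resp = client.get_secret_value(SecretId="my-service/db-credentials")
    secret = json.loads(resp["SecretString"])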
I am curious to see if this tool is remarkably different.
It's no joke that AWS Secrets Manager calls add up. At my medium-size US web company, KMS was the second-highest line item for our data lake account last month, after the S3 service cost: S3 at 94% of the total, KMS at 4%, with tax and Kinesis the remaining sizable components.
Chamber can also use S3 + KMS as a backend, which reduces the API costs to ~0 and massively improves the scalability (since SSM has annoyingly low rate limits, or at least it did a few years ago when we last tried it).
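For the curious, the general shape of the S3 + KMS pattern with boto3 is below. This is just the idea, not necessarily how chamber implements its backend, and the bucket, key, and KMS alias are made up:

    import json
    import boto3

    s3 = boto3.client("s3")

    # Write: the object is encrypted server-side with a KMS key, so the
    # decryption on read is a KMS call made by S3, not by your client.
    s3.put_object(
        Bucket="my-secrets-bucket",          # placeholder
        Key="my-service/db-credentials",     # placeholder
        Body=json.dumps({"password": "hunter2"}).encode(),
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/my-secrets-key",  # placeholder
    )

    # Read: a plain GetObject, which scales to S3's request limits rather
    # than SSM's or Secrets Manager's API quotas.
    secret = json.loads(
        s3.get_object(Bucket="my-secrets-bucket",
                      Key="my-service/db-credentials")["Body"].read()
    )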
> The Secrets Manager Agent provides compatibility for legacy applications that access secrets through an existing agent or that need caching for languages not supported through other solutions.
I was going to say you can rotate secrets in secrets manager without redeploying all your services. But this caches the secrets so you'll still get stale results for up to 5 minutes by default. Not sure what the point is then.
> even the classic .env seems like a better choice in every way
That's a pretty thorough misunderstanding of the value that secrets management services provide. We can start with the idea of never storing secrets in files.
I think most companies also understand the difference between plain HTTP localhost loopback and transmitting secrets in plaintext over the network. There are many services that rely on localhost loopbacks for handling all kinds of sensitive data.
Chamber is great but generally relies on transmitting secrets via environment variables to the enclosed process and assumes that they will remain valid for the lifetime of that process. Part of the point of this tool is to provide a secrets cache with a TTL.
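For reference, the read path through the agent looks roughly like this (going from the README; the port, path, header name, and token file location are the documented defaults as I remember them, so double-check before relying on this):

    import json
    import urllib.parse
    import urllib.request

    # The agent writes an SSRF-protection token to a file at startup; this
    # path is the default from the README, as far as I recall.
    token = open("/var/run/awssmatoken").read().strip()

    secret_id = urllib.parse.quote("my-service/db-credentials", safe="")  # placeholder
    req = urllib.request.Request(
        "http://localhost:2773/secretsmanager/get?secretId=" + secret_id,
        headers={"X-Aws-Parameters-Secrets-Token": token},
    )

    # Served from the agent's in-memory cache, so the value can be up to one
    # TTL (5 minutes by default) behind what's actually in Secrets Manager.
    payload = json.loads(urllib.request.urlopen(req).read())
    secret = json.loads(payload["SecretString"])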
This sounds an awful lot like an internal Amazon tool that predates AWS Secrets Manager. It was actually really nice to use; the advantage comes when you can always rely on the daemon being available and can just say "these machines have access to this secret." If you have to set up and configure the VM yourself it's maybe pointless, but it's intended for situations where you're deploying thousands of VMs across many teams and some centralized team is preparing the machine images you use.
I’m biased about this as a boot camp grad turned staff engineer who has worked with a lot of interns from the boot camp I attended that went on to be very successful engineers, but yes, it can definitely help.
Presuming my understanding of persistent sessions lines up with yours, set `terminal.integrated.enablePersistentSessions`. You may also want to change the value of `terminal.integrated.persistentSessionScrollback` to something higher than the default (100, I think).
I was worried about the 1000 search limit, and it has so far (since beta) proven not to be a problem. I use Kagi for all my searches on all machines and my phone and average 800 or so a month. I haven’t modified my behavior at all, and consider myself a heavy search engine user.
I have a theory that I would actually need to do twice as many searches with Google or DuckDuckGo or whatever, since the SEO spam would force me to do more term refinement. With far less of that (and a tiny tiny bit of settings tuning) I get better results much quicker. I’d test it but I have a job to do and can’t swallow the idea of going back to how bad the other viable options really are.
That sounds appealing, but there are so many $10 and $15 monthly warts on my balance that the burden of adding another (even with only a minor overage fee) feels pretty high. I wish their lower tiers had double the current limits.
> I have a theory that I would actually need to do twice as many searches with Google or DuckDuckGo or whatever
FWIW, I counted the searches I made with DDG (by parsing my FF history export) before joining the beta, and it was slightly over 1k; with Kagi my searches are in the 700-800 range.
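If anyone wants to repeat the exercise, something like this against Firefox's places.sqlite gets you in the ballpark (not exactly what I ran; the path and cutoff date are placeholders):

    import sqlite3
    from datetime import datetime, timezone

    # Adjust the path for your Firefox profile directory.
    db = sqlite3.connect("places.sqlite")
    since = int(datetime(2022, 1, 1, tzinfo=timezone.utc).timestamp() * 1_000_000)

    # moz_places holds URLs, moz_historyvisits holds one row per visit;
    # visit_date is microseconds since the epoch.
    count = db.execute(
        """
        SELECT COUNT(*)
        FROM moz_historyvisits v
        JOIN moz_places p ON p.id = v.place_id
        WHERE p.url LIKE 'https://duckduckgo.com/?q=%'
          AND v.visit_date >= ?
        """,
        (since,),
    ).fetchone()[0]
    print(count)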
> I have a theory that I would actually need to do twice as many searches with Google or DuckDuckGo or whatever
This is a great point that should really be highlighted more in their marketing. It was obvious once I read it and shows two benefits: 1. There is probably zero issue with fitting into the 1000 searches per month. 2. It saves time on many of the searches we’re already doing.
> 1. There is probably zero issue with fitting into the 1000 searches per month.
You are never limited to 1000 monthly searches; this is a common misunderstanding about Kagi. After passing your threshold, you are charged 1.5c per query.
I think the plan details are clear enough about that. What I meant is fitting in without incurring extra charges (no point in arguing that 1.5c is small; that’s not the point).
> One question is what if 10/11 gets more than ten items, so there is an 10/11/11, 10/11/12?
I’m not following this (and thus, I think, your entire point). I think you might be slightly misunderstanding something: the files inside a category (11-core in the example) would never have a prefix other than the category. 10/11/11 is the only option; 10/11/12 would break the system.
Once you’re inside a category, there is no further division into tens. The 11 category allows documents from 11.01 to 11.99, and as I believe is mentioned in the spec, if you need more than .99 you likely have too broad a category or area.
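A made-up example of how one area lays out on disk (all names are placeholders):

    10-19 Admin                  <- area
      11 Core docs               <- category
        11.01 Onboarding checklist
        11.02 Team charter
        11.03 2023 planning notes
        ...
        11.99 (hard cap: if you need more, the category is too broad)
      12 Finance
        12.01 Invoices
        ...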
For what it’s worth, I’ve used this system at work and in my own notes for around 2 years and haven’t run into this problem (yet).
OK. So if the 11 category can go to 99, why can't the top level just go from 00 to 99 as well, without being broken into batches of 10 that require another level?
Because most of the time the 11.01-11.99 range is a date series. So the 11.01 item is probably not referred to much, and it's easy to zip down to the stuff you're doing now. But 100 folders at the top level are too many to actually go looking through, especially if you're trying to `ls` them.
Not a professional rower, just looked into form when I was using a rowing machine.