Not for every user or use case. When developing, of course I run claude --do-whatever-u-want; but in a production system or a shared agent use case, I give the agent the least privilege necessary. Being able to spawn POSIX processes is not necessary to analyze OpenTelemetry metric anomalies.
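A minimal sketch of what that looks like in practice, assuming a hypothetical tool dispatcher (the tool names and `dispatch` function here are made up for illustration, not any real agent framework's API):

```python
# Hypothetical least-privilege tool gate for an agent. An anomaly-analysis
# agent gets read-only metric tools on its allowlist -- and nothing that can
# spawn a process.
ALLOWED_TOOLS = {"query_metrics", "read_dashboard"}

def dispatch(tool_name, handlers, **kwargs):
    """Run a tool only if it is on the explicit allowlist."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} not permitted for this agent")
    return handlers[tool_name](**kwargs)

handlers = {"query_metrics": lambda query: f"results for {query}"}
print(dispatch("query_metrics", handlers, query="error_rate"))  # allowed
# dispatch("run_shell", handlers, cmd="...")  # raises PermissionError
```

The point is simply that the deny-by-default check lives outside the model's reach; the agent cannot talk itself into a capability that was never wired up.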
It's one infinitesimally small data point that can't be expected to move the needle.
Maybe if this becomes the standard response it would. But it seems like a ban would have the same effect as the standard response, because the ban would also be present in the next training runs.
I'm not sure that's true. While it obviously won't impact the general behavior of the models much, if you get a very similar situation the model will likely regurgitate something similar to this interaction.
> though IMO that should be a reason to switch ISPs, not a reason to stop using DoT
If you even have that choice. There are many countries that really want to control what their citizens can see and access at this point. If we had DoH + ECH widely adopted, it would heavily limit their power.
I think you are misunderstanding what Cloudflare provides if you think Anubis is an alternative. Even if we only consider bot protection, they are completely different solutions.
CDNs are by nature proprietary because they are infrastructure vendors. They can be built with open-source software, but what they are selling isn't software; it's physical servers. The alternative is going on-prem, which is impossible for a CDN unless you are Google or Meta.
Rust (the game, not the language) is another good exception that is mostly powered by DLC and skins today. Continuous updates with balance changes keep the game fresh, ensuring you maintain a playerbase that will in turn buy DLC.
The codebase & tools at every company I've ever worked at were 99% legacy stuff. It's wild...
Oftentimes it would have been easier to rebuild the whole project than to try upgrading 5-6 year old dependencies.
Ultimately, the companies do not care about these kinds of incidents. They say sorry, everyone laughs at them for a week, and then it's business as usual: that one thing fixed, and still running legacy stuff for everything else.
The company that bought mine spent two years trying to have Team A rewrite a part of our critical service as a separate service, to make it more scalable and robust and to enable it to do more. They wanted to do stupid things like "Let's use gRPC because Google does!" and "Django is slow" and "database access is slow (but we've added like six completely new database lookups per request for, uh, reasons)".
They failed so damn badly it's hilarious, and I feel awful for the somewhat competent coworker who was stuck on that team and had to deal with how awful it was.
Then we fired most of that team like 3 times over because of how value-negative they had been.
Then my coworker and I rebuilt it in Java in 2 months. It is 100x faster, has almost no bugs, and accidentally avoided tons of data-management bugs that plague the Python version (because Java can't have those problems the way we wrote it). I built us tooling to achieve bug-for-bug compatibility (using trivial-to-patch-out helpers), and it is trivially scalable, but it doesn't need to be because it's so much faster and uses way less memory.
If the people in charge of a project are fucking incompetent, then yeah, nothing good will ever happen. But if you have even semi-competent people under reasonable management (neither of us is even close to a rockstar) and the system you are trying to rewrite has obvious known flaws, plenty of times you will build a better system.
For all their boasting, I can't help but wonder how their response would have been different if the attackers actually had gotten their hands on sensitive data.
It's fine for this project, since Google is probably not in the business of triggering exploits in yt-dlp users, but please do not use Deno sandboxing as your main security measure for executing untrusted code. Runtime-level sandboxing is always very weak. Relying on OS-level sandboxing or VMs (Firecracker & co) is the right way to do this.
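For contrast, even a thin OS-level layer is kernel-enforced rather than runtime-enforced. A Python sketch of the idea (illustrative only; a real deployment would use seccomp, namespaces, or a VM, not just rlimits):

```python
import resource
import subprocess
import sys

def limit_resources():
    # Kernel-enforced cap: holds even if the runtime's own sandbox is bypassed.
    # Soft limit 1s of CPU (SIGXCPU), hard limit 2s (SIGKILL).
    resource.setrlimit(resource.RLIMIT_CPU, (1, 2))

def run_untrusted(code: str) -> subprocess.CompletedProcess:
    # POSIX only: preexec_fn runs in the child just before exec.
    return subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=limit_resources,
        capture_output=True,
        timeout=15,
    )

# An infinite loop is killed by the kernel after ~1s of CPU time:
result = run_untrusted("while True: pass")
print(result.returncode)  # negative: terminated by a signal
```

An escape from the interpreter's sandbox buys the attacker nothing against these limits; that's the layering argument in miniature.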
But YouTube is the only site yt-dlp uses Deno for. No other website on yt-dlp's list has put up enough of a fight to merit an external JS runtime; only YouTube.
From the September announcement:
> The JavaScript runtime requirement will only apply to downloading from YouTube. yt-dlp can still be used without it on the other ~thousand sites it supports
I would not put it past them. And I'm not sure I trust the yt-dlp team to implement sandboxing securely. The codebase is already full of shortcuts that lead to vulnerabilities like file extension injection.
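To illustrate the class of bug being referred to (a generic sketch, not yt-dlp's actual code): when untrusted metadata flows into an output filename, an attacker-controlled title or extension can escape the download directory or change the file type.

```python
import os
import re

def unsafe_output_path(outdir: str, title: str, ext: str) -> str:
    # Nothing stops title from containing "../" or ext from being "py".
    return os.path.join(outdir, f"{title}.{ext}")

def safer_output_path(outdir: str, title: str, ext: str) -> str:
    # Strip path separators/specials and pin the extension to an allowlist.
    safe_title = re.sub(r"[^\w\- ]", "_", title)
    if ext not in {"mp4", "mkv", "webm", "m4a"}:
        ext = "bin"
    return os.path.join(outdir, f"{safe_title}.{ext}")

print(unsafe_output_path("out", "../../evil", "py"))  # escapes "out"
print(safer_output_path("out", "../../evil", "py"))   # stays under "out"
```

A codebase that builds paths the first way in enough places will keep producing this kind of vulnerability regardless of how good its JS sandbox is.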
I mean, this gives me pause:
> Both QuickJS and QuickJS-NG do not fully allow executing files from stdin, so yt-dlp will create temporary files for each EJS script execution. This can theoretically lead to time-of-check to time-of-use (TOCTOU) vulnerabilities.
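A TOCTOU here means the script is written at one moment but executed from its *path* at a later one. A stripped-down Python illustration of the race window (generic, not yt-dlp's code):

```python
import os
import tempfile

# Step 1 ("check"/write): persist the trusted script to a temp path.
fd, path = tempfile.mkstemp(suffix=".js")
os.write(fd, b"console.log('trusted')")
os.close(fd)

# --- race window: anything with write access to the temp directory can
# swap the file's contents between the write above and the read below ---
with open(path, "wb") as f:          # simulating the attacker's swap
    f.write(b"console.log('evil')")

# Step 2 ("use"): the runtime re-opens by *path*, not the original fd,
# so it executes whatever is there now.
with open(path, "rb") as f:
    executed = f.read()
print(executed)  # the swapped contents, not what was originally written
os.unlink(path)
```

Passing the script over stdin (or keeping the original file descriptor open) closes the window, which is exactly why the quoted note calls out the stdin limitation.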
It used to be 100% runtime-level, and that was the golden age of browser exploits. Each of your tabs is now a separate process that the OS sandboxes; for anything beyond JS/rendering (cookie management, etc.), they can only access a specific API over IPC. An exploit in V8 today only gives access to this API. A second exploit is needed in that API to escape the sandbox and do anything meaningful on the target system.
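That split can be sketched as a toy broker in Python (purely illustrative; real browsers use OS primitives and IPC layers like Mojo, not this): the sandboxed side holds no privileges of its own and can only ask a broker that services a deliberately narrow API.

```python
import multiprocessing as mp
import threading

# The entire privileged surface: one narrow, auditable API.
BROKER_API = {
    "get_cookie": lambda jar, name: jar.get(name, ""),
}

def broker(conn, cookie_jar):
    """Privileged side: services requests from the sandboxed renderer."""
    while True:
        msg = conn.recv()
        if msg is None:
            break
        op, arg = msg
        handler = BROKER_API.get(op)
        # Anything outside the API is refused, never executed.
        conn.send(handler(cookie_jar, arg) if handler else "denied")

parent_end, child_end = mp.Pipe()
t = threading.Thread(target=broker, args=(parent_end, {"session": "abc123"}))
t.start()

child_end.send(("get_cookie", "session"))
print(child_end.recv())                      # within the API: succeeds
child_end.send(("read_file", "/etc/passwd"))
print(child_end.recv())                      # outside the API: "denied"
child_end.send(None)
t.join()
```

A compromised renderer in this model can only speak this protocol; "read_file" never reaches the filesystem, which is why a second bug in the broker itself is needed to escape.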
Yes, but browser sandboxing is an absolute marvel of software design that also cost millions and millions of dollars in developer salaries and CVE bounties to develop. Neither Deno nor yt-dlp has anywhere close to millions of dollars to spend on implementing secure JS sandboxing.
Yes, and it's only reasonably secure because of years of exploits being found and fixed by some of the best (and very well-funded) software security engineers out there.
That's not true. It's secure because they stack OS sandboxing on top, forcing attackers to find a chain of exploits instead of a single issue in V8.
While you benefit from the V8 fixes, it lacks OS-level sandboxing (see above). Chrome is safe because it stacks security layers; runtime sandboxing is just one of them, and arguably the weakest one.
I'm assuming it's the render engine that is pure CSS. You could display a static map in CSS, but things like the tools for modifying the terrain definitely need JS.
The top-right button has a "Download code" option which gives you a .zip file. That .zip file doesn't have any JS in it, and it renders the terrain just like in the online editor; you can turn off JS and it still works.