arbll's comments (Hacker News)

It might be the wrong place to do security anyway since `bash` and other hard-to-control tools will be needed. Sandboxing is likely the only way out


Not for every user or use case. When developing, of course I run claude --do-whatever-u-want; but in a production system or a shared-agent use case, I'm giving the agent the least privilege necessary. Being able to spawn POSIX processes is not necessary to analyze OpenTelemetry metric anomalies.


Technically it will, since this interaction will be commented on a lot online, which will feed back into the next models' training runs.


It's one infinitesimally small data point that can't be expected to move the needle.

Maybe if this becomes the standard response it would. But it seems like a ban would have the same effect as the standard response, because the ban would also be present in the next training runs.


I'm not sure that's true. While it obviously won't impact the general behavior of the models much, if you get a very similar situation the model will likely regurgitate something similar to this interaction.


> though IMO that should be a reason to switch ISPs, not a reason to stop using DoT

If you have that choice. There are many countries that really want to control what their citizens see and can access at this point. If we had DoH + ECH widely adopted, it would heavily limit their power.


Ah yes, because both of those alternatives are non-profits, right?


You can sponsor Anubis right now and start supporting alternatives.


I think you are misunderstanding what cloudflare provides if you think Anubis is an alternative. Even if we only consider bot protection they are completely different solutions.


> I think you are misunderstanding what cloudflare provides if you think Anubis is an alternative...

Do you have an open source alternative, especially one you are donating to?

Because I would love to see Anubis and other open alternatives thrive much more than closed ones like Cloudflare.


CDNs are by nature proprietary because they are infrastructure vendors. They can be built with open source software, but what they are selling isn't software; it's physical servers. The alternative is going on-prem, which is impossible for a CDN unless you are Google or Meta.


Rust (the game, not the language) is another good exception that is mostly powered by DLCs and skins today. Continuous updates with balance changes keep the game fresh, ensuring you keep the playerbase that will in turn buy DLCs.


> The attackers gained access to a legacy, third-party cloud file storage system.

I think the answer is OK, but the "third-party" bit reads like trying to deflect part of the blame onto the cloud storage provider.


The whole codebase & toolset at every company I've ever worked at was 99% legacy stuff. It's wild...

Oftentimes it would have been easier to rebuild the whole project than to try to upgrade 5-6 year old dependencies.

Ultimately the companies do not care about these kinds of incidents. They say sorry, everyone laughs at them for a week, and then it's business as usual, with that one thing fixed and legacy stuff still rolling for everything else.


All stuff is legacy the moment you deploy it.

All work created by a company decays, it's legacy code within months.


Yeah, it shouldn't be this way. It's only happening due to a lack of standards and the software world essentially being the wild west.


> Often times it would have been easier to rebuild the whole project

Sure buddy, sure


The company that bought mine spent two years trying to have Team A rewrite a part of our critical service as a separate service, to make it more scalable and robust and to let it do more. They wanted to do stupid things like "let's use gRPC because Google does!" and "Django is slow" and "database access is slow (but we've added like six completely new database lookups per request for, uh, reasons)".

They failed so damn badly, and it's hilariously bad, and I feel awful for the somewhat competent coworker who was stuck on that team and dealt with how awful it was.

Then we fired most of that team like 3 times because of how value-negative they had been.

Then my coworker and I rebuilt it in Java in 2 months. It is 100x faster and has almost no bugs, and it accidentally avoided tons of data-management bugs that plague the Python version (because Java can't have those problems the way we wrote it). I built us tooling to achieve bug-for-bug compatibility (using trivial-to-patch-out helpers), and it is trivially scalable, but it doesn't need to be because it's so much faster and uses way less memory.

If the people in charge of a project are fucking incompetent, yeah, nothing good will ever happen. But if you have even semi-competent people under reasonable management (neither of us is even close to a rockstar) and the system you are trying to rewrite has obvious known flaws, plenty of the time you will build a better system.


But the issue wasn't Python or Django, RPC or REST;

it was the ORM and the queries themselves.


I inherited a few codebases as a solo dev and I am confident in my ability to refactor each of them in 1-2 months without issues.

I can imagine that in a team that might be harder, but these are glorified todo apps. I am well aware that complete rebuilds rarely work out.


For all their boasting, I can't help but wonder how their response would have been different if the attackers actually had gotten their hands on sensitive data.


It's fine for this project, since Google is probably not in the business of triggering exploits in yt-dlp users, but please do not use Deno sandboxing as your main security measure for executing untrusted code. Runtime-level sandboxing is always very weak. Relying on OS-level sandboxing or VMs (Firecracker & co) is the right way to do this.
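To make the OS-level approach concrete, here is a minimal sketch of one way to do it on Linux, using bubblewrap (`bwrap`) to run a JS interpreter inside fresh namespaces. The flag set below is an illustrative assumption, not a vetted policy, and the helper name is hypothetical; the point is that even a full runtime escape lands in an empty, network-less namespace.

```python
import subprocess  # for the commented-out invocation below

# Hypothetical helper: build a bubblewrap command line that runs
# `runtime` on `script` inside unshared Linux namespaces.
def sandboxed_argv(runtime: str, script: str) -> list[str]:
    return [
        "bwrap",
        "--unshare-all",               # new PID/net/IPC/mount namespaces: no network
        "--die-with-parent",           # tear down the sandbox with the parent
        "--ro-bind", "/usr", "/usr",   # read-only view of the interpreter and libs
        "--ro-bind", script, "/script.js",
        "--tmpfs", "/tmp",             # throwaway scratch space only
        runtime, "/script.js",
    ]

# Example (requires bubblewrap installed):
# subprocess.run(sandboxed_argv("/usr/bin/qjs", "challenge.js"), check=True)
```

A real policy would also drop capabilities and filter syscalls (seccomp), but even this minimal shape already forces an attacker to chain a namespace escape on top of any runtime exploit.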


> It's fine for this project, since Google is probably not in the business of triggering exploits in yt-dlp

yt-dlp supports a huge list of websites other than youtube


But YouTube is the only one that yt-dlp uses Deno for. No other website on yt-dlp's list has put up enough of a fight to merit an external JS runtime; only YouTube.

From the September announcement:

> The JavaScript runtime requirement will only apply to downloading from YouTube. yt-dlp can still be used without it on the other ~thousand sites it supports


I assumed they only use this setup for YouTube; that might be wrong.


Is there a full list? I struggled to find one



Thanks!


There's a supportedsites.md file in the base directory of the git repo.


Thanks!


I would not put it past them. And I'm not sure I trust the yt-dlp team to implement sandboxing securely. The codebase is already full of shortcuts that lead to vulnerabilities like file extension injection.

I mean, this gives me pause:

> Both QuickJS and QuickJS-NG do not fully allow executing files from stdin, so yt-dlp will create temporary files for each EJS script execution. This can theoretically lead to time-of-check to time-of-use (TOCTOU) vulnerabilities.

https://github.com/yt-dlp/yt-dlp/wiki/EJS

TOCTOU from temporary files is a solved problem.
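For reference, the standard fix is to never reopen the temp file by path: create it with `mkstemp` (which uses `O_CREAT|O_EXCL`), unlink the name immediately, and hand the child the file descriptor itself. A minimal, Linux-oriented Python sketch (the helper name is hypothetical, and this is not yt-dlp's actual code):

```python
import os
import subprocess
import sys
import tempfile

def run_via_fd(source: str) -> str:
    """Run `source` in a child interpreter without a reopenable path."""
    # mkstemp creates the file with O_CREAT|O_EXCL and mode 0600, so an
    # attacker cannot have pre-placed a symlink at that name.
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, source.encode())
        # Unlink the name right away: only our open fd now references the
        # inode, so there is no window for a path-based swap.
        os.unlink(path)
        # Pass the fd itself to the child; on Linux, /dev/fd/N resolves to
        # that same inode regardless of what happens in the filesystem.
        result = subprocess.run(
            [sys.executable, f"/dev/fd/{fd}"],
            pass_fds=(fd,),
            capture_output=True,
            text=True,
        )
        return result.stdout
    finally:
        os.close(fd)
```

Here the "check" and the "use" both go through the same open descriptor, so there is nothing for an attacker to race.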


I wonder if it would be legal if they did, as an anti-circumvention countermeasure.


> Runtime-level sandboxing is always very weak. Relying on OS-level sandboxing or VMs (Firecracker & co) is the right way to do this.

... Isn't the web browser's sandboxing runtime-level?


It used to be 100% runtime-level, and that was the golden age of browser exploits. Each of your tabs is now a separate process that the OS sandboxes. For anything that goes beyond JS/rendering (cookie management, etc.), a tab can only access a specific API over IPC. An exploit in V8 today only gives access to this API. A second exploit is needed in this API to escape the sandbox and do anything meaningful on the target system.
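The broker pattern described above can be illustrated with a toy simulation (one Python process standing in for the two sides; this is not Chrome's actual IPC, and all names are made up): the "renderer" can only ask the privileged side for whitelisted operations, so a compromised renderer still can't touch anything else.

```python
import threading
import multiprocessing as mp

# Operations the privileged broker is willing to perform on behalf
# of the sandboxed renderer. Everything else is refused.
ALLOWED = {"get_cookie"}

def broker(conn, cookies):
    # Privileged side: serve requests until the renderer says it's done.
    while True:
        req = conn.recv()
        if req is None:
            break
        if req.get("op") in ALLOWED:
            conn.send({"ok": True, "value": cookies.get(req["name"])})
        else:
            # e.g. a filesystem read attempted after a renderer compromise
            conn.send({"ok": False, "error": "forbidden"})

def renderer(conn):
    # Sandboxed side: one legitimate request, one out-of-policy request.
    conn.send({"op": "get_cookie", "name": "session"})
    allowed = conn.recv()
    conn.send({"op": "read_file", "name": "/etc/passwd"})
    denied = conn.recv()
    conn.send(None)  # tell the broker we're done
    return allowed, denied

def demo():
    broker_end, renderer_end = mp.Pipe()
    t = threading.Thread(target=broker, args=(broker_end, {"session": "abc"}))
    t.start()
    outcome = renderer(renderer_end)
    t.join()
    return outcome
```

The security property is exactly the narrowness of `ALLOWED`: an attacker with full control of the renderer side still has to find a second bug, this time in the broker's small API surface.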


Yes, but browser sandboxing is an absolute marvel of software design that also cost millions and millions of dollars in developer salaries and CVE bounties to develop. Neither Deno nor yt-dlp has anywhere close to millions of dollars to spend on implementing secure JS sandboxing.


Yes, and it's only reasonably secure because of years of exploits being found and fixed by some of the best (and very well-funded) software security engineers out there.


That's not true. It's secure because they stack OS sandboxing on top, forcing attackers to find a chain of exploits instead of a single issue in V8.


Great news! Deno uses the same runtime as Chrome, so you benefit from the fixes for all those found exploits.


While you benefit from the V8 fixes, it lacks OS-level sandboxing (see above). Chrome is safe because it stacks security layers; runtime sandboxing is just one of them, and arguably the weakest one.


I'm assuming it's the rendering itself that is pure CSS. You could display a static map in CSS, but things like the tools to modify the terrain definitely need JS.


You might not need it, using the new :has() selector with different inputs as modifiers. Though that's a lot of :has(), and it probably would kill performance.


I wanted to check if your assumption is correct but I couldn’t find the source code.

Why do you think the renderer is pure CSS and not, e.g., mostly CSS?


There's a "Download code" button in the top right which gives you a .zip file. That .zip file doesn't have any JS in it and renders the terrain just like in the online editor; you can even turn off JS and it still works.

Edit: someone else wrote basically the same an hour ago: https://news.ycombinator.com/item?id=45814791


Looks like it’s a “(css-only terrain) generator” - a generator that lets the user create and download a css only terrain.

As opposed to a “css-only (terrain generator)” - a terrain creation studio built with css only.


GP linked an example of a similar project that lets you modify the terrain without any JS at all.


It is based on the impact on Datadog's customers, not on synthetic queries/pings.


A single region that is a SPOF for global AWS services*


Are us-east-2 services impacted today? Which ones?

