ahmedtd's comments | Hacker News

If they are using multitenant Docker / containerd containers with no additional sandboxing, then yes, it's only a matter of time and attacker interest before a cross-tenant compromise occurs.


There isn't realistic sandboxing you can do with shared-kernel multitenant general-workload runtimes. You can do shared-kernel with a language runtime, like V8 isolates. You can do it with WASM. But you can't do native binary Unix execution and count on sandboxing to fix the security issues, because there's a track record of local privilege escalations (LPEs) through seemingly benign system calls.


GKE does ship with both Ingress and Gateway controllers integrated; they set up GCP load balancers, with optional automatic TLS certificates.

I think you need to flip a flag on the cluster object to enable the Gateway controller.


That page seems to be a community wiki, and I think the original authors are somewhat confused on that point.

If you salt and hash the password on the client side, how is the server going to verify the password? Everything I can think of either requires the server to store the plaintext password (bad) or basically makes the hashed bytes become the plaintext password (pointless).

There are password-based solutions that work like this --- PAKEs like Secure Remote Password (SRP): https://www.ietf.org/rfc/rfc2945.txt

They have low uptake because they don't really offer any security beyond just sending the plaintext password over a properly-functioning TLS channel.


Good catch on the wiki authors.

But I think the point of salting + hashing the password isn't quite the same as what TLS offers. It's not necessarily to prevent MITM eavesdropping, but to help protect users who re-use credentials from the fallout of a leak.

What I was taught is that your server should never have the user's cleartext password to begin with, only the salted hash. As soon as they set it, the server only ever gets (and saves) the salted hash. That way, in the worst-case scenario (data leak or rogue employee), your users would at most have their accounts with you compromised. The salted hashes are useless anywhere else (barring quantum decryption). To you they're password equivalents, but they turn the user's weak reused password (that they may be using for banking, taxes, etc.) into a strong salted hash that can't be used anywhere else.

That's the benefit of doing it serverside, at least.

Doing it clientside, too, means that the password itself is also never sent over the wire, just the salted hash (which is all the server needs, anyway), limiting the collateral damage if it IS intercepted in transit. But with widespread HTTPS, that's probably not a huge concern. I do think it can help prevent accidental leaks, though, like if your auth endpoint were misconfigured and caching or logging requests with cleartext passwords... again, just to protect the user from leaks.
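To make the server-side half of this concrete, here's a minimal Go sketch using golang.org/x/crypto/bcrypt (one common choice of password KDF; the function names are just illustrative). The key point is that only the salted hash is ever stored:

    package main

    import (
        "fmt"

        "golang.org/x/crypto/bcrypt"
    )

    // storeCredential is what the server persists: never the cleartext
    // password, only the salted hash. bcrypt generates a random salt and
    // embeds it in the returned hash.
    func storeCredential(password string) ([]byte, error) {
        return bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
    }

    // verifyCredential re-derives the hash from the submitted password and
    // the salt embedded in the stored hash, and compares in constant time.
    func verifyCredential(storedHash []byte, password string) bool {
        return bcrypt.CompareHashAndPassword(storedHash, []byte(password)) == nil
    }

    func main() {
        hash, err := storeCredential("correct horse battery staple")
        if err != nil {
            panic(err)
        }
        fmt.Println("stored:", string(hash)) // safe to persist; leaking it reveals no reusable password
        fmt.Println("match:", verifyCredential(hash, "correct horse battery staple")) // true
        fmt.Println("match:", verifyCredential(hash, "wrong password"))               // false
    }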


In Washington, cyclists can treat stop signs as yield signs, as long as there is no other traffic approaching the intersection.

Obviously, though, if you almost hit a pedestrian then you aren't properly yielding.


Related (though not a sator square). A while back I made an implementation of 5x5 word squares, following an example from Knuth: https://row-major.net/articles/2020-05-12-interactive-word-s...


Disclosure: I work in GCP engineering, thoughts are my own and not Google's, etc.

My impression is that Anthos is probably not what you need if your use case is deployment of a managed product into customer GCP projects (or AWS accounts).

Instead, copy the P4SA architecture that GCP uses for managing its own services in your project. Create one service account per customer, and have the customer grant that service account whatever permissions your control plane needs to manage the resources deployed into the customer project.

You can package those permissions into a Role for easier use.
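If it helps, here's a hedged Go sketch of the vendor-side half of that pattern, using google.golang.org/api/iam/v1. The project and customer IDs are placeholders and the role name in the comment is hypothetical; the grant itself is performed by the customer in their own project:

    package main

    import (
        "context"
        "fmt"
        "log"

        iam "google.golang.org/api/iam/v1"
    )

    // createTenantServiceAccount creates one service account per customer in
    // the vendor's control-plane project, mirroring the P4SA pattern. The
    // customer then grants this account a (custom) role in *their* project,
    // e.g. with:
    //   gcloud projects add-iam-policy-binding CUSTOMER_PROJECT \
    //     --member="serviceAccount:<email below>" --role="<your packaged role>"
    func createTenantServiceAccount(ctx context.Context, controlPlaneProject, customerID string) (*iam.ServiceAccount, error) {
        svc, err := iam.NewService(ctx)
        if err != nil {
            return nil, err
        }
        req := &iam.CreateServiceAccountRequest{
            AccountId: "tenant-" + customerID, // must be 6-30 chars, lowercase
            ServiceAccount: &iam.ServiceAccount{
                DisplayName: "Control plane agent for customer " + customerID,
            },
        }
        return svc.Projects.ServiceAccounts.Create("projects/"+controlPlaneProject, req).Do()
    }

    func main() {
        ctx := context.Background()
        sa, err := createTenantServiceAccount(ctx, "my-control-plane-project", "acme")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("created:", sa.Email)
    }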

You can see how this works by looking at Google's existing P4SA permissions in one of your cloud projects. They show up in your cloud IAM console if you remove the filter for "Google-Managed Grants".


The goal was really to stand up stuff via Config Sync / Config Controller, then hook it into Private Service Connect endpoints which are exposed to the customer's cloud. As far as I know, that's how Elastic and similar companies do it (at least from the developer's angle, we get a provisioned GCP project and/or PSC endpoint).

You're right that we don't need Service Mesh, and perhaps most of the Anthos suite, but Config Management from Git is pretty slick (if only it worked as advertised).

Anyway, this is good guidance and I will see if I can wiggle out of Anthos, but that was our intent/understanding in trying it.


Don't the bits come off the wire one at a time at the server as well? Any ability to read() from multiple sockets coming over the same interface is enabled by the kernel reading the data serially and placing it in buffers.


Not when the counter overflows back to 0. If it's a 3-bit counter, 0 is A again, not C.


The comment says a 32-bit signed int. Where is the 3-bit assumption coming from?


The comment starts with that assumption for the sake of a concise example. Do you expect them to write the whole sequence out using a 32-bit counter? :-D


When you divide 2^32 across 3 machines, you get 1431655766, 1431655765, 1431655765. They're nearly identical, just an itsy bitsy bit different. This isn't going to cause one machine to have noticeably less load than the others.
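For the curious, a tiny Go sketch that reproduces those counts, assuming the counter wraps over the full unsigned 32-bit range:

    package main

    import "fmt"

    func main() {
        const n = uint64(1) << 32 // number of distinct uint32 counter values
        const servers = 3

        // Count how many counter values map to each server under counter % servers.
        for r := uint64(0); r < servers; r++ {
            // Values r, r+servers, r+2*servers, ... up to n-1.
            count := (n-1-r)/servers + 1
            fmt.Printf("server %d gets %d values\n", r, count)
        }
        // Output:
        // server 0 gets 1431655766 values
        // server 1 gets 1431655765 values
        // server 2 gets 1431655765 values
    }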


That's not what's being discussed here though.


Then what is being discussed? I thought this thread was about 3 servers, of which one has lower load due to an int32 overflowing.


Then it's irrelevant. The point is that the remainders are evenly distributed as the integer is incremented, with at most a single round being short. That's just how the math works.


"A single round being short" is exactly what they're talking about.

Edit to add: whether that's a big enough effect for the use case they're talking about, I don't know. This sort of thing is definitely significant in cryptographic code, though.


Not really the point of your comment, but...

The GKE equivalent of EKS IRSA is GKE Workload Identity.

It's pretty much the same user experience:

* Enable Workload Identity on your cluster

* Create a GCP service account

* Grant your Kubernetes service account permission to act as the GCP service account.

It's a bit more seamless because you don't need to upgrade your client libraries. Instead there is an on-node metadata server that provides access tokens to workloads.
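To illustrate the "no client library changes" point, here's a minimal Go sketch. There's nothing Workload Identity-specific in the code; plain Application Default Credentials just work because, inside an enabled pod, they resolve to the GKE metadata server:

    package main

    import (
        "context"
        "fmt"
        "log"

        "golang.org/x/oauth2/google"
    )

    func main() {
        ctx := context.Background()

        // Inside a Workload Identity-enabled pod, Application Default
        // Credentials resolve to the GKE metadata server, which mints access
        // tokens for the GCP service account bound to this Kubernetes SA.
        // No key files, no special client libraries.
        creds, err := google.FindDefaultCredentials(ctx, "https://www.googleapis.com/auth/cloud-platform")
        if err != nil {
            log.Fatal(err)
        }

        tok, err := creds.TokenSource.Token()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("got access token, expires:", tok.Expiry)
    }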

Disclosure: I work on this


Thanks. I may have to work on this pretty soon!


I'm pretty sure v2 and v1 functions are totally separate. If you run `gcloud functions list --log-http`, you can see that gcloud makes separate calls to the v2 and v1 apis in order to present a unified list of all functions.
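A rough Go sketch of what that merged listing looks like against the two API surfaces directly (project ID is a placeholder; roughly what gcloud appears to be doing under the hood):

    package main

    import (
        "context"
        "fmt"
        "log"

        cloudfunctionsv1 "google.golang.org/api/cloudfunctions/v1"
        cloudfunctionsv2 "google.golang.org/api/cloudfunctions/v2"
    )

    func main() {
        ctx := context.Background()
        // "-" is the wildcard location; swap in a real project ID.
        parent := "projects/my-project/locations/-"

        // 1st gen functions come from the v1 API...
        v1svc, err := cloudfunctionsv1.NewService(ctx)
        if err != nil {
            log.Fatal(err)
        }
        v1resp, err := v1svc.Projects.Locations.Functions.List(parent).Do()
        if err != nil {
            log.Fatal(err)
        }
        for _, f := range v1resp.Functions {
            fmt.Println("v1:", f.Name)
        }

        // ...and 2nd gen functions from the separate v2 API.
        v2svc, err := cloudfunctionsv2.NewService(ctx)
        if err != nil {
            log.Fatal(err)
        }
        v2resp, err := v2svc.Projects.Locations.Functions.List(parent).Do()
        if err != nil {
            log.Fatal(err)
        }
        for _, f := range v2resp.Functions {
            fmt.Println("v2:", f.Name)
        }
    }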

