If I can build the Nitro image myself, I would just self-host it.
If this is “self hosted ML” then that’s great, and it is secure!
…
That obviously isn’t their business model.
(Or is it? Did I totally misunderstand the offering?)
import blindai
blindai.api.Completion.complete("I love AI…")
...I dunno, I feel like they'll say something like:
- You have a client side encryption key
- You encrypt the data before you submit it
- The encrypted data is put on the image along with our Magic Sauce
- In the secure enclave, the data is decrypted and processed
- The result is encrypted
- The encrypted result is returned to you
- You decrypt it locally!
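Concretely, that guessed flow would read something like the sketch below. The encrypt/decrypt calls are real (Fernet from the cryptography package), but the hand-off in the middle is a made-up stand-in, because that's exactly the part that never gets explained:

# Hypothetical sketch of the guessed flow above; not anyone's real API.
from cryptography.fernet import Fernet

key = Fernet.generate_key()             # you have a client-side encryption key
box = Fernet(key)

payload = box.encrypt(b"I love AI")     # you encrypt the data before you submit it

# ...payload supposedly goes onto their image with the Magic Sauce, gets
# processed, and an encrypted result comes back. For a model to run on the
# plaintext, though, something on their side needs the key.
encrypted_result = payload              # stand-in for whatever comes back

print(box.decrypt(encrypted_result).decode())   # you decrypt it locally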
The problem with that is there's a whole lot of magic hand-waving about how the key that theoretically never leaves your machine ends up inside the container image without them having access to it. How does it actually work?
The client-side SDK we provide manages the security for you. Since both the client side and the server are open source, you can verify the claims I am about to make. Here is what happens behind those few lines of code:
- The enclave uses primitives from AWS that we cannot fake to create a certificate containing a hash of the loaded code, a public key for key exchange, and other security information. This certificate is signed by a hardware-derived AWS key, so we cannot forge such certificates. I talk about AWS here, but it can be Intel or AMD depending on the solution you choose.
- You receive the certificate and check locally that it is valid, using AWS's public key.
- Once you know, locally on your machine, that you are talking to a genuine enclave without a backdoor that will handle your data properly (because the certificate is valid and the code hash it contains matches the audited open-source code), you finish setting up the TLS channel using the public key inside the certificate.
- Data is encrypted locally and sent to the enclave over this TLS channel.
- Data is decrypted there, where we cannot peek thanks to the enclave's isolation.
- The AI model is applied inside the enclave.
- The output is encrypted inside the enclave.
- The encrypted output is sent back to you, and only you can decrypt it.
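To make the client side of that concrete, here is a rough sketch of the pattern being described: verify the attestation first, then pin the TLS channel to the attested certificate. The endpoint, the attestation-document fields, and the expected hash are placeholders invented for illustration, and the AWS signature check is only named in a comment, so treat this as a sketch of the pattern rather than the actual SDK code.

import json
import socket
import ssl

EXPECTED_CODE_HASH = "…"   # hash of the audited open-source enclave build (assumed known)

def check_attestation(att_doc: dict) -> str:
    # In the real flow you would first verify that att_doc is signed by the
    # hardware-derived AWS key, chaining up to AWS's published root, which
    # proves it came from a genuine enclave and cannot be forged.
    # Then the code measurement is compared against the audited build, so a
    # backdoored image gets rejected right here, on your machine.
    if att_doc["code_hash"] != EXPECTED_CODE_HASH:
        raise RuntimeError("enclave is not running the audited code")
    # The document also carries the enclave's public key / TLS certificate.
    return att_doc["tls_certificate_pem"]

def query_enclave(att_doc: dict, prompt: str) -> str:
    enclave_cert = check_attestation(att_doc)

    # Trust only the attested certificate, not the usual CA bundle, so the
    # TLS channel can only terminate inside the verified enclave.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False                       # identity comes from attestation, not DNS
    ctx.load_verify_locations(cadata=enclave_cert)

    with socket.create_connection(("enclave.example.com", 443)) as raw:   # made-up endpoint
        with ctx.wrap_socket(raw) as tls:
            tls.sendall(json.dumps({"prompt": prompt}).encode())   # encrypted locally, sent over TLS
            return tls.recv(65536).decode()                        # response decrypted locally by TLS

The point is that the trust decision happens on your machine before any data leaves it: if the signature or the code hash does not check out, no channel gets set up and nothing is sent.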