It looks awesome! I have a clarification question on this one:

> Using the SDK, your program prepares API requests describing pipelines to run, then sends them to the engine. The wire protocol used to communicate with the engine is private and not yet documented, but this will change in the future. For now, the SDK is the only documented API available to your program.

Does this mean the SDK is making a round trip to the Dagger API remotely somewhere, or is the round trip to a locally running Docker container?




> Does this mean the SDK is making a round trip to the Dagger API remotely somewhere, or is the round trip to a locally running Docker container?

The short answer, for now, is: "it's complicated" :) There's a detailed explanation of the Dagger Engine architecture here: https://github.com/dagger/dagger/issues/3595

To quote relevant parts:

> The engine is made of 2 parts: an API router, and a runner.

> - The router serves API queries and dispatches individual operations to the runner.

> - The runner talks to your OCI runtime to execute actual operations. This is basically a buildkit daemon + some glue.

> The router currently runs on the client machine, whereas the runner is on a worker machine that will run the containers. This could be the same machine but typically isn’t.

> Eventually we will move the router to a server-side component, tightly coupled and co-located with the runner. This will be shipped as an OCI image which you will be able to provision, administer and upgrade yourself to your heart’s content. This requires non-trivial engineering work, in order to make the API router accessible remotely, and multi-tenant.
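To make that concrete: from your program's point of view, none of the router/runner split is visible, you just talk to the SDK, and Connect attaches to the locally running router. A rough sketch along the lines of the Go SDK examples (option and method names may differ between releases, so treat this as illustrative rather than exact):

    package main

    import (
        "context"
        "fmt"
        "os"

        "dagger.io/dagger"
    )

    func main() {
        ctx := context.Background()

        // Connect starts (or attaches to) the local API router.
        client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // The SDK builds an API request describing this pipeline and sends
        // it to the engine; the router dispatches the actual container
        // operations to the runner (buildkit).
        out, err := client.Container().
            From("alpine:3.16").
            WithExec([]string{"echo", "hello from dagger"}).
            Stdout(ctx)
        if err != nil {
            panic(err)
        }

        fmt.Println(out)
    }

So the "round trip" today is from your program to that local router; whether the containers themselves run locally or on a remote worker depends on where the runner lives.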


Hi! The round trip goes to the locally running engine.



