
I had the pleasure of briefly working with the author of this post within the last few years. Jeff was one of the most enlightening and positive people I've ever learned from. He was refreshingly honest about what challenges he was having, and delightfully accessible for mentorship and advice.


Depot also does remote Docker builds using a remote BuildKit agent. It was actually their original product. If you could feasibly put everything into a Dockerfile, including running your tests, then you could use that product and get the benefits.


I actually didn't know this. We've had some teething issues _building_ in Docker, but we do run our services in containers. I'm sure a few hours of banging my head against a wall would be worth it here.

> including running your tests

"Thankfully", we use Maven, which means that our tests are part of the build lifecycle. It's a bit annoying because our CI provider has some neat parallelism features that we could lean on if we could separate the test phase from the build phase. We use docker-compose inside our builders for dev dependencies (we run our tests against a real database running in Docker), but I think those should be our only major issues here.

But thanks for the heads-up.


> It’s simple to create a Dockerfile that containerizes your app. It’s even easier to create one that has terrible build performance, is chock-full of CVEs, is north of 20GB in size, and whatever foot gun you trip over when using Docker

Writing Dockerfiles has been fairly common practice for years now. Years. And yet it's still routine for people to write poorly optimized ones.

I think we should start admitting that learning to write a Dockerfile well is too much overhead for most people.

That being said, I've known Kyle for a while now. The team at Depot has consistently shown the deepest possible understanding of the container ecosystem. I'm very excited to see what else they do.


Awesome work! Congratulations on the launch. This reminds me a lot of https://depot.dev

I'm not officially affiliated with them at all. But I'm a big fan of their product.

It appears that one difference, though, is that Depot is more focused on just Docker builds while y'all are more generalized runners. Is that right?


That's true. We have a slightly different focus: CI workloads. However, the goodness of depot.dev comes from BuildKit remote builders and a remote cache. That'll be natively integrated into our runners in ~2 weeks.

So you'll get that goodness when running CI, with zero changes needed to your actions.


As opposed to keeping all of your servers independent of each other, supercomputers are used any time you want to treat the entire cluster as if it were one computer.

In other words, they're used when you want to share some kind of state across all of the computers, without the potential overhead of communicating with some other system like a database.

Physics simulations and molecular modeling come to mind as common examples.

In the case of ML training, the model parameters, along with the deltas that get calculated and broadcast during training, are that shared state.
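
To make the shared-state idea concrete, here's a minimal sketch of that synchronization step using MPI, which is the usual interface on these machines. It assumes mpi4py and NumPy are installed, and the gradient array is just a made-up stand-in for whatever each node computed on its shard of the data:

    # Minimal sketch of the "broadcast the deltas" step in data-parallel training.
    # Assumes mpi4py and NumPy; run with e.g. `mpirun -n 4 python sync.py`.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    world_size = comm.Get_size()

    # Stand-in for the gradients this node computed on its shard of the data.
    local_grads = np.random.rand(1_000_000).astype(np.float32)

    # All-reduce sums every node's gradients in place, directly over the
    # interconnect -- no database or other external system in the middle.
    comm.Allreduce(MPI.IN_PLACE, local_grads, op=MPI.SUM)
    local_grads /= world_size  # average, so every node now holds identical deltas

    if rank == 0:
        print(f"synchronized gradients across {world_size} ranks")

Every rank ends up holding the same averaged deltas, with the exchange happening over the cluster interconnect rather than through a separate datastore.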


https://youtu.be/z3hmfSVmyqg?si=eLPZ0D6ug3D6PreI

As of 2 months ago, they had at least 7000 up and running, fwiw


The meat starts here:

https://youtu.be/z3hmfSVmyqg?feature=shared&t=3328

I'm curious why it is so hard for them to deploy the compute. They seem to be fairly behind schedule.


Congratulations on getting a hug of death!


I'm about 90% through this book that I picked up based on some other HN comment I had read. I would highly recommend it


Just here to say that's a GREAT name


My wife came up with that (I said more elsewhere in the thread).

She was also the brains behind our startup and a vastly better coder than me!


In my experience, running a plan with the Google provider is much less likely to catch a bad value than with the AWS provider.

Subjectively, the AWS provider will at least validate that fields have valid values during the plan step. The Google provider doesn't seem to validate actual values until apply, and then you get a failure.


I can’t help but feel… sad about this. I only recently picked up Terraform and am astounded that this is what passes for coding in the infrastructure world. I was coming from Ansible, so there was only improvement to be had, but man, has Terraform let me down so far.

It (well, the provider) doesn’t validate fields until apply. That’s just so… sad. How is that acceptable? It’s like a car without a steering wheel, and people just go along with it.


It's not really Terraform's fault. Terraform provides the capability to do all kinds of validations before running an apply, but it's up to the providers to implement them. If a provider doesn't implement a validation, then it's simply not there.

It gets hairier when you delve into the details. The provider is typically an official provider that wraps some company's API, so that company ought to have a good set of validations, since it's their own API, right? Wrong. The team that writes the Terraform provider is typically different from the team that creates API methods, and the API methods themselves don't typically expose "dry-run" style functionality, so there's little for the team writing the Terraform provider to check. Meanwhile, the business doesn't care - the Terraform provider checkbox is already checked and validations/dry-running isn't a feature that affects revenue.
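
To illustrate the gap in a rough, conceptual way (real providers are written in Go, and the field name and allowed values here are made up): the only validation a plan can ever surface is whatever local checks the provider author wrote, and anything they skipped isn't caught until the real API rejects it mid-apply.

    # Conceptual sketch only: real Terraform providers are written in Go, and
    # "disk_type" plus its allowed values below are made-up examples.

    ALLOWED_DISK_TYPES = {"pd-standard", "pd-balanced", "pd-ssd"}

    def plan_time_check(config: dict) -> list[str]:
        # The only validation a plan can surface is whatever local checks
        # like this the provider author chose to implement.
        errors = []
        value = config.get("disk_type")
        if value is not None and value not in ALLOWED_DISK_TYPES:
            errors.append(f"unsupported disk_type: {value!r}")
        return errors

    def fake_cloud_api_create(config: dict) -> None:
        # Stand-in for the real API call made during apply. Without a dry-run
        # endpoint, this is the first place the value is actually checked.
        if config.get("disk_type") not in ALLOWED_DISK_TYPES:
            raise RuntimeError("400 Bad Request: invalid disk_type")

    if __name__ == "__main__":
        bad = {"disk_type": "pd-turbo"}
        print(plan_time_check(bad))   # caught early only because the check above exists
        fake_cloud_api_create(bad)    # without it, the failure first shows up here

If nobody ever writes the plan-time check, the plan passes cleanly and the apply blows up, which is exactly the behavior described upthread.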


Do you know how hard/tedious/pointless it is to write client-side validations for everything you do on the server? The documentation for the Google Cloud provider is shit, though, and absolutely should be improved.


You do a dry run first


How is a terraform plan different from a dry run? I always mentally mapped terraform plan == dry run, i.e., validating what changes will be made. Your comment throws a wrench into that understanding.

