People dramatically overestimate how difficult it is to write a program that controls Docker for you. This is one of those things where you can write like two pages of Python and ignore... all this:

> Tealok is a runtime we’re building for running containers.

If you have one machine and docker-compose is falling short, really, just write a Python script with the official docker Python package, you'll be fine.
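
Roughly, the whole thing can look like this — a minimal sketch using docker-py (`pip install docker`), where the service table and images are made up for illustration:

    # Minimal single-machine "orchestrator": declare desired services,
    # then reconcile. The SERVICES table below is hypothetical.
    # (A real script would also recreate containers whose image changed.)
    import docker

    SERVICES = {
        "web": {"image": "nginx:1.25", "ports": {"80/tcp": 8080}},
        "cache": {"image": "redis:7", "ports": {"6379/tcp": 6379}},
    }

    client = docker.from_env()

    for name, spec in SERVICES.items():
        client.images.pull(spec["image"])
        try:
            # Container already exists: make sure it's running.
            container = client.containers.get(name)
            if container.status != "running":
                container.start()
        except docker.errors.NotFound:
            # Container missing: create and start it.
            client.containers.run(
                spec["image"],
                name=name,
                detach=True,
                ports=spec["ports"],
                restart_policy={"Name": "unless-stopped"},
            )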




It's more than just a runtime for running containers; from the main landing page it looks like they're really specifically targeting self-hosting for the barely-technical [0]. In that context this article makes some degree of sense: their target audience will probably be legitimately overwhelmed by that pihole example and wouldn't know how to write a Python script to save their life.

If they can pull it off, more power to them, but I'm skeptical that what the non-technical self-hoster needs is a TOML DSL that abstracts away ports, volumes, and even database implementations behind a magic text file. At some point you just have to bite the bullet and admit that your audience needs a GUI (like Sandstorm [1]).

[0] https://tealok.tech/

[1] https://sandstorm.io/


I'm actually a fan of Sandstorm, and think it got a lot of things right. I'd love to be able to talk to Kenton Varda about why he thinks its adoption was weak. Personally, I think it put a bit too much burden on application developers, since it required them to develop applications specifically for Sandstorm.

> I'm skeptical that what the non-technical self-hoster needs is a TOML DSL that abstracts away ports

I fully agree; the end user would not be writing TOML DSL files. The end user would get something much closer to an app store, or what Sandstorm did, with one (or a few) click installs. The TOML DSL would be written by developers familiar with the application and stored either in a separate database or, ideally, in the application's source control, like a Dockerfile.


> I'd love to be able to talk to Kenton Varda about why he thinks adoption on it was weak.

Oh hai.

Honestly I'm not sure I'm a reliable source for why we failed. It's tempting to convince myself of convenient excuses.

But I really don't think the problem was with the idea. We actually had a lot of user excitement around the product. I think we screwed up the business strategy. We were too eager to generate revenue too early on, and that led us to focus efforts in the wrong areas, away from the things that would have been best for long-term growth. And we were totally clueless about enterprise sales, but didn't realize how clueless we were until it was too late (a classic blunder). Investors really don't like it when you say you're going to try for revenue and then you don't, so we were pretty much dead at that point.


Oh, hey, holy shit! You're one of my heroes. I've read through your internal discussions on protobuf within Google, you did amazing work there, and held your own against a difficult political environment.

It sounds like you have no criticism of the technical approach, then, but rather just the business mechanics? That's eye-opening, given how much has changed in self-hosted deployment since Sandstorm started. If you started something similar today, ignoring business needs, would you build something technically similar?


Oh, well, thank you!

You should probably take me with a grain of salt: of course, I, the technical architect of Sandstorm, still think the technical architecture is great. ;) Whereas I never claimed to be good at business, so I'm not averse to reporting that I am bad at business.

But yeah, I still think it's the right architecture. I still believe in what's written here: https://sandstorm.io/how-it-works

But I do think there is a lot of work needed to get to a baseline of functionality that people will use. Bootstrapping a new platform is hard and needs a long runway. Could you find investors willing to back it? I don't know. I hated fundraising, though.

Of course, there are many lower-level details I would change, like:

* Consider using V8 isolates instead of containers. Sandstorm had a big problem with slow cold starts and high resource usage, which V8 isolates would handle much better. It would, of course, make porting existing apps much harder, but ports to Sandstorm were always pretty janky... maybe it would have been better to focus on building new apps that target Sandstorm from the start.

* Should never have used Mongo to store platform metadata... should have been SQLite!

* The shell UI should have been an app itself.

* We should never have built Blackrock (the "scalable" version of Sandstorm that was the basis for the Oasis hosting service). Or at least, it should have come much later. Should have focused instead on making it really easy to deploy to many different VPSes and federate per-user instances.


I'm not dev-ops-familiar with Docker, so you might be more familiar with the problem space, but it seems like "You can just write a Python script to do what you want" is the sort of thing people say to justify not giving people the config infrastructure they need to solve their problems without having to write executable code. Like, sure, you often can make up for not having a zoom lens by walking towards what you're photographing, but that doesn't mean zoom lenses aren't useful to most, and critical to some.


> you often can make up for not having a zoom lens by walking towards what you're photographing

Nitpicking: You can't really make up for not having a zoom lens as your field of view will stay the same. Hitchcock's famous "dolly zoom" demonstrates that quite nicely.


I know— I work in film. The distinction is meaningless in this analogy.


> to solve their problems without having to write executable code.

Some people would rather write 1000 lines of YAML than 100 lines of Python, and I really don't understand why.


It brings a new layer of complexity, which means more surface area for bugs and vulnerabilities.

It's often easier for developers to write a script using standardized tooling than to dig into deep configuration docs for a complex application they're not familiar with, but that's where the benefits stop. Configuring built-in functionality makes sense for the same reason using an existing framework/OS/etc. authentication system makes sense: it often seems like way more effort to learn than rolling your own simple system, but most of that complexity deals with edge cases you haven't been bitten by yet. Your implementation doesn't need to get very big or last very long before the tail of edge cases in your pipeline logic gets unnecessarily long. Those features don't exist merely because some people are scared of code.

Additionally, if you’re just telling the application to do something it already does, writing code using general purpose tools will almost certainly be more verbose. Even if not, 10 to 1 is fantastically hyperbolic. And unless you write code for a living— and many dev ops people do not— defining a bunch of boolean and string values to control heavily tested, built-in application functionality (that you should understand anyway before making a production deployment) requires way less mental overhead than writing secure, reliable, production-safe pipeline code that will then need to be debugged and maintained as a separate application when anything it touches gets changed. Updating configuration options is much simpler than figuring out how to deal with application updates in code, and the process is probably documented in release notes so you’re much more likely to realize you need to change something before stuff breaks.


This is a hilarious take given the overwhelming number of outages that are caused by "bad config".

If you can't code, then yeah, I bet config is easier. But as a person who codes every day, I much prefer something that I can interact with, test, debug, type check, and lint _before_ I push to prod (or push anywhere, for that matter).


Ok… so because config outages do happen, that invalidates the points I made? No. So, to use your rhetorical technique, that argument is hilarious given the overwhelming number of outages caused by coding errors.

I've been writing code for about 30 years, worked in systems administration for a decade, and worked as a back-end web developer full time for over a decade. I've dealt with enough code-as-business-logic, code-as-config, and parameter configuration to understand that errors stem from carelessness, complexity, or both, and complexity is often a result of bad architecture or interface/API design. The more complex something is, the less careless someone has to be to screw something up. Adding logic to a config unambiguously adds complexity. If you haven't lost hours of your life to complex debugging only to find it was the wrong operator in an if statement or the like, you're not a very experienced developer. That's that. You can have all the machismo you want about your coding skills, but those are unambiguous facts.

Developers doing their own ops and systems work have perennially been a liability for system-level architecture, stability, and security since multiuser computing's inception. That's why mature organizations have subject-matter experts who take care of those things, who know that a flat file of parameters and good docs is a whole lot more palatable when the pager goes off at 2am than a 100-line script the mighty brain-genius developer made because they wanted to do it "the easy way that verysmart people know how to do" rather than learning how the config system worked.


Ha. Just did almost exactly that, but with a Go script: I wanted my Docker Compose to auto-update when I built on my CI server.

I found Watchtower, but polling just struck me as the Wrong Answer. Both too much overhead to keep pinging for the latest builds, and too slow to actually download the latest build. So I took some hints from Watchtower as to what I needed to do (mount the docker sock as a volume) and wrote a tiny Go server that, when pinged with a shared secret, would cause it to run `docker compose up -d --pull always`.

Probably took me an hour.
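
In Python the same idea fits in a page. A rough sketch (mine was Go, and the header name, port, and project path here are placeholders):

    # Tiny webhook server: on an authenticated POST, pull newer images
    # and restart whatever changed. Secret, header, and paths are
    # placeholders, not the original implementation.
    import hmac
    import os
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SECRET = os.environ["DEPLOY_SECRET"]

    class DeployHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            token = self.headers.get("X-Deploy-Token", "")
            # Constant-time comparison so the secret can't be guessed
            # byte by byte via response timing.
            if not hmac.compare_digest(token, SECRET):
                self.send_response(403)
                self.end_headers()
                return
            subprocess.run(
                ["docker", "compose", "up", "-d", "--pull", "always"],
                cwd="/srv/myapp",  # placeholder Compose project directory
                check=False,
            )
            self.send_response(200)
            self.end_headers()

    HTTPServer(("", 9000), DeployHandler).serve_forever()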

Then I added the ability to purge images before each update, because my tiny VM kept running out of disk space in the Docker partition. Oops. Scripts FTW.

I was already using the suggestion in the article about having a single reverse proxy to route different paths (and different domains) to different services defined in the Compose file. Seemed like the obvious answer.

And I've configured k8s for my day job, so I could be using that. But I'm using Compose because I know how much of a pain k8s can be, especially around upgrades, where it has a habit of deprecating older versions of various interfaces. I'll put in the work for someone who's paying me to do it, but I'd rather work on my side project, not on configuring k8s.


> I found Watchtower, but polling just struck me as the Wrong Answer. Both too much overhead to keep pinging for the latest builds, and too slow to actually download the latest build. So I took some hints from Watchtower as to what I needed to do (mount the docker sock as a volume) and wrote a tiny Go server that, when pinged with a shared secret, would cause it to run `docker compose up -d --pull always`.

Is the code for this public? I had the same desire when setting up the server for my personal project, but as you mentioned, I eventually decided it was OK to just poll every two minutes; I'd rather work on the project than on configuring Docker. What I would like, though, is cleaning up the older images that are no longer needed.
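
The cleanup part, at least, looks like it's only a couple of lines with docker-py — a sketch, equivalent to `docker image prune -a`:

    import docker

    client = docker.from_env()
    # dangling=False means "all unused images", not just untagged layers.
    result = client.images.prune(filters={"dangling": False})
    print(result.get("SpaceReclaimed", 0), "bytes reclaimed")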


> If you have one machine and docker-compose is falling short, really, just write a Python script with the official docker Python package, you'll be fine.

… and there's even an Ansible playbook for that! Maybe overkill.


Do you have some reference that you can link to? Would be curious to see something about this


Here's the Python module you would use: https://docker-py.readthedocs.io/en/stable/
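
A minimal usage sketch (the image and port are arbitrary):

    import docker

    client = docker.from_env()
    # Run a container in the background, mapping container port 80 to host 8080.
    client.containers.run("nginx:latest", detach=True, ports={"80/tcp": 8080})
    print([c.name for c in client.containers.list()])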


IME, people dramatically overestimate how hard it is to write any program.


IME people overestimate how hard it should be to write a program, but underestimate how hard it actually is after they're done overengineering the problem to death.



