For me it's about the ROAC property (Runs On Any Computer). I prefer working with stuff that I can run. Running software is live software, working software, loved software. Software that only works in weird places is bad, at least for me.
Docker is pretty crappy in most respects, but it has the ROAC going for it.
I would love to have a "docker-like thing" (with ROAC) that used VMs, not containers (or some other isolation tech that actually works). But afaik that thing does not yet exist. Yes, there are several "container tool, but we made it use VMs" projects (Firecracker and its descendants), but they all need weird special setup, won't run on my laptop, or won't run on a generic DigitalOcean VM.
Docker is "Runs on any Linux, mostly, if you have a new enough kernel" meaning it packages a big VM anyway for Windows and macOS
VMs are "Runs on anything! ... Sorta, mostly, if you have VM acceleration" meaning you have to pick a VM software and hope the VM doesn't crash for no reason. (I have real bad luck with UTM and VirtualBox on my Macbook host for some reason.)
All I want is everything: an APE-like program that runs on any OS, maybe has shims for slightly older kernels, doesn't need a big installation step, and runs any useful guest OS (i.e. Linux).
Docker means your userspace program carries all of its userspace dependencies with it and doesn't depend on the userspace configuration of the underlying system.
What I argued in my paper is that systems like Docker (i.e. what I created before it) improve over VMs (and even Zones/ZFS) in their ability to really run ephemeral computation: if it takes microseconds to set up the container file system, you can run a boatload of heterogeneous containers even if they only need to run for very short periods of time. Solaris Zones/ZFS didn't lend itself to heterogeneous environments, but simply to cloning a single homogeneous environment, while VMs not only suffered from that problem, they also (at least at the time; much improved as of late) required a reasonably long boot-up time.
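To make the "microseconds" bit concrete (this is just an illustration, not what the paper used): copy-on-write filesystems are what make per-container roots nearly free. On Linux today that's commonly overlayfs; ZFS clones play the same role. A rough sketch:

    # Illustrative only: stand up a throwaway copy-on-write root for a "container"
    # using overlayfs. Requires root and a Linux kernel with overlayfs support.
    # The base image path is a placeholder.
    import os
    import subprocess
    import tempfile

    base = "/srv/images/debian-rootfs"   # hypothetical shared read-only base image
    work = tempfile.mkdtemp(prefix="ovl-")
    upper, workdir, merged = (os.path.join(work, d) for d in ("upper", "work", "merged"))
    for d in (upper, workdir, merged):
        os.makedirs(d)

    # The mount itself is what takes microseconds -- nothing from the base image is copied.
    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", f"lowerdir={base},upperdir={upper},workdir={workdir}", merged],
        check=True)

    # ... run the ephemeral workload chrooted into `merged`, then tear it down ...
    subprocess.run(["umount", merged], check=True)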
I had to use eclipse the other day. How the hell is it just as slow and clunky as I remember from 20 years ago? Does it exist in a pocket dimension where Moore's Law doesn't apply?
I think it's pretty remarkable to see any application in continuous use for so long, especially with so few changes[0] -- Eclipse must be doing something right!
Maintaining (if not actively improving/developing) a piece of useful software without performance degradation -- that's a win.
Keeping that up for decades? That's exceptional.
[0] "so few changes": I'm not commenting on the amount of work done on the project or claiming that there is no useful/visible features added or upgrades, but referring to Eclipse of today feeling like the same application as it always did, and that Eclipse hasn't had multiple alarmingly frequent "reboots", "overhauls", etc.
> keeping performance constant over the last decade or two is a win, relatively speaking, anyway
I agree; now that you've pointed it out, it's obvious that this is not the norm, and we should celebrate it.
I'm reminded of Casey Muratori's rant on Visual Studio; a program that largely feels like it hasn't changed much but clearly has regressed in performance massively; https://www.youtube.com/watch?v=GC-0tCy4P1U
Java's ecosystem is just as bad. Gradle is insanely flexible, but people create abominations out of it; Maven is extremely rigid, so people resort to even worse abominations to get basic shit done.
Maybe I'm just grumpy that once you line up the support windows, it's impossible to get new software on old hardware, even though the "oomph" is there
Maybe my next big hobby project should be emulating bleeding-edge Linux on some old 686 hardware. Like that guy who booted Ubuntu on an 8-bit AVR in a matter of mere days
Wouldn't work here; they have software on each VM that cannot be reimaged. To use Packer properly, you should treat VMs like you treat stateless pods: just start a new one and take down the old one.
Sure, then throw Ansible on top for configuration/change management. Packer gives you a solid base for repeatable deployments. Their model was to ensure that data stays within the VM, and an AMI built with Packer would fit that bill quite nicely. If they need to do per-client configuration, then Ansible or even AWS SSM could handle that once the EC2 instance is deployed.
To preserve data when they need to upgrade or replace VMs, have a secondary EBS volume mounted which solely stores the persistent data for the account.
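A minimal sketch of that data-volume arrangement, assuming boto3; the IDs and device name are placeholders, not anything from their setup:

    # Sketch: attach a long-lived EBS volume holding the persistent data to a
    # freshly launched instance built from the Packer AMI. IDs are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.attach_volume(
        VolumeId="vol-0123456789abcdef0",   # the long-lived data volume
        InstanceId="i-0123456789abcdef0",   # the new instance from the Packer image
        Device="/dev/sdf",                  # appears as /dev/xvdf or /dev/nvme1n1 in the guest
    )
    # Inside the guest you'd then mount it, e.g. at /data, via fstab or cloud-init.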
Making machine images. AWS calls them AMIs. Whatever your platform, that's what Packer is there for. It's often combined with Ansible, and basically runs like this (a rough sketch in code follows the list):
1. Start a base image of Debian / Ubuntu / whatever – this is often done with Terraform.
2. Packer types a boot command after power-on to configure whatever you'd like
3. Packer manages the installation; with Debian and its derivatives, this is done mostly through the arcane language of preseed [0]
4. As a last step, a pre-configured SSH password is set, then the new base VM reboots
5. Ansible detects SSH becoming available, and takes over to do whatever you'd like.
6. Shut down the VM, and create clones as desired. Manage ongoing config in a variety of ways – rolling out a new VM for any change, continuing with Ansible, shifting to Puppet, etc.
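Roughly, driven from a script (the file names here are made up, and in practice Packer can also invoke Ansible itself via a provisioner):

    # Rough sketch of steps 1-6. "base.pkr.hcl" and "configure.yml" are
    # hypothetical file names, not anything from the list above.
    import subprocess

    # Steps 1-4: Packer boots the base image, answers the installer (preseed),
    # sets the temporary SSH credentials, and produces a machine image / AMI.
    subprocess.run(["packer", "init", "base.pkr.hcl"], check=True)
    subprocess.run(["packer", "build", "base.pkr.hcl"], check=True)

    # Step 5: once an instance from that image is up and SSH answers, Ansible
    # takes over for the actual configuration.
    subprocess.run(
        ["ansible-playbook", "-i", "inventory.ini", "configure.yml"],
        check=True)

    # Step 6 (cloning, ongoing config management) happens in whatever tool you
    # prefer: new images per change, more Ansible, Puppet, etc.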
This is nice in its uniformity (same tool works for any distro that has an existing AMI to work with), but it's insanely slow compared to just putting a rootfs together and uploading it as an image.
I think I'd usually rather just use whichever distro-specific tools there are for putting together a li'l chroot (e.g., debootstrap, pacstrap, whatever) and build a suitable rootfs in there, then finish it up with amazon-ec2-ami-tools or euca2ools or whatever and upload directly. The pace of iteration with Packer is just really painful for me.
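Something like this, as a sketch (Debian-flavored; the paths, suite, and package are placeholders):

    # Sketch: build a minimal Debian rootfs in a directory with debootstrap,
    # then pack it up for upload/registration with your cloud's image tooling.
    # Needs root; paths and the target suite are placeholders.
    import subprocess

    rootfs = "/tmp/build/rootfs"
    subprocess.run(
        ["debootstrap", "--arch=amd64", "bookworm", rootfs,
         "http://deb.debian.org/debian"],
        check=True)

    # Customize in place: drop in config files, chroot in to add packages, etc.
    # (networking inside the chroot may need /etc/resolv.conf copied in first)
    subprocess.run(["chroot", rootfs, "apt-get", "update"], check=True)
    subprocess.run(["chroot", rootfs, "apt-get", "install", "-y", "openssh-server"], check=True)

    # Pack it up; the actual upload/registration step depends on the platform
    # (amazon-ec2-ami-tools, euca2ools, raw disk image import, ...).
    subprocess.run(["tar", "-C", rootfs, "-czf", "/tmp/build/rootfs.tar.gz", "."], check=True)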
I haven’t played with chroot since Gentoo (which, for me, was quite a while ago), so I may be incorrect, but isn’t that approach more limited in its customization? As in, you can install some packages, but if you wanted to add other repos, configure 3rd-party software, etc., you’re out of luck.
Nah you can add other repos in a chroot! The only thing you can't really do afaik is test running a different kernel; for that you've got to actually boot into the system.
If you dual-boot multiple Linux systems you can still administer any of the ones you're not currently running via chroot at any time, and that works fine whether you've got third-party repositories or not. A chroot is also what you'd use to reinstall the bootloader on a system where Windows has nuked the MBR or the EFI vars or whatever.
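In case anyone hasn't done it in a while, the dance is roughly this (sketch only; the mount point and grub target are made up):

    # Sketch: administer an installed-but-not-running Linux system via chroot.
    # Assumes its root partition is already mounted at /mnt/otherlinux; needs root.
    import subprocess

    target = "/mnt/otherlinux"

    # Bind the live kernel interfaces into the target so tools behave normally.
    for fs in ("/dev", "/proc", "/sys"):
        subprocess.run(["mount", "--bind", fs, target + fs], check=True)

    # Now run whatever you like "inside" the other install: package updates,
    # enabling a third-party repo, reinstalling the bootloader, etc.
    subprocess.run(["chroot", target, "apt-get", "update"], check=True)
    subprocess.run(["chroot", target, "grub-install", "/dev/sda"], check=True)  # example only

    for fs in ("/sys", "/proc", "/dev"):
        subprocess.run(["umount", target + fs], check=True)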
There might be some edge cases, like software that requires a physical hardware token for licensing purposes; that stuff is very aggressive, so it might also try to check whether it's running in a chroot, container, or VM and refuse to play nice, or something like that. But generally you can do basically anything in a chroot that you might do in a local container, and 99% of what you might do in a local VM.
I think the thread is more about how Docker was a reaction to the Vagrant/Packer ecosystem, which was deemed heavyweight but was in many ways a "docker-like thing" with VMs.