Docker isn't as hard to learn as the author makes it out to be. You can learn it in under 5 hours (take a 10-hour course at 2x speed). Golang takes significantly longer to learn. This may sound obvious, but the speed of learning depends on what you already know.
A docker container is instantiated from a docker image, much like object-oriented programming: an object is instantiated from a class. Want to manipulate containers? Use 'docker container ...'. Want to manipulate docker images? Use 'docker image ...'.
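The class/object analogy maps directly onto the CLI. A few illustrative commands, assuming Docker is installed and the daemon is running (the container name is made up):

```shell
# List images (the "classes") and containers (the "objects").
docker image ls
docker container ls --all

# Instantiate a container from an image; --rm cleans it up on exit.
docker container run --rm --name hello hello-world
```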
There's also a distinction between production and development environments. How do you onboard new developers? If you use docker, you can put a docker-compose.yml file in the repo, and onboarding a new developer is as simple as 'git clone ${the_repo} && docker-compose up'. If you want to use docker in production, you're probably using Kubernetes, and I would agree that there's an increased cost in complexity. Want to scale the docker containers across multiple servers? The logical choice is to use Kubernetes to orchestrate the containers. If you're scaling vertically, sure, use your current method. Golang is fast.
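As a sketch, that onboarding flow might look like this, with a hypothetical two-service compose file (service names, images, and ports are made up for illustration):

```yaml
# docker-compose.yml — hypothetical app + database for local development
version: "3.8"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev-only-password
```

With this in the repo root, 'git clone ${the_repo} && cd ${the_repo} && docker-compose up' gives a new developer the whole stack.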
As your company grows and scales, I think you'll run into deployment issues using scp for deployments. What happens if two developers deploy at the same time? Ideally you put every deployment through a single "pipe": if two people deploy simultaneously, one deployment/testing process runs after the other serially, and the second deployment fails if there's a conflict.
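One minimal way to build that single "pipe" is an exclusive lock around the deploy step, so concurrent deployments serialize instead of interleaving. A sketch assuming a Linux host with util-linux's flock; the lock path and deploy body are placeholders:

```shell
#!/bin/sh
# Serialize deployments: whoever grabs the lock first deploys first;
# a second caller blocks until the lock is released.
LOCKFILE=/tmp/deploy.lock

deploy() {
  (
    flock -x 9                 # block until any in-flight deployment finishes
    echo "deploying $1"
    # real steps (scp, tests, restarts) would run here, inside the lock
  ) 9>"$LOCKFILE"
}

deploy release-1
deploy release-2
```

In a real pipeline this wrapper would live on a single deploy host (or be replaced by a CI queue), which is what gives you the serial behavior.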
Arguing about tech turns into a religious debate at a certain point. Use docker/Kubernetes if you need to, and if you can get by without using them, don't use them. Docker is awesome for solving versioning problems and onboarding new employees. Kubernetes is awesome at scaling and deployments. If your employees all agree to use the same version of software, there's no need to use docker or Kubernetes.
But hey, at least docker/Kubernetes give you the freedom of choice: if you see some cool library you want to use, written in some obscure language or version, it's easy peasy.
This is an oversimplification: Docker is easy from a basic-usage standpoint, which is all most tutorials cover. The hard part is when you start configuring it to your own requirements. What base image should you use? How about logging? How about the PID 1 problem? Do I need an init system? How about SSH? Some remote Docker containers can't be accessed directly, for example on AWS Fargate. How about restarting servers? I need to do it gracefully. How about migrations? One-off scripts?
Personally, I love using Docker, but it's those small things, and all the complexity around setting them up, that make it hard.
Just replying to your comment, not trying to start an argument but just throwing out some thoughts. Please let me know if I'm off track in my response.
What baseimage should you use?
- You can always create your own docker base images. I do agree that it gets confusing when you pull a docker image and it's not using the Linux flavor you were expecting. Building a docker image is quite simple though.
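A minimal custom image can be just a few lines. A hypothetical example pinning the exact distro you want instead of inheriting someone else's choice (names, versions, and paths are illustrative):

```dockerfile
# Pin a known base so the Linux flavor is never a surprise.
FROM ubuntu:22.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*
COPY ./app /usr/local/bin/app
CMD ["app"]
```

Build and tag it once with 'docker build -t my-base:1.0 .' and every team image can start FROM it.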
How about logging?
- If using Kubernetes, you can use a 'sidecar' pattern with a log exporter.
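A sketch of the sidecar pattern: the app writes logs to a shared volume, and a second container in the same pod ships them. Images and paths below are placeholders:

```yaml
# Hypothetical pod with a log-shipping sidecar sharing an emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}
  containers:
    - name: app
      image: my-app:1.0            # placeholder application image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper
      image: my-log-exporter:1.0   # placeholder; e.g. a fluent-bit-style shipper
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
```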
How about PID 1 problem?
- I'm a little confused about what you mean, but I think you're referring to killing PID 1, which would kill the docker container. If you're using Kubernetes in production, Kubernetes solves the problem of routing traffic only to containers that are up.
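For what it's worth, the PID 1 problem usually refers to the application process running as PID 1 inside the container, where it won't reap zombie children or get default signal handlers. Docker's --init flag injects a tiny init process to handle both:

```shell
# Run with an init process as PID 1: it reaps zombies and
# forwards SIGTERM to the application for graceful shutdown.
docker run --init my-app:1.0
```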
How about SSH?
- You would have to SSH into the host running the docker container, then drop into the container from there.
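In practice that two-hop flow looks something like this (hostname, user, and container name are placeholders):

```shell
# First hop: SSH to the host running the container.
ssh deploy@docker-host.example.com

# Second hop: open a shell inside the running container.
docker exec -it my-container /bin/sh
```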
Remote Docker containers cannot be accessed remotely for ex AWS Fargate.
- I haven't used AWS Fargate before, no comment.
How about restarting servers?
- Handled by Kubernetes; it's totally ok to restart the servers, and requests won't be routed to dead (5xx) containers.
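The mechanism behind that is the readiness probe: a pod failing its check is removed from the Service's endpoints until it recovers. A hypothetical fragment of a pod spec (path and port are illustrative):

```yaml
# Only route traffic to this container while /healthz returns 2xx.
containers:
  - name: app
    image: my-app:1.0
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```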
How about migrations?
- Deployments are simple using Kubernetes (built in rolling deployment, but also easy to do blue/green deployments). If you want to migrate a database, then it'd be the same process as non-container deployments.
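The rolling-update flow from the CLI, with placeholder names; Kubernetes replaces pods gradually, and old-version pods keep serving traffic until their replacements pass readiness:

```shell
# Roll the Deployment to a new image, watch progress, undo if it misbehaves.
kubectl set image deployment/my-app app=my-app:2.0
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app
```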
"Docker is easy from a basic usage standpoint like most tutorials are"
This applies to basically any software or tooling once you get away from the initial tutorials. They wouldn't be tutorials if they covered all sorts of specific use cases particular to your own situation.
"The hard part is when you start configuring and configuring to your own requirements. What baseimage should you use? How about logging?"
Either the same distro you would use on a bespoke server, or you can optimize with alpine or scratch. Logging can be as simple or complicated as you want it to be, whether or not you use docker.
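Since Go comes up elsewhere in the thread: static binaries are the case where scratch shines. A hedged sketch of a multi-stage build (module path, Go version, and binary name are made up):

```dockerfile
# Stage 1: build a statically linked binary.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Stage 2: ship only the binary on an empty base image.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```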
"How about PID 1 problem?"
We use s6-overlay [0] to manage this. When a service dies, a script gets executed, giving us control over how to handle it.
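For readers unfamiliar with it: s6-overlay supervises services declared as directories of scripts. The sketch below uses an s6-overlay v2-style layout (v3 moved these paths), with a placeholder base image that already has s6-overlay installed:

```dockerfile
# s6-overlay v2-style layout:
#   /etc/services.d/myapp/run     — starts the service under s6 supervision
#   /etc/services.d/myapp/finish  — runs when the service dies
# Placeholder base image with s6-overlay pre-installed.
FROM my-base-with-s6:latest
COPY services.d/ /etc/services.d/
# s6-overlay's /init becomes PID 1 and supervises everything in services.d
ENTRYPOINT ["/init"]
```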
"How about SSH?"
For us, we use ansible to provision/set up kubernetes with ssh keys, then access pods and containers through kubernetes' cli. It isn't particularly -nice-, but it does work.
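Accessing pods through the Kubernetes CLI instead of SSH looks roughly like this (pod and namespace names are placeholders):

```shell
# Find the pod, then open a shell in it via the API server — no SSH to the node.
kubectl get pods -n my-namespace
kubectl exec -it my-pod -n my-namespace -- /bin/sh
```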
"How about restarting servers? How about migrations? One-off scripts?"
This is managed through kubernetes. You can restart servers and make deployments.