I'm 60, and going strong. Most important is bringing not only depth and breadth of knowledge but also wisdom to the work. As a graybeard, you're expected to add value because of your years. For example, when my office was going nuts about Kubernetes, devops, etc., comparing the new tech to the original Unix philosophy and pointing out lessons from Unix design that applied to our containers got me a raise. Everybody reinvents everything every 5 or 10 years; use your history to add value, and you have a unique selling point that can't be duplicated.
The longer I'm in technology, the more I realize that there's a lot of value in just having been in the industry for a while and paying attention to what's going on. This is particularly relevant in troubleshooting. So much of being able to diagnose a problem quickly is just a result of having diagnosed a LOT of them before. Our brains are optimized for pattern matching, which is perfect for this - you start looking at something and go "You know, I've seen something like this before, where the server lost connectivity intermittently. I should look and see if the network drivers are out of date. Yep, that's the problem."
Ever deploy the same software to multiple servers, only to have it maddeningly behave slightly differently on each one, and you're not quite sure why?
This is the problem Docker solves. Install exactly once, when you build the image. Then deploy the exact same code, down to the bit, as many times as you want, to as many servers as you want. Or quickly roll back to the Docker image for the previous version.
So this is how Docker helps you scale quickly and deploy as often as you want, with confidence.
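To make that concrete, here's a rough sketch of the build-once, deploy-many, roll-back flow using the Python docker SDK (docker-py). The image name, the tags, and the assumption that your Dockerfile sits in the current directory are all made up for illustration; most shops drive the same flow from the docker CLI or CI instead:

    import docker

    client = docker.from_env()

    # Build the image exactly once; the app and all of its dependencies
    # are baked into the tagged image, down to the bit.
    image, _ = client.images.build(path=".", tag="myapp:1.1.0")

    # Deploy the exact same bits on any host that can pull the image.
    client.containers.run("myapp:1.1.0", name="myapp", detach=True)

    # Rolling back is just running the previous image tag again.
    current = client.containers.get("myapp")
    current.stop()
    current.remove()
    client.containers.run("myapp:1.0.0", name="myapp", detach=True)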
Kubernetes, meh. Just some code for spinning up more or fewer Docker containers in an automated fashion. At least that's my impression; I haven't had to get my hands dirty with it yet.
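For what it's worth, that "more or fewer containers" knob is roughly a Deployment's replica count. A hedged sketch with the official kubernetes Python client, assuming a Deployment named "web" already exists in the default namespace:

    from kubernetes import client, config

    # Assumes a local kubeconfig and an existing Deployment named "web".
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Ask for 5 replicas; Kubernetes starts or stops containers until the
    # running count matches the desired count.
    apps.patch_namespaced_deployment_scale(
        name="web",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )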
One reason to use containers over VMs is that building, updating, and extending containers is far easier and more maintainable than doing the same with full virtual machines.
Containers give you the ability to layer the pieces you need on top of each other, so you are only responsible for the parts that you maintain. No need to rebuild an entire VM image every time one piece of the stack is updated.
Distribution of containers is also far more efficient than full virtual machines (especially important in a highly distributed environment).
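A minimal sketch of that layering idea with the Python docker SDK; the image names and contents are invented, and in practice these would be ordinary Dockerfiles in separate repos. The point is that the app build only adds and rebuilds the thin top layer, while the cached base layers get reused and shared:

    import io
    import docker

    client = docker.from_env()

    # Base layer: the part a platform team maintains (runtime + shared deps).
    base = b"""
    FROM python:3.11-slim
    RUN pip install --no-cache-dir flask gunicorn
    """
    client.images.build(fileobj=io.BytesIO(base), tag="mybase:2024.01")

    # App layer: the thin piece your team owns sits on top. Rebuilding it
    # reuses the cached base layers instead of rebuilding a full VM-style
    # image, and only the changed layer has to be shipped around.
    app = b"""
    FROM mybase:2024.01
    RUN echo 'print("hello from the app layer")' > /srv/app.py
    CMD ["python", "/srv/app.py"]
    """
    client.images.build(fileobj=io.BytesIO(app), tag="myapp:1.0.0")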
I don't find the Docker API particularly intuitive. For my use cases so far, I've found it much simpler to spin up a VM if I want to duplicate a particular environment.
I think where Docker tends to shine is when you are operating at scale. For larger players, using VMs to recreate a computing environment is expensive and eats into your bottom line. In such situations there's value in having some kind of tooling or API that partitions and simulates specific environments for processes running in the same kernel space. For better or for worse, in 2019, Docker is the best solution to this problem.
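For what that partitioning looks like in practice, here's a hedged sketch with the Python docker SDK: the container shares the host kernel but gets its own filesystem view and cgroup-capped CPU and memory. The name and the limits are illustrative:

    import docker

    client = docker.from_env()

    # One process with its own filesystem and cgroup-capped resources,
    # running on the shared host kernel (limits are illustrative).
    worker = client.containers.run(
        "python:3.11-slim",
        command=["python", "-c", "print('isolated worker')"],
        name="worker-1",
        mem_limit="256m",
        nano_cpus=500_000_000,  # roughly half a CPU
        detach=True,
    )
    worker.wait()
    print(worker.logs().decode())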
Comments like these are so infuriating! You get to shit on a huge community with zero reasoning, and zero consequences for being wrong.
Your negativity is only rewarded if somehow you're right (which is more a result of broken clock syndrome than anything else), so you and people like you are incentivized to keep sniping from afar.
Is there not some part of you deep down that looks back at the last decade of technology and feels "well I'm glad we're not doing things that way anymore, it was totally wrong for our situation"?
It just seems that that is constantly the case. We might well be another 3 or 4 decades away from software development technology hitting real maturity in terms of really streamlined tooling, frameworks, languages that help us work at the right level of abstraction at the fastest speed.
I don't think it's an entirely unwarranted comment. I think it's wise to constantly be thinking about and looking for "what awfulness are we accepting as a trade-off to get the great benefits of this particular way of configuring our process?"
By all means get into the hype, dive in, be excited. But don't forget to shit on yourself because you know the shit is coming at some point. If not in 5 years then in 10. It'll help you see it coming.
What actually ends up happening is people like you get left behind. Jobs become harder to get as you struggle to keep up, your ambition hardens into a self-defensive ego shield, and you're stuck telling yourself that you were right to stop moving forward because things only went downhill from the moment you chose to stop learning.
The rest of us reap the benefits of innovation, standing in the sun, while you rot in some forgotten basement where the last self-hosted server rack at a Fortune 10 company is kept. You'll have cursed yourself to the menial task of caring for and feeding the pets surrounding you, the last vestiges of what you've tricked yourself into believing was a simpler, saner time.
Pessimism loses against the unyielding light of positivity. It may feel good to take shots from the shadows, but you're developing a hunched back, hiding from view.
What actually ends up happening is people like you make outlandish truth claims about the dispositions, aspirations and futures of strangers on the internet who they have no context for. And get it oh so wrong.
People often misconstrue my pragmatic hyper-realism for pessimistic defeatism. It's nothing of the sort.
Nice rhetoric though! I really pictured the rotting corpse at the computer in the basement of the Fortune 10, and the partying on the beach with models and bottles while making the promo video for the Fyre Festival. You really took me there.
The big companies self-host for legal reasons. It is easier to put up a legal fight with self-hosting. With other hosting, your data can be silently grabbed. You won't even know your data has been compromised.
It's also just cheaper and more reliable at that scale.
Hard to do in a comment, but the real value of containers, etc., is microservices and micro-environments.
If you dig in, there are dozens of similarities to both the Unix shell kit and the Unix kernel design. The translatable wisdom starts with principles: do one thing well, smaller is better, and, hugely, let data be the interface that assembles individual tools into a process / flowchart / blockchain / supply chain / etc., ad nauseam. Let those principles guide your design and how your containers envelop the problem. Like I said, hard to explain in a post, but the gist is that "everything old is new again, except I invented it!"
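If it helps, here's a toy sketch of "do one thing well, let data be the interface" in plain Python; everything in it is invented for illustration. Each stage is a small single-purpose function, and the plain records flowing between them are the only contract, the same way text streams glue Unix tools together or small containers get wired into a larger process:

    # Toy pipeline: small single-purpose stages, plain data as the interface.
    def parse(lines):
        for line in lines:
            host, status = line.split()
            yield {"host": host, "status": status}

    def failing(records):
        return (r for r in records if r["status"] != "ok")

    def report(records):
        return [f"{r['host']} needs attention" for r in records]

    checks = ["web01 ok", "web02 error", "db01 ok"]
    print(report(failing(parse(checks))))  # ['web02 needs attention']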