I personally like Google Cloud, but this blog post feels a bit like an advertisement; it seems pretty clear that Google Cloud offered a better pricing deal than Azure could, and that is why they are moving.
> GCP offered the performance and consistency we needed. A second reason is that we believe Kubernetes is the future
Azure does a lot of cool stuff as well, not really a valid reason why you would move (other than you got a better deal).
If they're investing in kubernetes it makes complete sense to switch to GKE over AKS. We used AKS and it gave us tons of trouble while GKE seemed a lot more polished.
Considering Brendan Burns works for Microsoft now... I have doubts that the differences are that staggering at this point. That being said, I generally avoid containers because they are overkill for most workloads, imo, so I can't speak from much experience on the topic.
Who is this "Brendan Burns" dude and why is everyone treating him like a deity? I've never heard of him even though I spent nearly a decade at Google, some of it in Google Cloud. Much of what Kubernetes is is basically a "lightweight" take on Borg. It even comes through in naming: borglet <-> kubelet, borgmaster <-> kubernetes master. The story of Kubernetes is very much about standing on the shoulders of the giants, and most of those giants (members of the Borg/Omega team) are still there, at Google.
I feel like that's like working at Microsoft and having never heard of Alex Kipman. The creators of Kubernetes are very open about it being based around Borg's structure. That's no secret.
Sure, but to say that just because Burns works at Microsoft now is a good reason to think that MS is a containerization leader is a total non-sequitur. Google, however, is _the_ containerization leader, having created cgroups and having run everything in containers for a decade before they became "cool".
FWIW, Google certainly led on cgroup development for a while, but the cgroupv2 maintainer works at Facebook, and we're active in upstream container development in both linux and systemd.
I don't necessarily think you're wrong; I just think your argument, as stated, is weak.
Microsoft is undoubtedly A containerization leader at this point; I never claimed they were THE containerization leader. The fact is that there is likely not a huge difference at this point between the big 4 (I'd include IBM since they bought Red Hat). Kubernetes is no longer that new, and all 4 big cloud providers have had plenty of time to get more or less on par with each other. I'm sure Google has some other insight, but I doubt it's so much that many people are flocking to their service for it anymore.
> Microsoft is undoubtedly A containerization leader at this point
If that were remotely true, then what product/service does Microsoft offer that is remotely comparable with Docker/Kubernetes and is neither Docker nor Kubernetes?
Service Fabric. It was made available to the public in 2016 but has been used internally since 2011. Now, I happen to think Kubernetes is better, but you asked.
The author of the first version of Kubernetes. Author as in the person who wrote the code and bootstrapped the project, if memory serves. I haven't been at Google for a while, but in the early 2010s it would have been unrealistic to attempt to externalize Borg/Omega. I still remember a high-level architecture diagram of Borg covering a whole wall.
Why would it be "strange"? At the time I left Google it was 60K+ people. You'd need to be someone of Mike Burrows' or Jeff Dean's stature to even register on the radar.
I work at Google; I know neither Brendan Burns nor Mike Burrows. According to his LinkedIn, Brendan was a Senior Staff Engineer at Google, i.e. Level 7. That is not a particularly high position.
My previous manager was a Principal Engineer (Level 8). I am pretty sure no one here has ever heard her name.
Most people who follow the development of Kubernetes in the community do know him, though. What matters is not whether people know him or not, just that Azure has done a lot of cool things around k8s in the last few years, and a lot of top folks who are part of the community are working at Azure / Microsoft.
Mike Burrows keeps a very low profile, but he was actively involved in the creation of Chubby along with the more... subtle... aspects of synchronization primitives. He also co-invented the Burrows-Wheeler transform (bzip2 uses it). Google would not have been able to build its empire without his contributions.
I might agree that full-on Kubernetes infrastructure may be overkill... but there are a lot of smaller options for using containers. The simplest setup I've used was 3 dokku servers behind a load balancer, with all apps deployed to all three servers in the same way and the DB managed outside the containers.
I don't even do development in containers, but I run services I'm not actively working on in them locally... this lets me have them nearly always on in the background while I work on the piece I need in the foreground. It's simply a matter of automation with a fast reset to zero.
The code that gets exercised the most is the code you can trust the most... by automating it once, you can repeat it. By automating with a container, you can isolate it. You can ensure the boundaries are proper and well defined, with enough detail/documentation to repeat again and again. I do containerize the DB locally as well. I can reset and spin up the application stack I'm working on in under a minute total, generally only resetting the db (around 18 seconds) or api (around 5) when those parts of the application change. I can work on the front end locally and just have the back end in the background. I can tweak the back end without touching shared environments. I can tear it all down and work on a different project without borking my host/local environment.
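To make that concrete, here's a minimal sketch of the kind of compose file I mean (the images, ports, and credentials are placeholders, not my actual setup):

  # docker-compose.yml - hypothetical local stack; images/ports/credentials are placeholders
  version: "3.8"
  services:
    db:
      image: postgres:11
      environment:
        POSTGRES_PASSWORD: devonly    # local development only
      ports:
        - "5432:5432"
    api:
      image: example/api:dev          # hypothetical application image
      depends_on:
        - db
      ports:
        - "8080:8080"

With that in place, "docker-compose rm -sfv db && docker-compose up -d db" resets just the database, and "docker-compose down -v" is the full reset to zero.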
Yeah, orchestration setup and config in a cluster for hosting can be hard to learn (still learning current Kubernetes myself)... that doesn't mean that containers themselves are overkill.
Both Azure's and AWS's managed Kubernetes offerings launched in mid-2018, while GCP has been running theirs since at least 2014. There's no denying that Google has more experience, but that gap is only going to close as time goes on.
That discussion seems to indicate that 1) it was a console/UI issue (not a service outage), and 2) it was actually resolved within the day but the final incident update came 3 days later.
Putting aside the valid point that the article was written by someone whose role at GitLab is 'Content Marketer', one thing struck me:
"This Pingdom graph shows the number of errors we saw per day, first in Azure and then in GCP. The average for the pre-migration period was 8.2 errors per day, while post-migration it’s down to just one error a day."
This was measured independently by Pingdom, not by GitLab.
While I have absolutely no love for Azure, it's worth pointing out that doing a migration, any migration, to a new environment can often be an opportunity to clean up and fix old junk that hasn't gotten attention, and that issues in a new environment often end up being treated as problems caused by the migration and so get attention and get fixed, where the same issue might have been ignored in the old one because "it's always been there".
As a result it's often really hard to tell if all the differences are down to differences in quality between the two providers vs. things done as part of the migration.
But of course this also does not mean the providers aren't different in terms of quality, just that it takes more than a graph like that to tell.
I feel like those posted uptime/reliability numbers are far lower than either GCP or Azure is capable of providing. I suspect the majority of the errors/failed requests are probably code bugs or deployment issues in GitLab...
Perhaps they haven't been doing enough Chaos Monkey testing?
Hi, GitLab employee from "marketing" here. I take a little offense to
> Putting aside valid thing that article was written by person with role at gitlab 'Content Marketer'
A lot of us in marketing have technical backgrounds and are GitLab code contributors, that's what made us competitive for the positions we were hired for. We just write a lot of the blog posts so the product teams can focus on their own work. They're also usually very collaborative.
Sorry if you meant no offense; it's just a bit of a hot-button issue for me!
I did not write anything about the author having or not having technical skills. You're reading that between the lines, and in my opinion it shows (again: for me) a kind of insecurity. I suspect the emphasis on 'being competitive for the positions we were hired for' also shows it. But it's my opinion.
What I wrote (and what I stand by) is that it is good to know that this article was written by a 'content marketer', meaning a person with a SPECIFIC agenda for writing that article (by the definition of 'content marketing' itself).
That's always good to know. If anybody praises something, I always prefer to know if there is something (e.g. a salary?) which may have created/influenced/PAID for such an opinion. We (Hacker News) have derided this on many occasions. To give a different example: I personally ask all bankers whether they earn a commission on me (for a specific recommended product) or not. And I consider it healthy to know and to ask.
To make it clear, I think that there is good content marketing and there is bad content marketing. The bad kind, for example, can easily be seen by searching Google for "blog 10 best sleeping bags" and similar.
Hi, the content marketer here. I'm just quoting the presentation we gave at Google Cloud Next '19 almost exactly. Instead of making you watch a 30+ minute video, I outlined the core points from the presentation so that you could still be in the loop.
While I don't take offense to your comment, I think you made it for a reason. It's an ad hominem fallacy. These are our staff engineer's words almost to the letter; if I had posted it in his name, would you have taken it at face value? I think it's important for us to ask these questions of ourselves and analyze why we form certain opinions.
That said, I do appreciate the feedback, and thank you for your comments. I'll try to keep your opinions in mind with future posts as I certainly never want to come off as having an "agenda." We're not running some BuzzGitFeedLab mill here :) .
I thought Andrew gave a wonderful presentation, and yes, we do use Pingdom as a tool to measure these things.
Azure is currently experiencing DNS issues in all regions, which is actively causing downtime for my GitLab repos [1]. Whether it feels like advertising or not, their numbers seem to support their claims that GCP is more reliable.
Unfortunately Azure AKS is quite hacky and shouldn't be used in production (heard that from MS people). They will be releasing a more production-grade K8s distro in the near future, but it will have different caveats. On the other hand, if you look at Google GKE, it has been production-ready for multiple years, with auto upgrades, self healing, and just a great UI/CLI :) so yeah, if they use K8s then GKE makes total sense.
Hi! I cannot stress enough that AKS is ready for production; we have many many MANY customers using it. I am EXTREMELY biased, but I do have some background, as I was the first/lead PM on GKE for several years. :)
If there's something we can do to help, please let me know!
Disclosure: I work at Azure on Machine Learning (but not AKS)
I would expect that most companies that both a) are big enough for the cloud providers to bother talking to and b) have proven they have the ability to run their workloads on multiple clouds are getting discounts from whatever cloud provider they end up using. One condition of these discounts might be that they can't publish exactly the prices they are getting, or any information that would let you infer the price.
I don't see how Azure could be designed in a way that causes delays like that. A simple operation like deleting a VM should be an RPC from the API endpoint to a machine-manager type service, which in turn does an RPC to the actual machine.
I would expect all of that to typically happen within 1 millisecond, and perhaps up to 100 milliseconds at the 99th percentile when the machine hosting the VM is overloaded/overheating/otherwise unhealthy.
So what design did Microsoft's engineers pick that can ever take 10 minutes to do anything?
Adjusting your marketing in light of an industry-shaking acquisition like that is hardly growth hacking. You would be absolutely and completely insane not to execute it in exactly the way they did.
Growth hacking is a silly buzzword; it's just marketing. And good marketing involves reacting quickly to position your product when the competitive landscape changes. Sales is the most important thing in the success of a company, so this was definitely the right call to make, and I doubt it really affected their product any.
GitLab marketing employee here, just popping in to say that a majority of us focus on both (and are encouraged to). While we aren't specifically on the "product" team, we're still code contributors like any other community member. That was one of the biggest perks for me joining a FOSS team.
To me GitLab already provides the absolute best CI/CD service. It's not like they need more work to provide a better service than any of their direct competitors.
I've just gone through the process of selecting a new CI/CD provider, and I think GitLab CI is let down by its tie-in to GitLab.
I've heard great things about GitLab CI, but we aren't looking to move our version control and there doesn't seem to be a way to have hosted CI from GitLab without it.
Can I ask where you're hosting your code today? We offer first-class support for external repositories stored on GitHub with GitLab CI/CD for GitHub [1]. In addition, you can do similar CI/CD integration with any git repository by URL as well [2]. We see both of these as "minimal" integrations and we're hoping to add more first-class support for external repositories this year - but would love to know what you'd focus on first if you were Product Manager for a day :smile:
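For reference, once a repo is connected, the CI side is just a .gitlab-ci.yml committed to the repo. A minimal example (the image and commands here are placeholders for whatever your project needs):

  # .gitlab-ci.yml - minimal pipeline; image and commands are placeholders
  stages:
    - test

  test:
    stage: test
    image: node:10
    script:
      - npm install
      - npm test

GitLab mirrors the repository, runs the pipeline on each push, and reports the status back to GitHub.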
Since Netflix knows they will always be encoding, why would they pay for cloud provider overhead at all? It would seem far cheaper to have a dedicated fleet.
Because buying cheap surplus capacity (spot instances) is likely cheaper than running your own (Amazon has a reason to sell them at any value above the marginal cost of running stuff on them, which is basically power).
They might have a dedicated fleet for some capacity. They might also have a less predictable encoding load (e.g. not be sure how much content they will acquire), or be able to make trade-offs (encoding is expensive today? let's do a fast encode now and do the better one when spot instances are cheap).
They might also be buying at different prices/conditions than what you and I see on the web site.
The fact that not even the largest and most compute-intensive software companies manage their own servers now should really end the cloud computing cost argument at this point.
Netflix has long been open about the fact that their costs for content dwarf their IT costs, and encoding is a bursty CPU demand that perfectly fits cloud economics.
Note I'm not including their networking costs to get to the last mile, where they place a lot of their own boxes in ISP facilities pretty close to users; that would be about the same however they were doing core IT.
IME cloud _can_ easily be more expensive and more work than bare metal, in house or colocated. Of course it depends upon the workload and the tools/resources available to manage it. Even with clouds there is still server management to do.
More expensive - maybe, depending on how good you are at managing runaway costs.
More work - definitely not. No matter how hard managing your AWS workflows is, bare metal will always be all that same work plus everything else related to hardware, cooling, power management, ISP and more.
At Netflix scale it's probably cheaper to leverage cloud solutions than to maintain (build, equip, set up, maintain, update, upgrade) their own data center.
If your load is fairly constant, maintaining your own servers is cheaper from the moment you need two physical servers. And even at Netflix scale, "maintain their own datacenter" might mean renting a few racks in a data center of your choice.
The arguments for cloud look different: services offered by the cloud providers, higher flexibility, no capex (though at small scale you can rent and at Netflix scale capex is presumably not an issue) or better scaling in highly variable loads.
Netflix's encoding servers probably have a very variable load, since releases vary depending on the season (like the flood of films for Christmas).
I've found that "your own servers" vs. "rent a cloud" gets a bit weird as a company grows. First one is cheaper, then the other, then the first one again, then the other again; it continues like this a few times.
At one point it's quite possible that a combination of the two is the cheaper option.
And yes, you're completely right, load variability plays a huge role in all this.
That is very odd and doesn't mesh with my experience at all. Usually the server cost is linear with growth (more or less), so if the per-unit cost is fixed (more or less), then how on earth would such a swap occur?
Depends on the size. When operating your own servers, you need to recreate all that cloud providers do for you:
- maintain the physical machines
- build or rent data centers
- have people to operate, maintain, upgrade the machines
- set up, build, maintain, update, upgrade infrastructure for your projects to work on your machines
All these costs are not insignificant when you need more and more machines. And as it was mentioned above, they don't do well when demand is variable: when you no longer need as many machines, you can't just decommission them. When demand spikes, you can't install new servers instantly.
The hybrid approach brings its own problems: your software and your infra have to work both on your own servers and in the cloud, which is a harder feat to pull off than it seems.
Seriously HN?! Strategic decisions aren't made like this. They take a ridiculous amount of scrutiny, especially one like this that impacts day-to-day engineering, not to mention the entire infrastructure roadmap.
On the contrary, there's probably decent product alignment with G Suite (proven by Microsoft's acquisition of GitHub) and an opportunity for GitLab potentially getting acquired by Google.
Most kube focused customers move to GKE. It really is the best platform and Google is all in. Azure/AWS/etc see it as a commoditizing platform they don't control like Google does, so they'll never be the best.
GKE gives a sweet deal to encourage it because they see the long tail revenues.
It's a pretty easy decision for anyone based on k8s. I see this more as a sponsored ad for Kubernetes than for Google.
Do you have some sort of citation to back this up? Of the 20-ish Fortune 500s I work with directly and indirectly, not a single one has a GCP presence, and every one of them uses kube somewhere in the business.
For some reason, Google Cloud folks also seem to swarm these threads. If commenters have Google stock, they probably should disclose that on threads like this.
I'll back this up: I work with enterprise customers to adopt Kubernetes. I frequently have customers do installs on both AWS and Azure. I have never had a customer so much as consider GCP without me asking whether it was an option. Even then, it was uniformly dismissed. Google has a cool platform, but they've got major mindshare issues.
GitLab's Series D was $100 million for a valuation of $1 billion; it settled several months (Sept '18 [0]) after Microsoft announced their plan to purchase GitHub (June '18 [1]) and in the month before the Microsoft-GitHub transaction was settled (Oct '18 [2]).
If the pitch deck to investors didn't mention Microsoft buying Github, I would be surprised.
Of interest: looking at their pricing page [3], GitLab lists CI minutes as the top differentiating item for each tier. I wonder if that's a signal of how the market has responded to segmentation... or if it's as simple as "it looks better".
Disclosure: I work for Pivotal. We compete at the fringes and I expect there will be more as Gitlab expands its product boundaries.
IBM would make more sense than Google. Despite the topic of TFA, almost all of GitLab's ARR comes from their on-prem offering, not their SaaS offering. IBM is much better positioned to sell and support on-prem software than Google. IBM is also a long-time player in SCM, and despite being legacy, their Rational products still have a big footprint in the enterprise. With some integrations and migration tools, an IBM-owned GitLab would have a huge installed base to go sell Git modernization to.
Another possibility would be Broadcom, as CA has a number of adjacent products for the software development lifecycle.
I'd put IBM and SAP as outside chances. The Red Hat purchase makes perfect sense for IBM: they're buying a competitor which has deep roots in the enterprise. Gitlab is less so.
On the other hand Google is in a struggle to catch up with even Azure, so Gitlab looks attractive as a foil against Github.
Oh god please no. I want to be using Gitlab 5 years from now, not planning a migration back to Github or whatever when Google loses interest in another toy.
If Google acquired GitLab, they'd offer it as a managed platform on GCP, which they'd be unlikely to shut down, since contracts and SLAs require a heads-up of about a year before they can remove a GA service from the platform.
Just look at Google+: it still lives on for G-Suite customers while the public version has been shut down.
I don't expect GitLab to be of much interest to Google though: A lot of their stuff competes with services Google has on GCP, Ruby/Rails doesn't fit that well into their Python/Golang/Java stack and a remote-only company wouldn't fit into their office-bound culture.
The whole reason I picked Gitlab is that it doesn't matter what Gitlab does, if they pick a weird direction I can just self host and do what I want.
Gitlab is big enough that should they ever be killed by a Google acquisition (I don't think that's likely), a strong enough community would sprout and keep maintaining it.
Do you have examples of open source software in a similar situation (one company providing coordination and final approval in the previous project; similar complexity level to Gitlab, whatever that is), where that happened successfully?
I'm not challenging necessarily, I legitimately am interested in examples.
Hmm what about the Blender project, would that one count?
I admit it's difficult to come up with examples, though I don't know of any failures either. RethinkDB has been sort of a failure, but it was never very popular to begin with.
You can't compare that, as it's a paid enterprise on-prem version, not something you can self-host and keep running as long as you want. You won't get community fixes for it like you would if Google were to abandon GitLab after an acquisition.
Is it concerning to anyone else that Google doesn't break out revenue figures for GCP in their SEC filings? I feel like if GCP doesn't get as much traction as other Google services, it might go the way of Google Plus or get incorporated into G Suite.
"The fundraising values the startup at just over $1 billion, a company spokesman said, making GitLab the latest unicorn in the booming market for digital operations management. Alphabet Inc.’s GV, Iconiq Capital and Khosla Ventures participated in the financing round."
In their other post you can read that they just added a few Gitaly worker nodes and got better disk IO by not using RAID 5. I don't see why the cloud provider is relevant here; it could all have been done on Azure.
Unpopular opinion: the third reason GitLab decided on Google Cloud Platform is political correctness. Not only does GitLab not want to nurture its rival GitHub, which in turn was acquired by Microsoft, causing a political exodus; GitLab also fears that ties with Azure would make people give up and go back to Bitbucket or the like. So it's a pretty good PR stunt, pulled to polarize the situation.
Well, despite that, sadly it is still "politically correct" in the tech world to deride Microsoft as "M$", a "FUD machine", "evil corp" and an "attempted murderer of free software", even to this date.