I was optimistic about GitLab's vision and liked how they shipped a ton of good features, but they reached feature creep and I'm pretty sure they will never get "usably fast" because maintaining all of these is a huge burden.
Also if you have tons of features like these, I think it's impossible to get all of them right, and in the end you have a lot of mediocre/poor features and only a few good ones.
The other thing is user experience. Who will know all these features, especially when setting some of them up is already hard and needs experts...
If you try to be everything for everyone, you mostly fail, or you end up with something very complicated, unusable, and/or slow.
I'm optimistic. They added tons of features, yes, and ended up in a bit of a (UI) mess. But I'm quite sure they will clean it up. Let them experiment; to me it's a (good) sign that they are courageous and haven't become totally enterprise-y. So far I think they have been quite able to tame the features.
> they reached feature creep and I'm pretty sure they will never get "usably fast"
Feature creep only really results in poor performance if either a) the product is poorly architected to handle the added features, or b) the product is correctly architected but poorly deployed, and is struggling with the limited resources provided to it.
The first step is to define clear interfaces: design a plugin architecture and migrate user-facing features to use it. This lets users disable features they aren't interested in and see immediate performance and UX benefits. For users who want every feature enabled, migrate plugins to run on their own networked nodes, so that enabling features requires horizontal scaling instead of vertical scaling.
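As a rough sketch of the idea (this is not GitLab's actual architecture; the feature names and the registry API here are invented for illustration), a plugin registry along these lines could look like:

    # Hypothetical plugin registry; feature names and API are invented.
    from typing import Callable, Dict

    class PluginRegistry:
        """Maps feature names to setup hooks; disabled features never load."""

        def __init__(self) -> None:
            self._setup_hooks: Dict[str, Callable[[], None]] = {}
            self._enabled: Dict[str, bool] = {}

        def register(self, name: str, setup: Callable[[], None],
                     enabled: bool = True) -> None:
            self._setup_hooks[name] = setup
            self._enabled[name] = enabled

        def boot(self) -> None:
            # Only enabled plugins pay their startup and memory cost.
            for name, setup in self._setup_hooks.items():
                if self._enabled[name]:
                    setup()

    registry = PluginRegistry()
    registry.register("issue_boards", lambda: print("issue boards loaded"))
    registry.register("container_registry",
                      lambda: print("container registry loaded"),
                      enabled=False)  # admin opted out; costs nothing at boot
    registry.boot()                   # prints only "issue boards loaded"

The same interface is what later lets a plugin move behind a network boundary: its setup hook registers a remote client instead of loading in-process code.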
Architecting for horizontal scale when the project is new, unproven, and simple is just overengineering and should be avoided at the outset; but as the product matures and more and more use cases are supported, not architecting for horizontal scaling turns into technical debt.
To the Gitlab folks reading this: remember that some of your users are lusers. You will have customers running on-prem 10,000-user instances off a single server with a couple of cores and less RAM than the workstation that you donated to your local grade school five years ago. Yes, that's their fault and not yours for grossly under-provisioning. But their users aren't going to say "damn these servers are slow," they're going to say "damn, Gitlab is slow." They're going to think it's your fault and not their infrastructure team's fault. So how can you combat this misunderstanding?
Build in applicative monitoring. Define what "responsive" means, give end users a page of browser-side tests that check responsiveness, and if something fails, maybe provide some context ("looks like memory usage is really high and it's going to swap, Gitlab probably needs more RAM!", "looks like the database is slow to respond", "go talk to whoever set up your Gitlab instance about these results"). If you really stand by your product and think that it's responsive, well, prove it - don't ask your users to prove it.
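A minimal sketch of what the server side of such a self-check page could look like, assuming a Flask app and the psutil library; the /-/responsiveness route, the thresholds, and the run_trivial_query() helper are all hypothetical, not anything Gitlab actually ships:

    # Hypothetical self-diagnosis endpoint; thresholds, messages, and the
    # run_trivial_query() helper are invented (pip install flask psutil).
    import time
    import psutil
    from flask import Flask, jsonify

    app = Flask(__name__)

    def run_trivial_query():
        # Hypothetical stand-in for e.g. a "SELECT 1" against the real database.
        time.sleep(0.005)

    def check_memory():
        vm = psutil.virtual_memory()
        ok = vm.percent < 90  # invented threshold
        return ok, ("memory OK" if ok else
                    "looks like memory usage is really high and it's going "
                    "to swap, Gitlab probably needs more RAM!")

    def check_database():
        start = time.monotonic()
        run_trivial_query()
        elapsed_ms = (time.monotonic() - start) * 1000
        ok = elapsed_ms < 100  # invented definition of "responsive"
        return ok, f"database responded in {elapsed_ms:.0f} ms"

    @app.route("/-/responsiveness")
    def responsiveness():
        checks = {"memory": check_memory(), "database": check_database()}
        return jsonify({name: {"ok": ok, "detail": detail}
                        for name, (ok, detail) in checks.items()})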
Apart from what the community has already outlined, we're always looking to improve the performance of GitLab itself; you can check out our ongoing efforts in [1].
Beyond that, we're also working on improving the performance of GitLab.com. Here's a list of the issues we're working on [2].
If performance is crucial to you, we invite you to try out a self-hosted CE instance. It should be fairly easy; we even have ready-made Docker containers [3]. As always, if there are any issues, just ask away and we'll do our best to help.
really? i've heard that self-hosted gitlab tends to have terrible performance, even on pretty strong VMs. you're actually the first person i've come across who's written positively about gitlab's performance.
I don't have any experience on the matter though. My previous job used bare SSH repos and my current one uses Stash/Bitbucket.
Here gitlab runs on a separate Qemu/KVM instance on a smallish Hetzner server. Performance is a non-issue.
(Though I can't speak for the last two versions - we skip versions if we're not impressed and/or wait until something is finished (like, e.g., the new UI now). We never update to x.x.0, since there's always an improved x.x.3 or so worth waiting for.)
It has. I've never seen any other application (not even Java) take 2 GB of memory for a really, really small instance: 3 users, 25 projects (only three are regularly updated). (Site loading time and memory usage got better over the last two releases, so it used to be even worse.)
Some pages load really, really slowly (we only assigned two real NUMA cores to it), and even projects take about 1 second to load, which I think is really bad when we're browsing over our local 1 Gbit LAN.
We were on Jira and Stash/Bitbucket Server before, and they took about the same amount of memory (though you could trim them to use less). Bitbucket Server didn't have the features, and updating Jira/Stash was way worse than GitLab (you need to copy server.xml), but the performance was way, way better, even on worse hardware.
The trade-off was Jira > GitLab issues, but GitLab > Stash; since we only used 1% of all Jira features, we gave GitLab a try, and we started using it before they went crazy with their features.
For a really small team of 6 developers and maybe 3 CI builds per hour, I had to give the Docker container 7 GB of RAM to run well (not including the CI runner).
It's ridiculous that it would need that much for a small setup, but any less and page load times get very slow.
Our monitoring functionality leverages Prometheus (https://prometheus.io) to capture and monitor systems and applications. There is support for a wide variety of common apps (https://prometheus.io/docs/instrumenting/exporters/), and developers can always further instrument their own code as well.
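For example, further instrumenting your own code with the official Python client looks roughly like this (a minimal sketch; the metric names here are made up, but prometheus_client and its Counter/Histogram API are real):

    # Minimal sketch using the official Python client
    # (pip install prometheus-client); metric names are invented.
    import random
    import time
    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("myapp_requests_total", "Total requests handled")
    LATENCY = Histogram("myapp_request_seconds", "Request latency in seconds")

    @LATENCY.time()  # records how long each call takes
    def handle_request():
        REQUESTS.inc()
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

    if __name__ == "__main__":
        start_http_server(8000)  # Prometheus scrapes localhost:8000/metrics
        while True:
            handle_request()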
In the near term, our GitLab and Prometheus integration is focused on detecting performance changes in key metrics like latency, throughput, and error rates, then leveraging our knowledge of the code base and CI/CD to funnel that feedback back to developers and to the changes that introduced the regressions.
This will likely not replace a dedicated monitoring solution like New Relic or Nagios, but can augment them and surface performance analytics within the tool developers are already using daily.
We'd love it if you gave it a try and passed along feedback!
The new beta navigation is so much better than the current navigation model. The navigation context is now easier to see and I find the colour quite attractive. :)
I can't wait until it is enabled by default in future versions.
Is this an Enterprise-edition-only feature? From reading the post it seems this is the case, which is a shame, because it feels like a pretty basic feature that would also be really useful to small CE users.
> Also if you have tons of features like these, I think it's impossible to get all of them right, and in the end you have a lot of mediocre/poor features and only a few good ones.
> The other thing is user experience. Who will know all these features, especially when setting some of them up is already hard and needs experts...
> If you try to be everything for everyone, you mostly fail, or you end up with something very complicated, unusable, and/or slow.
I'm curious to hear counterarguments to these points.