GitLab 10.4 released (about.gitlab.com)
198 points by okket on Jan 23, 2018 | 169 comments



CI is fine, and I am happy GitLab is offering it, as it makes sense to integrate strongly with source control. But building a web IDE?

Please don't go down this rabbit hole, GitLab - there will be dragons (in essence you will have to build an OS for the browser). Developers have their beloved editors that work very well for the most part (at least better than their JS counterparts).


I was at Google when they implemented something like this in their source code viewing system. If you see a typo in a comment, for example, you click the pencil icon on that line, which pops open an editor with your cursor on that line; you fix the typo and can create and submit a pull request for review right there. Less than a minute.

It is fantastic.

I'm so glad Gitlab is providing competition to Stash and Github. Nice work.


Someone in my twitter feed was just asking for this feature last night. I don't want to open an editor, check out, fetch, edit, commit, push, and open a PR against a PR when I could just do it in the browser. I don't use Gitlab but I agree, it's a brilliant feature.


Thanks! There is a place for web editors (quick edits, new projects) and local editors (working on the same codebase for a longer time).

As warned against, we don't want to fall into a rabbit hole of spending all our time on this instead of the other parts of our complete DevOps vision https://about.gitlab.com/2017/10/11/from-dev-to-devops/ (logs, roadmaps, create project on push, binary repository, incremental rollouts, tracing). Therefore we based the web IDE on the awesome VS Code, which makes maintaining it a doable effort.


Hi!

It would be nice to see burndown charts in CE along with EBS[1], just like pipelines and runners.

[1] http://help.fogcreek.com/7676/evidence-based-scheduling-ebs


See our Stewardship page[0] for how we determine what goes in EE vs. CE; for now, Burndown Charts will stay in EE. As for EBS, feel free to open an issue for it[1] since I couldn't find one :)

[0]: https://about.gitlab.com/stewardship/

[1]: https://gitlab.com/gitlab-org/gitlab-ee/issues


This is not so much an argument for a Web IDE.

It is an argument against the cumbersome Git workflows, and the even more cumbersome GitHub Pull Request workflow on top of that.

Why can't I fix the typo in my editor, hit "upload", enter some commit message and be done with it?


> Why can't I fix the typo in my editor, hit "upload", enter some commit message and be done with it?

You can do this, if you have write access to the repository and already have an up-to-date local clone. I suppose it would be nice to be able to create a PR against someone else's repo without needing to fork first. I don't see any other reasonable way to make this easier.
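
Even in the best case, the "quick fix" dance today looks roughly like this (a sketch; the URL and branch name are hypothetical):

    git clone git@gitlab.com:you/project.git   # or update your existing clone
    cd project && git checkout -b fix-typo
    # edit the file, then:
    git commit -am "Fix typo in docs"
    git push -u origin fix-typo
    # ...and finally open the merge request in the web UI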


I'd respectfully disagree. In a monorepo the size of Google's (or even Square's, where I work now), there would be a whole schlep to open the file in your editor. Much better to fix right there where you're browsing.

Also, at Google, it's not even Git, or a Git/Github workflow.


Please respect the context of a comment, and refrain from strawman arguments.

This was not about whether or not this feature is good for the Google repo (of course it is, nobody disputed that!). This was about whether and how to apply this to other projects.


Please respect the fact that some of us actually read comments and reply to them thoughtfully. Throwing around trite accusations of “strawman” arguments doesn't help anything.

The comment I replied to was claiming that it was the git workflows (I assume for submitting, but am open to correction) that make this cumbersome. I was pointing out that in an enormous repository (and I did mention my current employer as an example too -- and we use Git, not Google's internal tooling), just getting to the point of having the latest version of some far-flung file that you usually never touch open in your editor is cumbersome enough to make this feature a win, regardless of whether the workflows for submitting the change disappear altogether.


Thanks for the clarification.

I would still see this as an instance of unnecessary complexity in the Git workflow - mostly the workflow for submitting, but not just that.

Of course, one could also argue that this is by design. Still, I don't see why a lightweight client that is feasible for the web would not also be feasible for the command line.

For example, it would be great to be able to have a checkout with reduced history. You could reduce in time and/or space, i.e. just a subtree and/or just the last 3 months. Even a distributed VCS doesn't need all history and all files as long as it has the relevant checksums.
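
Git can already approximate part of this, for what it's worth (a sketch, assuming a reasonably recent git and a placeholder URL):

    # reduce in time: only fetch commits from the last 3 months
    git clone --shallow-since="3 months ago" https://example.com/repo.git
    # reduce in space: skip the checkout, then enable a sparse subtree
    git clone --no-checkout https://example.com/repo.git && cd repo
    git config core.sparseCheckout true
    echo "some/subtree/*" >> .git/info/sparse-checkout
    git checkout master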


I totally agree about the unnecessary complexity of the git workflow.

If you haven't seen it before, check out gitless: actual research-backed simplification of the git commandline workflow.

Reduced views of files and history are super useful, but unfortunately don't work in the case where you've browsed your way to a part of the code that you seldom work on, since you're less likely to have loaded them into your narrow slice: it's still nice to be able to fix spelling mistakes far afield, and the web editing helps nicely for that.

What would help with the cases you describe, though, is a synthetic filesystem view, cache-faulted in as needed, like the one Microsoft is building. I'm pretty excited about where that's heading. https://github.com/Microsoft/GVFS for the project, https://blogs.msdn.microsoft.com/devops/2017/11/15/updates-t... for the announcement they're going to be working with Github. Pretty cool stuff.


We're not trying to replace your local editor. We know you love them. But there are still lots of reasons and situations where a web IDE is perfect and beneficial.


Please do replace my local editor.


GitLab Frontend Engineer here. I previously worked at Koding.com, which is an online IDE, and I was the author of Koding's collaborative IDE. Koding's ultimate goal was moving your development workflow to the cloud. To that end, Koding's IDE is capable of doing most of the things Sublime Text can do, and beyond that it has a massive collaboration feature built in. We thought we had built the best online IDE and that it was ready to replace local IDEs and text editors. We tried really hard, but we realised that it's almost impossible to replace pro users' local, powerful tools with something online. Been there, done that; it didn't work.

Thankfully, GitLab's idea is not to replace your local IDE. It is just about making things easier, and we are doing our best to give you the most powerful tool out there. Although it's currently in beta, I believe it's very useful when I want to fix a typo or change a simple thing in multiple files and do a quick commit.

One last thing: please don't forget that not only developers use GitLab. It's kinda easy for us to check out the target branch and make some commits, but this is golden for not-so-technical folks.


> Please don't go down this rabbit hole, GitLab

"No yak goes unshaven."


It looks like GitLab has caught featuritis. Instead of adding tons of unready and half-baked features, it should focus on stability and performance.


This may be completely unfair, but I think it's a drawback of the business model they have. When you have an "open core" that you don't charge for, you have to have something that you can charge for. If you make the "enterprise edition" performant with good UX and the "community edition" slow and clunky then you threaten to kill off your potential user base. On the other hand, if you spend all your time improving the core, then there is nothing for people to buy. The prudent thing to do is to do as little as possible on the "core" because it is essentially a marketing expense. You need to invest as much as possible in the extensions that bring in revenue.

Personally, I'm not a big fan of "open core" systems for this reason. I'd really prefer that companies like GitLab concentrated on actual services rather than trying to sell software. Having an "open core" can in some ways poison the core for outside development because you usually have to allow your code in the enterprise versions (or maintain your own forked copy). This is one of the reasons why Ghostscript never got the outside help that it really deserved (let's face it -- who uses a free software system and doesn't use Ghostscript?) The fact that nobody pays for it -- or even contributes -- was at one point a pretty sore issue for the author.

I love the fact that GitLab contributes useful free software to the world. I am disappointed that their business plan relies on selling proprietary software. I honestly believe they would be in a better place if they took a different approach, but they have always very politely disagreed with me when I've mentioned it ;-).


I'll try to very politely disagree :)

We tried charging for services: donations, paid feature development, and paying for support. None of them scaled and we moved to open core which allowed us to spend much more time on performance, security, installation, and dependency upgrades.

We want to make sure that the open source version of GitLab is just as performant and has an equally good UX as the enterprise version. There is no difference in the UX and there are no proprietary performance optimizations in the enterprise version.

There are some things that we see as a feature but that you could see as a performance item. An example is the SSH lookup in a database that used to be in enterprise and landed in the open source version in this release.


I was a fan, and we got an Enterprise license when it was the only tier offered. Now there are two tiers above EES with insane price jumps. I can only assume there will be even more tiers introduced, so we decided not to upgrade. We use GitLab as a platform for the whole company, but only a handful will use EEP/EEU features. An upgrade would be prohibitively expensive, or we would have to reduce the number of licenses to a fraction of the employees.


Hey, you guys are running a business and I'm running a commentary :-) This stuff is hard enough that what-ifs and naysayers are going to crop up. One of these days I should just put my money where my mouth is. I think the scaling issue is definitely a huge problem and even Cygnus said they found it extremely difficult. I'm not sure it's possible to get the ROI you need if you accept VC, so given your current position, it's not really fair for me to criticise.


Hey there! Jacob, Frontend Lead here. At GitLab we've put together a team of 5 Frontend Engineers (including myself) to focus solely on performance and stability issues. We are focused on reducing the size of our JS and CSS, starting by splitting our one giant JS file into many smaller JS files. Here's our current issue for code splitting the JS: https://gitlab.com/gitlab-org/gitlab-ce/issues/41341

Here is a link to our current CSS Refactor plan which will reduce the size of our CSS significantly and reduce render times. https://gitlab.com/gitlab-org/gitlab-ce/issues/42325.

We've got a lot in the pipeline. In the process we won't be shipping any new features, unless you call blazing speed a feature.


What real-world speedup will you see? There's no data in that issue to support the idea that this is worth doing. Surely someone hacked up a split version and ran some benchmarks, right? Maybe you could include that data somewhere, as the closest thing to it I can find is:

> Benefits: We will have files separated. It’s going to be better.


I'll add our benchmarks to that issue shortly. Our biggest slowdowns are parsing, layout, and memory. Without this splitting, every file is imported and functions are instantiated but never used. With our new method, only the files needed are bundled, most of the dispatcher.js file is removed, and the error-prone string matching in its switch statements is replaced by routing that webpack does automatically.

The plan is:

1. Split up the files.

2. Get rid of as much of dispatcher.js as we can and have webpack do the routing dynamically. That would eliminate most of one large, confusing file (dispatcher.js). JS is still cached for the pages you visit, but instead of 1.5 MB of JS it would be ~20 KB.


Jacob, that stuff really has no effect on the stability of the CI system and private runners, which is the real problem.


Could you point to problems and ideally issues that you are facing?

Kamil, GitLab CI/CD Lead


A few that come to mind for me:

Docker socket timeouts have been floating around for a long time with no resolution: https://gitlab.com/gitlab-org/gitlab-runner/issues/2408

Intermittent HTTP auth errors causing build failures: https://gitlab.com/gitlab-org/gitlab-ce/issues/30670

More generally, tuning the runner's polling interval to minimize latency is also tricky, especially with multiple independent runners, and the runner doesn't handle 429s well in my experience (getting stuck in a tight retry loop without backing off sensibly, thus continuing to exceed its request limit).

Thanks for taking the time to solicit feedback.


Yeah, and they've promised to work on performance for ages now with almost no improvement.

I run a very small installation with only a couple of projects and some hundred issues on a 4 GB machine. It eats up 2 GB (sigh!!!) - and often still feels extremely slow. I mean 2 gigabytes!! What for? That's many times the size of all the data I have in the DB there. And then it's not even used for something useful like caching. Some pages take several seconds to load. As a developer, that's totally unacceptable to me.

Is ruby really such a mess that it's impossible to run an app with reasonable memory consumption?


> Is ruby really such a mess that it's impossible to run an app with reasonable memory consumption?

Yes.

Any non-trivial Ruby app will quickly eat up 500 MB, and any non-trivial Rails app will soon balloon to 1 GB, with things getting worse over time due to memory fragmentation†. Since there is no parallelism, your only options are to run more unicorn workers, where prefork and COW hardly save you from duplicating memory, especially over time, or to use puma threads on JRuby, which is a memory hog of its own and often slower than MRI.
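
You can eyeball this on any box running unicorn; a rough check of per-worker memory, assuming a Linux host with GNU ps (RSS is in KB):

    ps -eo pid,rss,etime,args --sort=-rss | grep '[u]nicorn'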

There have been arguments made that developer time trumps CPU time [0], but there are some workloads, problem domains, and uncontrollable events for which this works at the beginning, yet later on you find yourself painted into a corner as suddenly things are not sustainable because you just can't throw more hardware at the issue without going belly up [1]. Once the low-hanging fruit has been reaped, you're challenged just to make your app behave within established parameters, with diminishing returns on effort which I'm sure you'd rather spend on solving actual problems for your customers. At that point you might just as well spend the money on rewriting part or all of your app in a more frugal ecosystem and mindset [2].

† Switching to jemalloc may or may not help. Over here it did not.

[0]: https://m.signalvnoise.com/ruby-has-been-fast-enough-for-13-...

[1]: https://twitter.com/migueldeicaza/status/950054181045440518

[2]: https://twitter.com/lloeki/status/950079609051152384


Thanks. I was afraid to hear something like this.

Being a PHP developer myself, I find it really hard to believe that resource consumption is treated with so little priority in the Rails/Ruby world.

And some attitudes here like "who cares? memory/cpu is cheap nowadays" are in my opinion part of the problem. I'd say well-written software should use as few resources as possible. Probably a habit that comes from my early days on a C64 back in the 80s.


> well-written software should use as few resources as possible

It's about priorities. IMO if using as few resources as possible is priority number 1, then something is amiss.


Replace "Ruby" with any modern interpreted language and then get over it. With the price of memory so insanely cheap, why does it matter? Gitlab isn't trying to be a lean Go microservice-powered app. It's attempting to be the centre of your source control and deployment world. So 2 GB of usage seems reasonable to me.

Any cloud provider will provide a VM that can comfortably run it for a very reasonable price.


It’s not just 2GB – I’m running a Gitlab with essentially 2-3 users and activity every few days, and it completely maxes an entire core and uses 6GB RAM constantly.


So what does it do when it uses all that CPU despite the lack of requests coming in? Maybe you have misconfigured some periodic job like backup?


I'm using their official helm chart, it's just idling. Sidekiq is doing nothing, the server is serving no requests - I have no idea how it manages to spend so much time on nothing.


The helm chart is deprecated. Use their omnibus package, it's very stable.


The helm chart "gitlab" is deprecated, the helm chart "gitlab-omnibus" is currently recommended, and the helm chart "cloud native gitlab" is going to be recommended in the near future - a GitLab employee answered me a few hours ago in this thread :)

I'm using the gitlab-omnibus one.


Hi, sorry you haven't noticed any improvements. We've been chipping away at performance for the last 6 months and have made some pretty noticeable improvements in various areas of the application.

For example, the average response time of an issue page has come down from 2.5s to 750ms over the last 6 months.

We still have a lot to do, but we're getting there.


One big thing that we would love to do is move to a multithreaded application server: https://gitlab.com/gitlab-org/gitlab-ce/issues/3592 That would save a ton of memory, but last time we tried we got burned because the ecosystem wasn't ready.


750ms is still a very bad value.


Maybe you are already committed to GitLab and its workflow, but Gogs and Gitea are small and fast GitHub clones written in Go if you need something lightweight.


Personally I run gitbucket, which is a small and self-contained java-based "github alternative".

There are a bunch of these small project/git-hosts, and while they're easy to manage they're less featureful than the gitlab offering. Gitlab does have some great features, such as the whole integrated CI system, built upon runners & docker.

The downside is complexity, and resource-usage. I know gitlab is free, and I could install it, but the added resources and potential security issues make it a non-starter.


I'd switch to using one of those, but my team relies on the code review of gitlab.


    > Yeah, and they've promised to work on performance for ages now with almost
    > no improvement.
That's simply not true; there have been a _ton_ of improvements that we made over the past 2 years. A very simplified example:

A specific GitLab.com issue in December 2015 vs January 2018 (I can't seem to find the exact URL):

2015: http://stats.gitlab.com/1902794/2015/12

2018: http://stats.gitlab.com/1902794/2018/01

Apart from that you can take a look at any of the past release posts or merge requests tagged with "performance" [1][2] and you'll see that plenty of improvements have been made over time.

    > Is ruby really such a mess that it's impossible to run an app with
    > reasonable memory consumption?
Ruby is not really to blame for this, instead it's mostly Rails and all the third-party libraries that we add on top that consume so much memory.

[1]: CE improvements: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests?scope...

[2]: EE improvements (some of these may be merges from CE): https://gitlab.com/gitlab-org/gitlab-ee/merge_requests?scope...


I can only tell what I've experienced. I regularly read the release notes, especially the sections on performance. And with each upgrade I'm hoping so much that we get back to loading times below 1 second (like it was with early GitLab releases).

Unfortunately this is almost never the case: Sometimes the pages load even slower. In the best case there's not much difference. Same goes for memory consumption.

But I understand now that this will always be a problem with rails.


Do you have any examples on what kind of pages are loading slow? Are these issue detail pages, MR diff pages, etc? It's possible you're running into cases we're not experiencing (enough) on GitLab.com, or maybe we did but we haven't really looked into them yet.


I'd say 2GB is a reasonable amount of memory for any application to use. Have you seen Slack recently?


They're improving performance and the UI, though; the new features only go to the EE* versions. The new features are just a way for them to get new Enterprise subscribers.


Absolutely this. The amount of RAM that I need in order to run Gitlab is greater than the RAM requirements of all of the projects that I would host in a Gitlab instance combined.

Fortunately, gogs is a thing.


It's a side effect of what they are calling "Complete DevOps"

https://about.gitlab.com/2017/10/11/from-dev-to-devops/


They've been promising to fix performance and the UI for years now, so I wouldn't hold out much hope. It's a shame, but there are better open source products so it's not like there are no other options for self hosted.


For what it's worth, in the last year (since January 23, 2017) we've merged ~440 merge requests labeled "performance"[0]. It's not perfect right now and there's still plenty of work to do, but compared to when I started at GitLab almost two years ago it's night-and-day.

We've also got an entire team dedicated to porting our Git layer to Go with Gitaly[1], which has been a major bottleneck that we've started resolving over the last year or so.

[0]: https://gitlab.com/groups/gitlab-org/-/merge_requests?label_...

[1]: https://gitlab.com/gitlab-org/gitaly


IMHO the UI did get better over the years.


Thanks. We do think it got a lot better last year https://about.gitlab.com/2017/07/17/redesigning-gitlabs-navi...


They got this disease a long time ago. Instead of focusing 110% on UI and performance, they just push more features.


Things have changed, and the changes seem measured. Change too much and your existing customer base won't like it.


The security testing feature is definitely interesting, but I wonder how it performs, as that by itself is a non-trivial task. It can be done for sure, but it takes a lot of resources and care to make it useful.

EDIT:

I would like to add that if this feature is in fact really useful and it works, the price for GitLab Ultimate is justified, provided the license does not restrict the number of targets you can test. It can be used as a service and could potentially replace other infrastructure in a typical corp environment that is used mainly by the security team. But as I said earlier, this is a non-trivial task.

EDIT 2:

This is my last edit. We use GitLab CI internally and it is quite nice, although it is not without issues. There are some really funky problems. But it works, and I love the fact that it is integrated into the GitLab product. Except for TravisCI, which makes it trivial, most other CI tools are very difficult to set up and maintain, and for small teams this is yet another annoying thing to do. As a startup, I can say it is very useful.


The DAST feature uses ZAP under the covers; their integration is a thin trigger around that tool. You can already use it for free outside of GL.

https://hub.docker.com/r/owasp/zap2docker-stable/

https://docs.gitlab.com/ee/ci/examples/dast.html
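
A minimal sketch of running the same baseline scan yourself (the target URL is a placeholder):

    docker pull owasp/zap2docker-stable
    docker run -t owasp/zap2docker-stable zap-baseline.py \
        -t https://your-app.example.com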


I was suspecting that. Thanks for sharing this info. It would be really nice if you guys implemented ways to swap out the built-in DAST tool. My startup is actually going to release something really cool and useful in the next couple of weeks, and it would be awesome if we can provide features to be compatible with GitLab.


Interesting. If you want to integrate it, there are instructions in the docs: https://docs.gitlab.com/ee/user/project/merge_requests/dast.... (use gl-dast-report.json to send results back).
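
For example, ZAP's baseline script can emit that JSON report directly; a sketch, with a placeholder target (reports are written relative to /zap/wrk):

    docker run -v "$(pwd):/zap/wrk" -t owasp/zap2docker-stable \
        zap-baseline.py -J gl-dast-report.json -t https://your-app.example.com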

If your product is as cool as you say it is, we would love to add it to GitLab and run both ZAProxy and yours by default.


Thanks. I will have a look. Btw, the tool will be a black-box vulnerability scanner but just like ZAP, it will be able to proxy as well - all nicely packaged with minimal dependencies and no framebuffer rendering - so it should be snappy.


Is there a repo I can follow along at? Sounds interesting.


I think you just need to drop a JSON file with the right format into the `artifacts` directory.

That said, this integration isn't really adding much over just defining a build step that runs the check (unless you're using AutoDevops to have this run on all your repos). You could just provide a Gitlab/other CI config snippet for users to run the tool.


Of course, and that will work. However, what if the vulnerability artefacts need to be converted into issues in GitLab? That will require a much deeper integration than printing the details in the build log. Dropping a file will be easy. We can provide just a switch for that.


GitLab Ultimate does not restrict the number of targets you can test.

What are the funky problems you encountered?


Most of the problems are around docker not cleaning up after itself. Eventually, the disks get full and then we have to manually clean things up. Another problem is that for whatever reason CI cannot check out the git repo, so the build fails. Not sure why - we have not investigated yet. There is also an issue with setting up the variables. Sometimes it 500s on save. I will try to get someone to plug those in as actual defects in your backlog.
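
A stopgap that may help is pruning periodically on the runner hosts, something like this (docker 17.06+ for the "until" filter; the retention window is arbitrary):

    # keep a week of images/containers; volumes don't support "until",
    # so they are pruned unconditionally
    docker system prune --force --filter "until=168h"
    docker volume prune --force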

Btw, when I say funky, I don't mean bad. I mean mild annoyance.


OK, so most of these problems are related to the Runner and the Docker containers for builds:

1. You can make the Runner not create cache containers with the Advanced Settings `disable_cache` option.

2. As for the Git repo, you can force the Runner to retry cloning.

3. The 500 on saving variables is new to me; let us know if you manage to get a stack trace from this error.

Kamil, GitLab CI/CD Lead


Are most if not all of GitLab's future CI and CD improvements going to be focused on using Kubernetes and/or containers exclusively?

We're not using that method of either hosting GitLab or deploying our software at my company for various historical reasons, and while GitLab has been good for us overall in its basic workflow, it's a bummer how much of the future roadmap and CD/Devops improvement seems not to apply to our setup.


We're definitely doing a lot with Kubernetes, especially within CD, but it's not completely exclusive. We usually provide powerful primitives that can be used for anything, but make it easier if you're using Kubernetes.

Take monitoring for example. You can add Prometheus monitoring to (nearly) anything. But if you happen to use it with Kubernetes, we'll grab a bunch of data automatically. If not, you may have to configure it yourself.


At my company, we use Gitlab only as a version control system and it works very well for around 100 programmers. We run it on an EC2 medium instance with around 200 GB of standard SSD storage.

We don't use Gitlab for CI. We use Jenkins.

For task and issues management we use JIRA.

Yes, it's 3 tools to manage but since each of the tools is really optimized for its task, it works really well.


I think any of these all-in-one solutions will eventually hit a wall. I've used integrated VCS/CI/Issues for small-scale projects and static site generation and it's very handy. But any substantial enterprise project is going to need Jenkins and JIRA before too long.


Didn't even know gitlab has CI. At my office we have Jenkins plugged into our local gitlab instance.


GitLab CI is, for us, configurable in ways that CircleCI and Travis are not, by a long shot (but they both have other features that gitlab doesn't have; it's just that we don't need them).

I sure miss CircleCI step folding in the log output though.


This. Step folding is definitely missing in gitlab-ci compared to Travis, especially as it seems most CI products have it, and (I think) it's something that only requires frontend development?


Thanks for the feedback. Here's an issue to track the request: https://gitlab.com/gitlab-org/gitlab-ce/issues/14664 and I added it to our Missing features section with https://gitlab.com/gitlab-com/www-gitlab-com/commit/ddfc9ee1....


I know you said you don't need them, but what features are we missing?


GitLab CI works really well, haven't looked back at Jenkins since I swapped


Agreed. Love it.


Same here! We love GitLab CI


I actually really like gitlab CI because it is incredibly simple and easy to get started. I'm sure if we had a more complicated CI flow we would benefit from Jenkins, but it's 1 less tool to manage. Also, IIRC, when it first came out, Jenkins didn't allow you to store the CI config in a repo file, which I much prefer.


I had to use Gitlab several times in the past, mostly because the companies I was working for wanted free private repositories.

Though I appreciate having a Github alternative with some really solid CI tools, it just feels like Gitlab cannot compete. Discussions and the UI in general are unreadable, everything feels slow (10 seconds to populate the assignee dropdown, come on...), updates and deployments happen every two days in the middle of work days ("sorry, I cannot push, Gitlab is down"), and the cringiest thing of all: seeing "deploying Gitlab CE 10.4.0-rc8" [1]. Releasing release candidates?!

Maybe I should try to deploy Gitlab on some server of my own.

[1] https://twitter.com/gitlabstatus/status/954491741322776582


> it just feels like Gitlab cannot compete

The GitLab hosted service is not really a competitor of GitHub at all; in fact, their (primary) marketing is based on self-hosted service.

In my opinion, this is connected to the corporate/engineering culture - GL relies on young and very sharp... but also underpaid minds, at least when compared to GH.

I speculate that in order to have a high-performing and stable infrastructure, you need grey beards, who require salaries that GL doesn't offer. I highlight that this is speculation - but I'm not surprised that the company that lost 6 hours of data is the one rather than the other.

If this is correct/realistic, it's not a negative judgment; it's just a different orientation.


Using gitlab.com with a group of ~20 devs - one of the main complaints some of them had was 'It is slow'. Over the last few months I hear more and more 'It is not so slow now, same as github'. So good job on all the performance improvements; users can feel them.

So it really depends how long in the past you are talking about :)


Thanks, glad you've noticed the improvements. We've still got more to do, but it's been an important part of every release for a while now.


Seconded. I don't know what's been worked on in the past year, but it's gotten a lot snappier overall.


What people don't understand about gitlab is that it's an absolutely monumental resource consumer.

I run one for a community of hobby developers and to keep my stuff out of github for ideological reasons, but it's running on what is, by _FAR_, the beefiest machine I run.

Normally I have machines that are a couple cores, couple gigs of ram, or single purpose machines with under 1G and a dedicated thread on the hypervisor.

Gitlab has a 32G DDR3/8 Physical core CPU machine to itself.

It consumes, at any given time, about 25% of that (before FS caches, which you're going to want).

I had a friend running this before me on a VPS with 4G of memory and the thing was so annoyingly slow that we blamed the hoster and our users were turning back to bitbucket/github.

Since the upgrade, things are smooth as a baby's butt.

Although it's hard to justify the SERIOUS expense of this server, it's certainly fast enough. I worry about larger deployments though.

Btw, you can get a taste for yourself if you like; https://git.drk.sc

Maybe it doesn't scale very well with many users/projects. We've only got a couple hundred projects and around 100 users. (and only a handful of CI runners)


Suggest gitbucket - https://gitbucket.github.io/

blazingly fast (a single war file - just run "java -jar gitbucket.war" to get started) and has a very nice UI. A plugin system enables you to extend the functionality (including CI) ... and a very active dev community.
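
A slightly fuller invocation, as a sketch (flags per the gitbucket README; paths are just examples):

    # keep data outside the default ~/.gitbucket
    export GITBUCKET_HOME=/var/lib/gitbucket
    java -jar gitbucket.war --port=8080 --host=127.0.0.1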



For anyone running this, can you comment? For example, one recurring criticism, before GitLab and the community managed to stamp out the expectation that it would be feasible, was that people were trying to self-host a GitLab instance on a cheap SBC (e.g., a Raspberry Pi) or the smallest DigitalOcean plan and were surprised when this wasn't doable, while Gogs and Gitea handle this fine.

So to get a general idea of what sort of setup is expected in order to run a gitbucket instance, if you're running one, what are the relevant details?


Many thanks for the link; as a Java/.NET dev it is surely a very good option to know about.


I love GitLab for the CI/CD which is absolutely second to none.

But I have to second the resource usage - GitLab runs on beefier machines than our production database cluster. We are throwing everything at it, and performance still sucks, both in terms of UI responsiveness and CI builders, which are around 2-3x as slow as a local docker build.

For example, we have not yet seen a good enough reason to migrate our production database to use NVMe disks. But for GitLab server and CI builders - very much so (for a modest boost in performance).

But then again, I still think I get my money's worth.


What I don't understand is that gitlab raised a very large amount of money - can it not pay a team in parallel to port it to Go or Java? Maybe the gitbucket or gitea teams are up for hire.


I tried gitea a while ago and it seemed very well featured while running much faster than gitlab. It's a single-executable Go app, so it was also trivial to install.


I used both, and in my experience gitea/gogs starts out much faster than gitlab, but becomes slower and slower as repository sizes increase. At some point it became practically unusable, despite the repository not being particularly large.

On top of that we had constant issues with pull requests causing internal server errors, the merge check failing in mysterious ways, and so on. These issues may have been fixed now. We ended up switching to GitLab, and while it's quite resource intensive, performance seemed much more consistent.


Their docker image also goes up blazing fast compared to gitlab.

Also, they have repo mirroring, which as I understand it is not available in GitLab's CE edition.


Repository mirroring from several sources has been available in GitLab CE for several versions.


One-time, one-way mirroring has. Continuous push-for-push mirroring is still an EE feature.


> Btw, you can get a taste for yourself if you like; https://git.drk.sc

That's inaccessible over IPv6, along with what I presume to be the community's website (https://darkscience.net). As far as I can tell GitLab itself supports IPv6 with ease, although gitlab.com doesn't (https://gitlab.com/gitlab-com/infrastructure/issues/645) due to limitations from Azure.


I believe this is a docker limitation (or a limitation of my version of docker). I'll look into it.

Thanks for letting me know.


For lightweight use among friends, I found that gogs works much better for me. Has a tiny subset of features but reliably creates/manages repos.

https://gogs.io/


Sadly gogs for whatever reason doesn't seem to actually use connection pooling or proper caching. It runs a new SQL query for every load of the "explore" page or a repo, which means I end up getting fewer requests/sec out of it than with gitlab.


Gogs is great. Gitea is a community fork of gogs if you're looking for more features as well.


Yes, the UI for issues and merge requests is atrocious with its light grey lines on a white background. I cannot skim an issue because I have to really concentrate on where a comment ends and a new comment begins. Everything else also seems to float around in empty space, e.g. the emoji reactions.

Give me solid black lines for orientation, dammit.


Thanks for the feedback. I've created this issue for the UX team to discuss ways we can make this better. We agree, there are a lot of opportunities for improvement here.

https://gitlab.com/gitlab-org/gitlab-ce/issues/42331


I've been asking for this for well over a year. Nothing yet. I think they need a fairly fundamental shift in design thinking.


Hey Dan, Sarrah from the UX team here. I'd love to hear more about your thoughts here. Is there an issue you've created that you can point me to? If not, that's ok. Maybe you can tell me two or three UX changes we could make that you feel would have the most impact on MR and Issues.


Hi Sarrah, I raised it with the UX team in a feedback email a while ago.

It's not just an issue with merge requests or issues, it's more of a general thing – the colour palette and typography do not make GitLab easy to scan in the same way that GitHub is. I find it requires much more mental overhead to parse GitLab.


This, about the mental overhead. So much.

@Sarrah: I put some of my thoughts here: https://news.ycombinator.com/item?id=16214892


FWIW I agree with pretty much all of those points. Great to point out things like line length and container size.

Also possibly worth mentioning, I have the same problems with much of the marketing site and blog, not just the application.


I'll bite on the last part about the release candidates: Technically, in software, every stable release originally was the last release candidate in which no showstopper bugs were found.

Some decide to put in the extra work and rename that last working release candidate and remove the '-rcX' from the file names.


This may be, but Gitlab seem to push all of their RCs to prod.


I work for GitLab and I was responsible for introducing release changes for this cycle.

That is correct, we deploy our RCs to production. I consider an RC only a point-in-time snapshot of a release that will be sent to the public.

One thing that is incorrect is that we push all our RCs to production. For example, this release's RC1 did not get to GitLab.com because during deployment to our other environments we found an issue that could have caused a large problem at scale. However, we executed a lot of QA and FA tests against all our RCs. You can see this here: https://gitlab.com/gitlab-org/release/tasks/issues?scope=all...

If we only deployed a final release, we would most likely be overwhelmed with the amount of changes GitLab receives and it would be hard for us to monitor and limit the impact.

We are working on getting to continuous deployment to GitLab.com, and as part of this push we started right away with the tools we have at our disposal. One of the ideas for getting to stable, non-impactful deploys was to create many RCs. The thinking was that with a smaller delta between the RCs, we can more effectively check the changes and reduce the overall impact. We also wanted to make sure that we over-communicate our status updates so that we can get feedback in case we overlooked something.

Was it problem free? Nope. Did it limit the impact to users? I firmly believe so. We have a lot of work to do to get to blue-green deployments and continuous deployment on GitLab.com scale while we also continue to ship to on premise customers. I do believe that we will get there the best way we know how to, and that is iterating on changes.


> One thing that is incorrect is that we push all our RCs to production

I hadn't recorded them or looked it up, hence the use of the word "seem", but this is at least good to hear.

> Was it problem free? Nope. Did it limit the impact to users? I firmly believe so.

You may be right. It might have been worse, and perhaps releasing more frequent low-impact changes helped, but I'm somewhat unconvinced. Orchestration was known not to be stable; given that, I would typically try and minimise deployments until orchestration had been more comprehensively tested and/or staging/canary environments had better parity with prod.

Additionally, the deployments were done in the middle of work days (as has been mentioned elsewhere), at highly inconvenient times (just before Christmas!), with little to no warning to users, hinting that bad internal planning must have played some part. When queried, the reply was literally that there is no schedule [0]. This response does not instill confidence.

> I do believe that we will get there the best way we know how to, and that is iterating on changes.

I really hope this is true—I'm still using Gitlab myself—but while I was an advocate 6 months ago, I've very much put such advocacy on hold for the moment.

Also, just to mention, I really appreciate there are GL employees on HN commenting on these threads. I've received a reply before elsewhere (on perf. and Gitaly) and it's been enlightening and informative. I really do like the transparency in this organisation, and have done my best to be as patient as possible with the stability/perf. issues up until now. But it's really become a bit ridiculous at this stage.

[0] https://twitter.com/gitlabstatus/status/943574131425140736


> Additionally, the deployments were done in the middle of work days (as has been mentioned elsewhere), at highly inconvenient times (just before Christmas!)

It's always the middle of a work day in some part of the world. It's always going to be inconvenient for someone.

Do these updates really cause significant outages or just slower response times due to invalidated caches or whatever?

The dogfooding-prereleases-on-Gitlab.com idea in general seems perfectly reasonable to me. They're the ones best equipped to deal with any resulting issues, so catching them before the majority of on-premise users hit them seems like a very good idea.

If they wouldn't do it, do you think the possible issues would magically disappear until GA? How?


The parent's point is that every production push everywhere was, at some point, a release candidate. Most people just rename that branch to remove the -rc suffix. Gitlab doesn't.


I get that, but I was just pointing out that while releasing a release candidate is fine, releasing multiple release candidates of the same version on production is slightly less fine.


Oh, I see what you mean now. Yes, I am inclined to agree there, although maybe their RCs are more akin to bugfix releases.


Jacob from GitLab here. Would love to help you out with your assignee dropdown taking 10 seconds. Currently on GL.com I see it takes 102ms. https://imgur.com/a/vSV21.

Also, I am wondering if you were looking at a much older version of GitLab with the discussion and the UI being unreadable. We've made a ton of improvements to our UI in the past year. If you are looking at a recent version, can you tell me what is unreadable about the discussions and the UI in general? That way we can fix it.


> the UI in general?

One thing I dislike very much about the UI in general is the fixed/sticky header, which wastes precious vertical space (especially on an ultra-wide monitor, since there's mostly nothing in it). Also, due to its color I find it distracting when reading code; I would love to be able to just scroll down to make it go away.

Most of the time I'm using the left sidebar, not the top one. For the few times I want to use the top bar, scrolling up really isn't an issue. Note that GitHub also doesn't have this.


Hello Jacob, glad to see Gitlab employees around!

For the assignee dropdown, it was roughly 6-8 weeks ago. I just went back to an old project and things seem to be much faster now.

In the past 3 years, I had to work on Gitlab a few times (for a few months every time), and at the end of every project I had this feeling that it was slow (a push at the time could take 5-10 seconds when it was almost instant on Github), and basically all requests were slow (dynamic dropdowns, posting a comment...). It is good to see you are focused on fixing these problems though :)

For the UI thing, a few points:

- imho, there's just too much on screen. I would say Gitlab is to Github what IntelliJ is to Atom: too many features I don't want to use/see. They just distract me.
- the container in the middle is too large, there are too many words per line, and things get hard to read
- even with the breadcrumb, it is really hard to see "where" I am in the app (which project)
- in the issue details page, it is hard to see the continuity of the comments, what is a comment and what is an action ("assignee changed from to..."). I cannot quickly go through an issue and get an overview of the discussion easily.

I think these are mostly tiny UI things to change, and only with some typography improvements, some borders and better contrasts, things will be a lot more clear.

Of course, all of this is highly subjective.

5 min of CSS tweaks: https://imgur.com/a/FLTJT (ofc it adds other issues/concerns, but you get the idea; I am no designer, but I am sensitive to nice UIs)


> I think these are mostly tiny UI things to change, and only with some typography improvements, some borders and better contrasts, things will be a lot more clear.

YES! Couldn't agree more. We (the UX Team) are pushing hard to get changes like this in. I opened up a new issue to capture the comments here: https://gitlab.com/gitlab-org/gitlab-ce/issues/42331. I will add your insights to the issue, much appreciated! And... LOVE that you used that example in your 5 min tweak <3


I use it at work and haven't had anything to complain about, but we are using the self-hosted version.


> The gitlab Helm chart is deprecated, and will be replaced by the new cloud native GitLab chart.

On the other hand

> Cloud Native GitLab Helm Chart. THIS REPOSITORY IS UNDER HEAVY DEVELOPMENT. IT SHOULD NOT BE USED FOR ANYTHING EXCEPT DEVELOPMENT

So, what should I use now? Also,

> A migration will be required to move from the current deprecated chart, to the new cloud native GitLab chart.

Yet I don’t see any explanation for how to migrate?


From https://docs.gitlab.com/ee/install/kubernetes/, the best way to run GitLab on Kubernetes today is the gitlab-omnibus chart:

https://docs.gitlab.com/ee/install/kubernetes/gitlab_omnibus...

It's in beta, with some limitations on its production-worthiness - basically that the Postgres Helm chart it depends on isn't configured as well as the Postgres built into regular Omnibus.
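
For reference, installing it looks roughly like this (Helm 2 syntax; the --set values are examples, check the chart docs for the current ones):

    helm repo add gitlab https://charts.gitlab.io
    helm install --name gitlab gitlab/gitlab-omnibus \
        --set baseDomain=example.com,legoEmail=you@example.com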

I know it's confusing that we've got several charts, but we're trying really hard to reduce that down to one as quickly as possible.


That’s the one I’m using, but based on today’s release page saying the Helm chart is deprecated and I should use the cloud-native GitLab chart, and based on what your link to the Omnibus chart says:

> This Helm chart is in beta, and will be deprecated by the cloud native GitLab chart.

I interpreted it as this being deprecated.

That said, great work on performance and startup times in the past months. For the first time my 2-user GitLab isn't maxing a full core and using 6GB RAM anymore, but is instead using 60% of a core and 6.3GB RAM, while site load times have gone down significantly, and startup now actually completes within the first 5 minutes (before, Kubernetes’ readiness checker often killed GitLab before it managed to come up).


Hi Kuschku,

Sorry for the confusion, as noted we are trying to remedy this with a single chart as quickly as possible.

Currently we have two Helm charts that can deploy GitLab, "gitlab" and "gitlab-omnibus".

* The "gitlab" chart deploys only GitLab itself and is not recommended. This is the chart that has been announced as deprecated in the blog post.

* The "gitlab-omnibus" chart is what we recommend users to install today, and deploys everything you need for a working GitLab installation. (Postgres, Redis, an Ingress, etc.)

We still support and maintain the "gitlab-omnibus" chart, but it too will eventually be deprecated in favor of the upcoming cloud native charts.

The cloud native charts will have a significant number of advantages, including:

* Separation of components for improved horizontal scaling

* Improved resilience

* Faster startup time (current container runs `gitlab-ctl reconfigure` on every startup)

* No need for root access

Due to the significant architectural changes, migration will be via backup/restore.


> * Faster startup time (current container runs `gitlab-ctl reconfigure` on every startup)

Oh my god, how much I'd love that.

On the topic of cloud native charts, can I use the new cloud native gitlab chart (if I run an external prometheus, postgres and redis already separately) today? And how would I migrate?

And one thing I'd love to see is building docker containers without having to give the runner access to the host's docker. How do other CI solutions do that?


> On the topic of cloud native charts, can I use the new cloud native gitlab chart (if I run an external prometheus, postgres and redis already separately) today? And how would I migrate?

The cloud native chart is still under development and breaking changes will occur, so I would not recommend using it for anything outside of testing. For example our current sprint is focusing on storage persistence.

For migration, you would perform a backup of the current instance and restore the backup onto the new cloud native based deployment.


I still get only 20 FPS when scrolling in a big commit diff.

I also wish they'd improve the UI/UX a bit in general. I've used Github for almost a decade, and GitLab feels a bit messy.


Hi, Jacob here, Frontend Lead. This is very high on my list. I have a team of 5 FE engineers (1 being me) focused on improving performance, and this is one of the big things on our list of highest-priority performance improvements.


After a few years on GitLab I moved to GitHub for these very reasons. GitLab's UI was messy and it was slow to display (as in input lag and low FPS on my reasonably powered desktop PC).


Has the CI/CD > Environments table styling been fixed? All the environment names are cut off and the table isn't resizable/sortable/searchable.

It's basically unusable in its current state.


Thanks for the feedback. I looked into it, and it looks like neither of these problems has been fixed as of yet. I've asked the CI and UX teams if we can work on the Environments page.

Here's the issue for the table size: https://gitlab.com/gitlab-org/gitlab-ce/issues/41594


The static/dynamic scanners are pretty cool; I just spent a couple days configuring the same functionality using the underlying open-source projects Clair (for Docker) and Bandit (for Python).

Doesn't seem worth the price bump from EES to EEU for me (which is a 4x per-seat increase), but this is a good feature for those willing to pay for it (or who already have another reason to buy the Ultimate edition).

Looks like my next task is to reimplement the DAST feature, which relies on the owasp/zap2docker-stable container under the covers.


I like Gitlab a lot, but I still struggle with things that shouldn't be hard.

For example: I have a single build box with a VM for linux, a VM for Windows, and one for OSX. When I commit, CI queues up the runners. Great. Except that there is some hard-coded value of 1 hour after which the remaining jobs in the queue get dropped. That means if a single job takes over an hour, the remaining jobs will automatically fail, even if you up the job timeout limit. No error messages at all. Just failed jobs with blank logs.

You have to go into a ruby configuration file and up the timeout to something big enough to accommodate your total time executing all runners, which means you must know that value in advance. Each update of gitlab kills that conf file, so you have to log in, change it back, and then stop/start the Gitlab instance.

Edit: clarification


Thanks. We are aware of this issue; we know what is causing it and we have ideas for how to fix it. The plan is to release a fix with 10.5: https://gitlab.com/gitlab-org/gitlab-ce/issues/38265#note_55....

GitLab CI/CD Lead


What's frustrating is that this is a regression -- I used to be able to queue up many jobs and never got the job failures (which, again, give no error message whatsoever).


I started using gitlab yesterday, and during one merge request cycle I managed to lose my text field contents several times. Keeping form contents backed up to session storage is a trivial task, and it's maddening that they haven't implemented it in an area where you might spend 20 minutes writing a comment, only to lose it when you change some value in a select menu. Really bad UX. Changing a merge target shouldn't require a full page refresh.


My issue comment drafts are always persisted locally and not lost on refresh; it is supposed to work that way. Not sure why you're seeing otherwise - please consider opening an issue.


Same here. I often force-reload issues/merge-requests because I see a popup "Press Ctrl-Enter to submit" which won't disappear, and obscures the content I'm trying to write/preview.

I've been pleased the text-content persists.


Looks like HN loves to shit on Gitlab.

I will chime in with my experience. As a small startup, we have been using gitlab.com with a private CI server for over a year now. The UI performance has _most definitely_ improved. And their deployments do not cause outages anymore.

Great work, team!


I've started using gitlab for an open source project and I really like it. With respect to git repos it's indistinguishable from GitHub (or better?), plus it has CI features integrated in.

The only thing that would be neater would be if they offered an OS X and/or Windows runner in their CI. But since I'm not paying for the service, I'm pretty happy with how it works.


We do have support for macOS and Windows in Runner, though you'll have to run them yourselves. We're looking into offering hosted macOS and Windows Runners for the future, but we have no solid timeframe for those right now. https://gitlab.com/gitlab-com/infrastructure/issues/3183


Yeah, sorry if I was unclear -- it's supported by the product but no runners are available for the gratis service.

I saw that issue before but it refers to "gold subscribers" which I'm not (just another freeloader).

Definitely keep up the good work, it's an awesome product and it's awesome that you offer an open source release.


Reminder: gitlab still has blatantly false claims on their website.

> GitLab has 2/3 market share in the self-hosted Git market [1]

Their CEO admitted that it was wrong here many times but chooses to keep those blog posts up without correction. [2]

1. https://about.gitlab.com/2017/06/29/whats-next-for-gitlab-ci...

2. https://news.ycombinator.com/item?id=15703221


Your second link says something different. He refuted the phrase "Gitlab is a software product used by 2/3 of all enterprises", which would have been a far bigger claim than "self-hosted Git market".


> "Gitlab is a software product used by 2/3 of all enterprises", which would have been a far bigger claim than "self-hosted Git market".

I would think the opposite. A generic claim about the whole "self-hosted" market is a wider assertion than one about the specific subset of enterprises.

Also, the blatantly false claim of being the no. 1 CI server is there in the blog post too.


I stand by the claim we have 2/3 market share in the self-hosted git market. The claim is now linked to a page that contains the details https://about.gitlab.com/is-it-any-good/


buddybuild and bitrise are not representative of the *entire* self-hosted git market. Your claim is totally absurd.

See responses to your similar comment here https://news.ycombinator.com/item?id=15704055

Also, discounting Jenkins as "not modern" (whatever that means) and declaring yourself number one is so dishonest. Your "standing by it" means nothing.

Why is it so important to you to push these lies? I don't get it.


buddybuild and bitrise data probably has a bias in it. But we didn't find much else to go on. If you or anyone else knows of something we should use instead, please let me know.

I think that since the Blue Ocean update to Jenkins, they have added a lot of the next-generation features. I'll either specify the claim better or change it later today.


> But we didn't find much else to go on.

Don't make claims based on unreliable data. I am not sure how else to express this.

see similar comments here https://news.ycombinator.com/item?id=15704407


No data is perfect; we consider the data reliable.

As promised I changed the wording of our CI/CD claims https://gitlab.com/gitlab-com/www-gitlab-com/commit/64ce89fd...


> buddybuild and bitrise data probably has a bias in it.

> we consider the data reliable.

I give up!


Bitrise seems to deal with a niche market (mobile development), so claiming their study shows a general trend in the self-hosted git market is a stretch at best. I'd imagine boring old git itself and non-website-backed things like gitosis are still a massive part of the overall install base. In any case, the claim that gitlab has a 2/3 market share just screams disingenuous marketing fluff to me and cheapens gitlab as a whole.


According to Google Trends https://www.dropbox.com/s/ldyvpxa0vbvelie/Screenshot%202018-... Gitosis reached 4% of GitLab's popularity in 2011 and is now at 0%.

Of course, Google Trends is not equal to installed base. But based on our experience I also don't think gitosis still has a massive installed base.


You are correct, Google Trends is pretty much irrelevant here. You would be surprised by Gitosis (and many others you've dismissed). It's been in pretty much every company I've worked at over the last 7 years. It's never well liked, but it works just fine. Most companies don't need the bells and whistles that come with GUI-based git installs, so they don't waste the money switching to them.


I tried to sign up to GitLab the other day. OAuth for Google seems to be broken for me. I reached out on twitter, and it never got resolved. Just me shouting into the void, apparently. ¯\_(ツ)_/¯


Hey, sorry about that. I just replied.


Just a reminder, as a GitLab reseller, I offer a 10% discount on GitLab Enterprise Edition to the HN community. My email is in my profile.


Could you explain the reseller thing? The Gitlab people are on HN, why don't they just sell directly to the community?


Oh, they do. You can buy directly from GitLab at https://about.gitlab.com/products/

However, as a reseller and a member of the HN community, I can offer HN a discount on list price which comes out of my cut.


Thanks. Good luck.


Who are the competitors of GitLab?


Please see the CI/CD box at the top right of the Cloud Native Landscape: https://github.com/cncf/landscape#current-version

(Disclosure: I'm the author.)


Hosted: GitHub, SourceForge, BitBucket

Self-hosted: Gitea/Gogs, Phabricator, Tuleap


GitHub Enterprise is on-premise, so I'd call it self-hosted.

https://enterprise.github.com/faq


Phabricator also has a hosted option in Phacility


All the tools that have been mentioned already and Visual Studio Team Services, which offers agile project planning, unlimited private git repositories (free for up to 5 users) and hosted CI/CD pipelines. http://visualstudio.com/team-services


I'd say Github, Atlassian (Bitbucket server), RhodeCode, Perforce (after acquisition of Deveo)



