New UX for Jenkins: Blue Ocean 1.0 (jenkins.io)
243 points by i386 on April 5, 2017 | hide | past | favorite | 96 comments



I am the leader of the Blue Ocean project - feel free to ask me anything :)

This is just the first of what we hope will be many releases in the coming weeks and months. The surface area of Jenkins is _huge_ and we may not have all your use-cases covered - please send us your feedback and feature requests by signing up at https://issues.jenkins-ci.org and submitting a new issue under the 'blueocean-plugin' component.

A few things that are coming up soon:

- Support for Github Enterprise

- Full read/write from the Visual Pipeline Editor for any Git repository (Github is supported today!)

- Visual Pipeline Editor feature parity with Declarative Pipeline


Hi James, good to see Blue Ocean entering GA :)

It has improved a lot since I last tried it, although my team is moving towards Travis for everything except server deployments. It pains me to say, since I especially appreciate self-hosting, but Jenkins is still massively behind in everything outside of Blue Ocean. :/

For lots of things that are now basic in all CI/CD systems (email/slack/irc notifications, github login, etc), Jenkins still requires external plugins and/or a ton of setup from the really clunky UI. It's too much :(

What's the plan to resolve this? Is Blue Ocean going to spread to the other areas of Jenkins?


Yes, this is absolutely within our mission to resolve. We've been discussing adding "high level notifications" in declarative and the editor. You can follow along at https://issues.jenkins-ci.org/browse/JENKINS-41687.

One thing to note about Blue Ocean is that as we bring in features (for example, better Slack and HipChat notifications) we will be making the required plugins dependencies of Blue Ocean. We don't want you to have to think about plugins unless you are going outside our curated experience.


The point of Jenkins is the powerful plugin ecosystem. Installing a few plugins is IMO exactly how it's supposed to be and not at all a burden.


The problem we are facing at my company: we have a federated Jenkins setup where each team has their own instance and my team manages Jenkins updates, etc.

You don't know the pain we suffer from this plugin ecosystem. You install a bunch of plugins which depend on a bunch of other stuff, there's no real dependency management for any of it, and it's thoroughly painful when one of the teams decides to upgrade a plugin and breaks their whole setup because of it.

It has happened before even with basic plugins like "Git": one upgrade completely broke more than 50% of our instances (we are running about 50 right now) because a lot of other plugins relied on it and then needed to be upgraded; those upgrades broke still other plugins that depended on them, starting a domino effect.

Also, some plugins are just released and then abandoned; we have to maintain about 10 forks of plugins because there's no curation over them.

We've reached the point where Jenkins' flexibility is itself a liability.

I know that Jenkins is wonderful for smaller teams, but there's a point where the lack of governance really hurts; it's the main burden.

But of course, that's a given for any flexibility: too much of it and you have to design boundaries to contain the problems... I'm just ranting because after this stint I'm burned out on Jenkins, to the point where I never want to work with or use it again.


The core plugin system is painfully broken.

It's like kernel panics: if an application causes the kernel to panic, then the more fundamental bug is in the kernel, not the application. If upgrading a plugin through the UI can cause other plugins to break because of mismanaged dependencies, then that's not really a bug in the plugin, but in the plugin management scheme.

That Jenkins does not forbid administrators from upgrading, through the UI, plugins which are fixed dependencies of other plugins, without at least a warning about the possible damage to the instance, is a critical defect, and one that has gone unfixed practically since the creation of the tool. It should immediately disqualify the tool from consideration in any large environment where critical updates (security etc.) must be applied while keeping uptime high.


You can manage plugins by dropping jar files in the plugins folder. It sounds like you have 50 teams with their own Jenkins which they have administrative control over, leading to chaos. This doesn't surprise me. Have you considered letting your team also manage the plugins and locking them all to a specific version?


Exactly this. We gave them full administrative power before because each team had their own little collection of plugins they needed, and we couldn't manage the migration off the "big" old master while also curating their plugin collections.

Our current discussion internally is about where to set their boundaries, remove administrative access and curate a set of plugins we are comfortable with maintaining (and that makes sense for most of the teams).


> You don't know the pain we suffer from this plugin ecosystem. Installing a bunch of plugins which depends on a bunch of other stuff, there's no real dependency management for this and it's thoroughly painful when one of the teams decide to upgrade one of their plugins and break their whole setup because of it.

That problem is called Java package management; Maven just plainly sucks compared to Composer and npm.

At my company we solved the problem relatively easily though: a Docker image that compiles in all the plugin jars (it downloads them via mvn). Upgrading is usually easy (you just copy the new version numbers from the "Available updates" screen into the pom.xml), but it sometimes takes an hour or two of wrestling when plugins introduce new dependencies (which mvn cannot detect).


Why does Maven suck? What are its weaknesses?


Hey piva00, my name is Tracy and I'm James' colleague at CloudBees. I think the project my team is working on may help you with the pain you're describing here.

We've made a curated distribution of key integrations and plugins for Jenkins and this distribution protected our users from the breaking upgrades related to SCM API 2.0, which includes the Git plugin upgrade you mentioned here.

If you're interested, send me an email (tkennedy@cloudbees.com) and I'd be happy to talk to you more about the problems you're experiencing with the plugin ecosystem.


What is the reason for choosing this model?

I assume the teams are still using pre-pipeline Jenkins jobs where they have to make changes on the Jenkins server directly, right? If you could switch to Jenkins Pipelines with a single master, you could give your dev teams that same freedom to manage their own builds without having to manage a dedicated Jenkins master for each one.


No way! We switched to this model exactly because it's TOO painful to run a single master with thousands of jobs. It doesn't scale at all, and it's impossible to do any kind of upgrade because there's always someone using Jenkins for something urgent or important.

We have more than 500 engineers; it's seriously impossible to run a single master in this environment. We run both Pipeline jobs and Job DSL (and some teams still rely on JJB).

There's nothing in Jenkinsfile/Pipeline that would help to mitigate any of the problems of scaling a master to hundreds of developers and tens of teams.

The dev teams all have control of their jobs through source control, and already had before the federation of instances.


I think the issue with a plugin ecosystem is that far too many 3rd party plugins do not respect the workflow of the core system that users expect to encounter - the UI appearance and functionality can easily confuse users, or even completely destroy the experience. Plugin integrations are often so poor that they ruin the product as a whole; particularly whenever a "top 10" plugin has serious bugs or flaws that prevent it from providing what a user really needs. Half-baked plugins that users depend on as a necessary evil are rarely a good thing. There is a reason that important functionality must be officially integrated into the platform: the core system must be presented as a unified and consistent interface.

Community plugins can be a godsend, but they can also be the exact reason why a software service falls apart.


There are many problems with the plugin ecosystem.

The big ones:

- Many don't work with pipelines (the scm-sync-configuration plugin, for example), and you don't know this until you try.

- Upgrading a plugin can, and often does, cause unintended and hard-to-detect problems (the only way to know is to run every job and find the failures).

- It's hard to automate plugin installation.


Can a Jenkinsfile completely obsolete the config.xml that each job has?

Clicking through the GUI to create jobs is error-prone, so I'm using Jenkins Job Builder at the moment to create them, but it would be great to just use the one file in-tree.
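For illustration, a minimal declarative Jenkinsfile of the kind being discussed can live in the repo and carry the job definition (the build commands here are placeholders):

```groovy
// Jenkinsfile — a minimal declarative sketch; build commands are placeholders
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
    }
}
```

A (multibranch) pipeline job still needs to exist and point at the repo, but its config.xml shrinks to little more than the SCM reference.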


Yes, this is what we are using at my company. Each team is responsible for their own Jenkins jobs by writing their own Jenkinsfile (with guidelines and support).


If you are a frontend developer who loves developer tools, has worked with SPAs and React - we've got an opening on the Blue Ocean team at CloudBees https://t.co/6ZP1faehVq


There's a glaring UX issue with Blue Ocean which I'm surprised hasn't been addressed yet: whilst a build is running, the console output forces the page to always scroll to the bottom and it's extremely difficult to stop it from doing this; I end up hitting Home a bunch of times and trying to collapse the console output. It usually takes at least 5 tries to align my mouse cursor with the collapse button before I manage to hit it. Please fix this ASAP.

There are other minor issues I've come across, but since this one is extremely annoying I thought I'd mention it here.


This is a known issue and a fix is slated to land in our 1.1 very shortly.

You can watch this ticket for updates https://issues.jenkins-ci.org/browse/JENKINS-38523


Good to hear. Thanks!


Congratulations on the new release!

Jenkins: 2.46.1 Blue Ocean: 1.0.0

I click on "New Pipeline", enter the GitHub token, select organization, select repository (without Jenkinsfile in it) and then after a while this message shows up:

-- Pipeline Creation Pending... Pipelines are still waiting to be created. You may now return to the Dashboard to check for new pipelines. --

I don't see any errors in the log file or in the browser console.

Any insights?

EDIT: In the traditional UI, I see a new job was created for my GitHub organization but it has no repositories (probably due to missing Jenkinsfile?)

I had to click the "Show more" button on Blue Ocean 10x until I reached the end of my job list (when I was looking for this new pipeline that would have been created).


Hmm, that's no good - would you be able to email me [1] your server log? Usually this happens when the network is slow, but something else might be up.

If you delete the multibranch project and try again, does it work the second time? It would be good if you could capture a HAR file [2] of your browser session too.

[1] jdumay@cloudbees.com

[2] https://support.cloudbees.com/hc/en-us/articles/204498690-Di...


Just did, thank you!

Deleting it and following the "New Pipeline" flow again has the same result.


Glad it's GA now, but it still has some issues:

* If I use a non-core parameter in a pipeline, BO asks me to move to the classic interface

* If I don't use a GitHub-like pipeline, the interface becomes partially useless: empty Branches and PR columns, etc. Where are the good old custom pipelines?

But anyway, it's a good move forward!

PS: Pipelines themselves also have some issues, like the really small number of useful steps; literally only the sh step gets used every time. Also, in the case of a custom pipeline it's still unclear how to abort a build in code. And using Groovy collection methods is very inconvenient (like trying to use .collect to create command-line options for the next sh step).


> using Groovy collection methods is very inconvenient

It looks like Blue Ocean generates Apache Groovy DSL code for the pipeline. From their pipeline doco [1]: _In order to provide durability, which means that running Pipelines can survive a restart of the Jenkins master, Scripted Pipeline must serialize data back to the master. Due to this design requirement, some Groovy idioms such as collection.each { item -> /* perform operation */ } are not fully supported_

I wonder how feasible it is to make any edits at all to the DSL and have the new code read in successfully by the Blue Ocean UX. Perhaps the collections methods aren't the only restrictions.
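One common workaround for the restriction quoted above is to avoid closure-based iteration entirely; a plain counted loop serializes cleanly. A sketch of building sh options that way (the tool name and flags are made up):

```groovy
// Scripted Pipeline sketch: build a command line without .collect/.each,
// since closure-based idioms may not survive a master restart.
def opts = ['--verbose', '--color=never']   // hypothetical flags
def cmd = 'mytool'                          // hypothetical tool name
for (int i = 0; i < opts.size(); i++) {     // counted loop is CPS-safe
    cmd += ' ' + opts[i]
}
sh cmd
```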

[1] https://jenkins.io/doc/book/pipeline/syntax/#differences-fro...


I haven't tested Blue Ocean yet but have used Jenkins previously. I'm currently using your competitor Travis, and I switched mainly because it's so much easier to get started with for a small project (which usually then grows bigger).

From the pictures the new Jenkins looks much more pleasant on the eyes, but how would you rank its competitiveness with Travis when it comes to getting started with testing the first time?

Will reconsider Jenkins in my next project as it seems like you are taking UX seriously!


Can you explain how the visual pipeline editor is supposed to work? The documentation for it is essentially non-existent. The version I have installed after upgrading to Blue Ocean 1.0 seems to be .2, and it doesn't seem to be able to edit pipelines in any way. If I switch to the experimental version 1, I get a pipeline editor that basically lets me view source, but I can't edit or save a job.

Just in general, the documentation for a 1.0 release seems really insufficient.


Sure, we'll make sure to get the docs done in the near future. Keep in mind that while Blue Ocean is 1.0, the visual editor isn't quite there yet.

There are a few major things we need to get done with the visual editor before it could be called 1.0; the backlog in the Jenkins JIRA has much of the plan laid out, though not all of the design is there to see.

Right now, the visual editor allows editing _declarative_ pipelines and only allows saving and loading from Github. Support will, of course, be added for the rest of the branch sources but that wasn't ready for the 1.0 Blue Ocean release.

In order to use it today, you'll need to set up a GitHub repository as a branch source, and you'll get edit buttons on the branches tab as well as at the top of the run details.


We've recorded a short video for how to use the editor https://t.co/NS6LHiRDec


Awesome work! We've been migrating to Jenkins v2 and blue ocean over the last couple of weeks. The new experience is way way better.

Does the 1.0 release offer much that wasn't in the previous version?

Edit: Answering my own question. No, the 1.0rc4 -> 1.0 is just a version bump.


Thanks for the support and great to hear that you are taking the time to upgrade!

We've shipped a lot of bug fixes in the last few releases, plus the integration of the editor into the Pipeline creation process when you choose GitHub, which landed in rc1. I recommend upgrading your Blue Ocean ASAP :)


Yes, rc4 is the same bits as 1.0.0


How can we have multiple Jenkinsfiles in a single branch?

We want to run one job to build things, then after we deploy, we want to trigger another job that runs a bunch of integration, performance, etc. tests against the new code.


I was dealing with this just yesterday, funny enough. We didn't really like the idea of using `load()` to bring in a new file. I'll preface the rest by saying we haven't actually implemented this yet, so YMMV. Our goal was to have a separate deploy pipeline that wasn't dependent on the test step (for various reasons I won't go into).

We decided on creating a new job of type pipeline, and pointing it at a script named `Jenkinsfile.deploy`. It has a few parameters, namely `BRANCH`, that can be manually set, or passed in from another job. We can then move our deployment steps from the main Jenkinsfile into this new one, and still version control the lot.

Major caveat is that the deploy step is not compatible with organisation scans (multibranch/pr), but that's fine for our use case. We usually want to trigger our builds manually based on the branch.
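Assuming that setup, the secondary pipeline might look roughly like this (a sketch; only the `Jenkinsfile.deploy` name and the `BRANCH` parameter come from the description above, the rest is illustrative):

```groovy
// Jenkinsfile.deploy — a standalone pipeline-type job points at this script
pipeline {
    agent any
    parameters {
        string(name: 'BRANCH', defaultValue: 'master', description: 'Branch to deploy')
    }
    stages {
        stage('Checkout') {
            steps {
                // hypothetical repo URL; the branch comes from the parameter,
                // set manually or passed in from another job
                git url: 'https://example.com/repo.git', branch: params.BRANCH
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh'   // hypothetical deploy script
            }
        }
    }
}
```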


I was playing around with a combination of Pipeline and Job-DSL and created this as a POC:

https://gist.github.com/HRMPW/9b2a3ccbdc370e0a7c9cb541be229d...

This is a Job-DSL seed job written in Declarative Pipeline that pulls in your Organization based on a build parameter and creates two jobs for every repo:

1. A multibranch Pipeline job that is based on the Jenkinsfile that will work across all branches and PRs.

2. A deploy job for the repo that is defined in a "jenkinsfile.deploy" file in the repo. This isn't a multibranch job but can be run on any branch based on a parameter for the branch.

This is a very simplistic implementation but could be expanded on further.


Sounds like you would be better off putting the logic in separate function of a global library that you can then call in different stages for different conditions: https://jenkins.io/doc/book/pipeline/shared-libraries/
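As a rough sketch of the shared-library approach (the library name, function name, and deploy script here are all made up):

```groovy
// vars/deployTo.groovy inside the shared library
def call(String environment) {
    sh "./deploy.sh ${environment}"   // hypothetical deploy script
}
```

Each project's Jenkinsfile can then call it:

```groovy
// 'my-shared-lib' is a hypothetical library name configured in Jenkins
@Library('my-shared-lib') _
node {
    stage('Deploy') {
        deployTo('staging')   // resolves to vars/deployTo.groovy
    }
}
```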


For some things, yes, and we are planning to do this. Deployment is different between our projects though, so we'd prefer our deployment method to live within the repo beside other build steps.


It's possible to include and exclude different stages of the build with any criteria you like using the `when` block.

Referring to this example https://gist.github.com/i386/5bacc640574c6d79bb72cd9a1181a50... if I have a branch called 'feature/cool-feature' then only the stages Build and Test would be run. If the branch is called 'master', all stages in the Pipeline would be run.

'when' also supports environment variables and expressions and even multiple `when` blocks - see the docs at https://jenkins.io/doc/book/pipeline/syntax/#when
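The pattern described, as a minimal sketch (stage bodies are placeholders):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }    // placeholder command
        }
        stage('Test') {
            steps { sh 'make test' }     // placeholder command
        }
        stage('Deploy') {
            when { branch 'master' }     // skipped on feature/* branches
            steps { sh './deploy.sh' }   // placeholder command
        }
    }
}
```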


Why do you need them to be separate Pipelines? Because it is convenient or for other reasons?

Do you want every one of these jobs in every branch that you create? In pull requests? Do you want all of these jobs to automatically trigger on every push to the repo?

It's possible to do all of these steps in a single Pipeline but I'm curious why you would want to separate it?


We want to run our integration tests, etc. against our per-branch staging environment and then against prod when we deploy the branch there as well. With a "post-deploy" pipeline file, we can keep that logic nice and clean and easily trigger the appropriate job via the API.

With everything in one file, we would have to add the logic to our Jenkins pipeline to handle when to trigger deployments (which is currently done through a service we built that looks at GitHub commit statuses of a certain prefix/listens to Slack slash commands).


We have a massive Jenkinsfile with a whole bunch of `when` clauses so that it can be run in separate cases. It might be nicer as multiple Jenkinsfiles, but it works.


I believe there's a `load` step which takes a path to a Jenkinsfile.
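For context, `load` is a Scripted Pipeline step that evaluates a Groovy file from the workspace; a sketch (the file path and method name are illustrative, and the loaded script must end with `return this` to expose its methods):

```groovy
node {
    checkout scm
    def deploy = load 'ci/deploy.groovy'   // hypothetical path in the repo
    deploy.run()                           // hypothetical method defined there
}
```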



It's on the roadmap but I can't give an ETA at this time.


Hi! I looked around trying to find a roadmap, and it was a bit difficult (didn't find it).

Do you plan on supporting Bitbucket Server like you are with Github Enterprise?


We are going to publish a public roadmap soon.

Yes, we do plan on supporting Bitbucket Cloud and Server but I can't give an ETA on that right now.


React, Angular or vue?


React


or Knockout?


Funny to see that Jenkins is starting to address a lot of the issues that led us to create Pipelines [1].

I still think Jenkins is a pretty huge beast to run and configure. We're trying to keep things light (install in a couple minutes with `pip install`) and intuitive (simple YAML files to describe pipelines).

We're writing a post about our take on automation/continuous delivery and how this impacted some of our design decisions for Pipelines. I'll probably dig a bit deeper into this version of Blue Ocean before then.

[1]: https://github.com/Wiredcraft/pipelines


This looks amazing. I haven't used Jenkins for a long time, mainly because I remember the UI being ugly and clunky. So this is really nice.

I haven't set up an in-house CI server for a long time, and if ever needed one again, I was planning to evaluate GitLab CI. Otherwise I just use CircleCI, and TravisCI for open source.

But with this new UX, I might give Jenkins another shot.


It looks nice, but it seems like Jenkins is doing way too much and locking you into using it.

IMO a build server should only manage its triggers (time, check-in) and then call the build script (make or whatever) with the correct parameters for that build configuration. Instead, it seems to be trying to take on both roles.


I don't know. We tried looking for alternatives when we were a bit down in the gutter with our Jenkins setup, but were even more disappointed with the other options.

Instead, we now use Jenkinsfiles everywhere, or pipeline scripts where we have scheduled stuff. That's working well enough so far for our small team.

The things Jenkins provides can be difficult to find elsewhere. For example, we spin up an EC2 slave for most builds, but sometimes need to then take the result and deploy it from another (static) node because of network restrictions. Stuff like git access, the credential store, moving stuff between nodes just works everywhere.


Do you have a build script (makefile or something else)? I.e., can you perform a build from your machine?


Most of our builds are 'basically' Node.js / npm or PHP / Composer, so reasonably contained build processes. It looks simple on the surface, but once you've been running CI for a while you notice all the dependencies on the environment. The installed versions of PHP, Composer, Node.js, MySQL / MariaDB, Java, and Ant all tripped us up at some point. (Not to mention we develop on Mac, deploy on Linux.)

Instead, we now have a very basic Ubuntu-based AMI with Docker installed, and otherwise stopped depending on the host environment as much as possible. Jenkins spins up an EC2 instance when needed, and shuts it down again after some period of inactivity. 95% of our pipeline scripts do work inside a container.


Jenkins is by far the worst CI/CD system, except for all the others. Jenkins seems to be the only multi-node job orchestrator that works well enough for a CI/CD workflow, but every time it tries to do something "smart", it seems to fail spectacularly.


Wanna see if Pipelines [1] works for you? If not, could you tell me what is missing?

[1]: https://github.com/Wiredcraft/pipelines


It's great and beautiful. However, it isn't yet as powerful as Jenkins Classic. So if you've been using Jenkins for a while, there's a good chance that Blue Ocean doesn't have the power/granularity you'll need.

It also switches your default GitHub links to Blue Ocean. If you only want to try it out, you'll have to tell your users to switch their Notification URL back to Jenkins Classic in their user configuration page. (If there's a way to do this for everyone, I'd love to hear it.)


Is there anything in particular preventing you from moving to Blue Ocean? I'd love to know!

As for the Github links, as you mentioned a user can set their preferred UI (Blue Ocean or "Classic") in their user preferences easily. We do understand that this might be difficult for some administrators to swallow and are looking at a global option to turn that behaviour on/off [1]

[1] https://issues.jenkins-ci.org/browse/JENKINS-43205


First of all, congratulations on an excellent job with the design and UX of Blue Ocean! There's tremendous potential here.

I hate to use HN for bug reports, but since you asked about roadblocks (thank you for asking!):

For one, we use build triggers to ensure downstream dependencies are tested as well as the current project. I would have expected some kind of pipeline integration between those in BO (displaying the triggered build pipeline in the parent pipeline as a whole), but instead BO actually makes it more difficult to get to downstream builds.

Classic:

https://jzila.keybase.pub/classic_downstream_build.png

https://jzila.keybase.pub/classic_downstream_build_detail.pn...

Blue Ocean:

https://jzila.keybase.pub/bo_downstream_build.png

Note that in BO it isn't clickable, so I have to either manually type the URL or go back to Classic.

Some display bugs in the pipeline view:

- Issues with display of pipeline nodes in running builds that make what's happening pretty confusing: https://jzila.keybase.pub/running_build_display_bug.png

- Issues with display of pipeline nodes in finished builds (falsely reporting node failure): https://jzila.keybase.pub/finished_build_display_bug.png

Minor grievances:

- Nodes run in parallel don't have their hierarchy preserved: e.g. parallel(a: {}, b: parallel(c: {}, d: {})) all get collapsed into a, b, c, d parallel nodes in the BO pipeline view (as you can see in the bo_downstream_build screenshot above).

My contact info is at https://keybase.io/jzila if you need more details.


Can you elaborate? It's just a plugin that lives at /blue. In my experience it doesn't remove functionality at all. You can still use "normal" Jenkins just the same.


And you can easily switch between Blue Ocean and the Classic UI at will. Just look for the exit button at the top of the screen to get back to the equivalent page in classic then click "Open Blue Ocean" in classic to go to the equivalent place in Blue Ocean [1]

[1] https://jenkins.io/doc/book/blueocean/getting-started/#switc...


Exactly what I was thinking. When I access Jenkins, after having installed Blue Ocean, I'm still greeted with the traditional UI and the "Open Blue Ocean" button.

I do see the option in my user settings page but I don't understand what it's for. What is this "notifications URL"?


Jenkins issues all kinds of links to itself via email, Slack, HipChat and GitHub to refer to runs of pipelines or the homepage of jobs. When we do this, we send you to a well-known URL that redirects you to your preferred UI - Blue Ocean or "Classic" Jenkins. That's what this preference controls. I hope this helps!


That makes sense, thank you!


As a long-time Jenkins user (since the Hudson days), I'd like to comment in my first language, Sardinian:

Minca, gi' fiada ora.

Then, in English: Wow! About time! Kudos to the whole team!


As someone who has to deal with Jenkins a lot and extensively uses Pipelines with Jenkins and has used BO since early betas, I really really like Blue Ocean.

The UX and UI are by far my least favorite things about Jenkins, so for basic usage Blue Ocean is miles ahead, even though it's nowhere near feature parity with the old UI. That said, I cannot wait for Blue Ocean to grow and get better.


We have been using Blue Ocean for a few months, but we have to fall back to the Classic UI because Blue Ocean is a buggy mess... It doesn't handle branches with slashes in them (reload or try a direct link), it handles console output very poorly despite some attempt at detecting when users don't want autoscroll, and the UI often fails and requires a browser refresh before data updates or UI elements start working again, ...

Uuuuuugh. It looks nice, but all my colleagues whine about how poorly it works. Then come all the problems caused by regular Jenkins multibranch pipeline limitations...


Congrats! We recently migrated from Anthill to Jenkins, and went with Pipelines instead of freestyle jobs, and what a difference it has made in terms of being able to see/manage your build/deployment as code. We have been using Blue Ocean in Beta mode (we use Cloudbees version). Is it out of beta for Cloudbees version also?


It will be available from our update center soon, and in the next month or so via our verified program. The CloudBees Jenkins Enterprise team are hard at work on this as we speak :)


Blue Ocean is cool, but I think declarative pipeline is even cooler, and it also recently hit 1.0: https://jenkins.io/blog/2017/02/03/declarative-pipeline-ga/


Glad to hear! Declarative and Blue Ocean have been designed to be used hand-in-hand.


Good progress, but there are still too many bugs for it to be used in production. It's way too slow, and the dashboard often doesn't open due to bulky/truncated responses.


As a TFS user, I miss Jenkins so much. There's so much you can do in the Jenkins pipeline with 2-3 plugins that is nearly impossible, or needlessly difficult, in TFS.


I have become a wizard at MSBuild Tasks because of this very circumstance. We're using TFS 2013 (with those XAML-based build templates that are a nightmare to extend) and I end up writing .proj files for everything that I need.


We use Jenkins with TFS at work. I'm not that close to how it was implemented, so I can't really comment on the complexity, but it seems to work flawlessly. It looks like Blue Ocean is specific to Git, though?


What is difficult?


come back :)


Pushing very hard for it internally.


Are there any features that could push it over the top?


If you stuck a Microsoft logo on it...

:-/


Good luck - drop by our Gitter room if you need any help convincing :)


Visually looks much better than the earlier versions of Blue Ocean that I saw (which left me a little underwhelmed), nice work!


Ahh, that would be the great work of our designer Brody Maclean (https://twitter.com/brodymaclean)! I'll pass on the compliment, and thank you!


We are using pipeline-as-code (i.e. a Jenkinsfile). However, the new pipeline plugin does not allow diamond-shaped dependencies (e.g. https://i.stack.imgur.com/iQFWT.png).

Jenkins does allow 'parallel' jobs, but that is not the same thing we want.


Aren't environment-specific builds generally discouraged? I'd rather have the same binary through each layer, and control behavior via environment variables or prop files.


Is this compatible with all 2.x versions of Jenkins? Or should we be on a specific version or higher?


We support Jenkins 2.7.1 (the first 2.x LTS) and above. However, there have been quite a few security advisories for plugins/Jenkins in the meantime, and I'd recommend you upgrade often (it's pretty painless these days!)


Congrats to the Jenkins team!


Thanks for your support Kevin!


This is both gorgeous and well needed!


Wow... thanks for the kind compliment :)


New UI is a step forward, but many people don't look at it that often. They use CatLight or similar tools that show them build status right on the desktop.


Are you affiliated with CatLight? https://news.ycombinator.com/submitted?id=aplex


You are right - people come to Jenkins when things go wrong, and that's why we've redesigned the result screen so you can easily pinpoint changes without having to endlessly scroll through huge log files. You can see this in the "Pinpoint Troubleshooting" section of the blog.



