This is great. While I use only Travis for build/testing at the moment, I really appreciate real competition between GH / GL / BB. Users of all three platforms win because of it.
Sorry if this sounds like a newbie question, but can I use this to run my test suite every time someone pushes to a feature branch and/or before anything is merged to master?
Yes. You can run any test framework that runs inside a Docker container. And you can specify different "steps" for different branches (either by name or by globbing).
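For example, a minimal bitbucket-pipelines.yml along these lines would run tests on every push and add branch-specific behaviour (a sketch only; the Node image and npm commands are assumptions about your project, not something from this thread):

    image: node:4.6.0
    pipelines:
      default:              # runs for any branch without a more specific match
        - step:
            script:
              - npm install
              - npm test
      branches:
        master:             # extra packaging step once something lands on master
          - step:
              script:
                - npm install
                - npm test
                - npm run build
        feature/*:          # glob match for feature branches
          - step:
              script:
                - npm install
                - npm test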
Serious question... are you using HG because of history, or do people actively choose HG over Git for new projects still?
I've used both; they're so close that it seems "odd" to go for the far less popular one, unless you have a really old HG repo and haven't bothered to switch.
I prefer mercurial over git and will choose it every time for new projects unless there's some other concern preventing that.
Mercurial has a number of features that git never implemented, in particular revsets, a query language for selecting revisions (e.g. hg log -r "user(alice) and branch(default)"). I also prefer the hg CLI over the git CLI. Mercurial has sane, concise online help, and a lot of work went into designing a command line that is consistent, composable, and made of pieces that do one thing and do it well.
Seconding this sentiment. I far, far prefer hg to git. TortoiseHg blows away the Git UIs as well, IMO. I always feel like I'm poking around in the dark in SourceTree.
Interesting, I wonder whether they always planned to launch their beta today or whether it got expedited after GitLab's announcements over the last couple of days.
I can give some more context on our launch. Today is the start of AtlasCamp, our annual developer conference, in Barcelona. We planned the launch on that date a while ago because it's the best time for us to share this exciting news.
We've always been invested in the CI/CD market (Bamboo has been around a long time), and Pipelines just made sense as a way to help all Cloud teams build great software.
What about Continuous Integration?
Though I like the many options we now have to easily set up some CI, a lot of enterprises still rely on old-fashioned on-premise CI. I can only wonder about the impact of deprecating Bamboo Cloud and what to use next.
I'm one of the Developer Advocates at Atlassian with a focus on the CI/CD space.
For "old-fashioned on-premise" CI/CD, Bamboo Server is still a solid offering from Atlassian, with active development on new features and support for existing ones. Discontinuing Bamboo Cloud is more about being able to "right-size" our cloud offerings so Atlassian can offer a CI/CD service for a team's first microservice deployed into AWS Elastic Beanstalk, and that scales up without overhead to many services each with many instances in a more complex environment like AWS ECS. And not just for AWS but for Azure, Google, Heroku, or whatever your choice of cloud platform. I believe Bitbucket Pipelines will be that next generation solution, while Bamboo will continue to serve on-premise needs for many years to come.
With Bamboo Cloud you were able to set up a pretty convenient "intermediate" solution, with Bamboo Cloud + an agent on your servers. Will it still be possible with Pipelines?
Also, I couldn't find the docs for aggregating test results.
Not so much. One of the things that I think makes Pipelines better suited for cloud is that it's agent-less. But that does mean there's no option to run an agent on-premise to bridge pipeline execution. Indeed, if you are accustomed to Bamboo, you are likely to find Bitbucket Pipelines rather minimalist. For example, there is currently no facility for aggregating test results.
How does this compare to Shippable? Specifically, does it build Docker containers which can be pushed to a registry (e.g. Google's) and then deployed using Kubernetes into Google Cloud?
Curious if there's going to be an on-prem version of this? We run Bitbucket at my company, and Jenkins Enterprise as well. How does Bitbucket Pipelines compare to Jenkins Pipeline (https://jenkins.io/solutions/pipeline/)?
Bamboo is still the recommended solution for on-premises installations. The requirements and practicalities of on-prem vs cloud CI are different enough that they warrant different approaches.
That said, Bamboo supports scaling builds using AWS, and has first-class support for Docker-based build/test setups. I gave a talk on this at Atlassian's Summit last year if this sounds useful: http://summit.atlassian.com/videos/build/docker-continuous-i...
I really want a way to configure Bamboo via a text configuration file, though. We're using it now, but we have to copy the configuration from build job to build job, and each job ends up subtly different over time. :( Plus I really want a way to say: here are the steps to deploy to a server, now run those steps against these three servers. It would be even better if each server could have its own SSH key for deployments, to prevent a hacker from using Bamboo to access all the other servers.
I think that's where Bitbucket Pipelines comes into play. Its YAML file is similar in concept to Travis CI and others. Jenkins Pipeline has a Jenkinsfile, which is also similar in concept but is based on Groovy. While there's a learning curve for that, I would argue it's definitely more powerful.
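For a taste of the config-as-code angle, here's a hypothetical sketch of what the multi-server wish above could look like in a Pipelines YAML file. Nothing in this thread confirms per-host key handling; the hostnames and the deploy.sh script are made up:

    image: ubuntu:16.04
    pipelines:
      branches:
        master:
          - step:
              script:
                # hypothetical: run one deploy script against several hosts;
                # per-host SSH keys would have to live outside the repo,
                # e.g. in secured environment variables
                - for host in web1 web2 web3; do ./deploy.sh "$host"; done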
I'm not so sure about that, first off the pipelines feature doesn't exist on Bitbucket Server and Atlassian recommends you use Bamboo there. Secondly, that doesn't help with managing secure access to production servers with potentially sensitive data. You'd just be moving from storing all the passwords/ssh keys in bamboo to storing them all in your version control system.
The page takes 53 seconds to load for me. Initial shell loads quickly, but the content does not appear for almost a whole minute. (Chrome on OS X with all extensions off.)
I think they're saying they run the pipeline builds using Docker containers. It's how they can build easily based on any branch, i.e. what they use behind the scenes, not what you need to be using. Although I'm somewhat doubting myself now...
That's correct. Bitbucket Pipelines uses Docker as an execution environment for builds. However, if the goal of your build is to produce a container, whether Docker or otherwise, Bitbucket Pipelines cannot do that at this time.
You mean I should build a Docker image with both Java and Ruby? The image would be much larger than intended in that case. In production, I would have two different Docker images.
At this time, it is a limitation of the beta that Bitbucket Pipelines can only associate a single image with your build. Avoiding the complexity of chaining images helped us to ship more quickly. I expect this will change before GA. For now, you would have to effectively "merge" the Dockerfiles yourself and publish the image so Bitbucket Pipelines can pull it.
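To picture the "merge and publish" route: the config would then just point at that pre-merged image. A sketch, where myorg/java-ruby-build is a hypothetical image you'd have published yourself:

    image: myorg/java-ruby-build:latest   # hypothetical pre-merged Java + Ruby image
    pipelines:
      default:
        - step:
            script:
              - java -version             # both toolchains available in one container
              - ruby --version
              - bundle exec rake test     # illustrative test command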
It is unfortunately common to ship Docker containers to production that contain the entire tooling suite to build the thing being deployed, and it sounds like that might be what is happening in your case?
If I understand right, the thing you're expected to provide to run inside "pipelines" is a container in which your build can be performed; the output of that build might also be a Docker container, but hopefully there is no requirement that it be the same container in which you perform the build. Or you might be shipping something completely different, totally unrelated to Docker, as the build output that gets sent onward to production. Of course, all of this is just a guess; it will become clearer as some of us enter the beta program.
Kyle, this is correct. We use containers simply to create the environment in which we'll execute the script commands in isolation. You can start with a small container and install dependencies during the run or you can prepare a bigger container with the dependencies installed already.
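For the Java + Ruby case above, the install-during-the-run route might look like this (a sketch assuming a Debian-based Ruby image; the exact packages and commands are illustrative):

    image: ruby:2.3                # assumed base image, Ruby toolchain only
    pipelines:
      default:
        - step:
            script:
              - apt-get update && apt-get install -y default-jdk   # add Java at run time
              - bundle install
              - bundle exec rake test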
I'm one of the Product Managers on Bitbucket Pipelines. The beta is indeed free, with a limit on the number of minutes per user per month (starting at 300 mins/user/month, but that may change during the beta).
We haven't decided on the post-beta pricing yet. The beta will help us understand better how our customers are using it so that we can price it accordingly. We're leaning towards a model that scales well with the number of users on an account.
Do you have some details on the kind of hardware Bitbucket Pipelines will run on, as that can affect the time it takes to run a workflow? More specifically, how many cores and how much memory will the containers have access to?
We will be experimenting with different configurations during the beta to find the right fit. Some details about the available resources are published in our documentation [1] and will be updated as we move forward.
I got beta access, created a custom Docker image, and got my tests to run; however, lots of them are failing because I need Redis. Is there a way to add a Redis server?
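In the meantime, since the script seems to run as root inside the container, one workaround might be installing and starting Redis in the build script itself. A hedged sketch, assuming a Debian-based image (the Node image and npm commands are just placeholders):

    image: node:4.6.0                    # placeholder base image
    pipelines:
      default:
        - step:
            script:
              - apt-get update && apt-get install -y redis-server  # assumes Debian-based image
              - redis-server --daemonize yes                       # Redis now on localhost:6379
              - npm install
              - npm test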
While we don't have the "recipe" documented in an example repository, GAE does have the right knobs and levers to make it possible to use Bitbucket Pipelines with GAE. Give it a try! And even if we haven't made an example repository yet, don't rule it out.
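A hedged guess at what that recipe could look like, assuming Google's published google/cloud-sdk image and a service-account key supplied via an environment variable (GCLOUD_KEY_B64 and my-gae-project are made-up names, and none of this is confirmed by Atlassian):

    image: google/cloud-sdk:latest        # assumed image with gcloud preinstalled
    pipelines:
      branches:
        master:
          - step:
              script:
                # hypothetical: base64-encoded service-account key stored in an env var
                - echo "$GCLOUD_KEY_B64" | base64 -d > /tmp/key.json
                - gcloud auth activate-service-account --key-file /tmp/key.json
                - gcloud app deploy app.yaml --project my-gae-project --quiet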