Seems extremely expensive, especially the data transfer part. A quick back-of-the-envelope calculation at $0.50/GB suggests we would end up spending tens of thousands of dollars in data transfer costs alone, vs $0 with AWS ECR (transfer within a region is free).
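For a concrete (hypothetical) example: a fleet of 100 instances each pulling a 2 GB image once a day is ~200 GB/day, or roughly 6 TB/month; at $0.50/GB that works out to about $3,000/month, and it grows quickly with fleet size and deploy frequency.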
Guess this would work out okay for storing some development images but definitely does not seem feasible for production use.
Unless I'm missing something? There's some fine print about transfers being free within the context of a GitHub Action, but it isn't really clear. Does my fleet of EC2 instances pulling down the image after a new deployment count as being within an "Action"?
Data transfer between GHCR and Actions is free. We're building out a much tighter integration between your source code and the artifacts created from it. This will create a stronger supply-chain link and work toward better reproducibility.
We recommend using GHCR for the development and test workflows and then publishing to the "Cloud *CRs" for your production images such that they can be pulled directly from there.
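Concretely, the promotion flow we're suggesting would look something like this (just a sketch; the org name, repository, and ECR account are made up):

```
# Pull the tested image from GHCR (hypothetical name)
docker pull ghcr.io/my-org/my-app:1.2.3

# Retag it for the production registry (here: a made-up ECR repository)
docker tag ghcr.io/my-org/my-app:1.2.3 \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.2.3

# Log in to ECR and push; instances in the same region then pull with no transfer cost
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.2.3
```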
No hard feelings, just pointing out that the community needs more than corp speak to be convinced, especially when Microsoft is telling you "it's just better, ok".
I think you're wrong about tighter integration not being a selling point. Integration makes things more accessible to a wider audience, and when done well it can also increase productivity, because it makes moving data through a system broadly more reliable.
Really looking forward to trying this! When I tried using Actions to build a Docker image and deploy it to Kubernetes two months ago the entire experience felt rather clunky, especially when working with a cloud provider that's not one of the big three.
I don't see a drastic reduction in pull time after migrating from Docker Hub.
I have a 2 GB container image; pulling originally took 31-44 seconds. After migrating to ghcr.io it takes 38 seconds. The image is public and is being pulled by an Action in a private (org) repository.
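(For what it's worth, a rough way to reproduce this kind of comparison, with a placeholder image name, is just a cold pull under `time`:)

```
# Drop the local copy so the pull is cold, then time it (placeholder image name)
docker rmi ghcr.io/my-org/my-image:latest 2>/dev/null
time docker pull ghcr.io/my-org/my-image:latest
```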
Probably just GitHub Actions in general, although I'm not sure if it differentiates between self-hosted runners and hosted runners (which presumably run in the same Azure datacenter, where transfer is free or less than $0.01/GB).
Reminder that GitLab still has free private images, capped at 10 GB (as others have mentioned). It's interesting that, from a strategy point of view, GitHub thought capped free private images weren't worth pursuing, despite the obviously large stores of cash that MS has.
Nice to see the registry is free for public images, but does anyone know where I can see pricing for private images? Is it still the same as the Packages pricing [1]? Because 2 GB seems like incredibly little storage space when it comes to Docker images, right?
To me, the price to beat is the $0.04-$0.20 a month I pay for Google Container Registry (the price depending on whether I remember to go back and delete older images or not). I'm guessing it's not going to get to that level of commodity pricing, but I don't see this product being competitive without giving a heck of a lot more space for cheaper.
Good callout. The GitLab.com registry has a "soft cap" of 10 GB, which seems like plenty of room for most projects, as long as you don't require a long history.
We'll be revising our free storage model in the future to accommodate more open source usage. There are a lot of base layer images people use that are built on GitHub and should be freely available. Those base images should become part of the public space and you only pay for private storage built on top.
This is good to hear, and makes sense to me. I'll also admit that, now that I've thought it over, I'm doing a bit of apples-to-oranges comparison, given the $4/mo is not just "for 2 gigs of storage," since it also includes Actions minutes and I think a few other bonus features (private wikis?). At that point, it's more fair to compare it to what e.g. GitLab offers, than what you'd get from Azure Container Registry or similar.
I'm not sure the Container Registry on its own will necessarily be attractive to people just looking for commodity-priced container storage, but GH Actions + Container Registry does make for a pretty compelling CI/CD story, I have to admit.
> Container Registry is free for private images during the beta, and as part of GitHub Packages will follow the same pricing model when generally available.
There must be some fun intracorporate debates discussing how this is positioned relative to the Azure Container Registry, also offered by a division of Microsoft.
Brand alone is a fine differentiator. Some people are not going to buy the Microsoft container thing. Other people would only buy the Microsoft container thing. I think that's part of the reason MS wanted to own GitHub!
I doubt they really spoke to each other much. Large companies have two or three of everything going on because it's nigh-impossible for everyone to be aware of everyone else.
If they did talk, I expect it would mostly be to decide on segmentation.
So far, at least from this outside perspective, it looks like Azure, Azure DevOps, and GitHub are all talking to each other, and where there seems to be external redundancy there also seems to be internal reuse/recycling/consistency (some of it not always apparent). Codespaces, for instance, looks like it shares a lot of infrastructure no matter how you launch it (from GitHub or from Azure). GitHub Actions supposedly shares a core and a lot of code with Azure DevOps Pipelines.
Microsoft still seems to be quite coy about what the actual long-term plan is, but from at least some appearances there seems to be one in this case. (I've heard lots of rumors, but found no reputable sources, that Azure DevOps eventually gets eaten as a brand by the GitHub brand, with an eventual (auto-)migration. I still don't know if I can trust those rumors, but that seems like one of the better options to me.) In the short/medium term it seems like Microsoft is trying to dual-brand the same products to target different customer types.
Well, yes and no. GitHub is increasingly powered by Azure (GitHub Actions and Codespaces being obvious examples), but like I said, I've got the feeling that "Azure DevOps" is more likely to be retired as a brand/product line than GitHub is.
You can tell that Microsoft absolutely respects GitHub as a brand, and just based on raw public changelogs and available roadmaps there seems to be a lot more investment in labor and "spirit" in GitHub than in Azure DevOps right now.
Again, these are nothing more than rumors at this point. Yet there was always something of an impression that when Microsoft's bharry retired the lights might go out on Microsoft's TFS/VSO/Azure DevOps legacy, and the timing of the GitHub purchase fits the idea that GitHub is a full replacement.
I've even heard about how strongly Microsoft salespeople are trying to encourage on-premises server installs to move to GitHub Enterprise (and away from legacy TFS products).
Many signs seem to point to GitHub being the last product standing (powered by Azure).
gTLDs are safer than ccTLDs, especially the ccTLDs that were sold off to a private company like .io was. The contracts that provide the ccTLDs are almost empty of any guarantees whereas the gTLDs have to adhere to strict requirements.
It's not immediately clear from their documentation, but do they enforce a naming scheme on images pushed to the registry?
This is an annoyance with the GitLab Container Registry, which mandates that the repository part of the image name be three components or fewer, one of which must be the GitLab project name. So your images have to be named like namespace/project/something:tag, which may not be what you want.
Hey there! Engineer on the GitHub Container Registry here.
Our only namespacing enforcement is that packages must be published under a user or organization namespace for which you have write permissions. For example, I'd be able to publish under `ghcr.io/taywrobel/redis`, or under `ghcr.io/github/redis`, but would be disallowed from publishing as `ghcr.io/saxonww/redis`.
Other than that restriction, you can have nested image namespaces with arbitrarily many segments; i.e. `ghcr.io/taywrobel/dev/redis` would be valid.
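In docker CLI terms, that works out to roughly the following (a sketch; the tags are made up):

```
# Allowed: a namespace I have write permissions for
docker tag redis:6 ghcr.io/taywrobel/redis:6
docker push ghcr.io/taywrobel/redis:6

# Also allowed: nested namespaces with arbitrarily many segments
docker tag redis:6 ghcr.io/taywrobel/dev/redis:6
docker push ghcr.io/taywrobel/dev/redis:6

# Disallowed: publishing into a namespace I don't have write permissions for
docker push ghcr.io/saxonww/redis:6   # rejected with a permissions error
```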
Having arbitrarily many segments is an improvement. In particular, we have some tooling that depends on a calculated image name, and we have to pre-pull and rename images locally if we store those in GitLab's registry.
I think for an Enterprise customer being able to push images that don't match the user/org pattern would be good too. While you couldn't allow just anyone to push e.g. 'ghcr.io/docker/docker:stable', I think this is a valid want for an internal/self-managed registry.
That restriction is something we will be moving away from in the coming months. I like how GitHub allows you to publish an image under a namespace. We have an open issue for that: https://gitlab.com/gitlab-org/gitlab/-/issues/241027.
I wonder how long this has been brewing and waiting for the right moment. It's hard to not contrast this with Docker's recent changes to their container registry [0] (discussion [1]).
I've been in the private beta for a couple of weeks, and the service is looking great. My issue was that `containerd` couldn't pull from GitHub Packages due to some OCI vs. Docker differences. The new incarnation fixes this :)
This should give Docker Hub a run for their money. I recently received an email from Docker saying my free access to their repository is now rate limited, which would include public open source images.
The new container registry should be OCI compliant, and so should work with any OCI-compatible tooling, including docker, containerd, ORAS, and hopefully more tools yet to come.
If you hit any compatibility issues, especially any with regards to the OCI spec, please let us know!
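For example, the same public image should be pullable with either client (a sketch; the image name is a placeholder):

```
# docker
docker pull ghcr.io/my-org/my-image:latest

# containerd's ctr client (requires a fully-qualified reference)
sudo ctr images pull ghcr.io/my-org/my-image:latest
```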
Is this another GitHub thing with only user-scoped access tokens rather than project scoped? I really don't like that and I'm surprised people are seemingly fine with it.
It would be a valid security pattern if it was created under the org scope, but it isn't.
A "service account" on GitHub is just another user account tied to a real user with that users MFA (if MFA is enabled, and since we're referring to valid security patterns, it should be).
So will pulling images from GitHub Container Registry via GitHub Actions within an organization count towards data transfer costs?
Today we are pulling from one repo's packages to another repo via a token when dealing with shared base images. Clunky, but it also counts as a data transfer charge, I believe (maybe not if using a GitHub token).
I'm not sure the account billing page clears things up for me:
> All data transferred out, when triggered by GitHub Actions, and data transferred in from any source is free. We determine you are downloading packages using GitHub Actions when you log in to GitHub Packages using a GITHUB_TOKEN.
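If I'm reading that right, it means a workflow step that logs in with the job's token before pulling, something like this sketch (assuming GITHUB_TOKEN has been exposed to the step as an environment variable; the image name is made up):

```
# Inside a GitHub Actions job: log in to GHCR with the job's GITHUB_TOKEN so the
# transfer is attributed to Actions (and, per the billing page, free)
echo "${GITHUB_TOKEN}" | docker login ghcr.io -u "${GITHUB_ACTOR}" --password-stdin
docker pull ghcr.io/my-org/shared-base-image:latest
```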
Hope this works, but our experience with the GitHub registry has been awful so far. We've spent >10 hours trying to get the GitHub NuGet registry to work, without success. We're still using Gemfury. You'd expect that GitHub, Microsoft, and .NET would play nice together, but no.
I hope you'll give this a try. GHCR is based on an all-new code stack. You'll notice that you now associate containers with the repositories they come from instead of publishing "into repositories". Our new model has packages/containers published into orgs, which will make other services simpler as we elevate them as well.
We're periodically retrying things (we've had 3 attempts with NuGet so far).
So does this include a redo of / incorporate the package registry? It would make sense to treat them uniformly.
I also had, and still have, a bad experience with NuGet and GPR. I get packages rejected due to auth errors (but not consistently) when building using Actions. Authentication is always a problem with NuGet (GitHub, GitLab). IMHO it's not only a problem with the registry but also with the client codebase.
During the beta there will be no charges for GHCR usage.
After we go GA, we'll be charging for storage and data transfer on private images only, the same as the current registry offerings. If your popular image is public, you'll incur no bandwidth charges, regardless of how often it's pulled.
Been playing around with this for the past hour. Pushed a couple images. But I can't seem to delete them. Under "Edit packages", there are only the options to "View all versions" and "Edit description". But nothing to manage or delete packages.
There shouldn't be any signing up needed, as it's an open beta.
You should already see `Container` listed as a package type in the UI, and can begin publishing packages by creating a personal access token with `read:packages` and `write:packages` scopes as appropriate - https://docs.github.com/en/github/authenticating-to-github/c...
You don't need to sign up. It's open to everyone with a GitHub account. Also, you don't need to use Actions; you can just push a container. Here's a doc that shows how: https://docs.github.com/en/packages/getting-started-with-git...
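For reference, the basic flow with a personal access token looks something like this (a sketch; USERNAME, OWNER, and the image name are placeholders):

```
# Log in with a personal access token that has the read:packages / write:packages scopes
echo "${CR_PAT}" | docker login ghcr.io -u USERNAME --password-stdin

# Tag an existing local image for GHCR and push it
docker tag my-image:latest ghcr.io/OWNER/my-image:latest
docker push ghcr.io/OWNER/my-image:latest
```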
I see some replies from GH employees. Do you intend to support the /v1/search or /v2/_catalog APIs? It would be nice to have access to at least a public image listing like Docker Hub's.
Yup, we intend to support them, or at least the v2 catalog API. Given the current state and forward progression of most tooling, as well as the compatibility of Docker v2 with OCI v1, there didn't seem to be much value in implementing the legacy Docker APIs.
The catalog functionality is just more complex given the granularity of our permission model (configurable down to the package level), so it unfortunately didn’t make the cut for the beta feature list.
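For context, the v2 catalog API mentioned above is the standard listing endpoint from the Distribution spec, typically queried like this (host and token are placeholders; as noted, it isn't available on GHCR yet):

```
# Docker Registry HTTP API v2 catalog listing (paginated with ?n= and ?last=)
curl -H "Authorization: Bearer ${TOKEN}" \
  "https://registry.example.com/v2/_catalog?n=100"
```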
I don't get the GitHub Container Registry, GitHub Package repository, or GitHub Actions. The only good thing about these products seems to be the integration with GitHub. But in many ways they are inferior to other CI and artifact hosting options. The biggest problem is that the latency will be significant outside of your own network. The other big issue is security. That's fine, they are new products... except they don't seem to be moving in a direction that would fix these problems. I think maybe a lot of programmers don't realize what they're missing; they're just using them because they're from GitHub.
More often than not, people want build artifacts in the same security profile / performance profile as their CI/CD pipelines. Think of this more as services to support your GitHub Action workflows - where inter-service bandwidth is free and highly performant.
Or the new competition will force Docker Inc to invest in Dockerhub features again.
Including the free tier, if they want to keep their "world's largest library and community for container images" status. But free tiers cost money.
With RedHat going after Docker itself with their daemonless version of Docker, podman, and docker-ce's lack of cgroupsv2 support, Docker Enterprise having to compete with K8S, and there being multiple entrenched container registry companies (e.g. Quay), it must be tough at Docker Inc.
> With RedHat going after Docker itself with their daemonless version of Docker, podman
Nobody cares about Podman except Red Hat and their most loyal fans. I'd be surprised if podman has managed to steal more than 1% of Docker's install base.
> docker-ce's lack of cgroupsv2 support
containerd supports cgroupsv2. Docker CE is based on containerd. If the current release doesn't have it, the next release will.
> Docker Enterprise having to compete with K8S
Docker Enterprise includes a K8S distribution. Does it compete with itself?
> Nobody cares about Podman except Red Hat and their most loyal fans. I'd be surprised if podman has managed to steal more than 1% of Docker's install base.
This seems dismissive. Podman has many neat features that Docker doesn't have (pods) or added only later (rootless containers).
I'm not dismissing how good it is, just how popular it is. Definitely not popular enough to threaten Docker's adoption, not even close. Docker does have problems, but Podman is not one of them.
Docker Hub still has probably the majority of public images. Quay has a ton of RH and enterprise-specific public images, but I see Hub way more often still.
Additionally, being the default host (so shorter image name specifications) for most clients gives it a little bit of a leg up.
GitLab has had this for years and Docker Hub has remained. Additionally, Docker Hub has done a great job getting projects to use them for their official images:
Things changed a little a few days ago. They sent an email telling their users about new policies, including a "new inactive image retention policy", saying "For free accounts, Docker offers free retention of images inactive for six months" (see link 1).
Not too big of a deal, but I'd imagine many users will take that into account when they're making choices.