GitHub Container Registry (github.blog)
345 points by todsacerdoti on Sept 1, 2020 | hide | past | favorite | 88 comments


Seems extremely expensive, especially the data transfer part. A quick calculation: at $0.50/GB we would end up spending tens of thousands of dollars on data transfer alone, vs $0 with AWS ECR (transfer within a region is free).
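A rough sketch of that back-of-envelope math (the 500 GB/day pull volume is a made-up number, purely to show the scale):

```shell
# Hypothetical: a fleet pulling 500 GB/day of images at GitHub's $0.50/GB egress rate
gb_per_day=500
cents_per_gb=50
echo "monthly egress: \$$(( gb_per_day * 30 * cents_per_gb / 100 ))"  # prints "monthly egress: $7500"
```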

Guess this would work out okay for storing some development images but definitely does not seem feasible for production use.

Unless I'm missing something? There's some fine print about transfer being free within the context of a GitHub Action, but it isn't really clear. Does my fleet of EC2 instances pulling down the image after a new deployment count as within an "action"?


Data transfer between GHCR and Actions is free. We're building out a much tighter integration between your source code and the artifacts created from it. This will create a stronger supply-chain link and work toward better reproducibility.

We recommend using GHCR for the development and test workflows and then publishing to the "Cloud *CRs" for your production images such that they can be pulled directly from there.


Why do you recommend that?

Because that's how you get paid or is there actually a value add here I'm not seeing?

"Tighter integration" and stronger vendor lockin isn't actually a selling point for anyone but the seller.

I'm not sure I buy the "better reproducibility" line either; what's better here?


Well, if nothing else, it means you'll have extremely quick connectivity between your GitHub Action Runner and the registry :)

As mentioned, this is best (for now) as a dev/test tool, rather than a general container registry.

Disclosure: I work at Azure.


Ok, and that justifies the cost somehow?

Disclosure: that was obvious :)

No hard feelings, just pointing out the community needs more than corp speak to be convinced, especially when Microsoft is telling you "it's just better, ok".


I think you’re wrong about tighter integration not being a selling point. Integration makes things more accessible to a wider audience, and done well it can increase productivity too, because it makes moving data through a system broadly more reliable.


That's fair; if you want tight integration with Azure and GitHub, I can see how that would be a good thing.

The problem is, tighter coupling still comes with costs that some people aren't willing to pay.


Yep. Agreed on all points.


Probably because it relieves you of running a basic CI yourself: GH Actions will build an image and materialize it for you on GH CR. Guess that's the main advantage.


Really looking forward to trying this! When I tried using Actions to build a Docker image and deploy it to Kubernetes two months ago the entire experience felt rather clunky, especially when working with a cloud provider that's not one of the big three.


I don't see a drastic reduction in pull time after migrating from DockerHub.

I have a 2GB container image; pulling originally took 31-44 sec. After migrating to ghcr.io, it took 38 sec. The image is public and is pulled from an Action running in a private (org) repository.

How much faster should we expect?


Is it also free if a self-hosted GH Actions runner pulls from the GH container registry?


I'm guessing they mean it's free for Github Actions that run in a Docker container: https://docs.github.com/en/actions/creating-actions/creating...


Probably just GitHub Actions in general, although I'm not sure whether it differentiates between self-hosted runners and hosted runners (which presumably run in the same Azure datacenter, where transfer is free or less than $0.01/GB).


Where do you see that? I can't find data transfer pricing anywhere.

Edit: Found it. https://github.com/features/packages#pricing


Reminder that GitLab still has free private images, capped @ 10GB (as others have mentioned). It's interesting that, from a strategy point of view, GitHub decided capped free private images weren't worth pursuing, despite the obviously large stores of cash that MS has.


Nice to see the registry is free for public images, but does anyone know where I can see pricing for private images? Is it still the same as the packages pricing[1]? Because 2 gigs seems incredibly low storage space when it comes to Docker images, right?

To me, the price to beat is the $0.04-$0.20 a month I pay for Google Container Registry (price depending on whether I remember to go back and delete older images or not). I'm guessing it's not going to get to that level of commodity priced, but I don't see this product being competitive without giving a heck of a lot more space for cheaper.

[1] https://github.com/features/packages#pricing


> Nice to see the registry is free for public images, but does anyone know where I can see pricing for private images?

This might be a good time to remind people that GitLab has offered free container registries for all projects for a few years now.


Good callout. The GitLab.com registry has a "soft cap" of 10 GB, which seems like plenty of room for most projects, as long as you don't require a long history.


Thanks for mentioning us. If someone is looking for the docs, they are at https://docs.gitlab.com/ee/user/packages/container_registry/


I use it with gitlab auto devops and the whole experience with my k8s cluster on DO is super nice. I very much recommend trying out gitlab.


That’s the cap for the entire repo from what I can tell, and is the equivalent of $2.50/mo from GitHub.

Not really a meaningful difference if I’m reading that correctly.


To me it means that GitLab offers a simpler solution that doesn't require invoices and tracking credit cards.

If we add GitLab's superb CI/CD, which runs circles around the mess that is GitHub Actions, then it's thumbs up all around.


We'll be revising our free storage model in the future to accommodate more open source usage. There are a lot of base layer images people use that are built on GitHub and should be freely available. Those base images should become part of the public space and you only pay for private storage built on top.


This is good to hear, and makes sense to me. I'll also admit that, now that I've thought it over, I'm doing a bit of apples-to-oranges comparison, given the $4/mo is not just "for 2 gigs of storage," since it also includes Actions minutes and I think a few other bonus features (private wikis?). At that point, it's more fair to compare it to what e.g. GitLab offers, than what you'd get from Azure Container Registry or similar.

I'm not sure the Container Registry on its own will necessarily be attractive to people just looking for commodity-priced container storage, but GH Actions + Container Registry does make for a pretty compelling CI/CD story, I have to admit.


It seems like it will be the same as packages:

> Container Registry is free for private images during the beta, and as part of GitHub Packages will follow the same pricing model when generally available.


There must be some fun intracorporate debates discussing how this is positioned relative to the Azure Container Registry, also offered by a division of Microsoft.

https://azure.microsoft.com/en-us/services/container-registr...


Brand is a fine differentiator. Some people are not going to buy the Microsoft container thing; other people would only buy the Microsoft container thing. I think that's part of the reason MS wanted to own GitHub!


They don't hate GitHub even though it's owned by MS.


I doubt they really spoke to each other much. Large companies have two or three of everything going on because it's nigh-impossible for everyone to be aware of everyone else.

If they did talk, I expect it would mostly be to decide on segmentation.


So far, at least from this outside perspective, Azure, Azure DevOps, and GitHub all seem to be talking to each other, and where there seems to be external redundancy there also seems to be internal reuse/recycling/consistency (not all of it apparent). Codespaces, for instance, looks like it shares a lot of infrastructure no matter how you launch it (from GitHub or from Azure). GitHub Actions supposedly shares a core and a lot of code with Azure DevOps Pipelines.

Microsoft still seems to be quite coy about the actual long-term plan, but from at least some appearances there seems to be one in this case. (I've heard lots of rumors, but no easy-to-find reputable sources, that Azure DevOps eventually gets eaten as a brand by the GitHub brand, with an eventual (auto-)migration. I still don't know if I can trust those rumors, but that seems like one of the better options to me.) In the short/medium term, it seems like Microsoft is trying to dual-brand the same products to target different customer types.


I feel like the long term plan is obvious: all roads lead to Azure.


Well, yes and no. GitHub is increasingly powered by Azure (GitHub Actions and Codespaces being obvious examples), but like I said, I've got the feeling that "Azure DevOps" is more likely to be retired as a brand/product line than GitHub is.

You can tell that Microsoft absolutely respects GitHub as a brand, and just based on raw public changelogs and available roadmaps there seems to be a lot more investment of labor and "spirit" in GitHub than in Azure DevOps right now.

Again, these are nothing more than rumors at this point. Yet there was always something of an impression that when Microsoft's bharry retired, the lights might go out on Microsoft's TFS/VSO/Azure DevOps legacy, and the timing of the GitHub purchase coincides with the idea that GitHub is a full replacement.

I've even heard some of how strongly Microsoft sales people are trying to encourage on-premises server installs to move to GitHub Enterprise (and away from legacy TFS products).

Many signs seem to point to GitHub being the last product standing (powered by Azure).


They're winning on both sides now no matter which option you go for.


What does everyone think about the trend of using .io for registries? I've always considered gTLDs to be safer than ccTLDs.

Given the choice of something like llll.dev or llll.io for a private registry, would one be preferable?


draw.io switched away from io because of security concerns: https://www.diagrams.net/blog/move-diagrams-net


[flagged]


I'm sad that you quoted that sentence without linking to the grim story that it referenced, which I hadn't heard before: https://gigaom.com/2014/06/30/the-dark-side-of-io-how-the-u-...


gTLDs are safer than ccTLDs, especially ccTLDs that were sold off to a private company like .io was. The contracts that govern ccTLDs are almost empty of any guarantees, whereas gTLDs have to adhere to strict requirements.


I guess that ignores US jurisdiction, right? Most gTLDs are operated in US (or, generally speaking, foreign) legal space.

Going for your own (solid) ccTLD should be the safest for you.


But if you want to provide a service worldwide, you won't want to use a ccTLD like .fr or .mx, because it makes it look like a domestic service.


It's not immediately clear from their documentation, but do they enforce a naming scheme on images pushed to the registry?

This is an annoyance with the GitLab Container Registry, which mandates that the repository part of the image name be 3 components or less, one of which must be the GitLab project name. So your images have to be named like namespace/project/something:tag, which may not be what you want.


Hey there! Engineer on the GitHub Container Registry here.

Our only namespacing enforcement is that packages must be published under a user or organization namespace for which you have write permissions. For example, I'd be able to publish under `ghcr.io/taywrobel/redis`, or under `ghcr.io/github/redis`, but would be disallowed from publishing as `ghcr.io/saxonww/redis`.

Other than that restriction, you can have nested image namespaces with arbitrarily many segments; i.e. `ghcr.io/taywrobel/dev/redis` would be valid.
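With the docker CLI, the nested case above would look something like this (image names are just the placeholders from this comment; assumes you have already run `docker login ghcr.io` with a token that has `write:packages`):

```shell
# Tag a local image into a nested GHCR namespace, then push it
docker tag redis:latest ghcr.io/taywrobel/dev/redis:latest
docker push ghcr.io/taywrobel/dev/redis:latest
```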


Having arbitrarily many segments is an improvement. In particular, we have some tooling that depends on a calculated image name, and we have to pre-pull and rename images locally if we store those in GitLab's registry.

I think for an Enterprise customer being able to push images that don't match the user/org pattern would be good too. While you couldn't allow just anyone to push e.g. 'ghcr.io/docker/docker:stable', I think this is a valid want for an internal/self-managed registry.


PM here for the GitLab Container Registry.

That restriction is something we will be moving away from in the coming months. I like how GitHub allows you to publish an image under a namespace. We have an open issue for that: https://gitlab.com/gitlab-org/gitlab/-/issues/241027.


From my use of the registry on GitHub.com over the past half year or so, that seemed to be the case.


I wonder how long this has been brewing and waiting for the right moment. It's hard to not contrast this with Docker's recent changes to their container registry [0] (discussion [1]).

[0] https://www.docker.com/pricing/resource-consumption-updates

[1] https://news.ycombinator.com/item?id=24143588


I've been in the private beta for a couple weeks, and the service is looking great. My issue was that `containerd` couldn't pull from GitHub Packages due to some OCI vs. Docker differences. The new incarnation fixes this :)


This should give dockerhub a run for their money. I recently received an email from Docker saying my free access to their repository is now rate limited, which would include public open source images


There was a bit of a show-stopper in GitHub's registry when used with containerd, but that appears to have been fixed now: https://github.com/containerd/containerd/issues/3291


The new container registry should be OCI compliant, and so should work with any OCI-compatible tooling, including docker, containerd, ORAS, and hopefully more tools yet to come.

If you hit any compatibility issues, especially any with regards to the OCI spec, please let us know!


Is this another GitHub thing with only user-scoped access tokens rather than project scoped? I really don't like that and I'm surprised people are seemingly fine with it.


Service accounts are a valid security pattern. What does "project scoped" mean?


I mean, let me limit a token's access to only a certain repository or subset of images (or whatever the service is), rather than anything in my account.

Separate accounts are a massive pain to manage by comparison.


It would be a valid security pattern if it were created under the org scope, but it isn't.

A "service account" on GitHub is just another user account tied to a real user, with that user's MFA (if MFA is enabled; and since we're referring to valid security patterns, it should be).

GitHub's organizational features are poor.


You can use Skopeo[1] to easily migrate your images from one registry to another. Here it is from Package Registry to Container Registry.

A personal access token with `write:packages` and `read:packages` scopes is enough.

  skopeo copy \
      --src-creds <USER>:<ACCESS_TOKEN> \
      --dest-creds <USER>:<ACCESS_TOKEN> \
      docker://docker.pkg.github.com/<USER>/<REPO>/<IMG>:<VER> \
      docker://ghcr.io/<USER>/<IMG>:<VER>
[1] https://github.com/containers/skopeo


Is there a way for you to authenticate WITHOUT using a PAT? It appears that this only works if you create a PAT and use your user as the login.

I would have hoped that the GitHub Action token could login given that the permissions for a GHA token already have the ability to read/write to packages: https://docs.github.com/en/actions/configuring-and-managing-...

If I use an app token that's derived from a GitHub App installation, what user name do I use?


With the GitLab Container Registry you can authenticate using your PAT, Job token or a Deploy token.

https://docs.gitlab.com/ee/user/packages/container_registry/...


So will pulling images from the GitHub Container Registry via GitHub Actions within an organization count toward data transfer costs?

Today we pull from one repo's packages to another repo via a token when dealing with shared base images. Clunky, but it also counts as a data transfer charge, I believe (maybe not if using a GitHub token).

I'm not sure the account billing page clears things up for me:

https://docs.github.com/en/github/setting-up-and-managing-bi...


> All data transferred out, when triggered by GitHub Actions, and data transferred in from any source is free. We determine you are downloading packages using GitHub Actions when you log in to GitHub Packages using a GITHUB_TOKEN.

Should be free I think?
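Concretely, per the quoted billing docs, the GITHUB_TOKEN log-in from inside a workflow step would look something like this (a sketch; assumes docker is available on the runner, and the image reference is a placeholder):

```shell
# Authenticate to GHCR with the automatic Actions token,
# so subsequent pulls are counted as Actions traffic
echo "${GITHUB_TOKEN}" | docker login ghcr.io -u "${GITHUB_ACTOR}" --password-stdin
docker pull ghcr.io/<OWNER>/<IMG>:<VER>
```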


Bandwidth between all GitHub Packages services (GHCR included) is free.


Hope this works, but our experience with the GitHub registry has been awful so far. We've spent >10h trying to get the GitHub NuGet registry to work, without success. We're still using Gemfury. You'd expect that GitHub, Microsoft, and .NET would play nice, but no.


Apologies for that. :(

I hope you'll give this a try. GHCR is built on an all-new code stack. You'll notice that you now associate containers with the repositories they come from, instead of publishing "into repositories". Our new model has packages/containers published into orgs, which will make other services simpler as we elevate them as well.


We're periodically retrying things (we've had 3 attempts with NuGet). So does this include a redo of / incorporate the package registry? It would make sense to treat them uniformly.


I also had, and still have, a bad experience with NuGet and GPR. I get packages rejected due to auth (but not consistently) when building using Actions. Authentication is always a problem with NuGet (GitHub, GitLab). IMHO it's not only a problem of the registry but also of the client codebase.


Does this mean that vendors, or the creators of images pay for their users downloading them?

I.e. if I release a popular image, and it's pulled constantly, I'll end up paying for that popularity in my GitHub bill?

Pro - Data transfer out outside of Actions - 10GB limit, then $0.50 per GB

https://github.com/features/packages#pricing


During the beta there will be no charges for GHCR usage.

After we go GA, we'll be charging for storage and data transfer on private images only, the same as the current registry offerings. If your popular image is public, you'll incur no bandwidth charges, regardless of how often it's pulled.


Thank you. So "free" does mean free for public images.


Been playing around with this for the past hour. Pushed a couple images. But I can't seem to delete them. Under "Edit packages", there are only the options to "View all versions" and "Edit description". But nothing to manage or delete packages.

Anyone successfully deleted a container image?


Can someone help me read more better? I don't see how to actually sign up for the public beta of this in the article. Am I missing something?

Edit: Okay, so it appears you just have to push it with a GitHub Action. I'll give that a try.


There shouldn't be any signing up needed, as it's an open beta.

You should already see `Container` listed as a package type in the UI, and can begin publishing packages by creating a personal access token with `read:packages` and `write:packages` scopes as appropriate - https://docs.github.com/en/github/authenticating-to-github/c...
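A sketch of that end-to-end flow (the token variable and image names are placeholders, following the same convention as the skopeo example elsewhere in the thread):

```shell
# Log in with a personal access token that has read:packages / write:packages
echo "$CR_PAT" | docker login ghcr.io -u <USER> --password-stdin

# Tag a locally built image under your user namespace and publish it
docker tag my-app:latest ghcr.io/<USER>/my-app:latest
docker push ghcr.io/<USER>/my-app:latest
```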


You don’t need to sign up. It’s open to everyone with a GitHub Account. Also, you don’t need to use Actions. You can just push a container. Here’s a doc that shows you how to push a container. https://docs.github.com/en/packages/getting-started-with-git...


I see some replies from GH employees. Do you intend to support /v1/search or /v2/_catalog APIs? Would be nice to have access to at least public image listing like Docker Hub's.


Yup, we intend to support them, or at least the v2 catalog API. Given the current state and forward progression of most tooling, as well as the compatibility of docker v2 with OCI v1, there didn’t seem to be much value in implementing the legacy Docker APIs.

The catalog functionality is just more complex given the granularity of our permission model (configurable down to the package level), so it unfortunately didn’t make the cut for the beta feature list.


I don't get the GitHub Container Registry, GitHub Package repository, or GitHub Actions. The only good thing about these products seems to be integration with GitHub. But in many ways they are inferior to other CI and artifact hosting options. The biggest problem is the latency will be significant outside of your own network. The other big issue is security. That's fine, they are new products... except they don't seem to be moving in a direction that would fix these problems. I think maybe a lot of programmers don't realize what they're missing, they're just using them because they're from GitHub.


More often than not, people want build artifacts in the same security profile / performance profile as their CI/CD pipelines. Think of this more as services to support your GitHub Action workflows - where inter-service bandwidth is free and highly performant.

Disclosure: I work at Azure.


This will make my life easier, thanks. Being able to delegate publication to projects instead of setting up yet another silo is great!


I called this a month ago. There is no reason why Microsoft (GitHub) would pass up an opportunity to create their own registry.


Goodbye, Docker Hub.


RIP Dockerhub


Or the new competition will force Docker Inc to invest in Dockerhub features again.

Including the free tier, if they want to keep their "world's largest library and community for container images" status. But free tiers cost money.

With RedHat going after Docker itself with their daemonless version of Docker, podman; docker-ce's lack of cgroupsv2 support; Docker Enterprise having to compete with K8S; and there being multiple entrenched container registry companies (eg Quay), it must be tough at Docker Inc.


> With RedHat going after Docker itself with their daemonless version of Docker, podman

Nobody cares about Podman except Red Hat and their most loyal fans. I'd be surprised if podman has managed to steal more than 1% of Docker's install base.

> docker-ce's lack of cgroupsv2 support

containerd supports cgroupsv2. Docker CE is based on containerd. If the current release doesn't have it, the next release will.

> Docker Enterprise having to compete with K8S

Docker Enterprise includes a K8S distribution. Does it compete with itself?

> multiple entrenched container registry companies (eg Quay)

Quay is not a company. It is a CoreOS (now Red Hat) product.

> it must be tough at Docker Inc.

I believe that's true. Just not for any of the reasons you have given.


> Nobody cares about Podman except Red Hat and their most loyal fans. I'd be surprised if podman has managed to steal more than 1% of Docker's install base.

This seems dismissive. Podman has many neat features that Docker lacked (pods) or only added later (rootless containers).

https://developers.redhat.com/blog/2019/01/29/podman-kuberne...

https://developers.redhat.com/blog/2019/01/15/podman-managin...

Obviously, you can use kompose and docker-compose, but I would argue podman has the better experience.


I'm not dismissing how good it is, just how popular it is. Definitely not popular enough to threaten Docker's adoption, not even close. Docker does have problems, but Podman is not one of them.


Docker Hub still has probably the majority of public images. Quay has a ton of RH and enterprise-specific public images, but I see Hub way more often still.

Additionally, being the default host (so shorter image name specifications) for most clients gives it a little bit of a leg up.


GitLab has had this for years and Docker Hub has remained. Additionally, Docker Hub has done a great job getting projects to use them for their official images:

https://hub.docker.com/search?q=&type=image&image_filter=off...

Until GitHub is able to get as many official images sponsored by the open source projects themselves, DockerHub will not be going away.


Things changed a little a few days ago. They sent an email telling users about their new policies, including a "new inactive image retention policy", saying "For free accounts, Docker offers free retention of images inactive for six months" (see link 1).

Not too big of a deal, but I'd imagine many users will take that into account when making choices.

Link 1: https://www.docker.com/blog/scaling-dockers-business-to-serv...


Isn't dockerhub much cheaper (for private images)?


Since MSFT bought GitHub, the site has launched a bunch of confusing products. I've lost track of what all these things do now.

What are GH Packages?



