
> Seems like they built something that everyone uses as an integral part of their workflow, and they're looking to get paid for it, which, I may be out of date here, was the entire ethos of this entire site for quite some time

1. Not every widely-used tool needs to be a VC-powered unicorn startup. I'd be pissed if Linus/the Linux Foundation started to demand a per-core licensing fee. I'd probably convince my organization to switch to a BSD, on principle.

2. No one likes a bait-and-switch, and lately, lots of companies see Free/Open Source Software as a "growth hack technique" rather than an actual philosophy, because they'd otherwise face headwinds with a closed-source product. This is akin to the Underpants Gnomes strategy:

  1. Author Open Source product
  2. Get wide adoption
  3. ???
  4. Profit


The problem with this is that if Docker Inc goes under, you can say goodbye to Docker Hub: https://hub.docker.com/

Sure, there are alternative registries, and for your own needs you can use anything from Sonatype Nexus, JFrog Artifactory, or GitLab Registry to any of the cloud-based ones, but Hub disappearing would be 100 times worse than the left-pad incident in the npm world.

Thus, whenever Docker Inc releases a new statement about some paid service that may or may not get more money from large corporations, I force myself to be cautiously optimistic, knowing that the community of hackers will pick up the slack and work around those tools on a more personal scale (e.g. Rancher Desktop vs Docker Desktop). That said, it might just be Stockholm syndrome of sorts, but can you imagine the fallout if Hub disappeared?

Of course, you should never trust any large corporation unless you have the source code and can build the app yourself. For example, Caddy v1 (a web server) was essentially abandoned with no support, so the few people still using it had to build their own releases and fix bugs themselves, which was only possible because of source code availability, before eventually migrating to v2 or something else.

Therefore, it makes sense to always treat external dependencies, be they services, libraries, or even tools, as if they're hostile. Of course, you don't always have the resources to do that in depth, but it's encouraging to see, for example, that VS Code is not the only option: we also have VSCodium (https://vscodium.com/).


Docker Hub going down would be a disaster for sure, but I consider "pull image/library from a 3rd-party hub over the internet on every build" to be an anti-pattern (which is considerably worse with npm than with Docker). That said, if this is where the value is being provided, perhaps they ought to charge for this service? I guess it's difficult because it's easily commoditized.

> but can you imagine the fallout if Hub disappeared?

I wish that would actually happen - not forever, but if it went down for a day or two with no ETA for a fix, the thousands of failed builds/deploys would force organizations to rethink their processes.

I think Go's approach to libraries is the way forward - effectively having a caching proxy that you control. I know apt (the package manager) also supports a similar caching scheme.
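As a rough sketch of what that looks like in practice (the internal hostnames here are made up; Athens is just one self-hostable Go proxy, and apt-cacher-ng is one option on the apt side):

  # Point the Go toolchain at a module proxy you control,
  # falling back to direct fetches if the proxy misses:
  go env -w GOPROXY=https://goproxy.internal,direct

  # apt clients can route through a cache like apt-cacher-ng
  # via a drop-in under /etc/apt/apt.conf.d/:
  #   Acquire::http::Proxy "http://apt-cache.internal:3142";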


That sort of happened already when Docker started rate limiting by incoming IP:

https://www.docker.com/increase-rate-limits

Large orgs started hitting the rate limits since many devs were coming from the same IP. Most places probably put in a proxy that caches to a local registry.
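For anyone setting that up, a minimal sketch using the open-source registry image as a pull-through cache (the container name and port are illustrative):

  # Run a local registry that mirrors Docker Hub and caches
  # every image that passes through it:
  docker run -d --restart=always --name hub-mirror -p 5000:5000 \
    -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
    registry:2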


That's what we did, put a proxy in front that caches everything. Now that Docker Desktop requires licensing, we're going down the road of getting everyone under a paid account.

I'm sure Rancher is great for personal desktop use, but there's no reason large companies can't pay for Docker.


Or even small. At work, I advised that we just pay for Docker Desktop. We got it for free for a long time; our reason for not paying before was that we're an Artifactory shop, so their Docker Enterprise offering wasn't really attractive to us. But we're easily getting $5/dev/mo worth of value out of Docker Desktop.

And I don't really see this as an open source bait and switch, either. Parts of Docker are open source but Docker Desktop was merely freeware.

That said, I believe in healthy competition, and so it was quite worrisome to me that Docker Desktop seemed to be the only legitimate game in town when it came to bringing containerization with decent UX and cross-platform compatibility to non-Linux development workstations. So I'm happy to see Rancher Desktop arrive on the scene, and very much hope to see the project gain traction. Even if we stay with Docker, they desperately need some legitimate competition on this front in order to be healthy.


> but can you imagine the fallout if Hub disappeared?

> I wish that would actually happen - not forever - if it'd go down for a day or 2 with no ETA for a fix

Do people not run their own private registry with proxying enabled? If Docker Hub went down at this point, I think my company would be fine for _months_. The only time we need to hit Hub is when our private registry doesn't have the image yet.
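For reference, the client side of that setup is tiny - a daemon-level mirror setting (the path is the Linux default; the mirror URL is illustrative):

  # /etc/docker/daemon.json - pulls check the mirror first
  # and fall back to Docker Hub on a miss:
  {
    "registry-mirrors": ["https://registry.internal:5000"]
  }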


It just never seemed worth the effort when we are paying Docker Hub to be our private registry.


The problem is that most of the companies that rely on the Hub aren’t helping it stay afloat.

You are obviously not part of this problem.


You can already cache Docker Hub via the registry container very easily. In fact, given the number of builds, it would be foolish not to do this, to avoid GBs of downloads all the time.


> Hub disappearing would be a 100 times worse than the left pad incident in the npm world

This is really overdramatic. If Docker Inc. went out of business and Docker Hub was shut down, the void would be filled very quickly. Many cloud providers would step in with new registries. Also, swapping in a new registry for your base images is really easy. Not to mention the tons of lead time you'd get before Docker Hub went down to swap them. Maybe they'd even fix https://github.com/moby/moby/issues/33069 on their way out, so we could just swap out the default registry in the config and be done with it.
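To illustrate: until that issue is fixed, "swapping" means touching every image reference, since the default registry is baked in (the mirror host below is hypothetical):

  # Before: implicitly pulls from Docker Hub
  FROM alpine:3.18

  # After: explicit registry prefix on every base image reference
  FROM mirror.internal:5000/library/alpine:3.18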


> Also, swapping in a new registry for your base images is really easy.

This is the exact problem! Sure, MySQL, PHP, JDK, Alpine and other images would probably be made available, but what about the other images you might rely on, whose developers might simply no longer care about them or might not have the free time to reupload them somewhere new?

Sure, you should be able to build your own from source and maintain them, but in practice there are plenty of cases where non-public-facing tools don't need updates and are good for the one thing you use them for. Not everyone has the time or resources to familiarize themselves with the inner workings of everything in their stack, especially when they have other circumstances to deal with, like business goals to meet.

In part, that's why I suggest that everyone get a copy of JFrog Artifactory or a similar solution and use it as a caching proxy in front of Docker Hub or any other registry. That's what you should be doing in the first place anyway, to avoid the Docker Hub rate limits and to speed up your builds instead of downloading everything from the internet every time.

Otherwise it's like saying that if your Google cloud storage account gets banned, you can just use Microsoft's offering, when the real problem is the data that was lost - everything from your Master's thesis to pictures of you and your parents. Perhaps that's a pretty good analogy, because the reality is that most people don't, or simply can't, follow the 3-2-1 rule of backups either.

The recent Facebook outage cost millions in losses. Imagine something like that for CI/CD pipelines - a huge number of companies would not be able to deliver value, work everywhere would grind to a halt, and shareholders wouldn't be pleased.

Of course, whether we as a society should care about that is another matter entirely.


Using an abandoned image that nobody cares to update carries its own set of problems (e.g. security).


As I said, if it's not exposed to the outside world and doesn't work with untrusted data, that claim is not entirely valid.

Imagine something like this getting abandoned, or someone running a year old version of it: https://github.com/crazy-max/swarm-cronjob/blob/master/READM...

Its only job is to run containers on a particular schedule, no more, no less. There are very few attack vectors for something like that, considering that it doesn't talk to the outside world, nor does it process any user input.

Then again, it's not my job to pass judgement on situations like that, merely to acknowledge that they exist and that therefore the consequences of them suddenly breaking cannot be ignored.


If you depend on it, you should keep a local copy around that you can host if needed.

Things get abandoned all the time. When you make them part of your stack, you are now obligated to keep them alive yourself until the point at which you free yourself from that burden.


If only we could have a truly distributed system for storing content-addressed blobs... perhaps using IPFS for Docker images. That way you could swap the hosting provider without having to update the image references.


I'd love for others more knowledgeable to chime in, since this feels close to the logical end state for non-user-facing distribution. At a protocol level, content basically becomes a combination of a hash/digest and one or more canonical sources/hubs. This allows any intermediaries to cache or serve the content to reduce bandwidth and increase locality, and it could have many different implementations for different environments, taking advantage of local networks as well as public ones in a similar fashion to recursive DNS resolvers. That way you could transparently cache at the host level as well as at, e.g., your local cloud provider to reduce latency/bandwidth.
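Worth noting that the hash half of this already exists in the registry protocol: images are content-addressed, so you can pin by digest, and any registry serving that digest is interchangeable (the digest below is a placeholder; the tag is just an example):

  # Resolve a locally pulled tag to its content digest:
  docker inspect --format='{{index .RepoDigests 0}}' alpine:3.18

  # Pull by digest - from Hub or from any mirror that has it:
  docker pull alpine@sha256:<digest>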


Sounds a lot like BitTorrent.


I'm not super well versed, but I thought BitTorrent's main contribution was essentially the chunking and the distributed hash table. There is perhaps a good analog in the different layers of a Docker image.


Isn't this what magnet links for torrent files have provided for years? Maybe even a decade? https://en.wikipedia.org/wiki/Magnet_URI_scheme
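For reference, the scheme encodes exactly that content hash plus optional sources (the infohash and tracker below are placeholders):

  magnet:?xt=urn:btih:<infohash>&dn=example-name&tr=<tracker-url>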


Hub disappearing would be the best thing that happened to Docker in years. People really shouldn’t be running the first result from Hub as root on their machines.

I wish there were a version of Hub with _only_ official images.


I doubt a rubber stamp of "officialness" would make the situation much better.


> Docker Hub

Given that it is extremely trivial to run your own container registry, I think the focus on this as some great common good is overstated. As it is, 99% of the containers on it are, for lack of a better word, absolute trash, so it is not very useful as it stands.


VSCodium doesn't add anything; it just builds the VS Code source without telemetry and provides a real FOSS build of VS Code. If VS Code development stopped, then VSCodium would stop too.


> The problem with this is that if Docker Inc goes under, you can say goodbye to Docker Hub: https://hub.docker.com/

So you think that Docker Hub is Docker Inc's entire value proposition? And if Docker Inc is nothing more than a glorified blob storage service, how much do you think the company should be worth?


Oh, not at all! I just think that it's the biggest Achilles' heel around Docker at the moment, one that could have catastrophic consequences on the industry.

It'd be about as bad as that one time when Debian updates broke GRUB and my server could no longer boot: https://blog.kronis.dev/everything%20is%20broken/debian-and-...

Imagine that, but industry wide:

  - you can no longer use your own images that are stored in Hub
  - because of that, you cannot deploy new nodes or new environments, or really test anything
  - you also cannot push new images or release new software versions; what you have in production is all there is
  - the entire history of your releases is suddenly gone

I don't pass judgement on the worth of the company, nor is there any actual way to objectively decide how much it's worth, seeing as they also work on Docker, Docker Compose, Docker Swarm (maintenance mode only, though), Docker Desktop and other offerings, which may or may not be relevant to any given user.

Either way, I suggest that everyone have a caching Docker registry in front of Docker Hub or any other cloud-based registry - for example, the JFrog Artifactory one. Frankly, you should be doing that with all of your dependencies, be it Maven, npm, NuGet, pip, gems, etc.
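As a sketch of what that looks like per ecosystem (the internal hostname is made up; Artifactory and Nexus expose proxy repositories at URLs along these lines):

  # npm
  npm config set registry https://repo.internal/api/npm/npm-proxy/

  # pip
  pip config set global.index-url https://repo.internal/api/pypi/pypi-proxy/simple

  # Maven: add a <mirror> entry in ~/.m2/settings.xml pointing at the proxy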


Most widely-used tools are not VC-powered unicorn startups and nobody said they needed to be. You're free to create all the tools you want, while others can raise money to develop theirs.

If open-source helps the product grow and the community benefits then what's the problem? Who lost here? And why are there headwinds with closed-source products anyway? Open-source doesn't mean free, so what's the objection?

Docker the company executed poorly in monetizing their product, but there's a lot of undue hate relative to the value it has created. If you don't like it when it's closed-source, and you don't like it when it's open-source, then what do you want exactly?


> Open-source doesn't mean free

Counterpoint: yes, it does. Hardly anyone pays for external open-source products. Managed solutions, yes, but we've seen multiple times that trying to close an open system so you can charge for it is very unpopular. For example, my workplace has a company-wide edict against using or even downloading the Oracle JDK.


Open source literally means they show you the source code. It doesn’t have to mean anything beyond that.


"open source" is by now understood to mean https://opensource.org/osd


it doesn't, that's called source-available


...and let you redistribute it. Which implies that everyone else can have a copy, and therefore usually the binaries too.


To be fair, Docker Desktop never was open source.


As I understand it, that's not entirely correct: the Docker Desktop we all use and … well, _just use_ … is built from a number of components that are or used to be OSS: Docker, Docker Compose, docker-machine, and Kitematic, amongst others. Granted, Docker Desktop is more polished than Kitematic was, but it's also had years of VC money thrown at it, so that it's almost as bloated in appearance as the Dropbox client.

And that’s sort of the problem. I don’t want the Docker Desktop that exists. I want something that does all of the behind-the-scenes stuff that Docker Desktop does and gives me a nearly-identical experience to developing on Linux even though my preference is macOS. I might even pay a _reasonable_ subscription for it.

But the Docker Desktop that is? Not exactly something that I think is worth paying for.


Free means free - but if versions 1.0 through X are free today and version X+1 is paid tomorrow, that is a bait-and-switch. There's no hate here; it's just that I (and any competent client company) have no way of knowing whether they "won't alter the deal any further".

The problem is not with the open-source approach: in chasing growth, they commoditized both areas they could have monetized - the client and the service. If they had charged for either (or both) from the start, they wouldn't have gained traction, and some other company would have eaten their lunch.


So they commoditized and failed; or they could've been commercial from the start, someone else would've commoditized it, and they'd still have failed. So what? That's the point of a startup: they tried to build something and it didn't work out as a business model.

The community still benefited greatly from all the development and new projects that came from this. And what is this other company that would've eaten their lunch? How would that company survive exactly?

The only objection seems to be the license change, which is still free for the vast majority. Only larger commercial users have to pay, but that seems commensurate with the value they gain from it. Should companies never try to alter terms as the market changes? I don't see why people are entitled to products and services forever, and then hate the company if they try to be sustainable but also hate them if they abandon it.


> I don't see why people are entitled to products and services forever, and then hate the company if they try to be sustainable but also hate them if they abandon it.

Nobody is entitled to anything. Users aren't entitled to free services/products in perpetuity, but the other side of the coin is that companies aren't entitled to those users either. Nor are companies entitled to be free of criticism.


> How would that company survive exactly?

Let me distill my thinking: a tool does not have to be a company, or be backed by a single-product company.

IMO, the more successful tools tend to be backed either by a maintainer and contributors who work on them in their free time, or by a consortium of companies that do not directly make money from the tool but are willing to put money into it. Docker-like functionality can be sustained under such models, so we are not stuck in a perpetual cycle of ${ToolName}, LLC.


> Not every widely-used tool needs to be a VC-powered unicorn startup.

OK, which tools need to be a VC-powered unicorn? I'm serious.


If CUDA were a startup, it could be a VC-powered unicorn (not sure whether it'd deserve to be, but they'd have a decent shot at monetization). Unfortunately, my tool knowledge is not broad, due to the limitations of the few tech stacks I'm familiar with.

Honorable mentions: R and Rust, maybe? But I don't see how they'd make the money back (which perhaps is the challenge Docker is running into)

edit: Also SQLite!

2nd edit: I completely misunderstood your question, I think. The answer is "none" - there are no tools that need to be unicorns, at least among those that are downloaded and can be run locally. Those that I listed merely could be.


Community edition and paid Enterprise plug-ins with support is a standard pattern in the OSS market.

It's not quite the bait-and-switch you describe, and frankly, polishing and idiot-proofing tools for production workloads is expensive and requires competent professionals who definitely need to feed themselves and their families.


Just because everyone does it doesn't mean that it isn't wrong and illegal.


That's an extreme claim. How is offering community and enterprise editions potentially wrong or illegal?


It's usually a full open source solution at first, and then the enterprise edition gets introduced later to make money. In other words, it's dumping to gain market share and then later trying to use the cornered market to extract profit.


There is absolutely nothing wrong or illegal with companies offering new additional products.

You're not cornered when you can freely choose to buy the enterprise product, depending on whether you get any value from it, or to stick with the open-source version, which remains available and has plenty of competition (like Rancher).

In fact, the entire issue with Docker is that they have too little value to charge for and too much competition to defend against - the exact opposite of dumping to clear out the market.


Indeed. I've worked with several "enterprises", and often the OSS option was discarded for lack of support for certain enterprise use-cases or integration with other commercial products, or for lack of professional support for production issues and SLAs.

Obviously this stuff costs effort and time; you can't expect that to be available for free.

It's also a great opportunity to commercially exploit your knowledge and taste and make a living out of it, rather than curse "the powers that be" on a daily basis while struggling with closed software whose only purpose has always been milking as much profit as possible.



