Homebrew to deprecate and add caveat for HashiCorp (github.com/homebrew)
262 points by mooreds on Oct 8, 2023 | 102 comments



It's important to note that they are holding off on deprecating dependents to see if they can swap in replacements. For example, programs that depend on Terraform will likely be able to use OpenTofu as a replacement.

Unfortunately it doesn't seem like there are going to be open source alternatives to Vault, Consul, or Nomad (that last one is hilarious to me: Nomad was a good product until HashiCorp stopped investing in it; now that it's closed source it basically has a snowball's chance in hell of being adopted).


If anyone does start an open Vault fork, I'd be interested in contributing as I get time.

Edit to add: https://github.com/hashicorp/vault/graphs/contributors?from=...


I wish I had the time or energy for it, especially with someone like you willing to contribute. I really hope someone takes up this task.


I like Nomad a lot. It is even more lightweight than k3s and has served my low-budget projects very well. A little sad to see things are going this way.


Nomad was overly ambitious in trying to run everything under the sun, and it didn't catch on. Setting it up would also require Consul. Docker Swarm is a simpler and superior solution to Nomad and is built right into the Docker engine itself.


I would rather sink in an ocean of Kubernetes YAML documents than have to deal with Docker Swarm issues ever again. Thankfully Docker Swarm is kinda dead, so it's unlikely to be relevant again. Kubernetes won.


I like what Kubernetes achieves, but I don't like its power- and memory-hungry requirements; it seems ill-optimized. A cluster with no pods uses 5%-15% CPU, and that's a lot of electricity being used for nothing. It's really odd, since Google claims to have green datacenters.


I'd bet you're running a "stacked" control plane, and if you moved etcd off to its own machines you'd see that most of the churn is localized there. Which, if correct, doesn't exactly refute your "Kubernetes is power and memory hungry" claim, but it would show more clearly where the cost comes from.

Back in my earlier days I had high hopes of them offering pluggable KV stores, so folks could make tradeoffs if etcd doesn't align with their goals and risks, but as best I can tell they've declared it a non-goal.


That's part of what k3s does


Does Google use carbon swaps to conceal their emissions? Maybe that's how they reconcile the difference.

A quick search turned up this: https://www.greencentury.com/google-parent-alphabet-pledges-...

Looks like a possible case of greenwashing to me.


I got into an argument with someone at Google who said they don't use carbon swaps, and ended up pulling up a shareholder report that said they did. Google is real big on the greenwashing.


Google doesn't use Kubernetes to run anything in production. They use Borg.


I'm curious, are you seeing that on cloud instances, or what?

My experience with bare-metal Kubernetes hasn't been that bad. On my home cluster with very underpowered J4105 CPUs, I get about 5% CPU usage on the master node at idle, and 2% on workers.
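
If you want to sanity-check numbers like these on your own cluster, one common way (assuming metrics-server is installed, which is an assumption on my part, not something from this thread) is:

    kubectl top nodes     # per-node CPU/memory usage
    kubectl top pods -A   # per-pod usage across all namespaces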


People need to check their logs for any oddities more frequently. A lot of the setups I've seen have subtle misconfigurations that leak resources or busy-loop needlessly.


Google doesn't run on Kubernetes.


Kubernetes won because of its composability. The API is the killer feature: the notion of objects, labels, pretty much an object-oriented, declarative interface.

You'd have to do a lot of custom morphing if you wanted that kind of interface over Docker Swarm.

For smaller setups the simplicity of Docker Swarm might be useful, but I'm curious to hear from people who are running clusters of 25 to 50 nodes.
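
As a rough sketch of the declarative, label-driven workflow described above (standard kubectl commands; the deployment name "web" and the label are made up for illustration):

    kubectl apply -f web.yaml                     # declare the desired state of a Deployment object
    kubectl get pods -l app=web                   # select objects by label rather than by name
    kubectl scale deployment web --replicas=5     # change the object; controllers reconcile the cluster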


I've actually been using Docker Swarm for an ODROID cluster that runs a lot of services in my house. I really like it: it's extremely low overhead, and the network mesh works way better than I was expecting, considering I put no effort into configuring it.

That said, this is a three-node cluster running a handful of containers, with services accessed by my wife and me. So not exactly a large-scale production experience.


As someone whose wife will immediately call me if the important services in our household (Plex, Pi-hole, Home Assistant) go down, I think those definitely count as production services!


Can you elaborate on its issues? I've never used it in production myself, but it looks like a simplified k8s, which is something I feel a lot of people desire.


I last used it years ago; networking would just die randomly when using multiple nodes, requiring container restarts.


I have heard (though haven't tried) that V2 of Compose/Swarm is significantly better.


Anything can and will break. Something symptomatically similar happened to me with K8s and (IIRC) Calico. And I've had issues with Swarm too, although of a different kind (it quietly refused to run a data channel for me over an IPv6-only network, while the control channel worked just fine).

It was all quite a while (3-4 years) ago, so I don't remember many details, except that Swarm was then not exactly capable of working in IPv6-only environments. It was significantly easier to navigate Docker's codebase than Kubernetes'.

(Ultimately, my solution was that I understood that I didn’t need any kind of orchestration there, just HA/failover. But that’s another story.)


I haven't used it for many years, but in short it just didn't work reliably. From memory, I mostly encountered networking/iptables issues.


I've been very happy with it until now, though I must admit I'm using one-node swarms. But I'd rather use that (1) than stitch together a custom solution using Docker Compose, as I see mentioned sometimes.

(1) https://dockerswarm.rocks/traefik/


Infinite issues with overlay networking once you scale past 30 nodes in a swarm. Weave aside, network drivers for anything not CNI are basically deprecated/dead.


With Nomad 1.3 and newer, you can run a Nomad cluster without Consul, there's support for built-in service discovery: https://www.hashicorp.com/blog/nomad-service-discovery


> Docker Swarm is a simpler and superior solution to Nomad

This is the voice of absolute delusion. Swarm is a mess. Nomad actually scales.


Having worked with Nomad, K8s and Docker Swarm at reasonable scale, Docker Swarm is the only one of those three that I would never use again under pain of death.


I have been a happy Swarm user for a long time now. It is pretty simple. The only complicated part was the reverse proxy, but Traefik has been a solid solution.

You will spend quite some time figuring the labels out, but once you have them for one service, it works for many.

A new solution that might be even better is Kamal by 37signals. It runs all their production stuff, so it must be capable. Can't wait for version 2; I'd suspect by then there will be more community contributions merged in for some more diverse setups.


Recent Nomad versions replace a lot of Consul functionality. Consul is definitely not needed at small scale, even for service discovery.


Honestly, all Hashi products are generally really good. I do think they suffer from moving-too-fast syndrome. They pushed to go public, and most or all of their recent shortcomings can be traced back to trying to push the stock price.

Early Hashicorp was incredible. They were open source stewards and looked like an up-and-coming Red Hat or Canonical. Their products were ground-breaking and truly huge value adds to the open-source ecosystem. But they got extremely popular (mostly thanks to Terraform, which took off and drew more attention to their other products).

But since going public it has been clear that they are trying to get money and enterprise customers at any cost.

Terraform itself has felt like it was in maintenance mode since hitting version 1. Terraform providers frequently break. For production I have found you need to pin providers down to the patch level, because I've had multiple problems just from small patch-level updates in recent years. They famously started rejecting any open-source contributions that didn't provide business value for Hashicorp. Since TF hit v1, almost all the attention seems to be directed towards Terraform Cloud and Terraform Enterprise. At HashiConf, it feels like every talk is just propaganda pushing these products. It is all they care about anymore.

Nomad was a product that Hashicorp was very excited about for a while, and then they seemed to just abandon it on the side of the road in their quest for enterprise dominance (probably after learning that most enterprises are all-in on K8s and Nomad is more beneficial to fast-moving start-ups).

Vault was an incredible tool, especially in the open-source space. But in the past few years they have really split the open-source and licensed versions of Vault significantly, which made the open-source version feel more like a burden to Hashicorp. The last time I talked to Hashicorp about Vault (we were deeply considering a switch to Vault at work last year), they treated the open-source self-hosted solution as a "trial" of the real Vault, and that is what it felt like. Almost everything we ran into during setup, they would respond to with "oh, that's fine in the enterprise version".

Overall, they have peeled back to all but the absolute minimum effort required to support the open source versions of their products, and have been an entirely enterprise-focused company for a while. And I can't blame them; sure, you need to make money. But I can't help but point to organizations like Red Hat and Canonical as examples of what Hashicorp could have been.

At this point I feel like the parent watching their kid, talking about how they missed out on reaching their potential, thanks mostly to what seems like greed or over-ambition. "I'm not mad, I'm just disappointed" comes to mind when I think of Hashi. I have high hopes for OpenTofu to fill the Terraform void. I've moved past Vault and am using one of the big hyperscalers' secrets management tools (which I enjoy a lot less, but it is cheaper and less complicated). I use Kubernetes instead of Nomad, which again is fine; it has become the standard anyway. So I'll be just fine... but I'm disappointed in you, Hashicorp. That's all.


> But since going public it has been clear that they are trying to get money and enterprise customers at any cost.

And yet, as an enterprise customer, their sales team is terrible. Incredibly unhelpful with problems, not willing to discuss pricing; they ultimately lost the deal due to how unresponsive they were, and how they basically told us that they wouldn't be any more likely to fix all the bugs we were hitting if we paid.

The reason they had to go this route is that they are getting trounced by their competitors, and it's all their own fault.


I wonder if this downhill trend has been in the works for years. In early 2018 I wrote a post[1][2] about my frustrations with interviewing with HashiCorp.

TL;DR: they put me through an interview gauntlet, took my entire day, and ignored me for weeks until I aired my frustrations in their contact us form.

I haven’t used their products since but always heard great things about them. That enterprise push is fierce.

[1]: https://blog.webb.page/2018-01-11-why-the-job-search-sucks.t...

[2]: https://news.ycombinator.com/item?id=16127697


Not at HashiCorp, but I try to respect applicants' time in interviews, and my overall process is less than 3 hours total for interviewing, across 4 people (all of which are self-scheduled). That said, when you have hundreds or thousands of applicants, people inevitably fall through the cracks unless you have an internal recruiting team entirely focused on maintaining that experience.

Interviewing sucks for everyone on either side, and efforts to improve candidate experience on my end have often been met with resistance in the company. One thing I've done is try to be as clear as possible at the end of interviews where someone stands. If it doesn't go well, I try to relate what didn't go well and how it applies to the position we're hiring for. If it goes well, my interviewers and I try to send out invites for the candidate to schedule the next interview within 10 minutes of completion, and I tell them that during the call.

So many companies are so risk-averse though, that they act like interviews are a poker game. I've generally had nothing but positives with my approach, and yes, someone may sue us at some point for being blunt and upfront, but my experience so far is people appreciate honest feedback if you deliver it kindly on the spot.


> One thing I've done is try to be as clear as possible at the end of interviews where someone stands. If it doesn't go well, I try to relate what didn't go well and how it applies to the position we're hiring for.

I tried doing that the last time I hired, and I think, generally speaking, it makes for a good experience for candidates. BUT when you have those 2-3 individuals who have really high opinions of themselves, it can get tiring for the hiring manager (myself), because they won't accept the answer anyway, will keep asking, and will ultimately leave some bad review on Glassdoor, etc.


I don't have too much of an issue, because when it's bad, it's usually because they've failed pretty hard on tech screens for questions that have absolute answers. I have on one occasion had to pull up my IDE and run code to prove my point to an argumentative candidate, but generally people appreciate the explanations when they are wrong and tell me they've learned something.


I prefer bluntness as that's my personal style. I'm glad you're pushing through, even despite resistance.

Every person I interviewed with at Hashicorp seemed to enjoy the conversation and the live coding challenge wasn't particularly challenging. Ah well.

Rejection is protection and redirection.


The second comments section could have been all from today. Nothing about hiring has changed in 5 years lol


Wish I could say I was surprised


The enterprise versions of their products are ridiculously expensive. They're always trying their best to charge you for mistakes (e.g. the number of connecting clients). The pricing on the landing page is just a fraction of the real price. That's a big turn-off for me. It's true that I don't want the hassle of setting up a production-ready Vault, but seriously, if Hashicorp keeps charging like that I would really consider a different solution.


I appreciate Hashicorp, but early on I got a bad taste in my mouth. I paid for their first commercial offering, the VMware adapter for Vagrant. I had a conversation with Mitchell (can't remember the exact context) around an issue I was encountering, and remember him being pretty dismissive toward what I was asking. Again, I'm sure there's a lot of context I'm not remembering, but I had the feeling that they used their early revenue as their launching pad and were more focused on the next big thing than on treating their customers as the asset that they were.


Yeah, that was a waste of money. Didn’t work well at all


Wait is Nomad really closed source?


Nomad, along with the rest of Hashicorp's flagship products, transitioned to the BUSL-1.1: https://github.com/hashicorp/nomad/blob/main/LICENSE


Am I misunderstanding something? I thought BUSL means the source is still open, but not free as in freedom.


Perhaps a more apt term would be freeware.

The BUSL is not OSI approved and places restrictions around usage, in an effort to sell some product or service off of a project and prohibit competition. It is not free as in beer either, if you hit such usage.

The source is available (hence source available being another terminology) but even Hashicorp has dropped "open source" from nearly every relevant page in understanding that their products no longer have an open source core (see instead "Vault Community" or "Terraform Community").

The general consensus is that no, the BUSL is not OSS as OSI has not blessed it as such: https://opensource.org/licenses/


Thanks.


It's true that Hashicorp's source is still available, but you might not have permission to use it or modify it any more. Even if you can read it legally. Under their new license, Hashicorp can essentially revoke any commercial user's license to run Hashi software, and there is nothing the business can do except pay. They can do it at any time, if the business has any divisions that do anything close to competing with any Hashi product.


> Am I misunderstanding something?

Yes, source being available is not enough to be called open source.


It's worse in some ways because if you read the source and then work on another open source project in the same field you could end up in legal trouble.


Nomad looked like a good product until I used it. I was so looking forward to a simpler take on orchestration, having spent years in k8s-land and generally being a fan of Hashicorp in the past.

My TL;DR now is that Nomad should be considered harmful.


I’ve started playing with Nomad recently just to see what it’s all about. There are some annoying things with networking, but overall it’s been pretty fun so far. What made you consider it harmful?


The number one issue is that deployments are blocking, and scaling is a deployment. That means during a deployment, scaling is disabled. This makes safely deploying to thousands of containers during scale-in hours extremely dangerous. Combined with nomad-autoscaler being a SPOF and crash-happy, the always-be-deploying startup with a daily traffic pattern should not use Nomad.

It's also nontrivial to get Nomad to be resilient. For example, if your template uses keys from Consul or Vault, even with "noop" mode, and Consul or Vault are unreachable, Nomad will happily start killing running containers that had been rendered correctly in the past, because it can't re-render the template. This pattern has only recently been addressed by "nomad_retry", but there have been several bugs with it, and 1.6.2 will currently kill all running containers if some template resource can't be reached. Under the hood this uses consul-template, which does support infinite retry, but getting Nomad to use consul-template safely is nontrivial. E.g., Vault tokens expire, so infinite retry for Vault doesn't work as-is.

Node_pools just landed (2023!), and are still broken when using Levant (another abandoned nomad tool - think kustomize with a much more horrifying DSL).

Bonus issues: they’re still trying to get cgroups v2 working, the current version finally doesn’t DOS your backend services in an infinite loop, the UI lies, deployments can get stuck forever because “progress_deadline” is more of a suggestion than a deadline, nomad-autoscaler is not highly available, crashes very easily, often scales faster than its “cooldown” window, and is simply stupid. AWS Karpenter feels a full decade into the future.

And all that so I can write my own Nomad specs instead of installing a vendor's Helm chart in a network-isolated namespace. Boo.

To reiterate, I’ve liked Hashi for years and years, and I don’t have any ill will towards them. Just shocked at how poor nomad is compared to k8s. It’s definitely more fun than k8s when getting started, for sure.


Recent versions of Nomad have introduced many QoL features like secrets and service discovery, though logging collection remains unsupported.


> though logging collection remains unsupported.

Do you happen to have a link to the issue (or docs, I guess, if it's their official stance)? I'd enjoy reading how they ended up in that circumstance.


Log collection doesn't come out of the box, and you're kinda on your own to set it up; this article (although a bit dated) shows you what you must go through:

https://atodorov.me/2021/07/09/logging-on-nomad-and-log-aggr...

It also doesn't help that Nomad handles everything from Docker containers to managing regular applications.


Also worth noting that nomad’s closest competitor (Kubernetes) does not do this either. It’s a huge oversight in both ecosystems.


We may be talking about different things, but GP's comment said "unsupported" whereas in kubernetes it's just not part of the control plane offered by kubernetes itself, in exactly the same way that cloud load balancer or even cloud auto-scaling isn't provided by kubernetes itself

Having read the sibling comment, I think I better understand the situation: since Nomad isn't only a container orchestrator, there isn't one homogeneous place from which _to_ collect the logs. But in any sane k8s setup that's not true, since both dockerd and containerd have mechanisms to influence that behavior, and k8s is perfectly able to schedule per-Node utility plumbing like that, and supports it, without drama.


Yes, k8s does not provide an out-of-the-box log collection solution, but since everything is in containers, we can piggyback on Docker or containerd via a DaemonSet to get all the logs.

Nomad, on the other hand, supports various payloads (native exec, exec via chroot, containers, even Firecracker VMs via community support), so doing log collection as an end user is trickier. It's worth noting that the Nomad UI (the official web admin panel) has a log-tailing utility built in, so maybe partial work has already been done. The developers may have other concerns.

The related issue is https://github.com/hashicorp/nomad/issues/10220


All of the work has been done; not partial work.


Nomad has equivalent mechanisms for collecting logs to Kubernetes. None are built in, in either platform.


Would you like to provide a link? It would help me a lot. Thanks.


https://developer.hashicorp.com/nomad/docs/job-specification... covers much of it from the collection side



Note, for whatever it is worth, that the Homebrew-owned version rebuilds these from source -- https://github.com/Homebrew/homebrew-core/pull/139538/files#.... This is also typical in the Linux packaging ecosystem, though it typically involves packaging dependencies explicitly too, which is likely why Vault & co. never landed in a distro's package set prior to the license change.

The Hashicorp variants copy the release builds: https://github.com/hashicorp/homebrew-tap/blob/master/Formul...
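
For anyone who still wants the vendor builds once the core formulae are gone, HashiCorp's own tap works roughly like this (commands follow HashiCorp's published tap instructions; shown here only as a sketch):

    brew tap hashicorp/tap
    brew install hashicorp/tap/terraform
    brew install hashicorp/tap/vault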


Why? Isn't homebrew just a package manager, and why should a non-free license restrict them from including HashiCorp tools? Or do they have a policy to only include free software?

Edit: yes they have quite strict guidelines: https://docs.brew.sh/License-Guidelines


They do have a policy to only include free software [1]:

> We only accept formulae that use a Debian Free Software Guidelines license or are released into the public domain following DFSG Guidelines on Public Domain software into homebrew/core.

[1]: https://docs.brew.sh/License-Guidelines


In homebrew-core.

And the reasoning is that Homebrew isn't strictly a package manager; it is a dependency installer and builder. It pulls down sources and everything they depend upon (as source packages too, usually), and builds them locally. So as a user of Homebrew you're trusting that the formulae you install are for things you are allowed to pull down and build locally. Licenses matter.

Hashicorp (or anyone else) can maintain a source tap of their own; when you opt in to using a third-party tap you're trusting that tap's licensing as well. And there are also 'casks', which are for non-source distributions that Homebrew can install as well, where source licensing isn't important.


But the license hasn't changed, so that raises the question of why it was included in the first place?


The license will change to BUSL for all upcoming upstream releases. [1]

[1]: https://www.hashicorp.com/blog/hashicorp-updates-licensing-f...


The license did change. From MPL 2.0 to BSL.


did you miss the memo?


> Isn't homebrew just a package manager

It’s not just a package manager.

It’s a piece of software called `brew` (the package manager) but also a package repository called `homebrew-core` to which the software connects by default. The package repository is carefully curated and only accepts open-source licenses.

You’re free to use `brew` to tap into whatever repository you like, but TFPR is concerned with the core repository only.


> also a package repository called `homebrew-core` to which the software connects by default

I think that's only true if one does an `export HOMEBREW_NO_INSTALL_FROM_API=1`; otherwise they default to their new JSON API: https://docs.brew.sh/Installation#default-tap-cloning

and is the only way if one needs to alter any core Formulae: https://docs.brew.sh/FAQ#can-i-edit-formulae-myself (which, for clarity, Brew explicitly disavows)


I was referring to the package repository, not a Git repository.


This is only partially true.

By default, Homebrew supports (or, in their terminology, taps into) two repositories, homebrew/core and homebrew/cask.

Core only takes free software, built by the Homebrew project itself and installed in /opt/homebrew etc. Cask takes everything under the sun, including commercial software with no available source code. Such software is often downloaded straight from its developers and installed wherever it wishes to be installed, most often in /Applications.


Much as I love the service Homebrew provides, Terraform is one of the edge cases that is better managed outside of brew. I believe tf-switch is now the most popular option?

The problem with Terraform is that you often need a pinned version, because accidentally updating your state file can be dangerous. (Though in fairness, updating Terraform is significantly less troublesome than it was in the pre-1.0 days.)


I use rtx (a Rust asdf) https://github.com/jdx/rtx as it lets me install all the languages with one tool (plus project environment variables, like direnv).


Is it compatible with asdf?


From the first bullet point under 'features':

> asdf-compatible - rtx is compatible with asdf plugins and .tool-versions files. It can be used as a drop-in replacement.


I'm going to blame being distracted by my dog for not reading the link you posted.

asdf + direnv makes my work life so much easier. I have one .envrc that checks what branch I'm on, switches you to the right Terraform workspace, and sets the correct env vars too.
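
A minimal sketch of that kind of .envrc (the branch and workspace names are made up; TF_WORKSPACE is Terraform's standard environment variable for selecting a workspace):

    # .envrc: pick a Terraform workspace based on the current git branch
    branch=$(git rev-parse --abbrev-ref HEAD)
    case "$branch" in
      main) export TF_WORKSPACE=production ;;
      *)    export TF_WORKSPACE=staging ;;
    esac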


Yeah I use this same setup with rtx


You're right, it's best managed inside of MacPorts, which allows you to install specific versions of packages and switch between them easily!


You can do that with homebrew too. But in the case of Terraform, where you can have developers on Windows and macOS and CI/CD pipelines based on Debian/Ubuntu/CentOS/Alpine/etc, it's handy having everyone use the same tool for managing Terraform versions.

It's just like how you wouldn't have different people manage the same Python codebase with different package managers. You agree on the standard for that business or project and have every instantiation of that project follow that same standard.


I definitely agree overall. My last job was standardized on Homebrew and used tfswitch; I personally translated that to MacPorts, but I do agree that tying a project to a specific package manager (and thus OS, etc.) isn't advisable.


TIL MacPorts still exists.


It not only exists but is vastly superior to Homebrew.


I think Nix is better suited for that kind of thing. You can specify multiple versions of packages and expose them on a per-project basis. It eliminates the need to switch packages globally.


> Much as I love the service Homebrew provides, Terraform is one of the edge cases that is better managed outside of brew. I believe tf-switch is now the most popular option?

tfenv is also available in Homebrew for installing multiple/different versions of Terraform.
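
A typical pinned workflow with tfenv looks roughly like this (the version number is only an example; tfenv also reads a .terraform-version file in the repo, which is how teams usually pin it):

    brew install tfenv
    tfenv install 1.5.7    # install a specific Terraform version
    tfenv use 1.5.7        # make it the active version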


They can always live in casks instead. Not terribly impactful practically speaking.


True, Hashi can cask it just fine if they wanted to continue to distribute through homebrew. I doubt they would though, they have already been recommending direct binary installation from their servers for the past few years. That is how the installation documentation for Terraform has been for a while.

The bigger impact is on other tools that have terraform as a dependency, as seen here: https://github.com/Homebrew/homebrew-core/pull/139538#pullre...

Tools like Atlantis and Infracost are also getting removed since they depend on Terraform, so it is going to make distribution a little harder for those smaller tools. The good news is that the thread does say they are holding off, to allow those tools to update their dependencies to an alternative like OpenTofu when it becomes stable, or to remove the dependency altogether. But the real impact, IMHO, is on these other tools.


> I doubt they would though, they have already been recommending direct binary installation from their servers for the past few years.

September 25, 2020 : Announcing HashiCorp’s Homebrew Tap https://www.hashicorp.com/blog/announcing-hashicorp-homebrew... ( https://news.ycombinator.com/item?id=24619346 - though only 2 points and 0 comments so no reason to click there other than to note that it was mentioned here )

https://github.com/hashicorp/homebrew-tap (last updated 2 days ago: Bump terraform-ls to 0.32.1)


This won't impact Infracost; Alistair just responded to it: https://github.com/Homebrew/homebrew-core/pull/139538#issuec...


Homebrew is really useful, but there are some weird design choices. Like, why does it install a new dedicated Python? And why is that Python required to be the latest? But not always, because each formula must specify the Python version, so this never actually happens in practice and you have formulae specifying all kinds of versions. Why not just do what every other package manager does and use the system's Python? Seriously, I already have too many Pythons installed, I don't need another. Especially when it makes things confusing because I need to install a pip package to get something working properly.


Not to defend their oddest choices, but using the system Python usually leads to more problems. macOS no longer has one. Use this:

    pythonX.Y -m pip install foo
maybe via alias to eliminate ambiguity. Also pyenv and virtual-envs for work projects.
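
A common per-project setup along those lines, as a sketch (version numbers are only illustrative):

    pyenv install 3.11.6         # build a dedicated interpreter
    pyenv local 3.11.6           # pin it for this project via .python-version
    python -m venv .venv         # project-local virtual environment
    source .venv/bin/activate
    python -m pip install foo    # unambiguous: installs into .venv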


Seems like politics. There are plenty of packages in Homebrew that will no longer receive updates but they're not deprecated.


There’s a difference between a formula which is not updated because upstream doesn’t release new versions and one which won’t receive new updates in Homebrew because its license is no longer open-source (which the non-Cask part of Homebrew requires). This is the latter.


The package being dead is one thing; users should expect that Homebrew will not magically create updates that upstream doesn't have.

That updates exist but Homebrew will not legally be able to distribute them, meaning you could be installing an old and vulnerable version that will never get updated, on the other hand, seems worth warning people about.


Homebrew is legally able to distribute them; they're making a decision not to.


Perhaps "legally" was the wrong word, but the essence is still that they can't redistribute them, even if the reason is not law. The fact that the reason is it would be incompatible with other etsablished policy, and they wrote that policy themselves in the first place, doesn't change the fact of "can't".

It's not legitimate to say "no one is stopping you from rewriting all your other contracts, charters, and principles to be compatible with my new license".

And who knows, maybe they even can't "legally" if everything were fully evaluated.

Also, this sounds like an attempt to obfuscate or downplay the essential fact here of who made the breaking change. If someone cared about Terraform being included in brew, and wanted to know who to blame for their orderly world being disturbed, it is not that Homebrew has decided to evict Hashicorp; it is that Hashicorp left.


They made that decision (to distribute applications with specific types of licences) long ago. Nothing to do with Terraform in particular.


Homebrew Core only contains apps with an open-source license, specifically licenses compatible with the Debian Free Software Guidelines (GPL, Apache, BSD, MIT, etc.).

https://wiki.debian.org/DFSGLicenses#DFSG-compatible_License...



