The learning curve for AWS is steep, but tools with great developer experience like Heroku and Vercel are limited to small projects. Teams end up choosing AWS or other big cloud providers, partly because of free credits, but mostly because they know they'll eventually need something from them that a PaaS cannot provide.
And then there is the huge cloud-native ecosystem, infrastructure-as-code, Kubernetes and all that. If you want to do DevOps right, PaaS doesn't seem to be a good option.
So it's either move fast or build a future-proof stack. We thought that was a false choice, and built Digger.dev
Digger.dev automatically generates infrastructure for your code in your AWS account (Terraform). So you can build on AWS without having to deal with its complexity.
You can launch in minutes – no need to build from scratch, or even think of infrastructure at all!
- Easy-to-use Web UI + powerful CLI
- Deploy webapps, serverless functions and databases: just connect GitHub repositories
- Multiple environments: replicate your entire stack in a few clicks. Dev / staging / production; short-lived for testing; per-customer
- Zero-configuration CI with GitOps: pick a branch for each environment and your services will be deployed on every git push
- Logs, environment variables, secrets, domains: never touch AWS again!
Congratulations on the launch! I personally think that any product that attempts to simplify the experience around AWS is a worthy endeavor - which is why there are a lot of companies popping up trying to tackle this.
I personally have a lot of interest in this space and used to work at AWS. Feel free to contact me at the email in my profile if I can ever be helpful.
I like the general concept of this product, but I feel like it either needs different marketing or more thought about who this product is for.
It seems like the product is either:
(a) A tool for indie or small dev teams to build infrastructure before they have learned the AWS stack.
(b) A tool for small DevOps teams to simplify managing and developing their Terraform deployments.
The marketing of the site seems to be selling scenario (a), but the paid plans make it seem like you'll only be making money in scenario (b).
If I imagine myself being in scenario (a), I can see becoming pretty disillusioned the second my first issue or wall popped up, since I am not going to be supported by AWS or the product. It seems someone in this scenario is way better served by choosing a managed hosting solution of some kind.
As someone personally in scenario (b), the idea of "do more without understanding it" is a very off-putting sales pitch. Terraform and AWS have way too many gotchas to fully abstract away all but the simplest of implementations. Sweeping those under the rug with abstractions is too much risk if the team doesn't understand what's happening. If the pitch were something more like "speed up your team's Terraform development and management experience", it would be a lot more interesting.
Thanks for sharing these points! Very thoughtful and helpful.
We may well be wrong. But we believe that learning the AWS stack is not something most software engineers should ever have to do. If you look at what DevOps originally was, it was about culture, not a job specialty. But it became a specialty anyway - because it's so complex. Currently it's the only job in the spectrum that is "second-order", in the sense that for developers to be productive, the DevOps folks first need to do some work. So it ends up as a permanent bottleneck – unless companies create in-house PaaS-like tools for developers to self-serve. And big, well-funded companies end up doing it over and over again. We did it at Palantir and Fitbit; Uber did it, Shopify did it, and dozens more.
So you can think of Digger as such a "PaaS builder", in a sense. AWS is still available for all kinds of troubleshooting - Digger doesn't make it any harder. But in 90% of scenarios people won't need it. This allows reducing the DevOps-to-software-engineer ratio from the current 1:10 to something like 1:100.
I think the idea is a good one. It's too hard to set up simple solutions in AWS and abstracting that away is valuable. But you need the freedom to dive into the details sometimes, so being Terraform based is smart.
But the onboarding flow is brutal IMO. The splash page doesn't help me understand when I should reach for Digger - as a customer with an AWS account, I've obviously had to learn enough to be functional in AWS. I would like it if you described a common use case to help me understand when I should be considering Digger.
Once I actually try it out, it's very sterile and I feel lost in Apps and Environments and the UI is mentioning commits for some reason. The docs focus a lot on what Digger is, but I'm really missing an onboarding guide that orients me with a step-by-step guide of how to set up my first environment.
Does way more than that. Digger manages state on the backend and runs it - so it's a bit like "CI for your infrastructure", with versioning, rollbacks, etc. You can connect your infrastructure repository and Digger will export the generated TF to it and also pick up overrides from it. So it's an end-to-end flow.
You still need lots of DevOps knowledge to use Terraformer. None is needed with Digger - you can ignore this part of your stack entirely until you need to customise something specific. And then you actually can customise anything.
> The problem with infrastructure-as-code today (Terraform, CloudFormation, CDK, Pulumi, etc) is that it is not reusable. That is because implementation, configuration and interface are mixed up together.
I find this statement from the documentation[0] unfair, given that the "target" concept this introduces seems to be mainly based on Terraform modules to _reuse code and expose an interface_. Terraform has its problems, but this doesn't seem to be right.
At best, this seems to be a curated set of Terraform modules and a managed CD pipeline execution SaaS. I get that it is supposed to simplify things, but it is lacking documentation for what it will do to an AWS account (you'll still pay for it, after all) and even provides documentation on how to drop "raw" Terraform into it. Why not go with Terraform directly then instead of sending your AWS credentials to a SaaS?
Thank you for these points! I respectfully disagree :)
A raw Terraform module is quite hard to reuse out of context for someone who isn't familiar with DevOps / sysadmin concepts. What's a VPC? A security group? An ACL? Each service exposes a bunch of config options that won't make sense to people facing it for the first time. TF mimics the AWS interface, and it's more like a pilot's cockpit than a car interior: all the tools imaginable are there, but you've got to know what you're doing to use them.
Targets, on the other hand, expose only high-level concepts. How many services? Is it a container or a function? Enable or disable the database? Got it, starting to build. More like a car's interior, or a phone UI that you can figure out by doing.
The current implementation of Targets is very simplistic. It does the job, but not much more. In Targets v2 we are planning to introduce proper dynamic generation with a "stack state" API that would allow creating truly encapsulated, smart components that adapt to any number of environments.
I'm not sure I get your point - Terraform modules[0] are a generic way to encapsulate a set of Terraform resources. What variables you expose to the outside is up to the module developer; you can expose high-level variables like "service type" and "add a database" in your module as well. No need to understand VPCs, security groups, or ACLs, either. The question of whether there are high-quality Terraform modules that do that is a different one; that's why I think your service might still be of value _if the modules you maintain are of high quality and do reasonable things, which I haven't verified_.
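For illustration, here's a minimal sketch of such a module (all names and paths are invented, and the resource config is pared way down):

```hcl
# Hypothetical module interface -- only high-level choices are exposed;
# VPCs, security groups and ACLs stay hidden inside the module.

variable "service_type" {
  description = "How to run the workload: \"container\" or \"function\""
  type        = string
  default     = "container"
}

variable "add_database" {
  description = "Whether to provision a Postgres database for this service"
  type        = bool
  default     = false
}

# Inside the module, the low-level wiring is gated on those variables:
resource "aws_db_instance" "postgres" {
  count             = var.add_database ? 1 : 0
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "app"

  manage_master_user_password = true
  skip_final_snapshot         = true
}
```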
Maybe you have great ideas for this target concept, but the claims in your documentation[1] that this is new, and the implication that Terraform isn't capable of this, don't hold up:
> it describes a particular architecture that can produce many possible variations for a wide variety of stacks, depending on configuration.
You can do exactly that, with Terraform modules, today, no Digger needed.
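A consumer of that hypothetical module stays at the same high level:

```hcl
# Calling the same module with different inputs yields different
# variations of the stack -- no low-level knowledge needed on the
# consumer side (the path is hypothetical):
module "api" {
  source       = "./modules/service"
  service_type = "container"
  add_database = true
}

module "thumbnailer" {
  source       = "./modules/service"
  service_type = "function"
  add_database = false
}
```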
Thanks again! This viewpoint has a lot of merit for sure.
Please let me defend my claim on Terraform capability though.
The real question isn't whether doing X is possible with TF; it's whether it's likely to be done in practice.
I am speaking from my own experience as a former front-end dev, and making a bold assumption that there are many others like me. Whenever I'm using Terraform, even ready-made modules, I find myself thinking about things that I neither want nor need to be thinking about. Most of my brainspace is occupied by frontend intricacies; still, I do want to be in control of the entire stack. The further a tool is from my primary competence, the less capacity I have for its details. I want my webapps and containers to work somewhere, that's all. But when I'm facing a problem - a specific problem, like autoscaling or load balancing - I also want it to be solvable. And I want it to be solvable in a way that doesn't go against industry best practices. Because today I may have a team of 3, but in a couple of years that may be a team of 300. I don't want to have to rebuild from scratch halfway through. But I also don't want to waste time building something future-proof on day 1.
I get what you're saying and I think it's a valuable discussion to have (I remain skeptical about whether handing off your infrastructure design is a good idea; but as someone working in the space I might be biased), but that's not really my point, to be honest.
I think the documentation is making several technical claims (in the quotes I've provided) that are factually false. You're agreeing that it CAN be done with Terraform. Best practice isn't what is being discussed in the documentation; it claims that reuse isn't possible.
Granted, I'm not your target audience, but I would recommend a) rephrasing those claims so they're closer to the truth, and b) documenting the architecture of your targets and the quality of your Terraform code (does it pass tfsec checks, for example?).
If someone asked me to review this product for their startup, I would primarily see Terraform modules with unknown quality or architecture.
It's also 100% untrue of Pulumi, which, by virtue of using general-purpose programming languages, allows interfaces to be defined in a fashion completely decoupled from their implementation.
Thank you!! Perhaps what we mean by "implementation" is different here. We should probably make it more explicit in the docs.
What I mean by "interface" is "My stack needs infrastructure for 3 containers and 2 webapps and container A needs a Postgres DB and container B needs a queue"
In today's IaC, including Pulumi, you actually need to specify _which particular_ way of running containers to use, with all the configuration details. Same for the database. That's implementation. Switching languages doesn't make it any simpler.
Practical example:
The exact same stack can be run on one EC2 box via docker-compose, and on a Kubernetes cluster with managed databases. Same interface, different implementations. What Digger accomplishes is allowing you to swap implementations at any time, as long as the interface stays the same.
Switching languages does not make this simpler. Switching the _implementation_ of an interface does. For example, I could implement a "queue" interface three times - once for Confluent Cloud's Kafka, once for Kinesis and once for EC2 instances that run OSS Kafka. The interface remains stable, the implementation changes. This can also be done across clouds.
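The same decoupling can be sketched in plain Terraform, too - module paths here are hypothetical, assuming each implementation accepts the same variables and declares the same outputs:

```hcl
# Three hypothetical implementations of one "queue" interface.
# Each module takes the same inputs and exposes the same outputs
# (say, "connection_string"), so swapping is a one-line change:

module "orders_queue" {
  # swap for ./modules/queue/confluent-kafka or ./modules/queue/kafka-on-ec2
  source = "./modules/queue/kinesis"
  name   = "orders"
}

output "orders_connection_string" {
  value     = module.orders_queue.connection_string
  sensitive = true
}
```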
I think it's worth doing some more research into what Pulumi makes possible before using it as an example like this in marketing material.
This looks interesting. All the best for your release!
I have a few small feedback items:
- The AWS Account ID is not very well blanked out in your documentation. I can easily see what the actual digits are under the red scratched-out parts.
- I realise English is not your first language, but there are many typos and mistakes in the documentation. Once you get a bit further on, it'll be worth sending it to someone to do an edit pass to clean it up a little :)
- Some of the AWS terms are incorrectly written in documentation. For example 'SecureSecret' instead of 'SecureString'.
- On the subject of secrets, would a better option not be to store a Secret using AWS Secrets Manager with the value you need to acquire? Also, I know you mention that the secret value is used and never stored, but how do we know that? If you have access to the secret via ARN and IAM policy, then in theory, if your SaaS were compromised, the secret would still be retrievable from the customer's account. How about using something like Vault to store secrets?
> On the subject of secrets, would a better option not be to store a Secret using AWS Secrets Manager with the value you need to acquire
You could do that, but you could also just throw money in the bin. Secrets Manager is basically a paid-for wrapper around SSM Parameter Store. Last I checked, the only nice thing it had was automatic key rotation. The price for that? 50 cents per secret per month. That will add up pretty quickly.
Secrets Manager has an SLA. Parameter Store doesn’t.
If Parameter Store goes down or suffers a huge slowdown, well, that's just your problem.
If Secrets Manager goes down or suffers a huge slowdown, then you've got some recourse through support, and a path to getting your money back.
Parameter Store is also a one-by-one thing - one parameter per secret - whereas Secrets Manager lets you store a whole bunch of components inside one "secret".
It’s your choice either way, but for me personally, I’d rather use a service that has an SLA.
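For reference, here's roughly what the two options look like in Terraform (names and values are illustrative):

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

# Option 1: SSM Parameter Store -- one value per parameter;
# standard-tier parameters have no per-secret charge.
resource "aws_ssm_parameter" "db_password" {
  name  = "/myapp/prod/db_password"
  type  = "SecureString"
  value = var.db_password
}

# Option 2: Secrets Manager -- per-secret monthly charge, SLA-backed,
# rotation support, and one secret can bundle several components as JSON.
resource "aws_secretsmanager_secret" "db" {
  name = "myapp/prod/db"
}

resource "aws_secretsmanager_secret_version" "db" {
  secret_id = aws_secretsmanager_secret.db.id
  secret_string = jsonencode({
    username = "app"
    password = var.db_password
    host     = "db.internal"
  })
}
```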
Say I have a company, and we use your service to generate our AWS infrastructure. If a bug in your service causes a misconfiguration that results in downtime for my company, leaked PII/customer info, etc., how much liability are you taking on? Can we sue you?
On the Standard plan we don't take on any liability; but keep in mind that it's all a git-based workflow with rollback to any state possible. TF is mature enough to make glitches extremely unlikely, and we enforce the principle of least privilege.
On the Enterprise plan we are more flexible, and you get things like PCI DSS, SOC 2, etc. We could also act as an "automated DevOps consultancy", with a legal arrangement similar to that of an agency (liabilities included) but without actually providing services beyond enterprise-level support.
Not really related to this tool, but to Terraform: we are hiding a few services behind a VPC, so they are not reachable by Terraform Cloud without the Business plan, which is required to get an agent we can install within the VPC. I cannot use IP restrictions because HashiCorp doesn't expose their IPs. I filled out their contact form 7 days ago and they haven't gotten back to me yet. It's likely just a matter of waiting a bit longer, but being this dependent on one organization for our entire infrastructure is starting to get a bit frightening, especially if it's going to take months before their sales department gets the time to reach out.
Cool project! I'm building something similar at https://apppack.io. It's not terraform based, but also helps setup, manage, and orchestrate AWS resources for devs.
Using AWS managed services is a huge win for maintainability. A lot of host-your-own PaaS tools are spinning up EC2 instances that you're then responsible for maintaining/patching/securing.
Convox, like many other great tools (Cast.ai, Garden.io, Shipa.io, Kubesail), is trying to solve this problem around Kubernetes. But K8s is not a solution to all DevOps problems. It's just a container orchestration engine with a strong ecosystem of instrumentation around it. In reality people also need databases, queues, functions, storage, domains and a bunch of other things that are very well solved by cloud-native managed services. The problem is the glue connecting all of them.
What we do in Digger is automating this glue. Kubernetes or not actually doesn't matter. Our default orchestration engine is ECS Fargate just because it's so worry-free. But you can totally switch to K8S. The value of Digger is automating DevOps - not automating K8S cluster management.
With respect, if you’re based on Terraform, then you’re not taking advantage of any of the special features of AWS, and you could use any cloud provider.
Conversely, if you do take advantage of any special features of AWS, then you can’t use Terraform.
So, why should I use your tool on AWS versus any other provider?
Looks great, congratulations on the launch! I see GCP & Azure support listed only under Enterprise - does that mean the Standard plan applies only to AWS?
Nevertheless, good job. Wish you all the best!
I looked at this when you went live on PH. I have very little experience in this field and it seems like this is an easy way to take advantage of Terraform for someone like me.
We have a concept of Targets - essentially "template generators" that follow a particular architecture and include modules for most commonly used services.
When you connect your repositories, Digger asks you to confirm a few basic options, like your container port or the build command for your webapp. By connecting a repository you define a Service. You can also define Resources, like databases. This way you describe the "logical structure" of your stack. No infrastructure is created at this point.
Then you create environments - and it is at this point that Terraform is generated, combining the "logical structure" of the stack with that particular environment's configuration.
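To make that concrete, here's a purely illustrative sketch of the kind of Terraform that could be generated for one environment - this is not our actual output format, and all names and paths are invented:

```hcl
# Hypothetical illustration only -- not Digger's real output.
# The "logical structure" (one containerised service + one database)
# combined with the staging environment's configuration:

module "backend" {
  source         = "./generated/modules/container-service"  # invented path
  name           = "backend"
  container_port = 8080        # confirmed when the repo was connected
  environment    = "staging"
}

module "backend_db" {
  source      = "./generated/modules/postgres"              # invented path
  name        = "backend-db"
  environment = "staging"
}
```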
Just would like to note that if you use something like this, do your own security audits and keep your own repositories - otherwise this could be an easy attack vector.
Yes very true, we are aware and mindful. We have a number of open-source "Targets" (template generators) plus allow customers to use their own. But that has to be configured explicitly.
Very fair request! We are actually planning to support multiple "compilation targets" in the future, including CloudFormation and CDK code generation. We just started with Terraform because it's the most widely used in the DevOps community and makes it easier to support cloud providers other than AWS, which is appealing to scale-ups and larger teams.
Yes, that would be quite straightforward - just connect a repo and you're good to go as long as you put it into a container
That said, if your entire project or most of it is a WordPress or Drupal site, you could be better off using specialised hosting like Kinsta or WP Engine. They provide lots of nice extras specifically for WP, whereas Digger is more for building SaaS applications with a bunch of backend services and webapps.
Not really - we are an _optional_ abstraction layer for infra-as-code. Terraform isn't really abstracting away infrastructure; it's a one-to-one mapping of every resource. A bit like assembly language.
We are a compiler of higher-level concepts into Terraform, with an option to just go and write Terraform if you need something very custom.
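To illustrate the "assembly" analogy, here's a heavily truncated sketch of what a single container service means in raw Terraform - names and values are illustrative, and a lot is omitted:

```hcl
variable "private_subnet_ids" { type = list(string) }
variable "app_sg_id"          { type = string }

resource "aws_ecs_cluster" "main" {
  name = "main"
}

resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  container_definitions = jsonencode([{
    name         = "app"
    image        = "myrepo/app:latest"
    portMappings = [{ containerPort = 8080 }]
  }])
}

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = var.private_subnet_ids
    security_groups = [var.app_sg_id]
  }
  # ...and that's before the VPC, subnets, load balancer, IAM roles,
  # log groups and DNS records that a real setup also needs.
}
```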
It's a very understandable viewpoint. We believe that you not only can - you should.
It may sound a bit utopian today, but look at what these managed cloud services really are: the new hardware. You had to manage every piece of hardware yourself in a mainframe system, because how else? Until you didn't have to.
It's just so wasteful to require people to do all this semi-manual work. Frankly, it doesn't make any sense. It has very little to do with the actual products people are building. Sooner or later in such arrangements, higher-level concepts emerge and establish themselves - just like web frameworks most recently, or hardware abstractions (drivers) in operating systems much earlier.
I think you can pay someone to provision infrastructure, then document very well what everything does and hand it over to a human...
But completely handing it over to a third-party service does not seem convincing to me.
Even if true, that's a normal progression. Consider something like a FaaS platform built on Kubernetes, built on containers, built on Linux... it's turtles all the way down with everything now.
The normal, healthy version of this is building on things that are decent at what they do, but would be easier to work with for common use cases with another layer of abstraction.
Doing it primarily because multiple layers suck when used for their intended purpose may very well be useful, but it's not a sign of health.
This is like having three layers of assembly that are all horrible to write and are built one on top of the other, all operating at roughly the same "level" of the hardware/software stack, and then coming along and writing C for the third one, which will in turn generate the second, and that, the first, which will, finally, actually generate something a processor understands.
Whatever demand there is for this isn't this product or company's fault, but it's definitely a sign that something's not right. Config primitives being driven by orchestration scripts being driven by orchestration scripts being driven by orchestration scripts.
The product may be entirely fine, but the situation is ridiculous.