Everyone assumes cloud == multi-tenant, but single-tenant cloud has a lot of benefits and is becoming more common: GitLab Dedicated is single-tenant, GitHub is prioritizing and rebuilding "GitHub AE" (their single-tenant cloud offering), and Sourcegraph (of which I am a cofounder) Cloud is single-tenant[1].
For self-hosted products, offering a single-tenant cloud product is much easier than multi-tenant because single-tenant cloud "just" requires automating and managing infrastructure, not completely changing the core application to support multi-tenancy. This also means nobody should be worried about self-hosted being deprecated in favor of a single-tenant cloud product; companies generally find it easy to keep both.
Single-tenant cloud is also a lot more appealing to large customers because the security is (arguably) fundamentally better and scalability is easier to prove (you can just point to scalability on a self-hosted instance, which has the same scaling characteristics).
Expect to see more single-tenant cloud offerings come out for dev tools, especially those that have a big self-hosted offering (like GitLab, GitHub, and Sourcegraph).
So a code monolith I can just dump on a Linux box is in again?
Our figurative ideas, ephemera, seem to be stuck in a loop: take a simple mental model, branch it into a mess, feel overextended, circle back to a simpler mental model, branch it into an overextended mess again.
Monotheism versus polytheism, and dozens of flavors of a religion rooted in a central umbrella idea.
Nation states allow the creation of overleveraged branches of economic Ponzi schemes.
And we’re all certain this is net new and never before seen among humans, because the labels are all different!
Just nitpicking, but a code monolith says nothing about the way it's deployed. It could be a massively distributed system, fwiw (we did that and were quite happy with it).
An extensive dependency chain of brittle logic that needs tons of planning and preparation to update and manage is not an unreasonable description of a microservices architecture from an ops perspective.
Sure, it generated book sales, the crowning of thought leaders, and busy work to soak up easy money for anyone paying attention.
> An extensive dependency chain of brittle logic that needs tons of planning and preparation to update and manage is not an unreasonable description of a microservices architecture from an ops perspective.
I've seen monoliths that fit that description.
Once you've automated the deployment and configuration of load balancers, firewalls, caches, proxies, and have a DB with automatic failover, that is also sharded for performance, spreading the code out across a few machines is not the hard part.
Maybe it’s a bit literal but I see lots of people at computers planning updating just like I did 20 years ago.
From the IT worker's context a lot has changed, but the end user outputs are still: video game, chat app, email app, todo app, fintech app, dating app, state management DSL.
AI isn’t going to change the outputs so much as minimize the people and code needed to generate them. Because we’ve mined the value prop of desktops and phones to death.
Materials science, additive manufacturing, biology, are outputting actual net new knowledge. Consumer facing IT is whittling a log into a chair leg, grabbing a new log, whittling a chair leg… but faster!
> From the IT worker's context a lot has changed, but the end user outputs are still: video game, chat app, email app, todo app, fintech app, dating app, state management DSL.
All of those are working at scales 10x-100x what they were 20 years ago.
Back in 2002 people had to worry about how many emails they had on their machine. Searching unlimited emails? Not happening.
Now with SSDs, better search indexes, more memory, more CPU, handling instantly searching gigabytes of emails on my laptop is not even considered to be a "problem", it just is.
I can drop a hundred 10 megabyte GIFs into a Discord thread and my phone will render everything just fine. Go back to 2008 and, well, there isn't any equivalent because no one was crazy enough to build a platform where you could even try doing that.
OKCupid's backend was written in C++ and was probably the pinnacle of what dating site backend design will ever be, so actually you have a point there. :-D
A good todo app can geofence[1] your position and remind you to get milk when you are at the supermarket! The amount of tech making that possible is insane. IMHO todo apps have a long way to go; it is sad that Android is going backwards in this regard.
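The core of that geofence trigger is just a great-circle distance check. A minimal sketch (illustrative names and coordinates, not any real app's code):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (haversine formula)."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(here, fence_center, radius_m=150):
    """True when the current position is within the fence radius."""
    return haversine_m(*here, *fence_center) <= radius_m

# Hypothetical supermarket fence: fire the "get milk" reminder on entry.
supermarket = (52.5200, 13.4050)
if inside_geofence((52.5201, 13.4052), supermarket):
    print("Reminder: get milk")
```

In practice the hard part isn't this math; it's the OS-level support for waking your app on a region transition without draining the battery, which is exactly where platform regressions hurt.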
> Consumer facing IT is whittling a log into a chair leg, grabbing a new log, whittling a chair leg… but faster!
That is the entire history of computing.
Our faster whittling has allowed other fields to improve themselves many times over.
Yes! We're doing this at my job for our single tenant cloud offering for a data processing and analytics product.
Monorepo, single Go binary, dump on an instance via cloud init, run it as auto scaling group with count of 1. Only dependency is S3 and database service like RDS.
Super simple for us to build, maintain and operate. Still get a reasonable economy of scale in right sizing the instance / db for the customer workload.
Easy to have a customer set the same thing up themselves for self-managed / any IaaS or on prem version.
More portable than any pre-containerized contraption, because all our enterprise clients know how to manage instances, only a few have container expertise, and it's trivial to wrap it in your container of choosing anyway.
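A rough sketch of what that per-customer bootstrap can look like: build the cloud-init user data that the auto scaling group (count of 1) runs on boot, fetching the single binary from S3. The binary name, bucket, and env vars here are all hypothetical:

```python
# Build cloud-init user data for a single-binary, single-tenant instance.
# Assumed names: "myapp" binary, "myapp-releases" S3 bucket, MYAPP_* env vars.
def user_data(customer: str, version: str, db_url: str) -> str:
    return f"""#cloud-config
runcmd:
  - aws s3 cp s3://myapp-releases/{version}/myapp /usr/local/bin/myapp
  - chmod +x /usr/local/bin/myapp
  - MYAPP_CUSTOMER={customer} MYAPP_DB_URL={db_url} /usr/local/bin/myapp &
"""

script = user_data("acme", "v1.4.2", "postgres://rds.internal/acme")
print(script)
```

The same script works for a self-managed customer on any IaaS, which is what makes the "run it yourself or let us run it" story cheap to support.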
> Our figurative ideas, ephemera, seem to be in a loop of taking a simple mental model, branching it into a mess, feeling over extended, circling back to a simpler mental model, branching it into an over extended mess.
It's pretty standard, the pendulum swings to one side and then eventually swings back to the other. If you stay flexible you can just ride it back and forth throughout your career.
> So a code monolith I can just dump on a Linux box is in again?
Aren't you grossly extrapolating a tad? Nothing suggests there's a "code monolith" involved in the story. Just because the topic is single-tenant instances. Also, single-box deployments are completely orthogonal to "code monolith". Finally, just because a company offers a product that does not mean there's a major architectural shift in the industry.
You can still put it on a Linux box. This is more like entering into a support contract with GitLab, where they set up the SaaS and the cloud infrastructure for you, instead of it being intermingled with their other multi-tenant systems.
What this is trying to solve for is companies that can't buy their other offerings, which they said are enterprise and/or heavily regulated companies.
Pretty much everything on the cloud is inherently multi-tenant at some layer of abstraction, but just because the underlying infrastructure is multi-tenant doesn't mean the application layer needs to be (or that the fact that it's hosted on multi-tenant infrastructure needs to be relevant at all to it).
It’s still an accurate term under that definition. Single-tenant cloud is technically multi-tenant with respect to the underlying cloud provider (AWS/GCP/etc.), but not with respect to the software service provider (GitLab in this case).
But I believe language and communication are emergent, decentralized phenomena. Single-tenant cloud is a good and useful term.
GitHub has deprioritized AE due to Azure capacity and data consistency issues. Maybe AE lives on in some different format, but the Azure-based platform has taken a backseat to other enterprise features for GitHub.
I wouldn’t be surprised if they pivoted back, but they lost a lot of talent on that team
I agree. At FusionAuth (my employer) we have a single-tenant SaaS solution. I think this has legs and serves a real need.
Interesting to note that you mention mostly dev-time solutions (version control, source navigation).
I think the proposition is even more compelling for components of your application that are used at runtime (like auth servers, databases or message queues).
Wins for the company offering the self-hosted solution:
* Easier to build the SaaS offering as you mention.
* Can leverage same code for SaaS and self-hosted versions.
* Unique value proposition: "run it yourself or let us run it, or migrate between these as needed" has been a compelling message for some of our clients
Wins from the developer perspective:
* Easier to version the tool like a library. No more surprise upgrades. Especially when you're using a component that other pieces of code depend on, devs want control over the upgrade process.
* No noisy neighbor problem (at least, no more than you'd have using a VM in a public cloud).
* Still a SaaS, so you get ease of delivery/usage/maintenance.
The major cons:
* it costs more.
* deployment time can take longer (you're not just adding a row in a database and configuring a hostname).
* making sure folks know when/how to upgrade (it's still a learning experience for many devs to have a single tenant SaaS solution).
I had an interesting discussion with the Cloudility folks about the three models for multi-tenancy:
Honestly, curious ... if I understand this correctly, you are deploying multiple versions of your application? So if you have 100 clients, you have 100 deployed versions of your application?
Do you have infrastructure in place to deploy all at once? All I can think about is a time back in the dark ages where I had 50 different instances of an application deployed and some clients didn't want to upgrade. Some didn't care. And some did. It was an absolute nightmare trying to keep all of the individual systems in sync.
I get the feeling that's not what you're describing here though? It sounds like a really convenient setup you guys have, but I just can't envision it very clearly.
Anyhow, thank you for sharing a bit of your infrastructure.
> if I understand this correctly, you are deploying multiple versions of your application? So if you have 100 clients, you have 100 deployed versions of your application?
That is correct.
We've written a lot of code against our cloud provider's APIs to spin up a new instance of our application. The application itself is pretty simple: a Java runtime, Java code, and a relational database (plus optional Elasticsearch).
> It was an absolute nightmare trying to keep all of the individual systems in sync.
We don't try to keep all the applications in sync. We let folks upgrade on their schedule, not ours. We inform them of features and security issues and let them make the assessment on when is best to upgrade. These are devs and this is sensitive user data, so they tend to be receptive to upgrading when needed.
It's absolutely a tradeoff between more granular control and operational simplicity (for us; the end user doesn't really care).
FYI, I gave a conference talk about our infra evolution which led to that podcast interview I linked. I can try to dig up the PDF if that would be of interest (it wasn't recorded).
> So if you have 100 clients, you have 100 deployed versions of your application?
We in essence do that. We have about 300 customers, each with their own deployment.
We have a set of active major versions which our customers can run, and then bug fixes etc in minor versions gets deployed automatically.
When the customer is ready they can bump the preferred major version, and the database etc gets upgraded over the following night/weekend (depending on activity level).
When signing on, the user can select the version to run, defaulting to the highest minor version of the preferred major version (after the upgrade is ready, of course). They can select one of the 5 latest minor versions per major version.
The update service checks for any new builds once per hour and grabs them, informing the "boss users" of any new major version if available.
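That selection policy can be sketched in a few lines (illustrative names and a toy data model, not their actual code):

```python
# Versions are (major, minor) tuples. Policy as described: default to the
# highest minor of the customer's preferred major; offer the 5 latest
# minors per major as alternatives.
def available_versions(builds, major):
    """The 5 latest minor versions for one major, newest first."""
    minors = sorted((b for b in builds if b[0] == major), reverse=True)
    return minors[:5]

def default_version(builds, preferred_major):
    """Highest minor of the preferred major version."""
    return available_versions(builds, preferred_major)[0]

builds = [(3, m) for m in range(8)] + [(4, m) for m in range(3)]
print(default_version(builds, 4))     # (4, 2)
print(available_versions(builds, 3))  # [(3, 7), (3, 6), (3, 5), (3, 4), (3, 3)]
```

The nice property is that "roll back" is just picking an older tuple; nothing has to be redeployed as long as the DB schema stays backwards compatible.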
This makes bugs and issues have less impact. If we screw up, the user can always just use one of the previous versions until we fix it. And once we fix a bug we just have to make a new build and tell the user to log back out and in again in an hour's time.
As for keeping it in sync, our installations are fairly stand-alone as such, though we do have to account for local state. So we make sure database upgrades are backwards compatible as far as possible, i.e. old code should run fine on a newer db schema. Automated services follow the default version policy (so they get upgraded automatically). And we don't keep too many major versions around, about a year's worth, so customers have to upgrade at some point. At least if they want to receive support.
For us it's what's allowed us to do customer specific solutions and logic within our general application, which has been one of our strong points compared to many of our competitors.
But yeah, one of the downsides is when, like last week, we discover a bug in a library of ours which we've used for a while. Then there's like a dozen branches to patch and build. And as mentioned, upgrading the db schema might require care and/or preparation to ensure older versions can still run against it.
How does one handle cross-customer data work in a single-tenant SaaS arch? Do you pipe a lot of "exhaust" data from those envs into a shared env for analysis, ML model development, etc.?
For products like GitLab and Sourcegraph that started self-hosted, there is not really any cross-customer data or cross-customer functionality. This is what customers expect. They want their data to be isolated and not shared.
For other products needing cross-customer features, I think you'd need to define clear APIs and pipelines for exporting some form of the data that the customer can inspect to be certain it is not violating their security/privacy expectations. I'd love to hear from people who have built such a system!
A single-tenant app is also better from a data security point of view: there's no way to make a mistake that allows access to someone else's data if it resides in a different database/object storage. On the other hand, a single-tenant app is more expensive to scale: the memory footprint and caching layer are not shared between clients.
Please don't let this be the kickoff to gitlab starting to remove the self-hosted option like Atlassian/Jira decided to do. I know it's open source and a fork would happen but I would prefer not going through this crap again like I am currently having to deal with Atlassian. What happens to all the premium features we pay for right now? They aren't open source.
I don't see this as an attempt to kill off their self-hosted offering. They charge the same as their SaaS version with some slight differences in features/limits (limits being removed as they're offloaded to your infrastructure.) The self-hosted version should in theory have a better profit margin than their SaaS version. Seems like the Gitlab Dedicated offering is probably just their self-hosted version automated and hosted on Gitlab infrastructure. Appears the demand was there and it probably wasn't much extra work to make it happen so why not.
At least for the US government, that's not going to happen. "The government controls the servers" is a non-negotiable contract term. We even have our own AWS regions for US government work.
Nah, that's looser than you think. Contractors deploy to the government cloud regions on AWS and Azure all the time. There's no reason GitLab can't become a qualified contractor and gain access.
The government cloud regions are much more about meeting compliance with all levels of FedRAMP and DoD requirements. Such as: exclusively US citizens may access the systems, even among the cloud host's staff (so no foreign/remote/visa employees), plus a whole list of other requirements big, small, and annoying (like FIPS) that affect everything from software to the physical building.
GitLab has already confirmed in comments here that removing self-hosted isn’t something they’re going to do.
That said, as you note yourself, the government is perfectly fine w/ not controlling the servers. GitLab could offer single-tenant (or heck, even multi-tenant) SaaS in AWS GovCloud and sell to government customers.
I bet differently. Gov CIOs are being forced to adapt to rapid business solutioning and minimal overhead. It takes years to operationalize a GitLab instance, and then it requires permanent O&M that is less adaptive to IT cultural shifts.
Gov IT leaders will move to Gitlab SaaS overnight, once it's FedRAMP approved and migration is enabled through a click of a button.
Maybe less favorable for AWS and their contracts oriented to long-term gov owned compute.
There’s plenty of people that want dedicated deployments without having to manage the details of the deployment and upgrades. Especially those that already run their core on one of the target cloud platforms.
To be fair so was Atlassian. It just takes the CEO seeing the amount of money they'll make and save from pushing their customers to SaaS to change the direction for the company.
It's usually not a money thing; it's that the customers are usually unable to maintain the service internally, resulting in extremely outdated, insecure, and poorly performing versions of your product everywhere, generating a lot of support issues and bad vibes around your product.
I think that's fair but on the flip side there's ways of handling on prem much better than Atlassian does. GitHub would be a good example of that. It's throwing the baby out with the bathwater.
Well this is fascinating. There are other GitLab hosting companies out there, like GitlabHost[0], but it will be interesting to see how the market shifts with GitLab itself entering.
With the way things have been going and them showing decreasing amount of care for smaller orgs and more push towards enterprise, I'm gonna say we'll see the discontinuation of at least some of the various official self-hosted packages for nonsense marketing keyword reasons.
What does single-tenant SaaS even mean? All of your company's data will be in a separate AWS account? An isolated Kubernetes cluster? An individual RDS instance? And how are you ensuring that the cloud provider you rely on is also single tenant? It is a meaningless term unless the service describes exactly how they are isolating customers (and it doesn't seem like they are in this post). In my experience the term is just used by salespeople to assure paranoid CIOs and nothing else.
I'll attempt to explain to clarify my own understanding.
SaaS is a delivery mechanism. Tenancy is an isolation model. To your broader point, the isolation model is implementation specific.
Multi-tenant SaaS means a single deployment for all tenants, and the data is delivered over the internet.
Single-tenant SaaS means a separate deployment for each tenant. I think common usage means a separate database per tenant. Single-tenant can also include entirely new infra for each tenant with private networking which is what Gitlab describes.
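One way to picture the difference, as a sketch (all names hypothetical): multi-tenant routes every request to one shared database scoped by a tenant ID, while single-tenant resolves each tenant to entirely separate infrastructure.

```python
# Multi-tenant: one shared database, rows scoped by a tenant_id column.
def multi_tenant_query(tenant_id: str) -> str:
    return f"postgres://shared-db/app SELECT * FROM repos WHERE tenant_id = '{tenant_id}'"

# Single-tenant: each tenant maps to its own deployment. There is no
# tenant_id in the data model; the whole stack belongs to one customer.
SINGLE_TENANT_DEPLOYMENTS = {
    "acme": "postgres://acme-db.internal/app",
    "globex": "postgres://globex-db.internal/app",
}

def single_tenant_query(tenant_id: str) -> str:
    return f"{SINGLE_TENANT_DEPLOYMENTS[tenant_id]} SELECT * FROM repos"
```

The failure modes differ accordingly: in the multi-tenant sketch, one forgotten WHERE clause leaks data across customers; in the single-tenant one, the worst a routing bug can do is point at the wrong deployment, which per-tenant credentials and networking typically also prevent.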
The press release says customers can pick the cloud and region they want to use so I suspect the answer will be 'yes' for those questions.
A ~1000 person company I'm familiar with moved from GitHub to a self hosted GitHub because it was too easy for engineers to hit a button and accidentally publish a repository/gist to the public. That shouldn't matter, but sometimes it does. I'm not sure if that's the same on GitLab.
Usually it means each customer gets their own DB and instance of the app. This is pretty good protection against a software bug reading another customer's data.
A couple jobs ago we had to take our entire SaaS offering (a security product) and reproduce it in the EU for GDPR. This exercise evolved into my Systems team being asked to create private deployments for a handful of Fortune 500 / Fortune 10 sales opportunities. The private deployments enabled Sales to land these Enterprise deals with customers that were very risk-averse to having their data in a multi-tenant SaaS product. We ended up with around 10 of these private deployments and some very large big name customers.
We did it entirely with Ansible on AWS or GCP, on accounts owned by us. This was before Terraform was 1.0, and Ansible enabled us to quickly reproduce our deployments. It wasn't sexy or pretty. It worked well enough to get the job done. The best aspect of using this model is that it gives your engineering team total control over the private SaaS deployment and unlimited access. Whereas On-Prem or Customer Cloud deployments are an entire other bag of cats. It is a great middle ground for Enterprise customers that won't do multi-tenant SaaS.
I am only sharing this anecdote because I have seen this single-tenant SaaS model work in the past, and I think more engineering teams should strive to be able to reproduce their whole environments for private deployments.
I worked at a company that did this with Python and Terraform. Another great benefit is being able to rollout product changes based on customers risk tolerance. Some customers will be more interested in new features while others will prefer stability. On the engineering side, you can do canary-style rollouts to each isolated environment
Some enterprise customers will want special features or configurations that might not make sense for everyone else; this setup also makes those a lot easier (special network connectivity like IP allowlisting, network peering, VPN tunnels, cipher selection).
I would use Pulumi if I had to do it over again. I use the Pulumi Python SDK for some internal things at my work currently, and it's a much better DX than fighting HCL all day.
We actually tried to adopt Terraform early in 2015, but we hit some of the nasty state bugs back in version 0.5 or 0.6 IIRC, and we hosed one of our deployments and couldn't recover. That burned the Terraform bridge for us and we just used Ansible to automate everything.
Good question. Our EU deployment was on AWS in eu-central-1. TBH I don't know. I was told it was for GDPR and other compliance reasons and as long as it was being hosted in the EU it checked the boxes for our EU customers.
If I remember correctly, GitLab used to offer the same kind of service, but discontinued it because managing it was not a core competency that they wanted to focus on. I'm curious what's different now.
When I joined 6 years ago, there were 3 products with a team of about 120 people total.
A tough decision was made to focus on 2 products to increase velocity.
Probably about 2 years ago there were some asks if we offered "single-tenant cloud solutions".
At around that time, we had really rock solid reference architectures that were well tested and confirmed to be scalable.
We are now 2130 team members. There is demand, there is capacity, and there is clarity on how to run it. (6 years ago, "how big should the instance be for x users" was a question that was not answered yet.)
Feel free to ask other questions; if I can answer them I will :)
This looks great. I'd love to use something like this.
The biggest thing I've been hoping for, for years, is a federated mechanism for handling clones and pull requests. Click "fork" on a repo on gitlab.com and get a fork on your single-tenant instance. Push changes to your fork, and open a federated pull request that ends up back on the original repo.
It's an interesting move from GitLab. About 5 years ago they offered single-tenant hosting as well, but they shut down this service. Back then we had just started single-tenant GitLab hosting (focusing on the European market) and were a good alternative, so GitLab referred their single-tenant customers to us.
Fast forward a couple of years and now they are back again. :) In the meantime we also started offering HA GitLab hosting (on AWS, GCP and Azure), so if you're interested but don't want to wait or don't have > 2000 seats, feel free to reach out: support <at> gitlabhost <dot> com or check out our website[0].
So: no pricing, and it's offered by inquiry only, which I can only assume means they have serious reservations regarding the salability of this 'new' product versus, say, a $24-a-month Vultr instance or something...
If you had 60 devs using the service, that's 14k a year. You could have one of them who's on payroll maintain the GitLab container along with the rest of the CI/CD, and I'm not sure it would really matter. Is there something I'm missing?
Exact pricing is not listed, however if you dig around on GitLab’s site [1] it says Dedicated is for purchasers of >= 2000 seats of Ultimate, plus a management fee to cover defraying the infrastructure cost, and the time to look after it for you.
From this even with generous discounting on those seats, I’d infer this is for customers willing to spend >$50k/mo or >$500k/yr.
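As a rough back-of-the-envelope check (the list price is an assumption based on GitLab's advertised Ultimate pricing of around $99/user/month, and the actual discount is unknown):

```python
seats = 2000                 # minimum seat count GitLab cites for Dedicated
list_price_per_seat = 99.0   # assumed Ultimate list price, $/user/month
discount = 0.75              # hypothetical generous enterprise discount

monthly = seats * list_price_per_seat * (1 - discount)
print(f"${monthly:,.0f}/mo, ${monthly * 12:,.0f}/yr before the management fee")
# Even at 75% off, 2000 Ultimate seats is ~$49.5k/mo, ~$594k/yr.
```

So the >$50k/mo floor holds under even aggressive discounting, before adding the infrastructure management fee on top.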
Yes. This is for big corporations who absolutely must not use shared/co-hosted/non-dedicated hardware, but don't want to maintain the thing themselves.
We had several such customers at my previous startup; one of them was a bank, another was a popular household name, neither of which so much as blinked when we quoted a price much higher than what they could get otherwise if they only shopped for the cheapest option.
> neither of which so much as blinked when we quoted a price much higher than what they could get otherwise
And therein lies the business model of throwing good money after bad, and that's not even taking into consideration the enormous cost benefits of self-hosting with one's own physically colocated hardware infra and a meager full time staff.
Not batting an eye, or rather, "...so much as blinked", as you said, is born of business models with reckless budget cruft factored in, which may at first seem acceptable until one merely scratches the surface of cost savings.
Even with a full time staff, and carrier hotel fees, the reliability and overall cost savings of self-hosting would likely not even exceed 15% of what the fully managed SaaS hosting package would cost - and two more points as well...
* Response time of support staff would be under 5 minutes.
* Dedicated support staff would actually need to "dedicate" very little actual man-hours to support functions, freeing them up to have their budgeted labor resources allocated elsewhere in the company most of the time.
This is a wonderfully stark and typical example of how to sell vendor lock-in for a FOSS solution... brilliant!
> Even with a full time staff, and carrier hotel fees, the reliability and overall cost savings of self-hosting would likely not even exceed 15% of what the fully managed SaaS hosting package would cost - and two more points as well...
There's a bigger issue: security updates. With self-hosting, you have to subscribe to a ton of better-or-worse-organized mailing lists, and once a 0-day is published you are in a race between your IT team and the exploiters (who can and will find your instance on Shodan).
In contrast, SaaS vendors (usually...) get informed about vulnerabilities prior to everyone else, so you don't have to worry about timely updates.
> And therein lies the business model of throwing good money after bad, and that's not even taking into consideration the enormous cost benefits of self-hosting with one's own physically colocated hardware infra and a meager full time staff.
First, depending on where you are talking about, a "meager" full time staff for that 5 minute response time would be a few hundred thousand dollars, because you'd need multiple people per site to be on-call. Could those people do other things? Maybe! But thinking you can hire people for $35 an hour to be on-call or on-site if the servers go down is really misguided. In some parts of the world that might be possible, but having multiple full-time staff to hit that "5 minute response time" claim is still going to be more expensive than you let on.
Second, you now get to multiply this figure by however many different regions you are in that have to be physically isolated for GDPR or other compliance reasons. And if you're doing any government work? Well, that requires special audits (which are not cheap, and governments love to spend money, regardless of where they are based) and very specific data residency rules and requirements. Even if you're not contracting with a government, highly regulated industries have very specific data residency rules that must be followed, which your local colo may or may not be able to handle. And if it does, that colo is often a lot more expensive than usual.
> Not batting an eye, or rather, "...so much as blinked", as you said, is born of business models with reckless budget cruft factored in, which may at first seem acceptable until one merely scratches the surface of cost savings.
This reads to me like something a consultancy firm that hasn’t actually done the long-term math would say.
Look, for businesses of a specific size, I do agree that self-hosting can be a more efficient and economical model. But that size changes based on usage and is often elastic.
A smaller business that doesn't already have its own on-prem setup and needs single-tenant stuff for regulatory reasons is probably better off looking at getting a dedicated offering from GitLab, or using a third-party vendor who is set up to handle and manage that stuff for them. The hard costs (capex) are often large upfront, and taxes work differently (amortized over time for capex) versus the economics on cloud (opex), which might provide some tax savings that are more beneficial.
A business of a certain size and volume who likely can amortize the costs better over time and has a lot of dedicated staff to do their own specialized and customized work, and who is already very deep in the regulated space? They are going to be better off self-hosting.
But the super huge businesses that have hundreds of thousands of employees and need to follow data residency requirements in dozens of regions? They are probably better off using a mix of both. Dedicated on-prem self-hosting in areas where they have lots of clients and business. Use a single tenancy SaaS for places that have high regulatory needs but that aren’t huge business centers (or that are in regions where it is difficult to setup your own tenancy or where you aren’t incorporated as a business).
There are trade-offs with everything. And this is a product that isn’t for 95% of the businesses out there. But for those that it is for, for a lot of places — especially places that don’t count devops as a core competency or product focus — it’s useful and not blinking an eye at the price doesn’t mean people are burning money. It means that the service is of value for their time and energy and focus.
Disclosure: I work at GitHub, who is obviously one of Gitlab’s competitors. But I find this knee-jerk “just self-host” rhetoric to really miss the nuance of all of this stuff.
I have never understood why companies don't just publish pricing. It too often means pricing is just a game they want to play, for both personal and enterprise use, and it's a game I all too often don't want to play. If pricing is not available, I almost universally move on.
Because when you are dealing with corporates the amount of time needed can vary massively by customer. A bank or healthcare firm will come at you with endless feature requests, support requests etc. over things they claim is a regulatory requirement / they must have ASAP.
You also might offer the first bank a discount, because once you have one on board it's much easier to sell your product to others. They know a competitor has already done all the due diligence and decided you're safe, so the risk that they spend months evaluating you and are unable to move forward is minimal.
I concede that I mainly approach this from a non-enterprise mindset, but even trying to research these things for enterprise can be frustrating. It does seem to make sense for products that are at a large scale, both in deployment "size" or effect and pricing.
Because pricing is always negotiable in enterprise settings. Even if it was published, that wouldn’t necessarily be what would be charged.
I agree with you in a personal sense that it’s frustrating, but in enterprise, it often is too variable to list because if you are a big enough client, you’re going to get discounts and deals that regular folks just won’t. It’s no different than volume pricing in any other industry.
Enterprise software vendors are more willing to milk money from corporates than to leave money on the table. The amount of money a company is willing to pay can be orders of magnitude higher than the vendor imagined. Hence SaaS companies hire a bunch of salespeople and sales engineers to "understand your need", which is a euphemistic way of asking how deep your pockets are. And when everyone is doing this, you can't simply move on; there are only so many options.
Of course, you could see it completely differently: private pricing means a longer sales process, higher labor costs, and fewer customers. Having tiered offerings is common. This is actually what GitLab did: free plan, individual plans, business plan[1]. If you don't need to worry about data residency, you don't need to contact their sales at all.
One more angle: when you are starting up, every buck feels like it's coming out of your own pocket. By the time you're purchasing an "enterprise plan", your company has already attained a certain size and you are spending the company's money, money that would not have ended up in your pocket anyway.
Or your startup is forced to look into "enterprise plans" because features necessary for compliance are locked behind them, without having attained a certain size or "enterprise" money.
I assume you don't work in the Enterprise IT space; this is absolutely 100% the norm. There is no such thing as up-front pricing; they always want you to talk to a salescritter. In some ways this is good, as the salesperson (usually) has the same level of knowledge as a system architect or engineer, or will bring one in, and you can have an open dialogue about how (or even if) their product can solve your problem. It's bad when you know exactly what you want and the salesperson's job is merely to gauge how much they think you're willing to pay and quote accordingly.
And many companies will not even talk to you except through a reseller.
Enterprise pricing can be complicated enough that there may be billions of combinations, and it can require an engineer to do scoping to determine which line items should be included.
> offered by inquiry only which i can only assume means we have serious reservations regarding the salability of this 'new' product
Much B2B is done on price by inquiry. The three columns and 'best value!' is a very modern thing. Even in those cases you'll usually see the enterprise option is 'call us'. I wouldn't make extreme assumptions based on the pricing model.
> Much B2B is done on price by inquiry. The three columns and 'best value!' is a very modern thing.
Yeah. That is the biggest cultural difference between "suit tech" and "t-shirt tech" - the old boomer-generation suits love personal connections and dealings of any kind, while the modern generation prefers efficiency and getting their work done without pseudo-social bullshit.
Yeah, that's actually how much I pay for my instance on Vultr. Granted it's only a few users... It is a bit tricky at times to keep things up to date and secure enough, it would've been great to see a price tag for small teams or something.
I mean, I do get that at that point (<= 5 devs) neither these guys nor any other company of that size actually cares about that particular case study, since one could just stick to the free tier on the main site.
I guess my comment was a bit misleading; what I meant was the entire sysop burden of keeping a single node (which is what I have) running smoothly. I mainly work in development, so at times having to tinker with the OS, keep things up to date (including GitLab itself), stay secure, manage logs, etc. can be quite time consuming.
I do remember the first time I had a bit of a meltdown with the staged upgrade procedure. Granted, the same can be said of any upgrade: you follow the steps toward the latest and greatest. But it was a bit of a job to get up to date from the image provided by Vultr.
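For anyone unfamiliar with why that upgrade was staged: GitLab doesn't support jumping across arbitrary versions in one shot; multi-version upgrades have to pass through certain required intermediate releases. A minimal sketch of planning such a path (the stop list below is illustrative only, not GitLab's official one; check their upgrade-path docs for the real required versions):

```python
# Sketch: compute which required intermediate "upgrade stops" apply when
# moving an instance between two versions. STOPS is a hypothetical,
# illustrative list -- the real required versions live in GitLab's docs.
STOPS = [(13, 12, 15), (14, 0, 12), (14, 10, 5), (15, 0, 5)]

def parse(v):
    """'14.3.1' -> (14, 3, 1), so versions compare correctly as tuples."""
    return tuple(int(x) for x in v.split("."))

def upgrade_path(current, target):
    """Return the ordered list of versions to install, ending at target."""
    cur, tgt = parse(current), parse(target)
    stops = [s for s in STOPS if cur < s < tgt]
    return [".".join(map(str, s)) for s in stops] + [target]

# Each listed version must be installed (and its migrations finished)
# before moving on to the next one.
print(upgrade_path("13.10.0", "15.1.0"))
```

Coming from a stale provider image, that list can get long, which is roughly the meltdown described above.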
I can see GitLab's appeal. I wonder if there is an open-source connector that lets SaaS companies host their application on the customer's cloud of choice? Or is there a similar project?
We heard you didn't like having all your source code scraped for our ML model, so what if we silo you? Would that make you content when we eventually do our scraping again?
[1] https://about.sourcegraph.com/blog/enterprise-cloud