You're not wrong to think Tailscale is primarily a software company, and yes, salaries are a big part of any software company's costs. But it's definitely more complex than just payroll.
A few other things:
1. Go-to-market costs
Even with Tailscale's amazing product-led growth, you eventually hit a ceiling. Scaling into enterprise means real sales and marketing spend—think field sales, events, paid acquisition, content, partnerships, etc. These aren't trivial line items.
2. Enterprise sales motion
Selling to large orgs is a different beast. Longer cycles, custom security reviews, procurement bureaucracy... it all requires dedicated teams. Those teams cost money and take time to ramp.
3. Product and infra
Though Tailscale uses a control-plane-only model (which helps with infra cost), there's still significant R&D investment. As the product footprint grows (ACLs, policy routing, audit logging, device management), you need more engineers, PMs, designers, QA, support. Growth adds complexity.
4. Strategic bets
Companies at this stage often use capital to fund moonshots (like rethinking what secure networking looks like when identity is the core primitive instead of IP addresses). I don't know how they're thinking about it, but it may mean building new standards on top of the duct-taped 1980s-era networking stack the modern Internet still runs on. It's not just product evolution, it's protocol-level reinvention. That kind of standardization and stewardship takes a lot of time and a lot of dollars.
$160M is a big number. But scaling a category-defining infrastructure company isn't cheap, and it's about more than just paying engineers.
> but it may mean building new standards on top of the duct-taped 1980s-era networking stack the modern Internet still runs on.
That’s a path directly into a money-burning machine that goes nowhere. This has been tried many times by far larger companies, academics, and research labs, and it never works (see all the proposals for things like content-addressed networking). You either get zero adoption, or you end up running it over IPv4/v6 anyway and give up on most of the problems you set out to solve.
IPv6 is still struggling to kill IPv4 twenty years after support first appeared in operating systems and routers. That’s a protocol with a clear upside, somewhat socket-compatible, and backed by the IETF and hundreds of networking companies.
But even today it’s struggling and no company got rich on IPv6.
IPv6 has struggled in adoption not because it’s bad, but because it requires a full-stack cutover, from edge devices all the way to ISP infra. That’s a non-starter unless you’re doing greenfield deployments.
Tailscale, on the other hand, doesn’t need to wait for the Internet to upgrade. Their model sits on top of the existing stack, works through NATs, and focuses on "identity-first networking". They can evolve at the transport or application layer rather than ripping and replacing at the network layer. That gives them far more flexibility to innovate without requiring global consensus.
Again, I don’t know what their specific plans are, but if they’re chasing something at that layer, it’s not crazy to think of it more like building a new abstraction on top of TCP/IP vs. trying to replace it.
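To make "identity-first" concrete, this is roughly what it already looks like in practice: a minimal sketch of a Tailscale ACL policy file (the real format is HuJSON, which also allows comments; the group, user, and tag names here are made up). Rules are written against identities and tags rather than IP ranges:

```json
{
  "groups": {
    "group:eng": ["alice@example.com", "bob@example.com"]
  },
  "acls": [
    { "action": "accept", "src": ["group:eng"], "dst": ["tag:prod:443"] }
  ]
}
```

The network-layer details (which IP a device currently has, which NAT it sits behind) stay out of the policy entirely.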
I’m the CTO at OpsLevel, where we’ve been running a Rails monolith for ~6 years. We started on Rails 5, upgraded to 7, and are currently moving to 8. Before this, I worked on Rails at PagerDuty (including splitting a monolith into microservices) and on Shopify’s “majestic” monolith.
The best thing about Rails is its strong, opinionated defaults for building web applications. It handles HTTP request routing, data marshalling, SQL interactions, authentication/authorization, database migrations, job processing, and more. That means you can focus on business logic instead of wiring up the basics.
Rails isn’t as fast or lightweight as Go, but they solve different problems. For most web apps, the bottleneck is I/O, not CPU. Rails optimizes for developer productivity, not raw performance, and that tradeoff is often worth it, especially when speed of iteration matters more than squeezing out every last cycle.
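The I/O-bound point is easy to demonstrate in plain Ruby (no Rails involved). In a toy sketch like the one below, sleeping stands in for a database or network call; because I/O waits release Ruby's GVL, threads overlap them, which is exactly why an app server can serve many slow-I/O requests despite Ruby's modest raw speed:

```ruby
require "benchmark"

# Simulated I/O-bound work (e.g. a DB query). Sleeping releases the GVL,
# so concurrent threads overlap their waits.
def fake_db_call
  sleep 0.05
end

# Run the same work serially and with one thread per call.
serial = Benchmark.realtime { 8.times { fake_db_call } }
threaded = Benchmark.realtime do
  8.times.map { Thread.new { fake_db_call } }.each(&:join)
end

puts format("serial: %.2fs, threaded: %.2fs", serial, threaded)
```

Serially the waits add up (~0.4s here); threaded, they largely collapse into one wait. For CPU-bound work the GVL prevents this overlap, which is where the tradeoff bites.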
>For most web apps, the bottleneck is I/O, not CPU.
We just had a blog post submitted to HN that suggests otherwise, at least for RoR.
Luckily we have YJIT, and we’re finally realizing we may actually be CPU bound, which means we can look into that rather than always assuming it’s an I/O or DB problem.
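If you want to check whether your own process is actually getting YJIT, a quick diagnostic sketch (YJIT ships with CRuby 3.1+; it is enabled via `ruby --yjit` or `RUBY_YJIT_ENABLE=1`, and Ruby 3.3+ can also enable it at runtime with `RubyVM::YJIT.enable`):

```ruby
# True only when this CRuby build includes YJIT *and* it was turned on.
yjit_on = defined?(RubyVM::YJIT) && RubyVM::YJIT.enabled?
puts yjit_on ? "YJIT is on" : "YJIT is off"
```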
Fair enough, but the system isn’t set up to optimize for the happiness of founders and employees. It’s set up to maximize returns, which, as you say, end up concentrated in a very few big outcomes.
When you control the devices you're deploying to, there's little reason not to deploy as often as you can. Keeping changesets small helps a great deal in isolating bugs, and you can do that either by slowing down product iteration (and getting poorer feedback from each release) or by releasing more often. This is ubiquitous in web development.
Weekly releases (or slower) are appropriate when you rely on users to update their software or firmware. Most mobile app development works this way.
I worked with a bunch of smarter-than-me UW grads after graduating.
My “how to write large systems” takeaway from that early point in my career was to focus on the interfaces between the various parts. What I’d never thought about until now is that this is a very data-centric viewpoint.
- What system has what data?
- In what shape?
- What shape does the next system need its data in?
- Are the interfaces between these orthogonal? Shallow? Easy to grok? Tight (as opposed to leaky)?
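Those questions become concrete as soon as you write the boundary down. A small Ruby sketch with hypothetical names: a downstream invoicing system declares exactly the shape it needs, and an adapter converts to it, so the upstream record can grow fields without leaking them across the interface:

```ruby
# The shape the next system needs, stated explicitly.
Invoiceable = Struct.new(:customer_id, :email, :plan, keyword_init: true)

# The interface between systems: names the translation in one place.
def to_invoiceable(user)
  Invoiceable.new(customer_id: user[:id], email: user[:email], plan: user[:plan])
end

# Upstream record with far more data than invoicing should ever see.
user = { id: 42, email: "dev@example.com", plan: :pro, password_hash: "redacted" }
invoice_input = to_invoiceable(user)
```

The interface is shallow and non-leaky in exactly the sense above: invoicing can't accidentally depend on `password_hash`, because the shape it receives doesn't have one.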
In addition to the "show your dev work to the CEO" use case, tunnels (ahem, "funnels") like this are useful when you're building functionality that requires pointing webhooks at your devlocal environment.
e.g., if you're building an integration with any of the myriad SaaS tools that fire webhooks, you can test in devlocal and hand out a URL like whatever.tunnel.com.
There are tools like ngrok and localtunnel that exist to do just this. I'm looking forward to replacing those with TS Funnels.
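For reference, the webhook workflow with Funnel comes down to one command (syntax as of recent Tailscale releases; the port and hostname here are illustrative, and Funnel must first be allowed in your tailnet policy):

```shell
# Expose the local dev server on port 3000 to the public internet over HTTPS.
tailscale funnel 3000
# Point the SaaS tool's webhook at https://<machine>.<tailnet>.ts.net/
```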