For those worrying about concentration ... the market can get even more concentrated than it is now. In the 1880s, 80% of the market was related to railroads. That concentration always mean reverts, but it could take some time.
I feel like the situation in the 1880s was in a very, very different environment to today, though. How many public companies even were there 140 years ago...? (I actually tried to find this out just now with some quick searching, and wasn't able to find anything that looked relevant, so solid data would be helpful...)
Particularly in the latter part of the 20th century, the number of companies choosing to go public seems to have increased quite a bit. At the same time, there's been a huge wave of consolidation, meaning that even if there are fewer public companies than there otherwise would be, a higher share of the total economy is likely to be made up of public companies.
Akka focuses on enterprise agentic systems, with an emphasis on creating certainty and solving scale problems. We have a customer, Swiggy, that runs >3M inferences per second across a blended set of models, both ML and LLMs, with a p99 latency of roughly 70ms.
This level of throughput is achieved by embedding an in-memory database within the agentic process; the clustering system then automatically shards and balances that in-memory data across nodes, with end-user routing built in. Combined with non-blocking ML invocations with back pressure, you get the right balance of performance.
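To make the "non-blocking invocations with back pressure" part concrete, here is a minimal sketch using the classic Akka Streams API. This is not the actual Swiggy pipeline, and Akka 3's SDK may look different; InferenceRequest, InferenceResponse, and callModel are hypothetical stand-ins, and the sharded in-memory state would be a separate piece (e.g. Akka Cluster Sharding) not shown here.

    // Hedged sketch: bounded-parallelism, non-blocking model calls with back pressure.
    import akka.actor.ActorSystem
    import akka.stream.scaladsl.{Sink, Source}
    import scala.concurrent.Future

    object InferencePipeline extends App {
      implicit val system: ActorSystem = ActorSystem("inference")
      import system.dispatcher

      final case class InferenceRequest(payload: String)
      final case class InferenceResponse(result: String)

      // Hypothetical non-blocking call to an ML/LLM endpoint.
      def callModel(req: InferenceRequest): Future[InferenceResponse] =
        Future(InferenceResponse(req.payload.reverse))

      Source(1 to 1000)
        .map(i => InferenceRequest(s"request-$i"))
        // At most 64 invocations in flight; beyond that the stream back-pressures
        // upstream instead of queueing unboundedly or blocking threads.
        .mapAsync(parallelism = 64)(callModel)
        .runWith(Sink.foreach(resp => println(resp.result)))
    }

The bounded parallelism is what keeps tail latency predictable under load: you size the in-flight limit to what the model servers can sustain rather than letting requests pile up.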
We did a long podcast and a couple of blog posts that laid out the rationale for why we moved from Apache to BSL, which still converts back to Apache after 36 months. See Emily Omier for the specifics.
It came down to survival. The company faced a bankruptcy event because customers were using the software without contributing back, and after exhausting the alternatives we needed to change the license model to create a more sustainable approach.
The consequence of this choice was less adoption from OSS projects and ISVs, who need a flexible licensing model for embedding and redistribution. It also encouraged the Pekko fork, a branch that is now 2.5 years old and that has helped older projects and OSS distributions maintain their position without financial consequences.
It is not cheap to maintain Akka, and after 15 years we have turned a profit, albeit barely. We are finally growing, we have a prosperous future, and most of our spend goes into development. The change also allowed us to create Akka 3, a simpler model for devs within enterprises paired with a consumption-based model that should be significantly cheaper than the traditional libraries, and cheaper than the cost of adopting most any other framework. We can debate the merits of different business models, but we couldn't have maintained the 50 CVE fixes and created a modern version of Akka if we hadn't taken this step.
We need a better strategy for appealing to the OSS community once more. To appeal to startups and academics, we offer free commercial licenses and subscriptions, which nearly 200 accounts have signed up for in the last 18 months.
Well, Vert.x and Spring are maintained by Red Hat and Broadcom. Both of those companies measure the profit and loss of those projects against their broader orchestration and platform sales (Kubernetes). They fund app dev frameworks only to the degree that they drive profitable adoption of their other commercial offerings. Broadcom in particular, after the VMware acquisition, has trimmed staffing in areas that do not directly impact the Tanzu bottom line. Not all Vert.x and Spring customers need or desire that coupling, and that poses an interesting dynamic that is different from us.
We are a pure-play app dev platform, and that gets to the heart of why the business model is different. I'd argue that we are very motivated to make sure customers are successful with app dev, as that is our bottom line, whereas our rivals are financially incentivized by infrastructure sales, not app dev outcomes.
Wow, thank you very much for your reply, especially for how polite it was when I was decidedly impolite. I sympathize with how hard it is to make money in the software world, and I know absolutely nothing about business so of course take whatever I say with a boulder of salt.
That said, and I realize that this is crass but it's also honest: Akka's profitability isn't my problem. When I am looking to import a library for my job, I try my best to weigh the pros and cons of each (as we all do), and when I see a BUSL that's an immediate red flag; if Akka were the only cool concurrency library in the JVM world then I'd just put up with it, but when there are viable alternatives like Vert.x it's extremely hard to go to my employer and ask them to spend $5000/month + $0.15/Akka-hour [1], especially since we run thousands of individual JVMs, and running a comparable thing in Vert.x cost us nothing (albeit with having to do tech support ourselves). Whether or not it's "fair" that Vert.x is a pet project from Red Hat or VMware and therefore doesn't have to worry about financing is sort of orthogonal to whether or not I choose it or Akka.
This isn't meant to shit on Akka; it's very cool software. I'm just frustrated by the BUSL because it gives the illusion of an OSS license, the initial marketing around it looked like an OSS license, and I wasted about 15 hours writing some Akka code only to realize that I had to throw it away because there was no chance I was going to get my employer to approve a PR with BUSL-licensed libraries that would have cost us hundreds of thousands of dollars a year.
Again, apologies that this is rude, and if Akka/Lightbend/Typesafe is making a profit then of course all the best to you, but this is just my rationale.
Re-reading this, I apologize for how hostile I come off. You're not trying to sell me, you're just giving justification, which is fine even if I'm not a huge fan of the license.
The remote dev environment space is heating up, with quite a few variants and competitors now emerging in this generation of vendors. I started Codenvy and sold it to Red Hat, which implements Eclipse Che and Eclipse Theia as CodeReady Workspaces.
There is increasingly limited differentiation between the various vendors. The biggest improvement areas needed now are simpler configuration and faster boot times for complex projects (pre-built code, cached artifacts, pre-configured IDE plug-ins).
Personally, I don't think this is going to be a space where one company "wins the market". What's best for developers is having flexibility and choosing the right tool for what they're building and their team. So I'm hoping that these products become/stay more open and let people pick them up and drop them as needed.
On the differentiation front, there are some common differentiation points, but to your point, there's going to be someone doing the same things as you once there are more and more competitors.
Also - brilliant job with Codenvy! You're one of the pioneers in the field. When we started Nimbus we did our research (even talking to the folks at Koding, NitrousIDE, etc.).
> What's best for developers is having flexibility and choosing the right tool for what they're building and their team.
One of the things that's annoyed me about Codespaces and Gitpod is that they really require/assume the whole team will use that one product. Codespaces does that through its billing and permissions, and Gitpod does it via its PR-based features, etc. Is this something Nimbus is going to try to avoid? What's best for the majority isn't always best for the individual, and that's why we have so many different IDEs. Perhaps my personal workflow is served better by Nimbus while my colleagues' is better suited to Codespaces.
With some tools, like bug trackers, it makes sense to pick one tool for everyone of course. What do you think about remote dev environments?
I think having more people on remote environments does lead to a better overall experience. But the incremental benefit of each additional team member drops until everyone is working in the cloud, because that's when you can do some cool things or stop doing some dull things.
That said, we avoid doing anything that forces an entire team to move to our cloud environments. That's an easy way to piss people off and build a crappy product. Hopefully this approach doesn't burn us.
There are definitely more alternatives in the market now, but I do see the differentiation pretty clearly (maybe because I am building it). Actually, I think you are pointing out some great categories for differentiation: simpler config, faster boot times, and so on.
A few more things we learned from the current market: teams are way more geographically distributed now, so latency and collaboration across different geolocations are also two areas we really focus on.
Those are the public ones. We manage our own with Packer and Ansible, technically a suite of dev VM images, depending on the role or task (i.e. reg vs ML).
Ha! I maintain a public database and people can navigate it by going to tylerjewell.substack.com. It links into a public Google Sheet.
The tracking methodology buckets companies by the primary product they advertise. Withcoherence is in a different category that has a broader platform definition.
There are companies like GitLab and Codegiant that also have remote dev envs as features of their broader product lines.
This is the 7th article in a series we call developer-led landscape where we look at the underlying trends affecting the commercial companies in and around the developer ecosystem.
I find that it's important to periodically understand the big picture. I ask myself, are we doing better as a whole for society and can technology aid in that?
2. DeepMind’s protein-folding breakthrough signals a promising decade for the science of proteomics. Most directly, being able to predict protein shapes will enable us to discover drugs more rapidly.
4. Advancement of geothermal as a potential energy source. The next generation of the industry, however, is a bunch of scrappy startups manned by folks leaving the oil and gas industry who think with today’s technology they can crack 3.5¢/kWh without being confined to volcanic regions.
5. Space exploration. The Space Shuttle entered service in 1981 and launched successfully 134 times. The payload cost to low-Earth orbit (LEO) was $65,400/kg. Today's Falcon 9 is at $2,600/kg, roughly a 25x reduction.
7. Quantum computing experiments and trials are doubling the number of qubits every couple of years right now. Quantum computing will cause a re-imagining of security and cryptography for digital assets if it becomes production grade. https://www.bbc.com/news/science-environment-59320073
I am sure there are many other examples. Even though I am an enterprise software guy working at Dell, the progress we have made in the areas of technology we get to work in has some contributing impact on all of these trends.