Congrats on the raise, but I'll be honest: it really rubs me the wrong way that they renamed to "Serverless Framework" from JAWS. They should have picked a different name. The word "serverless", for better or worse, was what the community settled on. Lots of companies and individuals use it in many different ways, and did so before this project existed. But they are trying to own it[1]. Not being a good community member, IMO.
Edit: An example of the kind of thing I am concerned will happen[2]. Use the word "serverless" in a project? Get a take down notice.
Since TESS (Trademark Electronic Search System) requires a session, here is your reference [1] in image format. It's a trademark for SERVERLESS and also SERVERLESS FRAMEWORK both filed on November 25, 2015.
Are there other examples of an open-source community project taking on several million dollars in venture capital funding? It seems a bit odd to me.
I'm not sure I'm comfortable using a presumably open-source, free (as in beer and speech) tool knowing that the group behind it will have to find a way to monetize their users in order to justify the investment from VCs at some point down the road. Open-source developers should of course be able to be compensated for their work, and the project has to find a way to sustain itself (I work for a company whose main product is open-source, so I know this better than most), but the venture capital model doesn't seem like a good fit with the interests of the community, in my opinion.
That said, Serverless is a great tool, and congrats on the 1.0. Thanks to the team for their hard work.
There certainly are other open-source tools, like Ansible, Chef, Puppet, HashiCorp's tools, MongoDB (and many more), that started out as open-source projects and are still OSS champions.
We have many plans for monetisation and are working closely with small- to enterprise-scale companies on building products and services around the Framework that help you once your infrastructure has reached a large scale. More info to come in the future.
To expand on my last point, how are you balancing the needs of an open-source community (steady, stable feature development, good communication/outreach/support, dedication to supporting existing products) with the needs imposed on Serverless by the VC funding model (acquiring new users quickly, launching new products you can monetize, getting hockey stick growth)?
The Serverless Framework is at the core of all our monetisation strategies, so we need the Framework to be spread to as many developers and into as many teams as possible and allow for way more complex infrastructure to be built. So by necessity we'll be pushing really hard on moving the framework forward.
I don't think the needs of an open-source community stand in stark contrast to the VC funding model, because without the Framework getting a lot of traction, our other products aren't as interesting. So we'll be working on getting that traction. And we can't do this just by ourselves. We wouldn't be here without the contributor community (literally, because they implement so many features), so without good communication, outreach, support, etc., we won't be able to grow fast enough.
I can only tell you that it is our true intention to push the Framework forward very hard and build monetisation around it as much as we can, to build this into a long-term sustainable company. The more help and feedback we get to build the right commercial products and get great revenue (so we can give our investors as well as our team a return), the happier everyone is, including the community.
Sorry, yup, I conflated open-source tools and open-source companies into one. Vagrant, Terraform, Packer, Serf, and all the other great stuff coming out of HashiCorp.
Realm, most recently featured on HN here[0], is a good example of this. Open source, widely adopted base product, with optional consultancy and now a full-featured platform for enterprises that need/want it.
- Kickstarting: Django REST Framework's development was successfully funded for a year or more by funds raised on Kickstarter[1]. This model doesn't scale that well.
- Sponsorship: DRF is also a good example of this. Corporate sponsors who use the product are asked to chip in as sponsors, in return they get some branding placement and community goodwill. It seems Serverless was at one point using this model (sponsored by Coca-Cola), it's unclear why they had to stop.
- Services/support model: as popularized by Red Hat, Canonical, etc. Give away the product for free, make money on support contracts.
I recently started using Apex. It doesn't rely on CloudFormation and has support for hacking in Golang support. It's worth a look if you're getting more serious about Lambda development and interested in other options. http://apex.run/
I really love the new development being done to simplify cloud deployments of stateless horizontally scalable services.
Once Serverless supports other λaas offerings I think this will really take off.
In addition to the big boys (Amazon, IBM, Google, Microsoft), I'd love to see some alternative stable open source providers come about. Maybe something built on top of kubernetes or docker swarm.
We're actually talking to everyone you mentioned. I'd love to see some "on-premise" Serverless infrastructure as well. The biggest issue with that is still the event system: a Serverless infrastructure mostly makes sense if you can do things like "when you upload something to this bucket, run this function", and that is hard to replicate in an on-premise system.
I don't get what's so difficult to do on-prem. Your target market on-prem all use Kafka as their event bus and either Mesos or Kubernetes as their execution fabric - what's so tricky about fitting into that?
It's not about how tricky the implementation is, but what the value of setting this up is. With Lambda, for example, you just get this magic that invokes your functions when something happens. You don't have to deal with, manage, or think about this at all. And it's built into many different parts of the stack, from S3 to APIG to Dynamo or SNS.
You can of course do something like that yourself in your own infrastructure, but then every piece of software needs to support it somehow, you need to manage that event bus infrastructure and you most likely have to push those events in yourself.
And that kind of thing is already there, so the appeal that Serverless event driven systems in the cloud have (because the providers give you all of this out of the box) is much harder to achieve when you have to do that yourself.
This seems to come up every time serverless comes up. There should probably be some better docs around this.
It's true each function needs its own connection, but in reality:
1) The containers actually stick around for a while and get reused, so if you write the code correctly it only has to establish the connection once per container, not once per invocation
2) Unless you are doing a lot of traffic, you'll probably only realistically have a few containers running your functions, so it will only be a few connections.
3) If you end up with enough traffic that it actually becomes a problem, it would have been a problem anyway because you'd be running a lot of servers with persistent connections in a more traditional model.
In other words, the number of connections and the setup/teardown overhead is about the same as in a traditional setup, maybe just a little bit more.
Edit: One more thing. Sometimes a counter I hear is "Yeah, but every function needs its own connection". I counter that with the contention that even with a traditional setup, a good abstraction means only one or maybe two functions actually talk to the database -- everyone else should be getting their data from those functions. Also, if you do it that way, that one function can do some smart caching (which survives at least a few minutes with serverless).
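Point 1 above hinges on module scope outliving a single invocation. A minimal sketch of the pattern in Node.js (the `connect()` helper and its return value are hypothetical stand-ins for a real driver, added here just to make the reuse observable):

```javascript
// Minimal sketch of connection reuse in a Lambda-style handler.
// Module-level state survives across invocations of the same container.
let connection = null;
let connectCount = 0; // instrumented so the reuse is visible

// Hypothetical stand-in for a real driver's connect call.
function connect() {
  connectCount += 1;
  return { query: (sql) => `ran: ${sql}` };
}

// Handler: establish the connection on the container's first (cold)
// invocation only; warm invocations reuse it.
async function handler(event) {
  if (!connection) {
    connection = connect();
  }
  return connection.query(event.sql);
}
```

Calling the handler repeatedly in the same process increments `connectCount` only once, which is exactly the per-container (not per-invocation) behavior described above.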
> If you end up with enough traffic that it actually becomes a problem, it would have been a problem anyway because you'd be running a lot of servers with persistent connections in a more traditional model.
I think this is the part I disagree with. DB connection pools are much, much smaller than the total number of functions that touch a database in any reasonably complex application.
Yes, scale is always an issue, but it seems to me that in this serverless world, where you have one connection per function, you run into scale issues a lot (an order of magnitude?) faster than the "traditional" way.
> a good abstraction means only one or maybe two functions actually talks to the database
In a serverless world, does this mean you would run a handful of functions with DB connections, and other functions would proxy db requests through them? I can see that working ok I suppose.
For what it's worth, what we've been doing is building a separate service for talking to the database, which itself maintains a single common database connection pool.
I can't say yet whether that turns out to be a good idea or a poor one. (I'm not the one who designed and built it.) One notable feature of our implementation is that it eliminates all possibility of using transactions -- a design oversight that worries me.
You could build the transaction support into the database service. Then when you need to write multiple things, you put them into a queue as a single work unit, and let your abstraction deal with taking the work unit off the queue and putting into the database in a single transaction.
This has the added effect of making your system more reliable because you'll be using queues and you have a shorter window when a process can die and hang a db connection that is trying to roll back.
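The queued work-unit idea can be sketched in a few lines. Everything here (the in-memory array standing in for a real queue, the `rows` array standing in for a database, the snapshot-based "transaction") is an illustrative assumption, not a real broker or driver:

```javascript
// Sketch: producers enqueue all related writes as one work unit; the DB
// service applies each unit atomically, rolling back if any write fails.
const queue = [];

// Producer side: a function batches its writes into a single work unit.
function enqueueWorkUnit(writes) {
  queue.push(writes);
}

// Consumer side (the DB service): apply one whole unit as a "transaction".
function processNext(db) {
  const unit = queue.shift();
  if (!unit) return false;
  const snapshot = db.rows.slice(); // begin: remember pre-transaction state
  try {
    for (const write of unit) write(db); // apply every write in the unit
    return true;                         // commit
  } catch (err) {
    db.rows = snapshot;                  // roll back the whole unit
    return false;
  }
}
```

With a real queue in the middle, a producer that dies after enqueueing loses nothing, and the DB service's transaction window shrinks to just the time it takes to apply one unit.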
You certainly COULD (and if I were designing it I would have either done that or allowed callers to request a transaction in which case a connection is temporarily reserved for that client and a token returned which can be used to continue the transaction). But the people designing it DIDN'T do either. Which is part of why I question their design.
I wouldn't call it so much proxying the DB requests through it; all the other functions do business logic, and only one or two actually marshal data into and out of the datastore.
So yeah it's kind of like a proxy, but think of a monolithic application. Do you make it so every object in your application talks to the DB, or do you have a DB object, which handles the connection pooling and all of that other stuff? If you have a DB object, that becomes your DB function, and all the other functions talk to it for getting and writing data.
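A toy sketch of that "DB object" pattern, with a `Map` standing in for the pooled connection and the function names (`makeDataStore`, `registerUser`) invented purely for illustration:

```javascript
// Sketch: one data-access module owns the connection; business-logic
// functions go through it rather than talking to the database themselves.
function makeDataStore() {
  const rows = new Map(); // stand-in for the pooled DB connection
  return {
    get: (id) => rows.get(id),
    put: (id, value) => { rows.set(id, value); },
  };
}

const store = makeDataStore(); // the single "DB function"

// Business logic never opens its own connection; it asks the store.
function registerUser(id, name) {
  store.put(id, { id, name });
  return store.get(id);
}
```

In a serverless deployment the store becomes its own function (or small service) holding the connections, and `registerUser` becomes one of the many functions that call it instead of the database.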
Wouldn't it also be reasonable to use a remote API call to store data too?
It's one of the things I have in mind for https://dbhub.io. Thinking it should be a good fit (especially for Serverless apps), but haven't yet written the code to try it out.
Likely to do so in near future though (weeks, not months).
Thanks Miguel. We're going to release more about our plans for the next steps in the Framework soon. I'm currently writing those up and will share them with the community in the very near future. (CTO of Serverless here.)
> 1) What's the schedule on the documentation for best practices for SLS in production?
These are the goals for the next releases, giving you better tooling and best practices to go from a few services to many services.
>2) What's an example workflow ? (something involving multiple devs, testing, CI etc.)
That's still relatively standard, with good testing, CI, CD, etc., but we'll also add more blog posts and docs around that.
> And lastly, What's the timeline on adding support to Google Cloud Functions (in alpha) and Azure Functions?
Can't give an exact timeline, but we're working with Google to be ready once GCF goes into production, and with Azure to get support into Serverless as well. A lot also depends on talking to more users to understand how they want to use those providers.
I saw a link on the side to sign up for a beta of the serverless platform, so I did but just keep getting redirected back to the blog post. Is anything supposed to happen?
Nice work! I am delving into setting up services on Lambda at the moment, and it can get really fiddly and messy. I like the way the Serverless Framework scaffolds projects and makes deployment a one-line operation. Interested to see where this goes.
So it's the Serverless Framework that seems to be considerably tied to AWS servers.
EDIT: Only writing this because it's in fact possible to imagine a serverLESS framework these days (web workers, p2p, etc.), but this seems to be just about more volatile servers.
'Serverless' as currently jargoned is used to connote that the developer does not have to think about servers (or VM's or containers) directly. The 'serverless' buzzword does not mean that there are not servers somewhere doing useful work.
Sorry, can you elaborate on how this means you "do not have to think about servers"? Because almost the first thing in their tutorial is some messing around with AWS.
Serverless is really just a community term for "function as a service". There are servers... you just upload discrete bits of code to the platform, which executes them on demand.
Hmm, if I recall, they had logos of Azure, IBM OpenWhisk, and Google Cloud Functions in their project. Seems those logos have recently been removed. Is this indicative of the future of this project, leaning towards AWS lock-in?
Absolutely not. We definitely want to support all of those providers, but as those integrations aren't in the Framework yet we decided to remove the logos until we actually support them.
Not just because users want those, but because for us it's important to become more provider-independent, as we don't otherwise have a defensible product. So multi-provider is definitely coming, and we're in constant contact with many providers.
[1] http://tmsearch.uspto.gov/bin/showfield?f=doc&state=4802:t7t...
[2] https://twitter.com/sindresorhus/status/776142274564616192