Hacker News

Are serverless and container based architectures really that popular in the real world, or is it yet another HN bubble? I'm still happy using virtual/cloud servers for anything/everything.



Serverless is pretty big in the PHP world, but people there tend to call it "shared hosting".


Are there shared PHP hosts that bill by the request? TFA indicates that is a key difference of serverless, and why it's not shared hosting. The billing model matters to some users as much as the implementation.

Serverless is just shared hosting like S3 is just FTP.


This, and in shared hosting you're paying for an instance; if you exceed your traffic or utilization limits, you're done.

In 'serverless' (awful, awful marketing name for function-calls-as-a-service), your computation is executed on one of a pool of executor units from which you are abstracted away; so in theory your throughput scales to accommodate demand (the impact on your wallet notwithstanding).

But this is where the name comes from -- there's no 'server', as in a box you can ssh or sftp into and muck around with files or configurations.


There used to be quite a few PHP hosts that billed by the request in the past, so, yes?


I've been writing PHP for almost a decade and I've never come across any PHP hosts that bill by the request. Not explicitly, anyway -- there were always those hosts where if you got over some threshold of traffic and it caused a certain amount of load on the servers, they'd cut you off with no warning/notice. Those hosts suck. Never seen anything like Lambda for PHP, though. Care to share?


Well, there was a host – they shut down years ago, though – which did it.

And there were always the kind of hosts with no quota and no set fee, where you'd pay whatever you chose in a given month, but you'd be expected, on the honor system, to pay more the more you used, with a rough formula for how much would be expected of you (though you'd never be terminated).


With business models like those, it's not surprising they are not still around.


Oh, the honor model one is still around. The other one, which actually billed you, isn’t.

Then again, here a lot of things are honor model or volunteer based, and society still works.


Interesting -- would love to compare one of these to e.g. Lambda. Would you mind sharing a link to one of these services?

Edit: Also, as a peer comment pointed out, Lambda (and the like) have other differences from traditional shared hosting as well. I'm not convinced Lambda is all that similar to the shared hosting I remember from the '90s.


That's quite a good observation; I hadn't even thought about it. It's another round of marketing hype: repackaging an old solution with a cooler name.


I think more bubble than substance for serverless, but I can easily see the appeal and expect that they'll become more useful over time.

Certainly the idea of microservices is a good one, for me and a lot of my use-cases, and that's the start of the journey towards an API-gateway. (The big downsides being less control, and a central point of failure, obviously.)

Containers I think are different, and are used in production a lot.


In my experience, they are steps towards the future. VMs are now the baseline; containers are better; server-less even more so; something like Urbit is a step further.

However, the level of maturity is reversed. Docker is usable now, but you will run into many quirks and inefficiencies. On top of that, it's not standardised in a meaningful way. Only starting this year would I choose it for a new project.

Server-less is even less standard and efficient, but in five years I expect tooling to be usable, though still rough around the edges.

For some reason, we can't just jump to the most ideal architecture but have to evolve there step by step. That's what I would say is the essence of the "worse is better" philosophy.


Why do you define containers as better? Server-less even more so?


In my ideal situation, if I write a function, it's immediately usable wherever I want. As long as it's fast, reliable, scalable, inexpensive and all that jazz, I don't want to think about support infrastructure. Running the infrastructure yourself has no benefit if someone else does it right - it's not the end goal.

So server-less is closest to that ideal. Containers are better than VMs because they are faster and lighter weight. So much so, that people develop new ways of working that would be cumbersome with VMs.


Here are my opinions from working with both over the last year:

Serverless is brand new, and has all the burrs of a freshly cut service. Expect sharp corners, like bizarre documentation and lack of examples. I think it goes without saying that tooling and Best Practices are still immature. In my work (Lambdas) I've found they do well when organizing small pieces of work and event driven systems. They really do talk with all of AWS and event triggers are popping up all over their infrastructure.

For me, Serverless is more about "not writing server boilerplate/provisioning" than "not having servers". The largest hurdle conceptually has been hoisting tangential concerns from the application into the infrastructure primitives. Here are some examples in my current work:

* The API Gateway does routing, some payload modification for headers, as well as authentication and authorization.

* I'm also relying more on security groups and specialized IAM roles instead of application specific authorization code. This leaves me with only the application specific code to be called.

* Lambdas communicate with each other through SNS (and only when authorized via IAM).

* All of my application code (currently) exists as Lambdas.

This comes at a cost of infrastructure complexity and a learning curve. I'm also not using several services that AWS provides just to keep my cognitive load from exploding.
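To make the SNS-decoupled style above concrete, here's a minimal sketch of a Lambda handler subscribed to an SNS topic. The message shape and the `order_id` field are illustrative assumptions, not details from the comment:

```python
import json

def handler(event, context):
    """Entry point for a Lambda subscribed to an SNS topic.

    SNS delivers a batch of Records; the application payload is a
    JSON string inside each record's Sns.Message envelope.
    """
    processed = []
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        # Only application-specific logic lives here; routing, auth,
        # and delivery are handled by API Gateway, IAM, and SNS.
        processed.append(message["order_id"])
    return {"processed": processed}

# Shape of the event SNS hands to the handler, trimmed to the
# fields the code above actually reads:
sample_event = {
    "Records": [
        {"Sns": {"Message": json.dumps({"order_id": "A-123"})}}
    ]
}
```

The point of the envelope-unwrapping is exactly the hoisting described above: everything outside the loop body belongs to the infrastructure, not the application.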

Serverless Overall: Like exploring the jungles of Africa! Lots of really cool ideas and deployments, but here be lions!

Containers are on the march of maturation. For me, the most useful part of Docker is decoupling the application's local needs from the deployment environment. I exploit that decoupling to make sterile local development environments with Docker/Compose. The recent Docker OSX/Windows support smooths out the last few bumps in my dev workflow.

The deployment, however, is still maturing. To be fair, it's mostly a distributed computation problem and not unique to Docker. Kubernetes is a bear, but an improving one. Same with ECS, from what I've heard from friends. On a smaller scale, it significantly opens up the playing field for less popular languages/tools. Heroku's Docker support, for example, allows for any language to be deployed as long as it accepts the Docker contract.

Docker Overall: a) Local is AWESOME! b) Deployment is the Wild West. Still largely unsettled but the Oregon Trail is OPEN!


What are you building that needs to be split up in such a way? And how many people access the system?


> What are you building that needs to be split up in such a way?

"Need" is such a fickle word...

I'm using the Lambdas to build an event sourced system on top of microservices. Because I can hoist so much infrastructure from each service, it makes each service significantly easier to write and manage. This allows the application specific code to change without rewriting the routing/auth(z)/provisioning code along the way. I can even rip out whole services and the others are blissfully unaware.

Because Lambdas are only charged when invoked, costs stay down. RDS is my biggest expense by far, but I'm still under the cost of a Standard 1X Heroku instance. If you have a service that isn't used much, Lambdas can bring your costs to near zero. An added benefit is I can be fairly liberal with deployed Lambdas because I'm not thinking about machine-hour costs. Multiple staging environments? SURE!
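As a rough illustration of why a lightly used service costs near zero, here's a back-of-envelope sketch using Lambda's published pay-per-use rates ($0.20 per million requests plus $0.00001667 per GB-second). The rates and the workload numbers are assumptions for the sketch; check current pricing, and note the free tier would wipe out a bill this small anyway:

```python
PRICE_PER_MILLION_REQUESTS = 0.20   # USD, assumed from published pricing
PRICE_PER_GB_SECOND = 0.00001667    # USD, assumed from published pricing

def monthly_cost(invocations, avg_seconds, memory_mb):
    """Back-of-envelope Lambda bill: request charge + compute charge."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = invocations * avg_seconds * (memory_mb / 1024)
    return request_cost + gb_seconds * PRICE_PER_GB_SECOND

# A lightly used service: 100k invocations/month, 200 ms at 128 MB
cost = monthly_cost(100_000, 0.2, 128)  # works out to about six cents
```

Compare that with any always-on instance, which bills for every idle hour.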

After a little while in the Lambda world, I started to see them as little lightweight workers. They can be told to listen to nearly anything under the AWS umbrella, and AWS is _chatty_. Pair them up with the API Gateway and you can build 3rd party integrations. Before, I had to set up a TON of infrastructure to get these abilities. It's pretty liberating.

Finally, I feel like the ideas in Serverless are both interesting and not well explored. I hope to share what I've found with others as we chart this new territory.

> And how many people access the system?

I'm not sure about what you mean by people. Users? Developers? I'll answer both.

For now I'm the only developer, but I expect that to change within a year. The local Docker-powered dev environment keeps me sane with a simple `docker-compose up -d` to bring it up. The infrastructure hoisting significantly reduces each service's LOC, making them easier to grok. Isolated development environments and very light mocking ease local development and testing. I do pay a penalty when doing integration tests locally, though; I'm punting that to my staging environment for the moment.
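For context, the kind of sterile local environment brought up with `docker-compose up -d` can be as small as a compose file like this. Service names, images, and ports are illustrative assumptions, not details from the comment:

```yaml
# docker-compose.yml -- brought up with `docker-compose up -d`
version: "2"
services:
  app:
    build: .             # the application itself, built from the repo's Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:9.5  # stands in for RDS during local development
    environment:
      POSTGRES_PASSWORD: devonly
```

Tearing it all down with `docker-compose down` is what keeps the environment sterile between runs.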

I have some pre-alpha users who are helping me shape the product. I admit at first glance it looks foolish/overkill/wat for my user size. Before Lambda, this architecture _would_ have been too complex, and prohibitively expensive, for the size of the product as it exists. That's what Lambda (and Serverless, in my mind) brings to the table. It leans on cloud infrastructure and its charge-by-use model to bring architectures like microservices to hobbyists and small businesses. As the tooling and docs improve, I expect it to further blur the lines of "DevOps".

Could you tell I'm a little excited by Serverless/Lambdas?


I'm sure that, as a percentage of deployed products or services, full serverless architectures are minuscule. What I have seen in production are systems made better by use of certain AWS/Lambda integrations.

One crisp example of this is S3 storage of images. When each new image is pushed into S3, S3 kicks off a Lambda function that resizes the image into thumbnails. This S3+Lambda setup replaced a much more involved image processing pipeline with EC2, queues, a data store, etc. The end result was a system much easier to manage, support and reason about.
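A sketch of what that S3-triggered resize can look like, with the AWS/Pillow imports deferred into the handler so the key-naming logic stands alone. The bucket layout, thumbnail prefix, and 128px size are assumptions for illustration:

```python
import io
import os

def thumbnail_key(key, size):
    # e.g. "photos/cat.jpg" -> "thumbnails/cat_128.jpg" (layout is assumed)
    base, ext = os.path.splitext(os.path.basename(key))
    return f"thumbnails/{base}_{size}{ext}"

def handler(event, context):
    """Triggered by S3 ObjectCreated events; writes a thumbnail back to S3."""
    import boto3           # deferred: only available/needed inside AWS
    from PIL import Image  # deferred: the Lambda deployment bundles Pillow
    s3 = boto3.client("s3")
    for record in event["Records"]:
        # Each S3 event record names the bucket and object that triggered it
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(body))
        image.thumbnail((128, 128))  # resize in place, preserving aspect ratio
        out = io.BytesIO()
        image.save(out, format=image.format or "JPEG")
        s3.put_object(Bucket=bucket, Key=thumbnail_key(key, 128),
                      Body=out.getvalue())
```

Everything the old pipeline did with EC2 workers and queues collapses into the event trigger plus this one function.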


Container based architectures like Mesos are definitely being considered for production use, at least in the startup scene (look up jobs on AngelList for examples). Plenty of companies still sticking with the virtual server route though and plenty more who haven't even reached the virtual server stage yet. iRobot is a good case study for Lambda at least: https://twitter.com/awssummits/status/753325988717604864


I wrote a set of internal-use systems using Lambda and API Gateway. I'm in two minds about it; it's handy, but at the moment I don't think it's mature enough for a critical live system.

I'd like better ways to collect execution data for debugging and more visibility all round. With those it would definitely be a contender.


Indeed. Debugging them is painful at the moment. I'm using Apex to deploy a few, and running "apex logs" gets me all the recent logs, but that's pretty far from actual debugging.


Won't setting up good auto-scaling and deployment procedures result in something just like Lambda, except that you had to orchestrate it yourself?

It seems like there's a lot to be said for the cost and time savings of having it done for you...


Plus you don't have the cost of running all that infrastructure 24/7 to have the ability to scale or provide proper security.



