
somehow every comment in here is downplaying this achievement as “low” or unimpressive

you truly underestimate the scale of an operation like this

the vast majority of software companies will never count a trillion of anything. even at big companies that operate at that scale, only a small subset of teams will ever work on something this large




The article is too light on details to estimate whether trillions is impressive or not. For example, if my single-server system easily handles 100 million per day and the load is almost exclusively CPU-bound (as with most AI tasks), then scaling to 1 trillion per day might be as easy as buying 10k servers, which is totally a thing that mid-to-large-sized companies do to scale up.
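Back of the envelope (using my made-up 100 million/day/server baseline, not a number from the paper):

    # Hypothetical sizing sketch: servers needed for 1 trillion calls/day,
    # assuming each server sustains 100 million calls/day (illustrative only).
    per_server_per_day = 100_000_000
    target_per_day = 1_000_000_000_000
    print(target_per_day // per_server_per_day)   # 10000 servers
    print(target_per_day / 86_400)                # ~11.6M calls/sec aggregate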

The fact that makes this Meta paper impressive is NOT scaling up to 1 trillion per day; it's that they manage to do so while keeping request latency low and CPU utilization high. Anyone who's been with Heroku long enough probably remembers when instances would suddenly sit 80% idle while requests were still slow. That was when Heroku changed their routing from intelligent to dumb. Meta is doing the opposite here, reducing overall deployment costs by squeezing more requests out of each instance than would be possible with a simple random load balancer.
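To illustrate the routing difference with a toy sketch (this is the textbook idea, not Meta's actual scheduler): a "dumb" balancer picks an instance at random, while an "intelligent" one joins the shortest queue, which is what keeps utilization high without blowing up tail latency.

    import random

    # Toy model: queue_depths[i] = requests currently waiting on instance i.
    queue_depths = [0] * 16

    def route_random():
        # "Dumb" routing: ignore load entirely (what Heroku switched to).
        return random.randrange(len(queue_depths))

    def route_least_loaded():
        # "Intelligent" routing: join the shortest queue.
        return min(range(len(queue_depths)), key=queue_depths.__getitem__)

Even the cheap middle ground, power-of-two-choices (sample two random instances, route to the less loaded one), gets most of the tail-latency benefit without tracking global state.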


>then scaling to 1 trillion per day might be as easy as buying 10k servers [...]

I doubt that. How would you distribute the requests between those 10k servers? An instance of mod_proxy_balancer?


DNS round robin so that clients get randomly distributed among multiple load balancers

They have ~12M RPS, so about 10 HAProxy servers should do the trick.
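The arithmetic behind that (assuming roughly 1.2M RPS per HAProxy box, which is an assumption; real capacity depends heavily on TLS, proxy mode, and payload size):

    # Rough front-end sizing for 1 trillion calls/day.
    calls_per_day = 1_000_000_000_000
    rps_total = calls_per_day / 86_400        # ~11.6M requests/sec
    rps_per_box = 1_200_000                   # assumed per-HAProxy capacity
    print(round(rps_total / rps_per_box))     # ~10 load balancers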


It's an interesting paper, but they've made some odd trade-offs, sacrificing latency for resource efficiency, which makes the design feel niche, especially for FaaS tech. And the TPS they're hitting is surprisingly low for something supposedly in widespread use at their company. Some of their suggestions at the end already exist in some form in other FaaS products, too.

To be clear, I think this paper is awesome and this platform is not a trivial piece of engineering, but it doesn't seem particularly novel, nor does it come close to the larger workloads that public cloud services handle.

>the vast majority of software companies will never count a trillion of anything

As others have noted, it's not impossible that many of our own laptops have executed a trillion function calls. The devil is in the details here for systems researchers and engineers, and based on those details XFaaS isn't nearly as novel as, say, Presto was.
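For a sense of scale (a toy calculation, nothing to do with XFaaS itself): a single core making cheap function calls at, say, 100 million calls/sec, which is plausible for a compiled language, gets through a trillion in a few hours.

    # How long a trillion trivial function calls takes on one core,
    # assuming ~100M calls/sec (an order-of-magnitude guess).
    calls = 1_000_000_000_000
    calls_per_sec = 100_000_000
    print(calls / calls_per_sec / 3600)   # ~2.8 hours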


This is HN; there are definitely users on this site who have experienced these workloads or worked at places that handle them.


And this is also the HN where people boast they could rebuild $SUCCESSFUL_SOFTWARE on their own over a 3-day weekend.

There are tons of very brilliant and very smart people here, but there are also many who are too fond of themselves or who have trouble understanding a problem's ramifications in real life and real business.


How do you know the difference between stupidity and ambition unless you try?


Ambition is saying "I think I can bootstrap a successful competitor to $POPULAR_SOFTWARE if I work hard enough; I have the talent and perseverance."

Adding "in 3 days" is stupidity.


The best inoculation against hubris is trying to fly to the sun.

Humans, to generalize, are biased towards talking the talk over walking the walk.


12 million QPS isn't nothing, but it's pretty common at big companies.


So what? Meta is a trillion-dollar company. It should be able to build a website that works.

Compare its budget to WhatsApp's before it was acquired!



