Hacker News

Fun story time: a few years back I worked at a major EU "traditional" (non-FAANG) IT company, and they were using Apache to handle web traffic. Rumour was that nginx, despite already being the backbone of half the internet, was dismissed as "too new" :) (we're talking mid-2010s)



Haha, that reminds me of a company I worked at that used some MS library for .NET.

I think it was like Microsoft.WebMatrix.Data

It was essentially a micro-ORM written using `dynamic`, but it had no caching, so all the reflection made it perform terribly. Dapper was a drop-in replacement, but it was dismissed as "demoware" despite it running Stack Overflow. I left that place two weeks later.


This story really shows the hype around nginx. It didn't become the backbone of half the internet until 2021.

Don't get me wrong, I've been an nginx user for the past decade at least, but when it first came out I was very skeptical. People were saying Apache was too bloated, but you could already run Apache with as few modules as you liked, so that was a weak argument.
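For what it's worth, "Apache with as few modules as possible" is mostly a matter of which LoadModule lines you keep. A minimal sketch for serving static files with httpd 2.4 (paths and ServerRoot are assumptions, module names are the stock ones):

```apache
# Minimal httpd.conf sketch: load only what's needed for static files.
ServerRoot "/usr/local/apache2"
Listen 80

# Keep the module list short instead of loading everything the distro ships.
LoadModule mpm_event_module   modules/mod_mpm_event.so
LoadModule unixd_module       modules/mod_unixd.so
LoadModule authz_core_module  modules/mod_authz_core.so
LoadModule mime_module        modules/mod_mime.so
LoadModule dir_module         modules/mod_dir.so

DocumentRoot "/usr/local/apache2/htdocs"
TypesConfig conf/mime.types
DirectoryIndex index.html
```

A stripped-down build like this drops most of the per-request overhead people pointed at when calling Apache "bloated".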

Then there was the C10K problem, of course. Basically, a lot of hype for nginx, but it came out on top in the end, so I guess it doesn't matter.


Waiting for everybody else to test a product before you migrate is a perfectly common-sense strategy, especially if that product does not give you any special edge over the competition.


I think the irony is that newness is irrelevant once something is being used at a certain scale. You can battle-test more in an hour at scale than a small project could in ten years.


How do you battle-test in an hour the upstream developer's ability to provide security fixes? To provide updates at all as the ecosystem evolves (e.g. the rise of systemd, advancements in worker models, SSL library API changes, new Lua versions)? To keep backward compatibility with modules?

Your approach might have led you to invest heavily in lighttpd at some point in time.


Also: how do you battle-test a security track record?

It takes years to tell whether serious vulnerabilities are being found often or not.


By the mid-2010s it had definitely proved its mettle.

Arguing that it wouldn't have provided enough benefit to justify the switch is different from saying it was unproven by that point.


I think some of the pressure to update products is irrational. The mere fact that something is newer and better is not, by itself, a reason to upgrade.

If Apache did everything they needed, I can imagine a company completely forgoing any investigation of nginx, and that might have been the cause of that kind of statement. Or maybe it was just a way to explain the decision to younger devs who could not accept "if it works, don't break it". We don't know.

The correct way to make this kind of decision (and many others) is to look at the RoI and your available bandwidth to run multiple projects.

I am still keeping some very old (but still actively developed) products around. I am busy with other projects, and there just has not been any pressure to update. When I do have time available, I prefer to pick the project with the highest RoI rather than update things because of peer pressure.


Well, I think you and I are saying the same thing: don't chase the shiny new thing.

That said, by that time Nginx was a proven performance upgrade over Apache 1.x and 2.x. Quantifying that value is tough but it certainly had value attached to it.


Whether there is any value depends heavily on your application.

If your Apache is responsible for 0.1% of your costs, then that is the most you can save, even if nginx were magically zero-cost (zero effort to install and maintain, zero computing resources, zero outages, zero hiring, zero project risk, etc.).
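That ceiling is just Amdahl's-law-style arithmetic: the saving is bounded by the component's share of total cost. A toy sketch with made-up numbers (the function and figures are illustrative, not from any real deployment):

```python
# Amdahl's-law-style bound: even an infinitely cheaper replacement can only
# save the fraction of total cost the old component was responsible for.

def max_saving(total_cost: float, component_share: float, speedup: float) -> float:
    """Cost saved by making one component `speedup` times cheaper."""
    component_cost = total_cost * component_share
    return component_cost - component_cost / speedup

total = 1_000_000        # hypothetical yearly infrastructure cost
apache_share = 0.001     # Apache accounts for 0.1% of it

print(max_saving(total, apache_share, 10))            # 10x cheaper server
print(max_saving(total, apache_share, float("inf")))  # magically free server
```

Even the "magically free" case caps out at 0.1% of total cost, which is the commenter's point.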

In my experience, most projects have far more important problems to solve and better opportunities to pursue than chasing those very small improvements. Frequently it does not matter if one is 10 or even 100 times faster than the other.


Again, we don't disagree. But the purported reason for rejection was the newness of Nginx.


No, switching creates risks: the risk of configuration errors leading to downtime or vulnerabilities, of unexpected delays in deployment, of running into bugs you were unaware of.

Many software projects fail by facing delays due to excessive complexity and tech churn. Moving carefully helps.


> No, switching creates risks.

Absolutely - but again, that's apparently not why NGINX was dismissed as an option.




