Likely the opposite. Reducing optionality reduces the opportunities for dependency injection.

When you load 50 (or, counting the full dependency tree, 500) independently developed modules, the probability of failure (or vulnerability) is roughly the sum, not the product, of the failure probabilities of the components. This is the same reason not everything is implemented as a fleet of microservices.

Few worry that the TCP implementation in your OS is a monolith, and one that is tightly integrated with the IP code. This will increasingly make sense further up the stack, for good or ill.

There are trade-offs both ways.




Just couldn't resist nerding out about probabilities. Summing failure rates would potentially get you above a 100% chance of failure. What you really want is the product of the success rates (assuming the modules fail independently). So, with a 1% failure rate per module and 50 modules, that's 0.99^50, which comes to about a 60% success rate, or a 40% failure rate overall. For 500 modules the success rate drops below 1%.
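A quick sketch of that arithmetic (hypothetical numbers: a uniform 1% per-module failure rate and independent failures):

    # Overall success rate for n independent modules,
    # each failing with probability p.
    def success_rate(p, n):
        return (1 - p) ** n

    for n in (50, 500):
        s = success_rate(0.01, n)
        print(f"{n} modules: {s:.1%} success, {1 - s:.1%} failure")

    # Prints:
    # 50 modules: 60.5% success, 39.5% failure
    # 500 modules: 0.7% success, 99.3% failure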

But yeah, agreed that the odds of failure are probably lower when pulling in a handful of well-exercised monoliths than hundreds of miscellaneous modules of varying robustness.



