
> You just move the complexity one layer up, at the composition of all those small utilities.

Out of curiosity, what would you call the metric for measuring this tradeoff? Average lines of code per script/program? Average number of scripts/programs needed to accomplish a given task?

I feel like this tradeoff deserves more discussion, e.g. smart programs that do a lot but are bloated (OpenVPN, OpenSSL, and Docker all come to mind) versus smaller programs that do less but chain together (most GNU tools, via piping), plus the extreme ends of that scale.
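To make that concrete (a toy sketch, assuming a Linux-style /etc/passwd and the usual coreutils on PATH; the Python is only there to make the wiring explicit), the small-tools end of the scale is something like cut -d: -f7 /etc/passwd | sort | uniq -c, where each tool is trivial and the complexity lives in the plumbing:

  import subprocess

  # Each small tool has its own interface (argv, env, stdin, stdout);
  # the "complexity one layer up" is all of this plumbing.
  cut = subprocess.Popen(["cut", "-d:", "-f7", "/etc/passwd"],
                         stdout=subprocess.PIPE)
  srt = subprocess.Popen(["sort"], stdin=cut.stdout,
                         stdout=subprocess.PIPE)
  unq = subprocess.Popen(["uniq", "-c"], stdin=srt.stdout,
                         stdout=subprocess.PIPE)
  cut.stdout.close()   # drop parent copies so pipe closure propagates
  srt.stdout.close()
  print(unq.communicate()[0].decode())

Each piece is tiny, but the pipeline itself is now the thing you have to get right.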

Yet I don't even know how much research has been done into this, what the status quo is, or what terms to even look up. It's like the difference between a monolithic application and a microservices application: an abstraction applied to tasks and the ways of accomplishing them, much like we have SLoC or cyclomatic complexity for reasoning about code (though neither of those is perfect either).




Average number of interfaces per solution. If you only have 1 program, you have 1 interface (1 set of command-line arguments, 1 set of environment variables, 1 STDIN, 1 STDOUT). If you have 50 programs, you can have 50 interfaces. So: more interfaces to learn, but also more interfaces to compose.

Composability requires many different interfaces, but not every solution needs composability.
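Roughly (a contrived Python sketch; the flag and environment variable names are made up), the 1-program case is one interface no matter how much the program does:

  import argparse, os, sys

  # One program: one set of flags, one set of env vars, one STDIN, one STDOUT.
  parser = argparse.ArgumentParser(description="does everything in one place")
  parser.add_argument("--field", type=int, default=7)  # hypothetical flag
  parser.add_argument("--count", action="store_true")  # hypothetical flag
  args = parser.parse_args()

  delim = os.environ.get("FIELD_DELIM", ":")           # hypothetical env var
  values = [line.rstrip("\n").split(delim)[args.field - 1] for line in sys.stdin]
  for v in sorted(set(values)):
      sys.stdout.write((f"{values.count(v)} {v}" if args.count else v) + "\n")

The 50-program case has 50 of these, and composing them is where the complexity the parent comment mentions ends up.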


Fair point, but doesn't that also kind of muddy the waters, given that interfaces are also a regular programming construct? E.g. you might have 50 libraries with 50 interfaces that still go into one very large program, no? And in practice that would be very different from chaining 50 different scripts/simple tools together.
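For instance (a contrived Python sketch, helper names made up, same Linux-style /etc/passwd as above), these count as interfaces too, but they're function signatures inside a single process rather than bytes over a pipe between processes:

  from collections import Counter  # one of the 50 "library interfaces"

  # In-process composition: the interfaces are function signatures sharing
  # one address space, not argv/env/stdin/stdout between processes.
  def field(line: str, i: int, delim: str = ":") -> str:  # hypothetical helper
      return line.split(delim)[i]

  def shell_histogram(passwd_text: str) -> Counter:       # hypothetical helper
      return Counter(field(line, 6) for line in passwd_text.splitlines() if line)

  with open("/etc/passwd") as f:
      print(shell_histogram(f.read()))

Same count of "interfaces", very different composition story.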



