
I figured this point of view would come up in this discussion. What you're really implying here, though, is that it's impossible to produce quality "hand-crafted" (that is, zero-dependency) software, or that it's so time-consuming that it might as well be considered impossible.

I'm not 100% sure I even agree that it takes longer to just program the damned thing yourself than to build on top of a tower of dependencies, but it seems indisputable that the hand-crafted product will be higher quality than one that offloads as much functionality as possible to external dependencies. Why? Because for an external dependency to be useful, it has to address many different use cases. To do that, it has to expose abstractions that exist only to allow reuse. The unused use cases cause memory bloat (which even tree shaking can't fully eliminate), and the abstractions slow things down at runtime.
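To make that concrete, here's a rough TypeScript sketch of the pattern I mean. Every name here is made up for illustration, not taken from any real library: the "reusable" version drags in options, locale tables, and plugin indirection that this particular caller never exercises, while the single-purpose version is a few lines of direct code.

    // Hypothetical "reusable" formatter: options, locale tables, and plugin
    // indirection exist so the library can serve every caller, not this one.
    type Plugin = (value: string) => string;

    interface FormatOptions {
      locale?: "en-US" | "de-DE" | "ja-JP";
      uppercase?: boolean;
      plugins?: Plugin[];
    }

    const SEPARATORS: Record<string, string> = { "en-US": "/", "de-DE": ".", "ja-JP": "-" };

    function formatDateGeneric(d: Date, opts: FormatOptions = {}): string {
      const sep = SEPARATORS[opts.locale ?? "en-US"];
      let out = [d.getMonth() + 1, d.getDate(), d.getFullYear()].join(sep);
      for (const plugin of opts.plugins ?? []) out = plugin(out);  // unused by this app
      return opts.uppercase ? out.toUpperCase() : out;             // unused by this app
    }

    // Hand-rolled equivalent for the one case this app actually needs.
    function formatDate(d: Date): string {
      return `${d.getMonth() + 1}/${d.getDate()}/${d.getFullYear()}`;
    }

The generic version isn't badly written - it has to look like that to be reusable - but you pay for that generality on every call, and the unused branches ride along in the bundle.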

But beyond that... why are you guys always in such an almighty goddamned hurry? You do realize that if you succeed in rushing out an inferior, shameful, mass-produced CRUD app, there's either another one waiting in the wings immediately after it, or there's no reason to keep you around? Why not take pride in your work and insist on spending the time needed to produce the highest-quality, most efficient, smallest-footprint, most user-friendly, snappy, responsive software possible? Users despise the software 99% of us churn out, and I don't blame them - it feels like somebody put in the least work possible so they could leave a half hour early and make it to the golf course.




> but it seems indisputable that the hand-crafted product will be higher quality than one that offloads as much functionality as possible to external dependencies. Why? Because for an external dependency to be useful, it has to address many different use cases. To do that, it has to expose abstractions that exist only to allow reuse. The unused use cases cause memory bloat (which even tree shaking can't fully eliminate), and the abstractions slow things down at runtime.

You seem to be treating "quality" as a synonym for performance. But it doesn't matter how high-performance your GUI is if someone can't use it because, for example, it doesn't support their language or their assistive technology (e.g. screen reader or alternative input). And supporting these things requires -- guess what? -- abstractions, which take a lot of time to implement if you roll your own GUI from the ground up. That's why I stand by my view, which you nicely summarized in your opening paragraph.
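To give a rough sense of what I mean, here's a browser/DOM sketch in TypeScript (the helper names are invented, and it covers only one control and only the keyboard/screen-reader basics): a native element carries its semantics with it, while a from-scratch widget has to re-create them by hand, control by control.

    // Hand-rolled "button" built from a div: everything a native <button>
    // provides for free has to be reattached by hand.
    function makeDivButton(label: string, onActivate: () => void): HTMLElement {
      const el = document.createElement("div");
      el.textContent = label;
      el.setAttribute("role", "button");  // expose semantics to screen readers
      el.tabIndex = 0;                    // make it keyboard-focusable
      el.addEventListener("click", onActivate);
      el.addEventListener("keydown", (e) => {
        // native buttons activate on Enter and Space; re-implement that
        if (e.key === "Enter" || e.key === " ") {
          e.preventDefault();
          onActivate();
        }
      });
      return el;
    }

    // The native element already carries role, focus, and keyboard behavior.
    function makeNativeButton(label: string, onActivate: () => void): HTMLElement {
      const el = document.createElement("button");
      el.textContent = label;
      el.addEventListener("click", onActivate);
      return el;
    }

And that's one widget in one toolkit. Multiply it across every control, every platform accessibility API, and every locale, and "just roll your own GUI" stops looking like craftsmanship and starts looking like years of work your users spend waiting.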

Your last paragraph is a tired straw man. No one has the time to develop software that's perfect in every way. When we sacrifice absolute performance, it's not necessarily for the sake of slapping together crap software to make a quick buck. Suppose I set out to solve an urgent problem in my chosen field (accessibility). Shipping my solution faster doesn't just mean that I can quit work faster and start making money. It also means the solution can start making a positive difference in users' lives sooner. In light of that, it would be irresponsible for me to obsess over maximizing speed or minimizing RAM consumption, as much as some vocal nerds might insist that I have a responsibility to do so. My real responsibility is to the users who are waiting for the solution that I'm developing. Now, that doesn't mean that I should absolutely ignore performance or resource consumption, but I shouldn't obsess over them either, and I certainly shouldn't use them as reasons to waste time developing everything from the ground up when there are so many high-quality components I can use.



