
The fact that a library discourages you from learning its internals is how you know it's well done. If you needed to understand it, you could. The reason you don't is that you've never had a compelling reason to.

If only one person knows how a library works, that is a problem. If a 100-person team maintains it and you're not on that team... it's probably because you have other stuff to do.

Software is an engineering discipline. Computer science is a science. If you are in programming because you want to advance our understanding, great: go work in one of the many fields with large, novel algorithms that need to be understood.

For a typical programmer... Look around. Modern software is one of the best things about the modern world. It does SO much for us. Do you really think we, with a real world distribution of programmer abilities, could do all this without massive division of labor? How much would be missing if we insisted on understanding everything before we use it?

I suspect very few alternative approaches to software would work as well to truly build the modern software defined world where everything from cars to taxes to photography is digital.

Because... whenever someone tries an alternative approach, there usually seems to be a hidden, unspoken assumption that they actually don't want software to be as big as it is.

The end product of software itself these days is a service (in the sense of this article, not the SaaS sense), whereas software built with an understanding-first mindset usually seems to wind up closer to a digital version of a passive analog tool.




> The fact that a library discourages you from learning its internals is how you know it's well done. If you needed to understand it, you could. The reason you don't is that you've never had a compelling reason to.

I don't think we need to protect people from learning internals; they manage to avoid it just fine on their own. I know of many situations where we failed to understand internals even in the presence of a compelling reason.

With apologies for continuing to quote myself:

"I want to carve steps into the wall of the learning process. Most programs today yield insight only after days or weeks of unrewarded effort. I want an hour of reward for an hour (or three) of effort." -- http://akkartik.name/about


I’ve been coming around to the conclusion that some coding patterns, especially overuse of delegation and mutation of inputs, make code hard to learn.

I talk occasionally about viewing your code from the debugger, but that is kind of hand-wavy. I wonder if there's a 'linter' one could write that looks at coverage or trace reports and complains about bad patterns.
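
For what it's worth, here's a rough sketch of what such a linter could look like, assuming folded-stack input (the "frameA;frameB;... count" format that stackcollapse-perf.pl emits). The depth threshold, and the heuristic that a deep chain signals delegation overuse, are both made up:

    import sys
    from collections import Counter

    # Hypothetical trace 'linter': flag call chains deeper than a
    # threshold, as a crude proxy for overuse of delegation.
    MAX_DEPTH = 8  # arbitrary; tune per codebase

    def main(path):
        offenders = Counter()
        with open(path) as f:
            for line in f:
                if not line.strip():
                    continue
                stack, _, count = line.strip().rpartition(" ")
                frames = stack.split(";")
                if len(frames) > MAX_DEPTH:
                    # Blame the frame where the chain gets 'too deep'.
                    offenders[frames[MAX_DEPTH]] += int(count)
        for frame, samples in offenders.most_common(10):
            print(f"{samples:8d} samples in deep chains at {frame}")

    if __name__ == "__main__":
        main(sys.argv[1])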


Yes, lately the Butler Lampson quote echoes round my head: "All problems in computer science can be solved by another level of indirection."

The problem is, we have been adding layers of indirection for 80 years, and each is leaky. So now it's very difficult to do basic stuff, and people are ok with software thick with abstractions.

The next stage should be removing unnecessary layers of indirection, as, like you said, things are much easier to understand and maintain that way.


The remainder of the quote Butler Lampson was citing matters:

"All problems in computer science can be solved by another level of indirection, except for the problem of too many layers of indirection." -- David Wheeler


And performance. You speed things up by removing a layer of indirection.
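
A toy Python illustration (timings vary by machine, and real layers cost more than bare function calls, but the direction holds):

    import timeit

    def work(x):
        return x * 2

    # Pure delegation: each layer does nothing but forward the call.
    def layer1(x): return work(x)
    def layer2(x): return layer1(x)
    def layer3(x): return layer2(x)

    print(timeit.timeit("work(42)", globals=globals()))    # direct call
    print(timeit.timeit("layer3(42)", globals=globals()))  # via three layers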


It's very easy to do basic stuff. It's hard to do basic stuff (or anything) well, since you end up having to solve problems that should have been solved by layers you're building upon.


We haven't been monotonically stacking layers. In fact, sometimes we collapse layers into monoliths. Sometimes we rebuild layers and make them bigger.

But we don't have 900 layers or anything. Performance is... usually pretty darn good, except for unnecessary disk writes.

It's trivial to do basic stuff because of those layers. What's hard is doing low-level things from scratch in a way that doesn't conflict. But at the same time, there's less and less need for that.

I think things are way easier to maintain with a bunch of layers than the old school ravioli code. Most layers are specifically meant to make it easier to maintain, or they just formalize a layer that was already effectively there, but was built into another layer in an ad hoc manner.


> I think things are way easier to maintain with a bunch of layers than the old school ravioli code.

Pasta-based metaphors are the best.


I feel like better code visualization would solve a lot of my problems. Or at least highlight them.


As I've spent more time with flame graphs I realize they are, once you get down to brass tacks, the wrong tool for perf analysis because it's the width of the flame that tells you the most, not the height, and we usually worry about height when thinking about actual flames.
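
To make the width point concrete, here's a rough sketch (assuming the same folded-stack format the flame graph tools consume) that tabulates each frame's inclusive sample count, i.e. its would-be width, directly:

    import sys
    from collections import Counter

    # Width, not height: sum inclusive samples per frame from folded
    # stacks ("frameA;frameB;frameC 123" per line).
    def widths(path):
        totals = Counter()
        with open(path) as f:
            for line in f:
                if not line.strip():
                    continue
                stack, _, count = line.strip().rpartition(" ")
                for frame in set(stack.split(";")):  # count recursion once
                    totals[frame] += int(count)
        return totals

    if __name__ == "__main__":
        for frame, n in widths(sys.argv[1]).most_common(20):
            print(f"{n:10d} {frame}")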

However there are all sorts of little subtle costs in your system that aren't captured by most of these tools due to lack of resolution (I haven't spent a lot of time with Intel's hardware solution) and asynchronous costs like memory defragmentation. Depth and frequency of calls are a useful proxy for figuring out what rocks to look under next after you've exhausted the first dozen. For this reason the flame graph is a useful fiction so I don't poo-poo them where anyone can hear. I can barely get people to look at perf data as it is.

But then I think how I'm turned off by some fictions and avoid certain fields, like parser writing, and wonder if a more accurate model would get more people to engage.


The difficult part about having opinions about the industry of software engineering is that it is, somehow, a largely enjoyable job.

But those two things are never going to be perfectly aligned. Enjoyment of the process does not guarantee working results that people want to pay for, and the thing that people want to pay for may be deeply unenjoyable. You can choose to tackle a subset of the problem space that happens to be both enjoyable and worthwhile, but you ought to admit that you are looking at a subset of the problem space.

I do a lot of other things with my time that are also near the confluence of enjoyable and marketable. I sing, for instance, and choir rehearsal and individual vocal practice are both a lot of fun. But any professional musician will tell you that if you want to be world-class at it, you occasionally have to wring the fun out of it. I've chosen the other option: I'm a volunteer singer with my church choir. I sing because it is fun, and only insofar as it is fun. We do new and not-very-difficult music each week, and an hour of effort is inherently an hour of reward. If my choir director said we're going to spend six months drilling some difficult measures and perfecting our German pronunciation, we'd all leave.

(In that sense, a volunteer church choir is rather like hobbyist open-source development. If it produces an enjoyable result for the public, great, and we do find that rewarding. But there is an upper bound on just how hard we're going to try and on the complexity of what we commit to doing.)

If you want an hour of reward after an hour of programming effort, that's fine! But the employment contract I've signed promises a year of payment after a year of effort. Occasionally, I want to work on things that are immediately rewarding because that helps with motivation. And it's important that such work is available. But I have a job that ultimately demands specific products, some of which just aren't going to be immediately rewarding, some of which do actually take a lengthy learning process, and - most importantly - many of which require building on top of other people's work that I could understand if I wanted but there is no business value in doing so up front.

(Incidentally, in the other direction, there are cases where I want to tackle problems that promise years of reward only after multiple years of effort, and figuring out how to get time to work on those problems is actually hard, too.)

We share almost all our code internally and leverage lots of open-source code so that we all have the option of understanding the code if we need it - but we rely on division of labor so that we have the option of building something on top of the existing base today, if we need that, which is more often what we need.

If you want to work on a hobbyist UNIX-like kernel for enjoyment's sake, great, I'm genuinely happy for you. But my servers at work are going to keep running a kernel with millions of lines of code I've never read, because I need those servers to work now.


Thanks, I largely agree with that. I just don't think society is getting a good deal for a year of effort from a paid programmer. When I said "reward" I meant understanding, not enjoyment. We programmers are privileged enough; I'm not advocating that we optimize for our fun as well.


Separation of concerns is a must for ordinary/mediocre programmers to take part in building complex software. Getting these people involved is how society gets a good deal.


I probably should be more clear: I don't think we should make it hard to learn how they work, or do anything to stop people, and we should probably even make it easier. But at the same time, we should make it unnecessary, so we can get stuff done with maximum reuse and minimum monkey patching and custom stuff.


What is an hour of reward? I understand reward in terms of some form of joyful/insightful/... experience, not in terms of time.


Parent talks about how fast you get results back for the effort (learning, fiddling, etc.) you put in:

"Most programs today yield insight only after days or weeks of unrewarded effort. I want an hour of reward for an hour (or three) of effort."

So they want to be rewarded with at least an hour of saved time, or more productive time, for each hour (or a few) they put in, instead of spending weeks to reap rewards.


I was just alluding to a subjective sense that you got something reasonable for your trouble and efforts. You could do something, or you learned something, or you were entertained. My argument doesn't depend too much on the precise meaning, I think.


I think an older version said, "an hour's worth of reward." Does that seem clearer? Thanks.


"Division of labor is an extremely mature state for a society. Aiming prematurely for it is counterproductive. Rather than try to imitate more mature domains, start from scratch and see what this domain ends up needing."

This is more the gist of the article than "let's not have libraries or abstractions at all"; it's pointing more towards "let's defer those questions until the right time rather than decide them from the jump".

I think Enterprise Java and the insane complexity of modern web dev are indicative of the consequences.

> Because... whenever someone tries an alternative approach, there usually seems to be a hidden, unspoken assumption that they actually don't want software to be as big as it is.

I really don't think that OP is making this assumption.


People’s opinions on this stuff contain a lot of unstated pain points. "Monolith first" argues against a coping mechanism people use because they hate Big Ball of Mud projects.

Anyone who has ever done repair work can tell you how often people wait until it’s far too late to do basic maintenance. Software is similarly bimodal. People either run it till it falls apart, or they baby it like a collector’s item.

We have not collectively learned the trick of stopping at the first signs of pain or “strange noises” and figuring out what’s going on. But mostly we have not learned the trick of insisting to the business that we are doing our job when we do this, instead of plowing on blindly for six to eighteen months until there’s a break in the schedule. By which time everything’s a mess and two of the people you hoped would help you clean it up have already quit.


I think most software engineers know when it's time for maintenance. The issue is that the people who write their paychecks don't want to hear it.


I read too much commit history, including my own, to agree with that. We are too close to the problem, and we enjoy deep dives too much.

I save myself a lot of frustrating work by stopping for a cup of coffee or while brushing my teeth or washing my hair. “If I change Z I don’t need to touch X or Y”


I mean, I do the exact same (I did it Friday), but I don't think this is counter to my point. It's not that I avoid fixing something because I don't want to; it's that fixing it puts me behind on other tasks (that come down from the people writing my paycheck), so I avoid work that won't be clearly rewarded unless I can convince them otherwise.

It's because time pressures are designed in a way that incentivizes you not to do maintenance when you see a mess. It's also not sexy on a resume that you turned a mess into not-a-mess, vs. implementing some new feature.

It takes highly detail-oriented and creative people to develop good software, and those traits tend to drive one crazy to the point of fixing a mess when you see it. Given no constraints, I bet most developers would clean up their code to the best of their ability and fix issues as they come across them. I've been in these no-constraint environments, usually on stuff I write myself for myself, and I don't mind going back and doing a significant refactor when it's clearly needed. Once I'm done, I feel genuinely satisfied that I've done something useful and productive, because I only need to convince myself and because no real external financial pressures exist in this context.


A lot of that pressure is in your own head.

Slow is smooth, smooth is fast. We spend a lot of energy running as fast as we can in the wrong direction.


Totally agreed!


I actually think this applies to the passive analog world too. Very few engineers could tell you how steel or concrete are made. And we all use abstractions / libraries, for instance using tabulated properties of steel alloys rather than knowing the underlying physics.

In fact, put in plainer terms, every engineer would nod in agreement with Hyrum's Law. Everybody has a story of a design that worked because of an undocumented property of a material, and stopped working when Purchasing changed to a new supplier of that material.
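
The software version of the same story, as a contrived sketch (all the names here are made up): the caller depends on an ordering the library never promised, and a "new supplier" (a new library version) breaks it.

    # library.py, v1. Documented contract: "returns the user records".
    # It happens to return them sorted by id; that was never promised.
    def get_users():
        return [{"id": 1, "name": "ada"}, {"id": 2, "name": "bob"}]

    # caller.py: works today, entirely by accident.
    newest_user = get_users()[-1]  # silently assumes the undocumented order
    print(newest_user["name"])     # breaks the day v2 changes the order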


The poster child for this was storage of liquid radioactive waste in barrels of clay cat litter.

Some bright-eyed purchasing agent substituted bark-pellet cat litter, because it was ecological or something -- anyway, not cheaper -- resulting in the need for a cleanup so expensive that how much it cost is classified.


Someone substituted inorganic cat litter with an organic cat litter (specifically, wheat husks), it shut down WIPP for three years at a cost of half a billion dollars, and it's not classified: https://cen.acs.org/articles/95/i20/wrong-cat-litter-took-do...


That sounds like a great engineering story. Do you have a link to a more detailed account? I'd like to add it to my catalog.


Sorry, my searches are unlikely to be better than yours. But "radioactive waste" and "clay cat litter" should narrow things, even without "disaster".


On the contrary. Every engineer is able to learn the process when it matters. And it often does. The type of steel used, the treatment, and the orientation are important to the outcome. You cannot hand-wave them away as "implementation details".


I don't see this as a disagreement with the GP. You say they (all engineers) can learn it; GP says few engineers could tell you how it's done. Those are in perfect agreement. At any moment in time, there is likely no engineer who can fully explain every process used in their domain, or every object created by others in it. That doesn't mean that the majority of those engineers couldn't learn any arbitrary process, or the details of an arbitrary object, in their domain.


> Every engineer is able to learn the process when it matters.

A civil engineer is going to become an expert material scientist, "on need"? I doubt it.


Of course not. A basic understanding? Absolutely, yes.

It's even widely accepted to be an important part of the education.

Just as Karnaugh maps and parser theory are part of computer science curricula. It's not something that's expected to be used on a daily basis, but some general knowledge of how a processor works is absolutely necessary to be able to at least talk about cache line misses, performance counters, mode swaps, etc.


One issue is that the divisions between engineering fields are somewhat arbitrary, and technology doesn't always respect those divisions, so we don't know in advance what education is going to be needed. A second problem is that we make it very hard for young engineers to maintain their technical chops when they can be quite busy and productive doing basic design. In fact, engineering students hear through the grapevine: "You won't use this stuff once you get a job."

As a result, the industry settles on an "efficient" way of managing issues that require depth or breadth, which is to have a few people around who handle those things when needed. That becomes a form of division of labor.


This is true, but perhaps a bit of a tangent to the parable.

I was merely reacting to the statement that most construction engineers wouldn't know "how concrete is made". Most of them could tell you a thing or two about it; it's even in the curriculum. They are even expected to know about different preparations.

The idea that specialization doesn't exist is a bit of a straw man argument and not something anyone seems to argue.


If you're minimizing challenge for your software engineers, you're making worse engineers over time.


> If you're minimizing challenge for your software engineers, you're making worse engineers over time.

Or you're solving your current problems by the most efficient means possible, i.e., engineering. Minimizing one challenge frees mental processing power to worry about bigger issues. I couldn't care less how the memory paging in my OS works. I care about building features that users find valuable.


Such a needlessly extreme example. Sometimes it might mean just working on the backend after doing nothing but frontend for your whole career; sometimes it might mean not assigning a feature of a certain complexity to the person you know will grasp it instantly, and instead assigning it to someone you're less sure about but who wants to do more. It can be just as much about figuring out and cultivating the potential of your workforce, and that can create greater efficiency over time. There are certainly opportunities where this can be accomplished, because if everything is mission critical, the business itself is doing something wrong. The amount and nature of such opportunities will vary across businesses, and perhaps an engineer may need to go elsewhere to seek out further challenge. Your statement that "minimizing one challenge frees mental processing power to worry about bigger issues" presumes a level of uniformity.


On the contrary, by not minimizing the challenges you are left with less productive engineers. The idea that tools etc. should be written internally when they already exist is such a Byzantine way of looking at things. The only challenges an engineer should face should be the problem they're trying to solve. If you're building a SaaS you shouldn't be worrying about rebuilding all the tools a project needs to reach completion.


Yeah, but this isn't the Olympics. If the same people are producing better quality work with less stress... Sounds like you're doing something right even if they are technically not learning certain things that aren't relevant.


I have to agree. The article quotes sound right and wonderful, but thinking about the environment in which I work, there’s not really a practical way to combine our efforts. We simply wouldn’t get things done in time.

Division of labor ensures that changes can happen reliably and quickly. I’m ramping up on an internal library right now and it’s taking a while to understand its complexities and history. I can’t imagine making quick changes to this without breaking a lot of stuff. It will take time to develop an intuition for predicting a change’s impact.

Now multiply my ramp up time by every dev on my team every time they need to make an update. You can imagine productivity slows down and cognitive overhead rises.


Exactly. Division of labor is essential for any complex project. Whoever says the opposite has probably only worked in simple/small systems.


> Do you really think we, with a real world distribution of programmer abilities, could do all this without massive division of labor? How much would be missing if we insisted on understanding everything before we use it?

I agree that you don't want to spend time becoming super familiar with everything that you use, but you should ALWAYS have a high-level idea of what's happening under the hood. When you don't, it inevitably leads to garbage in, garbage out.


>real world distribution of programmer abilities

Way back before the arrival of personal computers, it was already clear to me that I would be highly disappointed if average programmer abilities declined half as far as they have by now.

Some of the most coherent technical projects are the result of a single individual's outstanding vision.

Occasionally, the project can be brought to completion by that one individual, in ways that simply could not be exceeded any other way, all the way from fundamental algorithms through coding to UX and design.

Additional engineers could be considered useful or essential simply to accelerate the completion or launch.

When the second engineer is brought on board, it may work equally well to have them come fully up to speed on everything the first engineer is doing, or to have them concentrate on disparate but complementary work that also needs to be brought to the table.

If they both remain fully versed, then either one can do anything that comes up. Sometimes they can work on the same thing together when needed, other times doing completely different things to make progress in two areas simultaneously.

You're going to end up with the same product either way; the second engineer just allows an earlier launch date. But truly never within half the calendar time: those man-months are legendarily mythical.

For projects beyond a certain size more engineers are simply essential.

Then ask yourself what kind of team would you rather have?

All engineers who could contribute to any part of the whole enchilada from primordial logic through unexceeded UI/UX?

Or at the opposite end of the spectrum, all engineers who are so specialized that the majority of them have no involvement whatsoever with things like UI/UX?

Assuming you're delivering as satisfactory a product as can be done by the same number of engineers in the same time frame.

You're NOT going to end up with the same product either way.

Now aren't you glad it's a spectrum so it gives you an infinitely greater number of choices other than the optimum?


The programmer ability decline is probably just because it's easy now. There are still amazing programmers, it's just that a "mediocre programmer" was hard to find before, when getting ANYTHING done took a ton of skill.

As long as the number of excellent programmers is steady or increasing, I'm not too disappointed if there's a bunch of average ones too, as long as those average ones have great tools that let them still make good software.

It does seem like some of the very top innovative projects are done by one person.

I have very little direct experience with software small enough for an individual to understand, but it seems like a lot of our modern mega-apps are elaborations on one really great coder's innovation from the 80s, and real knock-your-socks-off innovation only happens every few years.

Engineers who don't understand or care about UX can be a really bad problem; bolting a UI onto something meant to be a command-line suite is usually highly leaky.

The opposite seems to be slightly less of a problem, to a point, almost nobody writes sorting algorithms, and writing your own database is usually just needless incompatibility.

I definitely am glad it's a spectrum, because having absolutely zero idea about the real context gets you in trouble, and stuff like media codecs is hard enough that we'd lose half the world's devs (including me) if you needed to understand them to use them.


Few library and service creators seem to think you can treat what they built as a black box. At the very least, they generally come with some advice on usage patterns to avoid.

Writing performant code unfortunately tends to require at least having a basic working model of how all your dependencies work. Otherwise, you tend to find yourself debugging why you ran out of file handles in production.
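
A contrived Python sketch of that failure mode: nothing here violates the documented API, but under production load the first version can exhaust the process's file descriptor limit (on runtimes without prompt finalization):

    def read_config(path):
        # Fine in tests; in production each call can leave an OS file
        # handle open until the garbage collector eventually runs.
        return open(path).read()

    def read_config_safely(path):
        with open(path) as f:  # handle released deterministically
            return f.read()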


Most (useful) libraries are managing state. This means the user of the library should have a pretty good idea of how the state is managed or they will call functions in the wrong order.
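
A minimal made-up example of that ordering hazard:

    class Connection:
        """Toy stateful library: connect() must precede send()."""
        def __init__(self):
            self._open = False

        def connect(self):
            self._open = True

        def send(self, data):
            if not self._open:
                raise RuntimeError("send() called before connect()")
            print("sent", data)

    conn = Connection()
    conn.send(b"hello")  # raises: caller didn't know the required order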


Well, I think that's because most open source licenses disclaim any liability for the creators and come with no support outside of volunteers. For this reason a lot of them provide basic troubleshooting information, and even closed source projects do, to add an extra layer before support.


Programming seems to be one of the few fields where we think it's a bad thing if our tools aren't made by us (this has luckily been waning in recent years, but the pushback against "bloat" we see on every tech forum is proof that the mindset is entrenched).


UPDATE: It seems the good folks at a certain subreddit found this comment and are interpreting "discourage you from learning about it" as "make it copyrighted" or obfuscation or something. I don't think those are signs of a good library!

I'm referring more to things like FOSS JS frameworks with the kind of batteries-included design that prioritizes abstraction and encapsulation over simplicity. Nothing actually stops you from learning them; it just takes time, because they're big, and it's not necessary in order to use them.



