
I feel that one big way in which engineers talk past each other is in assuming that code quality is an inherent property of the code itself. The code is meaningless without human (and computer) interpretation. Therefore, the quality of code is a function of the relationship between that code and its social context.

Cognitive load is contextual. `Option<HashSet<UserId>>` is readable to someone knowledgeable in the language (`Option`, `HashSet`) and in the system (meaning of `UserId` -- the name suggests it's an integer or GUID newtype, but do we know that for sure? Perhaps it borrows conventions from a legacy system and so has more string-like semantics? Maybe users belong to groups, and the group ID is considered part of the user ID -- or perhaps to uniquely identify a user, you need both the group and user IDs together?).
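To make that ambiguity concrete, here is a minimal Python sketch (translating the Rust-style type into `typing` notation) of one way to pin the semantics down. The integer backing, the `banned_users` function, and its None-vs-empty convention are all invented for illustration:

```python
from typing import NewType, Optional, Set

# Assumption for this sketch: UserId is an integer-backed newtype. A legacy
# string-backed ID, or a (group_id, user_id) composite, would each be a
# different declaration -- the point is that the declaration answers the question.
UserId = NewType("UserId", int)

def banned_users() -> Optional[Set[UserId]]:
    """None means 'no ban list configured'; an empty set means 'nobody is banned'."""
    return {UserId(42), UserId(7)}
```

The type alone never carried that None-vs-empty distinction; the docstring and the newtype declaration are where the contextual knowledge actually lives.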

What is the cognitive load of `Callable[[LogRecord, SystemDesc], int]`? Perhaps in context, `SystemDesc` is very obvious, or perhaps not. With surrounding documentation, maybe it is clear what the `int` is supposed to mean, or maybe it would be best served wrapped in a newtype. Maybe your function takes ten different `Callable`s and it would be better pulled out into a polymorphic type. But maybe your language makes that awkward or difficult. Or maybe your function is a library export, or even if it isn't, it's used in too many places to make refactoring worthwhile right now.
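Here is a sketch of the "wrap the `int` in a newtype" option. The `LogRecord` and `SystemDesc` stand-ins, the `Severity` name, and the scoring rule are all hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, NewType

# Hypothetical stand-ins for the types in the signature above.
@dataclass
class LogRecord:
    message: str

@dataclass
class SystemDesc:
    name: str

# Wrapping the bare int documents what the callable is supposed to return.
Severity = NewType("Severity", int)
SeverityFn = Callable[[LogRecord, SystemDesc], Severity]

def severity_by_keyword(record: LogRecord, system: SystemDesc) -> Severity:
    # Toy rule: messages mentioning "error" are severe, everything else is not.
    return Severity(10 if "error" in record.message else 1)
```

`Callable[[LogRecord, SystemDesc], Severity]` now says at the type level what the surrounding documentation would otherwise have to say in prose.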

I also quite like newtypes for indicating pragmatics, but they, too, are a context-dependent trade-off. You may make calls to your module more obvious to read, but you also expand the module's surface area. That means more things for people writing client code to understand, and more points of failure in case of changes (coupling). In the end, it seems to me that it is less important whether you use a newtype or not, and more important to be consistent.
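A toy sketch of that surface-area cost, with an invented `Celsius` newtype and module:

```python
from typing import NewType

# Hypothetical module surface. Exposing the newtype makes call sites
# self-documenting...
Celsius = NewType("Celsius", float)

def describe_target(target: Celsius) -> str:
    return f"target temperature: {target} degrees C"

# ...but the newtype is now part of the module's public surface: every caller
# must import and apply the wrapper, and changing it later (say, to Kelvin)
# is a breaking change at every one of those call sites.
print(describe_target(Celsius(21.5)))
```

The readability gain at each call site is real, but so is the coupling: the newtype is one more exported name whose meaning and stability client code now depends on.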

In fact, this very trade-off -- readability versus surface area -- is at the heart of the "small vs large functions" debate. More smaller functions, and you push your complexity out into the interfaces and relationships between functions. Fewer large functions, and the complexity is internalised inside the functions.
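The same shifting of complexity can be seen in a toy example (invented for illustration) factored both ways:

```python
# One larger function: the complexity is internalised; the reader sees the
# whole computation inline, in order.
def report_large(orders):
    total = sum(qty * price for qty, price in orders)
    discounted = total * 0.9 if total > 100 else total
    return f"due: {discounted:.2f}"

# Several small functions: each body is trivial, but there are now three
# named interfaces whose contracts (units, when the discount applies, who
# formats) the reader must track across call sites.
def subtotal(orders):
    return sum(qty * price for qty, price in orders)

def apply_discount(total):
    return total * 0.9 if total > 100 else total

def report_small(orders):
    return f"due: {apply_discount(subtotal(orders)):.2f}"
```

Neither version deletes any complexity; the factored one relocates it from the function body into the relationships between functions.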

To me, function size is less the deciding factor [0] than whether your interfaces fall along real, _conceptually_ clean joints of the solution. We have to think at a system level. Interfaces hide complexity, but only if the system as a whole ends up easier to reason about and easier to change. You pay a cost for both interface (surface area) and implementation (volume). There should be a happy middle.

---

[0] Also because size is often a deceptively poor indicator of implementation complexity in the first place, especially when mathematical expressions are involved. Mathematical expressions are fantastic exactly because they syntactically condense complexity, but that condensation leaves very little syntactic redundancy, so they seem to be magnets for typos and oversights.
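A toy illustration of that lack of redundancy: in a condensed formula, a single-character slip still type-checks and runs. Both functions below (invented for this sketch) claim to compute Euclidean distance; only one is correct, and nothing about their size or shape gives the bug away.

```python
import math

def dist(x1, y1, x2, y2):
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

def dist_typo(x1, y1, x2, y2):
    # One swapped variable: (y2 - x1) instead of (y2 - y1). Same length,
    # same shape, silently wrong for most inputs.
    return math.sqrt((x2 - x1) ** 2 + (y2 - x1) ** 2)
```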
