Senior leaders and people who want to get there, take note: cognitive load is the base problem one solves for when scaling development organizations. This article is a pretty good introduction -- especially the points about building with empathy and emphasizing "what" over "how". But one can also look to any tech company that provides an API devs like (my fav: Stripe) as an example of what low cognitive load relative to the problem looks like.
Truly talented leaders tend to realize that high cognitive load comes from lots of places besides technical considerations too. It can be hard to think about a problem when you're also dealing with a toxic team member, untrustworthy leadership, lack of organizational focus, shitty HR policies, feeling unsafe at work, etc. Unfortunately, fighting against those elements is a never-ending battle.
Leaders that mix low cognitive load with clear direction and an interesting problem start to approach that highly-sought-after "early startup productivity" that so many companies can't seem to figure out. At least until a re-org, acquisition, or change in the c-suite comes along and blows it all up.
> cognitive load is the base problem one solves for when scaling development organizations.
Just to add, this generalizes beyond just development orgs.
I've scaled out analytics orgs at a few companies, and one of the most difficult aspects is penetrating into the workflows of business functions. As a rule, most[1] business users tend to operate at a high level of cognitive load, with little in the way of support structures to reduce that. But effectively integrating operational analytics requires[2] contextualization. So more often than not, the analytics scaling acts as a forcing function to put in place the operational and process supports needed for that contextualization, which has the secondary effect of reducing the cognitive load of the end users. On more than one occasion, I've seen that secondary effect have a greater impact than the analytics itself.
[1] A major exception to this rule are tech companies and companies with technical founders. If the core of your business is a technical system, then your business functions are created as extensions of that system and you get a lot of process/structure "for free" on the business side due to that fact. And if that's not the case but you still have technical founders, those individuals will approach business functions with a process/systems-first mentality since that's how they approach technical problems. While it has its own issues, it does lend itself to reducing the cognitive load of those functions and enabling more efficient scaling. Business leaders will take the opposite approach, where they leave the function mostly unstructured and (ideally) create process, structure, and systems based off of the emergent needs.
[2] You can certainly provide analytics deliverables without contextualization. But it won't be nearly as effective, will mostly be ignored, and will rarely actually help anything since it leaves the onus of contextualization and interpretation on the end user and therefore does nothing to reduce their cognitive load.
Any particular industry/function/scenario you'd like used as an example? I can think of plenty of examples, but not sure which would click the best for you.
+1 for an example - not for the "major exception" but rather for the "rule". I'd love to find out the specifics of contextualization that you mention - how should it be done, what's the ideal process for it, to deliver a valuable analytics deployment in the end that won't get buried due to lack of use several months later.
Smaller business teams tend to use the "shared hosting" model when it comes to staffing: they hire a person and start subscribing relevant duties/needs to that person. At some point in time, they'll get oversubscribed to the point where it noticeably impacts their latency/performance. Once that happens consistently enough, the team will (hopefully) hire a second body and overflow duties to that new person. Rinse and repeat.
Think of a 1-2 person marketing team, that has to handle anything that crops up and happens to fall under the "marketing" umbrella.
At some point, demand for particular workloads will reach a point to justify adding in a dedicated server or two (i.e. hiring a specialist).
But, while you now have a specialized/dedicated resource to direct specific types of work to, it's still a "subscribe relevant duties" reactionary model. And the specialized resource gets utilized/subscribed to just like a general resource would, but only for specific types of work.
The day-to-day for these people is to execute work in the same manner as an interpreted language - they parse the request, constraints, and their capabilities, then immediately execute that work. Very little in the way of formal or documented processes, structures, outputs, inputs, etc exist. While some processes might be "cached" if they occur frequently and consistently enough, at some point they'll be garbage collected and have to be re-interpreted next time they execute. This is the reason most business teams operate with high cognitive load - disparate workloads, assigned with an oversubscription model, that require going through an interpretation phase before every execution.
Some particular workloads may be consistent or important enough to warrant a JIT compilation model. Or they'll be underutilized enough to have the capacity to proactively implement a JIT compilation model. Components of the work will be precompiled and defined and structured, then at runtime they can reduce the interpreter overhead (cognitive load) necessary for execution. Very, very little of their workload ever makes it to a state of ahead of time compilation and the corresponding reduction in runtime overhead.
That's the general rule for business teams. Developers though are so expensive that they're rarely allowed to operate this way. Instead, it's (theoretically) more cost efficient to invest in JIT optimizations for their workloads in the form of BAs, PMs, DevOps, ticketing systems, formalized processes, etc. And because they create systems and structure for a living, they're more apt and able to precompile parts of their workload themselves far more efficiently and effectively than a business team can do.
As for ensuring a valuable analytics deployment: there's no single answer to that, it has to be interpreted at runtime. ;)
But as a generalization, focus your analytics on giving your end users the tools necessary to be better JIT compilers. And ensure you don't introduce runtime errors for them. If your analytics deliverable is relevant to 80% of process executions, and has a known 20% of the time that it's not applicable, the user will lean into it and rely on it as a runtime optimization when appropriate. If it doesn't work for 20% of executions, but that 20% isn't known ahead of time, it'll be seen as a potential risk of runtime errors and completely rejected or bypassed by the end user. And if you prematurely go past that and leverage analytics for AOT compilation (i.e. full automation), your deployment is likely to fail entirely due to adding in too much rigidity to accommodate the unstable and ill-defined state of the process itself. Or if you go to the other extreme and your analytics just provides more inputs at interpretation time with little benefit of JIT improvements, it's just more runtime overhead and will have limited adoption unless the end user is forced or has enough spare capacity to absorb the additional interpretation overhead.
Feel free to reach out if you want something more contextual - my email is in my profile.
Just also wanted to add a big factor can be personal problems as well. Many of them can't be helped, but things like transportation, commute time, local real estate prices, health insurance, etc. can be helped and should not be overlooked.
Unfortunately not any that still exist. The only companies that seem to be able to accomplish it are early-stage startups, because it is orders of magnitude easier to do with such a small group of people. Code School & Envy Labs are my personal examples.
I do think it is possible at large companies. I've built, been part of, and talked with managers that have created teams that obviously have low load (due to how much they produce, satisfaction with work, and level of trust). But it never lasts for long. Inevitably something like Radford is implemented by HR or some VP gets the reorg itch and tears down the system because these sorts of details haven't made it into boardrooms beyond "two pizza teams". Pluralsight and Amazon are my personal experiences with large companies.
This is why people who can't handle too much cognitive load end up being better leaders.
Or in other words... less intelligent people make better leaders.
On a side note, less intelligent people also write more readable code. The principle "Keep It Simple, Stupid," a.k.a. KISS, is indeed better followed by people who are described by the acronym. So in other words... a simple and stupid person is better at keeping their code and designs simple and stupid than a smart and complicated individual.
I respectfully disagree. Simple and elegant code is extremely difficult to write, and the ability to do so is mostly orthogonal to intelligence. However, below a certain threshold of skill and capacity for reasoning about spatial complexity, the developer is much more likely to write confusing, tangled, unnecessary, and verbose code. Reading through a codebase with tons of copy-pasted code requires far more effort than a codebase with well designed abstractions. The key phrase there is well designed. What you are describing is someone who knows which abstractions to use when and where, and that is a skill that requires lots of experience and technical maturity.
Experience and technical maturity do not equate with intelligence.
Abstractions serve one purpose and one purpose only: to reduce cognitive workload. Any abstraction above a primitive implementation can only add inefficiencies. Ex: SQL is less efficient than C++, which is less efficient than assembly. A zero-abstraction code base is usually the most efficient implementation, and it will be written in assembly.
So from a technical standpoint, we use abstractions only to reduce cognitive workload, because other than that, abstractions can only offer inefficiencies.
Intelligent people do not put in the effort to learn about or implement proper abstractions because they usually deem it unnecessary to abstract what they perceive to be trivial cognitive workloads.
I respectfully disagree with your disagreement of that disagreement.
Abstractions have far more roles than reducing cognitive load. You're completely overlooking platform realities, code reuse, usability and other benefits of reduced/managed complexity.
From a technical standpoint, efficiency of the code is irrelevant if it's bug-ridden due to its complexity. Code is for humans, not the other way around. We chisel away at lower-level languages if efficiency is required (Ex. C bindings in Python).
If you want to utilize your intelligence to the fullest you abstract away most of the trivial stuff to the point where it pays. You can still go down and inspect or override the abstraction if it's needed. Experience in this case is knowing what to abstract in what manner so it will work for you. Intelligence is the act of keeping everything in a sane state without over-focusing on unimportant stuff.
What I think you're critiquing is the act of adding abstractions when there's no need for one at a given time, just to make something simpler in the name of simplicity, overlooking its usability. This can be attributed to a lack of experience.
KISS is a suggestion, not a rule. It also applies to abstractions, so one could argue that doing everything in assembly is actually the "simplest" way of programming, like a rough sketch of a scene is simpler than a full-blown oil painting.
As an aside, the whole notion of "intelligence" is a bit entangled with experience IMO. The "classical" IQ applies mostly to dumb pattern matching - a skill one can perfect. EQ can be trained by getting out and deliberately practicing human interactions.
I respectfully disagree with your disagreement of that disagreement of that disagreement.
>Abstractions have far more roles than reducing cognitive load. You're completely overlooking platform realities, code reuse, usability and other benefits of reduced/managed complexity.
I am not overlooking anything. The traits you bring up in this statement do not offer any performance improvements to the system. Therefore the only other possible benefit these traits can offer is that they reduce cognitive overhead.
>From a technical standpoint, efficiency of the code is irrelevant if it's bug-ridden due to its complexity. Code is for humans, not the other way around. We chisel away at lower-level languages if efficiency is required (Ex. C bindings in Python).
Yes. And intelligent people can write more complex code with fewer abstractions and have fewer bugs... We agree.
>What I think you're critiquing is the act of adding abstractions when there's no need for one at a given time, just to make something simpler in the name of simplicity, overlooking its usability. This can be attributed to a lack of experience.
I am not critiquing anything. I have not said anything about my opinion on where or when to add abstractions. I have only commented on how a very intelligent person would do it. I never said I was intelligent... All I said was that more intelligent people tend to write less readable code, and this can be attributed to the fact that they have less need for abstractions. So no, I am not critiquing when or where to write abstractions.
>The "classical" IQ applies mostly to dumb pattern matching
IQ tests present questions with patterns that the test taker usually has not seen before. A test taker cannot "dumb pattern match" a pattern he has not seen. Therefore the IQ test cannot be testing for "dumb pattern matching." If the IQ test measures IQ and the IQ test is not measuring for "dumb pattern matching" then by concrete logic IQ must not apply to "dumb pattern matching." QED
IQ must apply to something more. A general intelligence.
If you're working with what you perceive to be "trivial cognitive workloads" I think that is more a sign that you aren't quite working at the level you're capable of, than a sign that you're a really intelligent person.
The thing is that all humans have a limited working memory (which generally correlates with 'intelligence'). Whether you can juggle 3 objects in your working memory or 30 doesn't matter; eventually you will get to a problem that you can't hold within your working memory. At that point you will need to start using abstractions for both object-structure and functional characteristics. The limit of what you can do with abstraction is way higher than that of your working memory or intelligence.
As a rule it's always better to properly abstract and structure your code, to keep it as simple and well-formed as possible, and to let your tooling deal with any higher-order actions.
As a bonus this allows other people to have an easy time understanding your code, especially when those 'other people' are your future selves.
>If you're working with what you perceive to be "trivial cognitive workloads" I think that is more a sign that you aren't quite working at the level you're capable of, than a sign that you're a really intelligent person.
Logically it can be a sign of either. If the cognitive workloads are on average hard for normal people and you perceive them to be trivial, then you are likely intelligent. If the workload is considered trivial by most people and you see it as hard, then you are likely not intelligent.
>The thing is that all humans have a limited working memory (which generally correlates with 'intelligence'). Whether you can juggle 3 objects in your working memory or 30 doesn't matter; eventually you will get to a problem that you can't hold within your working memory. At that point you will need to start using abstractions for both object-structure and functional characteristics. The limit of what you can do with abstraction is way higher than that of your working memory or intelligence.
I get where you're coming from. Think of it this way. If we organize the complexity of programs into layers, such that each layer functions as an abstraction to reduce cognitive overhead, then we can use the number of layers as a quantitative measure of how much abstraction was used in a program.
A very intelligent person would use 3 layers while a less intelligent person would organize the program into 10 layers. See how it works? Yes, complexity is potentially unlimited, but the more intelligent person still ends up relying on abstractions less than the unintelligent one.
One thing that does equate with intelligence is capability for metacognition.
Less intelligent people are also less capable of realising that they're overburdening their cognitive capacities with their convoluted code.
More intelligent people are likelier to notice that they’re making things harder for themselves (even if it is within the realm of tolerability) and look for alternative strategies.
I actually agree with this. I've seen it happen but not in the way you expect.
I've seen less intelligent people write unnecessary design patterns and abstractions and end up making things harder for themselves. In essence, the heavy reliance the less intelligent programmer has on writing hyper readable and abstract code ends up damaging the app overall.
> Ex: SQL is less efficient than C++, which is less efficient than assembly. A zero-abstraction code base is usually the most efficient implementation, and it will be written in assembly.
I don't think we can compare SQL and C++ like that. Not at all.
SQL implementations are most likely written in C or some systems-level language. It is a language built on top of another language - a higher-level language built on a lower-level one. It is therefore by definition a higher abstraction, and since the comparison is about abstraction, the comparison is apt: it fits the definition of the word. What you think is at odds with the facts defined by the words, and therefore your claim that you can't compare C++ to SQL falls in the face of those facts.
Perhaps for a very narrow definition of efficiency. Writing in assembly what could be written in SQL is definitely less efficient from a productivity point of view. Even for your definition of efficiency that may not be true.
>Writing in assembly what could be written in SQL is definitely less efficient from a productivity point of view.
You mean productivity as in the cognitive workload can be so high in assembly that it's not efficient? I choose to use a narrow definition of efficiency because my argument is that the productivity gains given to an intelligent person by abstractions above assembly language are inconsequential. It wouldn't make sense to use a broader definition. You use the word productivity, but we're really talking about the same thing, or in other words "the loss of productivity due to cognitive workloads." Thus your argument is circular to mine and therefore ineffective.
My definition of efficiency is true. SQL eventually executes assembly commands thus it is a higher level of abstraction than assembly and thus it can only be equally or less efficient in terms of space and time complexity. Realistically, the assembly code actually executed is most likely not the most efficient way to execute the code.
I can only assume that you were trying to say that the more intelligent leaders are worse because they cannot understand more complex systems as easily and therefore require things to be done with lower cognitive load.
Regardless of how simple your delivered system ends up being, you still have to understand what problems you're solving and the reasons behind it. This is almost always a product of factors outside of your control.
Let's assume that in the worst case, you're going to be facing a very complex situation. Now you have three possible outcomes here when delivering.
A) Your solution is simple, because you either did not grasp (or were otherwise unaware of) the complexity of the problem, did not acknowledge it, or disregarded the severity of the problem. Or your organization's engineering culture puts pressure on you to cut corners (e.g. "Move Fast and Break Things").
B) Your solution addresses the complexity of the problem, but it was a lot of work to solve and you did not have the time or experience needed to build it in an elegant or optimally maintainable way.
C) Your solution addresses the complexity of the problem, and is built in a reasonably elegant, simple manner.
Which of these did you say correlates with intelligence again?
Edit: There's also a fourth outcome, reserved for the real rockstars out there.
D) Your solution is to not solve the problem at all, because you found a way to avoid your organization/product having to deal with the problem to begin with, or you otherwise re-assessed the assumptions surrounding the problem. The famous post from a few years ago "Why I Strive to be a 0.1x Engineer" comes to mind.
Good leaders tend to use the intelligence of their team to determine direction and design. They have meetings and empower team members to perform at 110% by giving the team members ownership and clearing away obstacles.
Less intelligent people are better at this because they have no choice but to do this. Less intelligent people do not have the cognitive power to design the full system so they have to resort to relying on the team to do it.
Very intelligent people tend not to do this. They rely on their own intelligence to direct the team and often just use the team members as grunts to implement their grand designs.
No matter how smart you are, your cognitive powers will always be lower than the cognitive powers of the entire team combined; therefore any attempt to have the team follow your lead, rather than you following the lead of the team, will turn out worse.
The science supports this. Teams with leaders that empower team members are statistically more successful than teams that don't. My anecdotal observation is that these types of leaders tend to be the less intelligent types. They empower you and then get out of your way, because they really don't have the intelligence to get in your way.
Here are two statistics about humanity that will lend support to my thesis. Your IQ does not correlate with how successful you will be in life, success being measured as how high you climb the corporate ladder and how much money you make as an individual. This is an actual correlation. In direct contradiction to this... the average IQ of a country highly correlates with the economic success of that country. These two statistics say something about the nature of leadership. It says that leaders succeed off the backs of people more intelligent than them.
I'm not sure if I'm misreading something, but are you claiming that higher IQ does not correlate with higher income? You'll have to source that claim, if that's what you're claiming.
You are not misreading something. I made a mistake.
IQ does correlate with salary but that correlation is weak and a higher IQ brings in only a slightly higher salary. Correlations between IQ and GDPs however are very strong and a small increase in average IQ is correlated with a significant jump in GDP.
My source is the book Hive Mind, and my original footnote about IQ was incorrect and completely wrong. My original thesis still stands though. Do you have any thoughts about that? The whole IQ and success thing was just a footnote. I shouldn't have added it in, as you are correct in identifying that the footnote was completely wrong. I believe in admitting mistakes and not charging forward with bias, so please critique anything you disagree with... I want critiques that address more than the footnote.
I read (between the lines, and maybe mistakenly) that you consider yourself highly intelligent. With that perspective, the comment sounds like reasoning for why you aren't going to be a good leader and why that is a good thing. Also, it sounds like it might even be an excuse for writing unreadable code.
This perspective seems based on a premise of "intelligence" being a one-dimensional scale where everybody has an absolute ranking. I think this is generally a problem with the "IQ" concept that we've all agreed on. It seems extremely oversimplified to me, at least if we want intelligence to mean something like "brain capacity".
>I read (between the lines, and maybe mistakenly) that you consider yourself highly intelligent. With that perspective, the comment sounds like reasoning for why you aren't going to be a good leader and why that is a good thing. Also, it sounds like it might even be an excuse for writing unreadable code.
Humans tend to construct biased logic around themselves to justify their weakness. We lie to ourselves so that we may be happier. In fact, although there is currently a crisis in the veracity of psychology papers, I recall seeing studies showing a positive correlation between how realistic a person was and how depressed he was.
I can tell you that I strive to be as unbiased as possible. I strive to remain unbiased even when it hurts. This is not a trivial skill. I have not seen anyone call themselves stupid even when they clearly are, in fact the way we lie to ourselves is so powerful that even if you are stupid, you wouldn't know it.
That being said, I would say I am more intelligent than average. I don't have IQ test scores, but I have data that by common sense should correlate. I went to UCLA, where the admit rate for engineering is 11%. By that fact, I would assume I have above-normal intelligence.
I actually lean the opposite way from writing unreadable code. In fact, I have a bias for readability and extreme abstraction, to a fault. I am a huge fan of functional programming. My observations about super intelligent people are of others, not of me (note that I only believe I am higher than the average, not someone incredibly smart).
I also believe that I know what it takes to be a good leader, that's why I'm able to identify the traits that make a bad leader and thus come up with this conclusion.
Please stop trying to equate my personal experience with what I am trying to say. I present my views and opinions in a way where I strive to be impersonal. Please judge what I say with the same impassion.
>This perspective seems based on a premise of "intelligence" being a one-dimensional scale where everybody has an absolute ranking. I think this is generally a problem with the "IQ" concept that we've all agreed on. It seems extremely oversimplified to me, at least if we want intelligence to mean something like "brain capacity".
Well, clearly you have opinions about people. In those vicarious opinions, you and others often make a judgement that person A is more intelligent than person B. The ranking is based off of that fuzzy general opinion, and my thesis is in no way an exact law or proposition. My thesis is just a fuzzy generality that I believe is true.
Those leaders are gifted with a lack of intelligence. People with lower IQ don't necessarily appear less intelligent. In fact they can be more charismatic and appear more intelligent.
Steve Jobs was not an intelligent person when compared to Woz. Yet Steve is often referred to as a genius. I don't think Woz could ever come up with the design ethos that Steve Jobs had around products and this is due in great part to Woz's higher level of general intelligence (IQ).
I don't know about anything that points to a statistical advantage. I'm sure you can find some small studies via google. Since statistical evidence is rigorous and hard to find, a good portion of the knowledge of an intelligent mind must be derived from induction and anecdotal evidence with lots of assumptions. The origins of my argument are anecdotal in nature. Statistical evidence is stronger but so hard to establish that if all my arguments are always based off of statistical evidence I would hardly have anything to argue.
Additionally statistical conclusions are already established. Hardly worth arguing about all together. It is the things that aren't backed by statistical evidence YET, that are more interesting and worth talking about.
Other way around. I consider myself someone who writes extremely clean code. I consider many people who write messier code than me to be smarter than I am. I self-deprecate my own intellectual abilities in service of rationality. I write cleaner code because I am not intelligent enough to write messy code.
Honestly, I detect a thinly veiled personal ire in your response. Perhaps the bias is with you. Perhaps you write clean code and you use it to justify your supposed intelligence. If so, you wouldn't even know it. Such is the nature of bias.
Writing "clean code" is in actuality a simple endeavor with high euphoric rewards. All you're doing when writing "clean code" is following conventions and modularizing your code. It feels good to organize your room just like it feels good to organize your code, but organizing your room is not an intellectual feat just like how writing clean code isn't either. Think about it, cleaning up your own code is really, really easy. To have it function as the pillar of your intellectual self esteem is a weakness.
So I haven't found any of the common measures of intelligence (self-reported IQ, presence in honors/accelerated programs in school, "seems to learn new concepts quickly," etc.) to be indicative of anything particularly useful in a work environment. So how are you measuring intelligence to make these sorts of statements?
I ask because reading your comment leaves me with the impression that there's some conflation between intelligence & empathy + humility going on.
> So I haven't found any of the common measures of intelligence (self-reported IQ, presence in honors/accelerated programs in school, "seems to learn new concepts quickly," etc.) to be indicative of anything particularly useful in a work environment.
Right, and this is in line with my statements. By common sense, less readable code will not correlate with being more useful in a work environment. Though intelligent people tend to have other benefits that allow them to excel.
>So how are you measuring intelligence to make these sorts of statements?
Anecdotal evidence. I try to be as unbiased as a biased creature can possibly be.
>I ask because reading your comment leaves me with the impression that there's some conflation between intelligence & empathy + humility going on.
I don't know where you get this from. All I am saying is that intelligent people write less readable code... or vice versa. That's it. I have not commented on any other factors.
It isn't bad per se but it needs to be applied strategically. After all, this ability is exactly what enables some people to deal with problems of high essential complexity, achieving what no one else can. You just have to realize that the majority of software does not fall in this category.
It seems to be exactly the opposite instead: “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”
I think the best single observation about cognitive load is in Ousterhout's book A Philosophy of Software Design[1]. In the book he promotes the idea that classes should be "deep", such that their top-level surface API is small relative to the complexity they hide underneath.
This applies to the microservice/monolith debate as well. And it basically boils down to the observation that having lots of shallow services doesn't really reduce complexity. Each service may be simple unto itself, but the proliferation of many such services creates complexity at the next level of abstraction. Having well-designed services that expose a simple API but hide large amounts of complexity beneath it really reduces cognitive load for the system as a whole. And by "simple API" I think it's important to realize that this includes capturing as much of the complexity of error handling and exceptional cases as possible, so the user of the service has less to worry about when calling it.
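To make "deep" concrete, here's a rough Python sketch (the file-store example is mine, not from the book): a two-method surface that absorbs atomic writes and error handling so callers don't carry that load.

```python
import json
import os
import tempfile

class DocumentStore:
    """Deep module: small API, lots of complexity absorbed inside."""

    def __init__(self, directory):
        self.directory = directory
        os.makedirs(directory, exist_ok=True)

    def save(self, name, data):
        # Atomic write: callers never have to think about partial files.
        fd, tmp = tempfile.mkstemp(dir=self.directory)
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
        os.replace(tmp, os.path.join(self.directory, name + ".json"))

    def load(self, name, default=None):
        # Missing or corrupt files are handled here, not by every caller.
        try:
            with open(os.path.join(self.directory, name + ".json")) as f:
                return json.load(f)
        except (FileNotFoundError, json.JSONDecodeError):
            return default

store = DocumentStore(tempfile.mkdtemp())
store.save("settings", {"theme": "dark"})
print(store.load("settings"))             # {'theme': 'dark'}
print(store.load("missing", default={}))  # {}
```

A shallow version would expose every one of those concerns (temp files, rename, missing-file handling) as separate calls, and every caller would pay the cognitive cost again.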
Yes, Ousterhout's work is still a great read decades after he published.
What people forget when doing microservices, server-less, or other 'modern' ways of breaking up software into more or less independent things is that these are just variations of decades old ways of breaking stuff up. Whether you are doing OO, modules, DCOM components, or microservices, you always end up dealing with cohesiveness and coupling. Changing how you break stuff up does not change that. Breaking stuff up is a good thing but it also introduces cost depending on how you break things up.
In the case of microservices, the cost is automation overhead, deployment overhead, and runtime overhead. If your microservice is a simple thing, it might be vastly cheaper to build and run it as part of something else. I've actually gotten rid of most of the microservices and lambdas we used to have to get a grip on our deployment cost and complexity. We weren't getting a lot of value out of them being microservices. The work to get rid of this stuff was extremely straightforward.
I've noticed this as well. It's much easier to find the logic I'm looking for when I can easily remember where the domain of one class ends and another begins.
In my experience, communication skills are always the bottleneck.
> But with the coming-of-age of IoT and ubiquitous connected services, we call them "stream-aligned" because "product" loses its meaning when you're talking about many-to-many interactions among physical devices, online services, and others. ("Product" is often a physical thing in these cases.)
Foreboding over IoT is not a strong argument against product teams. Whether it is product teams, streams, microservices, or monoliths: people have limits on how much they can hold in equilibrium, and the current set of processes and tools overwhelms those limits at the cost of productivity.
I agree with the spirit of the argument, but I think it's counter intuitive to suggest "yet another" thought construct based on a premise of under-loading cognitive faculties.
As somebody who has worked quite a bit on various smaller film sets, it always shocks me just how bad people in IT often are when it comes to communicating, whether within their teams, with their code, or with non-IT people.
It certainly got better in some ways, but nowhere near the military precision of a well tuned and experienced film crew.
But film crews don't have to invent a vocabulary for each film. The special terms used are all the same.
Developer teams juggle multiple special-purpose vocabularies specific to the technology stack they use, the techniques they employ within the technology stack, the language of the domain they are encoding in software, and the language of the software solution itself.
The set of vocabularies gets expanded or swapped every time you add a person, a technology, or a new project.
Of course we're "bad" at communicating! It's a harder cognitive problem space to convey meaning in.
You can absolutely create a culture where there's a shared amount of base concepts though. Communities form in code around those vocabularies - if you have no shared vocabulary you don't have a team, you have a bunch of individuals.
I think it is more than communication skills. Communication is effective when something is "speakable". This implies both parties have a good abstraction over the same thing. But when the system is overly complex (no matter whether monolith or microservices), it is impossible to build a good abstraction.
Consider an n-parameter boolean function. You can build a simple formula around it and communicate it well if there is a simple pattern. For example, f(x1, x2, ...) = x1.
When there is some exception? Fine, we can still talk about the general pattern while communicating specifically about the exception.
But when the mapping gets more random, at some point no one can describe it without going through each value one by one. The complexity becomes O(2^n). You will spend a whole day communicating only a tiny part of the function.
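A tiny Python illustration of that point (n and the random table are just for show): the simple-pattern function is communicable in one sentence, while the patternless one can only be communicated as its full 2^n truth table.

```python
import itertools
import random

# Simple pattern: describable in one sentence, regardless of n.
def f_simple(*xs):
    return xs[0]

# No pattern: the shortest faithful description is the whole truth
# table, which has 2**n entries -- the O(2^n) communication cost.
n = 4
truth_table = {
    bits: random.choice([0, 1])
    for bits in itertools.product((0, 1), repeat=n)
}

def f_random(*xs):
    return truth_table[xs]

print(f_simple(1, 0, 1, 1))   # 1 -- "it returns x1", done
print(f_random(1, 0, 1, 1))   # whatever the table says; no shortcut
print(len(truth_table))       # 16 facts to communicate for n=4
```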
This article was initially inspiring and interesting, but in the end I think it is a muddle of "sound lies".
I like the idea that human cognition has a huge impact on our lives, and it certainly does. But trying to pull off a new paradigm shift on the cognition hype train is annoying.
What is true for me is that even the average software and systems we build can get insanely complex. To fix it, the article just suggests reducing the product size and training/hiring people. That is not wrong, but the narrative is. If we talk about cognitive load, we should look at formalism. Yes, it sounds awful. But having clear rules of yeas and nays when designing and evolving systems can clearly reduce cognitive load, because I cannot remember all the things people have failed at throughout history.
PS: Please don't mix up formalisms with principles.
Conway's law: "Organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations." The law is based on the reasoning that in order for a software module to function, multiple authors must communicate frequently with each other.
Splitting your app up based on "cognitive load" is just as bad a boundary as 100 LOC per microservice. It's an arbitrary measure and varies widely per developer.
The most "correct" way I've ever seen applications divided is based on knowledge domains ala domain driven design (DDD). Drawing boundaries around the domain functionality of your business or operations means that the domain can be ignorant of other parts of your system.
This is the issue at the heart of most architecture and language/idiom arguments.
We're meat bags selected for avoiding predators, telling stories around a campfire, and poking things with a stick and we're trying to reason about and craft functioning complex systems. Until the machines take over, the optimal language or architecture will be the one your team can both make sense of and employ with relative ease, full stop.
This is a fundamental operation in most engineering disciplines related to human interaction: reducing the system (the designer's PoV) to a particular path through that system (the user's PoV), constrained by cognitive capabilities of the user. The designer sees a graph of entities and relationships, the user only needs to navigate it with the least effort. This is applicable to everything, from the structural design of a rocket (a structural engineer only needs the specs and the load profile for the specific part, not the entire rocket internals) to videogame level design (different paths taken by players through a location).
Your reply is exactly what I needed as ammunition for people who don't see how machines can really help us in the future - if you don't mind, I'll paraphrase this comment! Too many people I talk to who aren't computer programmers, but maybe designers or architects who have been to a few conferences, don't believe there is a revolution yet to come.
I'm a 47 year old programmer who doesn't think a revolution is yet to come because I actually think programming is far more creative in nature than we can ever give to a machine.
Not that there won't be tools to assist the human programmers, but the robot uprising will never occur until we can at least answer the very basic question of why the robots would care, and how exactly, in some non-hand-wavey fashion, they would get creative about solving novel problems.
Consider this: You can write a genetic algorithm maker which will randomly iterate through all possible abstract syntax tree morphs and then run a test that evaluates whether the code "performs better" as a solution to some given need, and eventually you might strike upon some novel way of solving a problem through pure randomness. But here's the thing: The ultimate arbitrator of "what is better" will always be a human and the ultimate agent of "need" is a human as well. Machines just don't "need" things, like a faster way to ray-trace so your videogame open-world simulation is more immersive, or even a nicer GUI, much less a better way to make money... People need all these things, and people evaluate whether the machine reduces those needs. Machines DNGAF.
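Here's a toy version of that loop in Python (random search over a tiny expression space rather than a real GA, with a fitness target I picked by hand): the search itself is mechanical, but both the expression space and the notion of "better" are supplied by a human.

```python
import random

OPS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

def random_expr(depth=2):
    """Grow a random expression tree over one variable x."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(0, 3)])
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    return op(evaluate(left, x), evaluate(right, x))

# The "need" comes from a human: the target behaviour is f(x) = 2x + 1,
# and "better" means smaller total error against that target.
def fitness(expr):
    return -sum(abs(evaluate(expr, x) - (2 * x + 1)) for x in range(5))

best = max((random_expr() for _ in range(10_000)), key=fitness)
print(best, fitness(best))
```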
When people say to me eventually computers will program themselves and I’ll be out of a job, I think that’ll be the least of our worries, as humans will become redundant.
I like to say that programming is the last job that will be automated. To normal people it's a turn of phrase; tech people understand that I'm literally referring to the impending end of life as we know it.
Being able to keep the system you are working on in your mind with sufficient but not excessive detail is key. Keeping the system, in all its parts, "as simple as possible and as complex as necessary" (that is my engineering philosophy) is what makes all the difference. I used to think: since I am smart, I am going to write something more complex because I can manage that. Wrong. For both the smart and the dumb engineer, a simpler system is faster to build and, moreover, easier to maintain.
Managing dependencies (briefly mentioned in the article) is even more important. In a large organization, what slows you down the most is having to wait for somebody else. Having teams that are as autonomous as possible is part of the equation to keep a growing organization able to move fast.
That said, everything else being equal (you are writing good code, you have good engineers, etc.), a monolithic system tends to be easier to debug (one stack trace is easier to follow than a trail of logs), and the code base is usually easier to refactor.
Where a monolithic system makes things harder is in deploying different parts of the system independently and scaling different parts of the system at a different pace.
As a rule of thumb, trying to keep things in one system until it is evident where the cut should be, and it is clear it is time to make it, helps to keep both infrastructure overhead and cognitive overload under control.
I believe one of the main reasons for ending up with a microservice is that you just don't want to implement certain functionality yourself.
If you are running things like sentry, wordpress, mediawiki, keycloak, forums etc and interconnect them in meaningful ways, that is essentially a microservices architecture.
> Intrinsic cognitive load, which relates to aspects of the task fundamental to the problem space. Example: How is a class defined in Java?
The article gets this wrong immediately. Intrinsic load should relate to the characteristics of the feature being developed, not to the mechanics of your tools. The tools are secondary, brought in by your solution space. It bothers me when core terms are misused; it makes it seem not worthwhile to read on.
I have found myself tending more and more towards a style where I break even some relatively small modules up into fairly small pieces, and I have very, very clearly specified definitions for "what this module consumes" and "what this module provides". (I have not quite reached "very, very clearly". At the moment I'm still resisting the sheer amount of keyboard typing it takes to be clear. But I can see I'm trending this way.) In essence, take the idea of a dependency-injected function that uses no globals, and bring the same organization up to the module level.
Nominally, languages support this, but a lot of it is still implicit. For instance, do you have a command you can point at a module of your code and get a complete report of A: what libraries this code uses and B: exactly which subset of calls from those libraries this code uses? There's so many languages and environments and IDEs and such out there I imagine the answer may be yes for a few of you, but probably not that many, and even fewer of you use it.
The primary reason I find myself moving this way is to try to make it so you can read a piece of code and the cognitive overhead is minimized, because there's a clear flow: 1. Here are my assumptions. 2. Here is my environment. 3. Here is what I do in that environment. 4. Here are the test cases that show that the thing I wanted to do is in fact done. In current languages, these things are not exactly "all mixed up", but they are not exactly cleanly separated, either.
I realize this may sound vacuous and obvious, but, err, if that's so, a lot more of us could stand to actually do it, to put it in politic terms. I think the languages and environments work against us in a lot of ways by making it very easy to add dependencies without much thought and weave all those concerns together into one big undifferentiated mass.
In the last few weeks, I've had a language coalescing in my head, which I'm not particularly happy about since I have no chance of being able to implement it, and one of the things it does is to encourage this sort of thing by making it easy. Basically, whenever importing a library, it would automatically add a layer of abstraction between your code and that library that allows you to override that library wholesale for testing purposes or something. I do this manually in a lot of languages I work in, but it involves writing a tedious layer that just takes calls to "A" in one side and routes calls to "A" out another. There's an idea of a "context" that you can pass to a module that would do this override. Basically it would be a statically-typed ability to monkeypatch, safely, and in a way where by and large, the compiler could optimize access to the "default" implementation such that you should generally not be paying an abstraction penalty. Then there would be a report that you could use the runtime tooling to generate that would tell you exactly what external functionality you're using, and you could use that to guide you in your override so you only implement what you need.
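To make that less abstract, here's roughly what I end up hand-writing today in Python (all names invented): the "context" is just an explicit parameter with a default implementation, and the test override only implements the calls this module actually uses -- which is exactly the part I want the tooling to generate and check for me.

```python
import json
import urllib.request

class HttpClient:
    """Default implementation: the only code that touches the network."""
    def get_json(self, url):
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

class FakeHttpClient:
    """Override for tests: implements only the calls this module uses."""
    def __init__(self, canned):
        self.canned = canned
    def get_json(self, url):
        return self.canned[url]

def fetch_user_name(user_id, http=HttpClient()):
    # The "context" is just the http parameter; callers that don't care
    # never see it, and tests pass a fake instead.
    data = http.get_json(f"https://example.invalid/users/{user_id}")
    return data["name"]

fake = FakeHttpClient({"https://example.invalid/users/42": {"name": "Ada"}})
print(fetch_user_name(42, http=fake))  # -> "Ada", no network involved
```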
(I have a mental image of a cell, which being biologic, doesn't really do anything cleanly, but taking it metaphorically, you can see cell walls declaring the things they will allow to pass through, and with only a bit more work we can clearly declare what comes out.)
There are, of course, bits and pieces of this scattered all over the language landscape, but I'm not aware of anything that quite has everything I want in one place. (Perhaps surprisingly, Perl's "local" keyword is the closest single thing I know, albeit not written in a way that can support threading well which I'd want to fix. You can use local to override arbitrary function/method symbols, and it will be scoped within that local only, giving you that "dynamic language" monkeypatching while preventing it from being global state.)
Part of the idea of that report, too, is that it would be part of the documentation for a module, further increasing the ability to pick up any arbitrary module in a program and cleanly cut away: "look, this is exactly what this particular module does. You may not necessarily know what the other modules this communicates with are doing with that stuff, but at least you know what this module is doing." Again, that may sound like a thing all languages already do when you bring up a simplified mental model of a codebase, but think about this next time you're hip deep in code you've never seen before and you hit the thirtieth line of code and suddenly, oh crap, it just referenced another module I've never heard of.... this is when your cognitive load goes through the roof.
> I have found myself tending more and more towards a style where I break even some relatively small modules up into fairly small pieces, and I have very, very clearly specified definitions for "what this module consumes" and "what this module provides".
I have found this exact thing to be a very natural fit for functional programming. Which is also why I switched from Ruby (where I was already starting to write in a functional style anyway, for these exact reasons) to Elixir (which basically tries very hard to prevent you from writing in a non-functional style while also giving you guarantees that are simply not available in a mutable procedural/OOP language).
The vast majority of application code can be rewritten in such a way that it takes some struct or data, and spits out some struct or data, and has no other side effects (which include mutating the structs or data originally given to it). You can confine the side-effecting code to its own interface code... a principle embraced by https://blog.ndepend.com/hexagonal-architecture/ (also a natural fit for functional languages, but can also be utilized in OOP)
The advantages of this design end up being numerous: easier unit-testability, easier reasonability, easier maintainability, fewer dependencies, fewer bugs generated.
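A minimal Python sketch of that shape (the account domain is made up): the core is pure data-in/data-out, and the only side effects live in a thin shell at the edge, which is the part you'd swap for a real database or HTTP handler.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Account:
    owner: str
    balance_cents: int

# --- pure core: no I/O, no mutation, easy to unit test ---
def deposit(account: Account, amount_cents: int) -> Account:
    if amount_cents <= 0:
        raise ValueError("deposit must be positive")
    return replace(account, balance_cents=account.balance_cents + amount_cents)

# --- imperative shell: the only place that touches the outside world ---
def handle_deposit_request(storage: dict, owner: str, amount_cents: int) -> None:
    account = storage[owner]                         # side effect: read
    storage[owner] = deposit(account, amount_cents)  # side effect: write

storage = {"ada": Account("ada", 1_000)}
handle_deposit_request(storage, "ada", 250)
print(storage["ada"])  # Account(owner='ada', balance_cents=1250)
```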
In my experience, everyone talks the talk of encapsulating programs into well-defined pieces that don't leak implementation details, etc., but very few walk the walk as it turns out to be pretty hard.
> I have found myself tending more and more towards a style where I break even some relatively small modules up into fairly small pieces, and I have very, very clearly specified definitions for "what this module consumes" and "what this module provides". (I have not quite reached "very, very clearly". At the moment I'm still resisting the sheer amount of keyboard typing it takes to be clear. But I can see I'm trending this way.) In essence, take the idea of a dependency-injected function that uses no globals, and bring the same organization up to the module level.
I'm not quite sure how this is different from classes. Visual Studio can build out reports on call and data flow, so you can easily see what calls what and what dependencies you have. You can also do neat things like see what code is dependent on a given library.
> In the last few weeks, I've had a language coalescing in my head, which I'm not particularly happy about since I have no chance of being able to implement it, and one of the things it does is to encourage this sort of thing by making it easy. Basically, whenever importing a library, it would automatically add a layer of abstraction between your code and that library that allows you to override that library wholesale for testing purposes or something. I do this manually in a lot of languages I work in, but it involves writing a tedious layer that just takes calls to "A" in one side and routes calls to "A" out another. There's an idea of a "context" that you can pass to a module that would do this override. Basically it would be a statically-typed ability to monkeypatch, safely, and in a way where by and large, the compiler could optimize access to the "default" implementation such that you should generally not be paying an abstraction penalty. Then there would be a report that you could use the runtime tooling to generate that would tell you exactly what external functionality you're using, and you could use that to guide you in your override so you only implement what you need.
C# has this feature and calls it "Microsoft Fakes" but they got a lot of vocal pushback from the TDD community who see great value in adding dependency injection and an interface to every library call you make. But boy is it useful for unit testing legacy code and reducing the boilerplate required for testing.
Honestly everything you mention is available on the .NET stack.
> got a lot of vocal pushback from the TDD community who see great value in adding dependency injection and an interface to every library call you make
Hah, adding a matching interface for every..single..class is something that drives me nuts! Many C# devs seem to have fallen into this trap, because "I need it for mocking". Sure, but do you actually need to mock out the dependency? IMO, overuse of mocks makes for brittle tests that are painful to maintain. I prefer only mocking "complex" dependencies.
In any case, hopefully C# 8's default interface methods will help alleviate this madness.
"I'm not quite sure how this is different from classes."
Well, depending on your definition of classes, I suppose. "A struct bundled together with methods for operating on it + some sort of polymorphism" on its own doesn't say anything about dependency management. But get 10 programmers together and ask for a definition of "class" and you can easily get 15 answers.
"Visual Studio build out reports on call and data flow so you can easily see what calls what and what dependencies you have."
Does it really have a report of exactly what APIs you use out of a module? I'd love to see an example of that if it's not too difficult, not because I disbelieve you, but because I'd love to see it. I know I've seen plenty of flow diagrams for what "libraries" or "modules" you use (whatever word is appropriate to your language), but knowing that you use "the AWS S3 module" is much less informative than knowing "you only use GetObject", to be specific about an example.
"C# has this feature and calls it "Microsoft Fakes" but they got a lot of vocal pushback from the TDD community who see great value in adding dependency injection and an interface to every library call you make."
I am not intimately familiar with this feature, so please do correct me if I am wrong. But I observe that according to this page: https://docs.microsoft.com/en-us/visualstudio/test/isolating... that for stubs you have to manually create an interface, and for shims you're in a situation where the code is being rewritten dynamically. I would propose making it so that all library usage is automatically an interface type created by your usage. And as it would be implemented in the language spec, rather than being done by instruction-rewrite very late in the process, it would also be something that could be built on by other things.
To be clear A: I'm well aware that I, just like everybody else, do not have any totally unique and new ideas literally never considered by anyone ever, so I am well aware that there are things like this in various bits and pieces elsewhere and B: I'm not exactly "criticizing" the .NET stack, especially with my vaporware-beyond-vaporware ideas. I'm just observing that there is an engineering difference between the things that run as instruction-level rewrites very late in the process and things integrated at the beginning. That's one of the ways Microsoft and Oracle/Java punch above the "weight" of what I'd otherwise expect from the languages in question merely on their features, and a legitimate advantage of being on their stacks, but even for something the size of Microsoft, that's where you have to stop advancing. You can't really build on that sort of tech because you can't stack that many such features together before the complexity exceeds what even those entities can deal with.
(There's also some other features in my highly vaporous vaporware language that this integrates with in some other ways, which is why I'm concerned about needing to be able to build on these features more officially than last-minute assembly rewrites.)
I didn't mean to say "look, your idea's not unique", just a "hey, if you're interested in those features, I happen to know of some similar features on my preferred stack."
> and for shims you're in a situation where the code is being rewritten dynamically. I would propose making it so that all library usage is automatically an interface type created by your usage. And as it would be implemented in the language spec, rather than being done by instruction-rewrite very late in the process, it would also be something that could be built on by other things.
Mind explaining the advantage of auto-generating interfaces for library usage over shims?
And a couple of other questions, is an interface automatically generated on compile for every exposed class? How does this work for static class usage?
"Mind explaining the advantage of auto-generating interfaces for library usage over shims?"
For the particular definition of "shim" used by Microsoft, the fact that it's always implicitly there, rather than something you have to generate and add to the code. For example, if you import the "strings" library and use only "TrimSpace" and "IndexOf", there would be a way to A: have the tooling system directly feed you a listing of "this is what you use from this library/object/etc.", possibly even pregenerating the manifested interface and an initial stub object for you and B: there's a way to coordinate passing your new interface as the implementation of the "string library" that is going to be used for a particular run time context. The compiler can also statically test that your implementation completely covers the module; if you add a call to string.ToLower(), the compiler can complain that your shim is now incomplete.
For static usage, I expect the common case to be that you use the default implementation of the strings library, and in that case it can be statically compiled as usual. I observe that in practice there's almost always one "real" implementation that is used everywhere except tests. If you're willing to pay the price for dynamic resolution, though, you'd be able to swap out whatever you like at runtime.
In the common case, I expect it would look a lot like what a modern language does; you say "import strings" (somehow), and you just get on with using the string functions. If you don't want to override that, you don't see anything strange.
Languages that already have some similar features would include Python, where a module comes into your namespace as just another object. In Python it's pretty easy to "duck type" up something that looks enough like a module that you could replace one if you want. I don't see it done often, probably because it's a global variable modification, but it's an example of how you can conceive of a library or module as something other than a completely static import of a static name.
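For the curious, a minimal sketch of what that looks like in Python today; the fake_json object and the values it returns are invented for illustration, and sys.modules is the global registry being poked at, which is exactly why this feels like a global variable hack:

    import sys
    import types

    # Build an object that quacks like the json module for the calls we care about.
    fake_json = types.SimpleNamespace(
        dumps=lambda obj, **kw: "<fake>",
        loads=lambda s, **kw: {},
    )

    real_json = sys.modules.get("json")
    sys.modules["json"] = fake_json      # global swap: later imports now see the fake
    try:
        import json                      # binds to fake_json via sys.modules
        assert json.dumps({"a": 1}) == "<fake>"
    finally:
        # Put the world back so the rest of the process is unaffected.
        if real_json is not None:
            sys.modules["json"] = real_json
        else:
            del sys.modules["json"]

It works, but the module registry is process-global state, which is a big part of why this technique mostly only shows up inside tests.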
(Hypothetically, you could even get some dynamic overrides to be compiled statically in some cases. There are languages like Rust that successfully do that sort of thing a lot. However, that has a lot of preconditions for it to work, and requires a great deal of sophistication in the compiler. Initially I'd punt; performance is not my #1 goal, and if the implementation just fell back to dynamic resolution it wouldn't be the end of the world. Plenty of languages do just fine with having vtables.)
Well, I've stayed up late one night already trying to get this crap out of my head. As anticipated, it has only made it worse in the short term, but I expect once I'm "done" it'll go away.
It'll show up at jerf.org/iri, but I warn you that A: it may be a while yet and B: I'm not 100% sure it'll ever be publishable, but we'll see.
I would say that searching for the answers to these problems in a new language is going to result in heartbreak. It is likely not to happen, and if it does, that language will be barren territory where you have what you want architecturally but nothing else.
I think the goals are great but addressing them at the architecture level is better, especially with tools to really take advantage of that modularity. Something like this:
That is largely why I'm not particularly happy to have this language coalescing in my head. It's rather against my will. (And I'm actually pretty decent at deliberately not thinking about things; I know, for instance, that you don't sit here and actively fight it, because that only makes it worse. You have to let it passively flow through you and then out again, like when you screw up meditating.) I know what it takes to make a successful language and that I basically have no chance. Not to mention it's probably more than my entire lifetime's spare time to implement something that does enough of it to be usable.
If it doesn't just go away in another couple of weeks, I think I'm going to sacrifice an evening or two to writing out the ideas and hope that satisfies my subconscious that something came of it and it's OK to drop the idea now. I mean, in some sense, the best case is to publish it, get it on to Hacker News, let it go infect someone who does have the time to deal with it, and then use their language when they're done.
I'm hoping perhaps even typing this much may do the trick...
Somewhat relatedly, there's already a section on "culture", something I think is sometimes overlooked by language creators. The shape of the initial libraries can shape the entire ecosystem of a language, even when the language itself is technically capable of supporting a different culture.
There are a lot of winds blowing against a new language, but this is one of the rare winds blowing for it: it can literally be easier to write a new language from scratch and boot up a new culture than it is to change an existing culture. For example, it wouldn't matter if C++ imported every Rust feature, even for the sake of argument to the point where this new (terrible monstrosity of a) compiler could literally compile all current Rust code. The result still wouldn't be "Rust", or able to "kill" Rust, because the sheer inertia of C++ culture (or, for a language this big, "cultures") would simply never produce the sort of code that you see in real Rust.
Broadly speaking, you should attempt to minimize the intrinsic cognitive load (through training, good choice of technologies, hiring, pair programming, etc.) and eliminate extraneous cognitive load (boring or superfluous tasks or commands that add little value to retain in working memory). This will leave more space for germane cognitive load (where "value-added" thinking lies).
The whole point of design is reducing cognitive load. Whenever you are looking at a given thing, you shouldn't need to keep more than about 10 things in your head to understand it completely, preferably fewer. Programming hardly ever blows up your brain with one fantastical concept. Instead, you're worn down with a hundred thousand cuts.
Microservices are designed differently than monoliths. The architecture itself enables you to do things easily. Cognitive load isn't what matters; the engineering is. You can spin microservices up and down to meet demand. You can put them in containers to build an anti-fragile system. You can load balance them. That's much more difficult and more expensive to do with monoliths. The author is thinking of the distinction like an ivory-tower CS professor who thinks he's talking about abstractions, when what's actually being talked about is design and engineering.
This paradigm will be ineffective, in my humble opinion.
It will be ineffective because it involves prediction. Cognitive load is unknown in advance. If predicting when a project will complete is hard, predicting how large the cognitive load will be, and where most of it will fall, is going to be just as hard if not harder.
Microservices and monoliths have never been about cog load. It's an implementation detail that solves scaling, deployment, and team management problems.
The cog load should be fairly similar whether you have microservices or a monolith.
There are good and bad abstractions, but in the end the cog load is the sum of the leaves, and that never changes across the tree. In fact, the cog load can be better with a bad abstraction, or no abstraction at all.
Hard coding is no cog load: it's in that file, in that function, for that feature.
Unless you have a concrete way of measuring something as abstract as "cognitive load", the situation is as it ever was: the stuff liked by me is low cognitive load; the stuff liked by people who disagree with me is high cognitive load.
(Feel free to sub in "complexity", "cohesiveness", et cetera, for "cognitive load")
> Feel free to sub in "complexity", "cohesiveness", et cetera, for "cognitive load"
"...the situation is as it ever was: the stuff liked by me is rationally oriented by reasonable goals; the stuff liked by people who disagree with me is nothing but a post-hoc rationalization of their arbitrary preferences."