These all stem from the complete lack of empiricism and scientific method in this discipline. I'm pretty sure we all have opinions on most of that stuff, none of which are backed by any evidence whatsoever; we are basically always going with our gut.
"23. Has anyone ever compared how long it takes to reach a workable level of understanding of a software system with and without UML diagrams or other graphical notations? More generally, is there any correlation between the amount or quality of different kinds of developer-oriented documentation and time-to-understanding, and if so, which kinds of documentation fare best?"
This is such an important question, and it's just the tip of the iceberg of a very deep problem that is rotting our software systems. We are absolutely pathetic at dealing with complexity, and we actually enjoy complexity. We don't tackle questions like #23 ANYWHERE near as seriously as we should.
Developers overestimate their mental bandwidth which leads them to pompously build over-complicated tech stacks despite only having archaic tools to mitigate and navigate their complexity.
Companies don't need to hire more devs to deal with their complex software systems; they need better tools to navigate those systems. But because companies don't truly value their money and devs don't truly value their time, we end up in the situation we are in now.
We should have hundreds of companies investing in initiatives akin to Moldable Development[1]; instead they play the following bingo:
1) let's just hire more devs and hope to land on a 10xer
2) let's build our own framework
Additionally, we overvalue specialization. By overloading developer brains with complex tech stacks, we encourage a culture of specialized profiles who find solace in trivia. In doing so, we limit cross-pollination and stifle true innovation. This attitude is actively killing off thousands of valuable ideas.
Every second, there's a coder out there who thinks of something wild that requires very specific tools from different fields, only to find that the people who built those tools couldn't be bothered to make them accessible, within a sensible time budget, to anyone outside their niche/ivory tower. So the dev either drops the idea or gets sucked into the niche.
This is tragic, but hey look! We have a new (totally not low-hanging fruit that could be predicted 10 years ago) Generative Model, WOW! "What a time to be alive"!
> Developers overestimate their mental bandwidth which leads them to pompously build over-complicated tech stacks despite only having archaic tools to mitigate and navigate their complexity.
Spot on. Most engineers don't (and can't) demand a working understanding of the systems they build, and are perfectly happy reverse engineering even their own creations (although we rarely admit so bluntly that this is the case). So, we're primarily a trial and error species, not architects. Alchemists, perhaps?
Tests, which are the only thing holding any moderately sized software together, are just a way of automating the trial-and-error process of development. It even kinda works, but it is limited by (among other things) the imagination of the test author, who frequently happens to also be the business-logic author.
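To make that concrete, here is a toy sketch (all names and the bug are invented for illustration): hand-picked example tests written by the same person who wrote the logic happily pass, while automating the trial-and-error — random inputs checked against a brute-force oracle — surfaces the case the author didn't imagine.

```python
import random

# Toy "business logic": do two closed intervals [a1, b1] and [a2, b2] overlap?
# Deliberate bug: strict < drops the touching-endpoints case (b1 == a2).
def overlaps(a1, b1, a2, b2):
    return a1 < b2 and a2 < b1  # should be <= for closed intervals

# Example-based tests, limited by the author's imagination: all pass.
assert overlaps(0, 5, 3, 8)
assert not overlaps(0, 2, 5, 9)

# Automated trial and error: random inputs checked against an obvious oracle.
def oracle(a1, b1, a2, b2):
    return max(a1, a2) <= min(b1, b2)

random.seed(0)
counterexamples = []
for _ in range(10_000):
    a1, a2 = random.randint(0, 10), random.randint(0, 10)
    b1, b2 = a1 + random.randint(0, 5), a2 + random.randint(0, 5)
    if overlaps(a1, b1, a2, b2) != oracle(a1, b1, a2, b2):
        counterexamples.append((a1, b1, a2, b2))

# The random search finds the touching-endpoint cases the examples missed.
print(f"found {len(counterexamples)} counterexamples, e.g. {counterexamples[0]}")
```

The point isn't the specific bug; it's that the example tests encode only the failure modes the author already thought of, while the brute-force search doesn't need imagination.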
Definitely agree that we need better tools, but I'd argue that we also just need better reasoning to be able to formulate the right questions. Asking the right question can lead to surprising and sometimes trivial solutions.
That's because creating software is all about making human artifacts, not studying the natural world, and that's an inherently subjective task. It's like asking "what's the best set of tools for a blacksmith?" or "what's the best way to brew beer?". Well, it depends very heavily on the blacksmith or the brewery. Of course that doesn't mean all tools are equally good, but all the interesting cases are going to be personal.
There is no scientifically correct answer to questions that are often not objective; you will quickly find that almost everyone weighs the questions the article asks very differently, for often legitimate reasons.
> that's because creating software is all about making human artifacts, not studying the natural world and that's an inherently subjective task
Electronic circuits are also human artifacts, but there are objectively valid principles and practices for developing those. Of course there are also opinions and subjectivity on some characteristics that generally don't matter to the actual functionality, but that seems far from where software development currently is.
This may be, but it is useful to point out that, in many cases, engineering practices in more tangible scientific areas are also driven by personal preference and opinion.
For example, in which situations should you use the Redlich-Kwong equation of state over a simple ideal gas law or VdW? There will always be one model that gets closer to the measured values under a specific condition, but across the entire domain it may be difficult to determine which model is most appropriate.
Even in these fields the decision falls to opinion — some models may be more precise and accurate, but require more processing/development time. Which is just software engineering again.
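A rough Python sketch of that tradeoff, using approximate literature critical constants for CO2 (the numbers and conditions here are illustrative, not a reference implementation): in a dilute regime all three models agree closely, so the extra modeling cost buys nothing, while at high density near the critical point they diverge sharply and the choice of model actually matters.

```python
import math

R = 0.083145            # gas constant, L·bar/(mol·K)
Tc, Pc = 304.13, 73.8   # approximate critical point of CO2: K, bar

def p_ideal(T, Vm):
    # Ideal gas: P = RT / Vm
    return R * T / Vm

def p_vdw(T, Vm):
    # Van der Waals, with a and b from the critical constants
    a = 27 * R**2 * Tc**2 / (64 * Pc)
    b = R * Tc / (8 * Pc)
    return R * T / (Vm - b) - a / Vm**2

def p_rk(T, Vm):
    # Redlich-Kwong, with a and b from the critical constants
    a = 0.42748 * R**2 * Tc**2.5 / Pc
    b = 0.08664 * R * Tc / Pc
    return R * T / (Vm - b) - a / (math.sqrt(T) * Vm * (Vm + b))

# Dilute gas (Vm = 25 L/mol): the models agree to well under 1%.
# Dense gas near Tc (Vm = 0.2 L/mol): ideal gas is wildly off.
for T, Vm in [(300.0, 25.0), (320.0, 0.2)]:
    print(f"T={T} K, Vm={Vm} L/mol: "
          f"ideal={p_ideal(T, Vm):.2f}, vdw={p_vdw(T, Vm):.2f}, "
          f"rk={p_rk(T, Vm):.2f} bar")
```

Whether the extra accuracy of RK is worth the extra model complexity depends entirely on which regime you operate in — which is exactly the "python or go" style of judgment call the thread is describing.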
My previous comment came out harsher than I intended — I don't mean to imply that opinion and experience don't play a factor in other disciplines; we are simply more prone to see the issues in software engineering because it is the field we know best.
However, I still think the situation is a bit worse. The tradeoffs in the models you mention are well understood. The equivalent in SE would be something like "should we use python or go", and it's very unclear that we can definitively state which language suits which use cases. There is simply no evidence — the plural of "anecdote" isn't "data".