Here's the problem, though. Brooks made a certain observation about accidental vs. essential complexity; the author of this post makes another. Brooks made a prediction based on his observation (diminishing returns in the impact on productivity), and many believers in the power of programming languages found his prediction too pessimistic at the time (he lists their objections in "No Silver Bullet, Refired"). Only it turned out that his prediction was correct, except that, if anything, it was too optimistic, not too pessimistic. So if anyone wants to claim that his observation was wrong, they would need to explain why his prediction turned out to be true while the predictions of those who believed he was wrong turned out false.
In addition, I agree with the author that the biggest contributions to productivity we have seen were, in this order, the availability of open-source libraries and online forums, automated testing, and garbage collection -- all of which have been adopted at rates we'd expect from adaptive traits in an environment with selective pressure. Conspicuously missing from that list are linguistic features, which is also in line with Brooks's observation. And yet the author still claims that linguistic features are the silver bullet. At this point that qualifies as an extraordinary claim requiring extraordinary evidence, but I would settle for ordinary evidence, which is also lacking -- strange, considering that a silver bullet, i.e. something with an extreme bottom-line impact, should be easy to observe.
In your first paragraph you say that Brooks was right in his prediction. In your second you say you agree with the author that "open-source libraries and online forums, automated testing, and garbage collection" were the biggest gains. But those first two were the counter-examples to Brooks according to the author.
So where do you actually disagree with the author? You don't think the gains from these were substantial enough?
Also the author makes a clear argument for why functional programming is, from a complexity perspective, something entirely different from mere "linguistic features". And a "silver bullet" according to Brooks is something that might take a decade to show full impact.
> But those first two were the counter-examples to Brooks according to the author.
They are not.
> So where do you actually disagree with the author? You don't think the gains from these were substantial enough?
I think that 1. the collective gains might have been substantial but not 10x, and 2. the very ideas proposed by the author as being a silver bullet are not even among those in the first group.
> Also the author makes a clear argument for why functional programming is, from a complexity perspective, something entirely different from mere "linguistic features". And a "silver bullet" according to Brooks is something that might take a decade to show full impact.
When I was in university, Haskell was all the rage. People were saying that in a few years we'd all be programming in it or in a similar language. That was in 1998 or 1999. Functional programming in general has been well known, and taught, for many decades. I'm not saying it's particularly bad, and maybe it could even have a small positive effect, but calling it a 10x silver bullet at this point sounds delusional. If Brooks was wrong and there is a silver bullet, I very much doubt it's something we've known about, and tried again and again in various forms, for decades.
Brooks defines a silver bullet as having a more than 10X productivity gain.
None of "open-source libraries and online forums, automated testing, and garbage collection" have led to a documented 10X improvement in productivity by themselves, so they are not silver bullets.
A silver bullet cannot be something that is itself of the same order of complexity and labor demand as the code it is helping with. Those are just a ton of lead bullets, and Brooks does have different predictions for those.
We've discussed this before [1], but what I've called the ML-ification of programming languages (static types, type inference, higher order functions, exceptions, ...) has been pervasive. Your own Java has been doing quite a bit of this recently. All major software companies (FB, GOOG, MS) have by now retrofitted types to their dynamically typed legacy languages. Even smaller outfits that are built on dynamically typed legacy languages (e.g. Stripe, Dropbox) are retrofitting types.
> what I've called the ML-ification of programming languages (static types, type inference, higher order functions, exceptions, ...) has been pervasive
It has been popular in some classes of languages that directly influence one another, and less popular in others. If you look at the software world in, say, 1995, I think a larger percentage of programming was done in typed languages than today. The question of "to type or not to type" wasn't on anyone's mind, because virtually all languages used for serious work were typed. In any event, even if there is an ML-ification in important corners of the industry, it's still nowhere near a silver-bullet level of adoption. There is, I agree, a return to the popularity of typed languages, but it hasn't yet reached the level it was at before 2000.
In fairness, the adoption of typed languages in the 1990s was about C/C++, Java, possibly Pascal, Delphi, Ada. None of those languages were built upon ML's innovations (type inference, first-class generics, first-class higher-order functions, pattern matching, etc.). I conjecture that the rise of dynamically typed languages was primarily because the aforementioned typed languages adopted an awkward approach to typing: no inference; subtyping plus casting to implement both ad-hoc polymorphism and generics; or, in the case of C++, generics by compile-time meta-programming, which has been awkward in other ways.
So 1990s types were a worst-case scenario: not expressive enough for many practical cases (you had to cast a lot anyway), so the safety you got from typing was limited; syntactically heavyweight; bad error messages. Pointless trivial type annotations like
A a = new A(...);
are truly grating, but ubiquitous in the typed languages popular in the 1990s. No wonder people preferred Python ...
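For contrast, here's a toy sketch (in Haskell, standing in for the ML family; names invented purely for illustration) of what inference already bought you: no annotations anywhere, yet everything is statically typed and checked.

  -- GHC infers double :: Num a => a -> a and total :: Num a => [a] -> a
  -- with no annotations at all; compare with "A a = new A(...)".
  double x = x * 2

  total xs = sum (map double xs)

  main = print (total [1, 2, 3])   -- prints 12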
I’m not sure why you start in the 90s; plenty of people were using C and Pascal in the 80s.
Are you sure there was a big rise in dynamic languages at any particular point? I’m not so sure there really was (but I’m interested in documentary evidence if you know of any).
I always figured dynamic languages became more popular as computers got bigger and faster, so efficiency became less important in most scenarios. But that’s not the only trend -- plenty of people programmed in BASIC in the 80s, on tiny 8-bit computers! It was slow as hell, but P-code can be very memory efficient, so even on a tiny slow computer dynamic languages can make sense.
(I count BASIC as dynamic even though it lacks useful abstraction mechanisms because most dialects manage memory for you automatically.)
Edit to add: maybe Forth is a better example (and a much better language!) although I don’t think it was ever as popular as BASIC.
Lisp is from the 1950s, dynamically typed and one of the most influential languages of all times. Lisp introduced GC, the single biggest advance in programming languages (but it took 30 years to make it fast). I am not trying to argue that dynamically typed languages became popular in the 1990s.
If anything, I see ML as the (spectacularly) successful attempt at typing Lisp to increase reliability. Remember, Milner was at Stanford in the early 1970s, so he will surely have conversed with McCarthy. Milner's early theorem provers were in Lisp. He invented ML to reduce the pain of using Lisp (remember, ML stands for "meta language") and to make theorem proving less error prone -- more precisely, to reduce the TCB (= trusted computing base) in theorem provers through types (viz. the famous Thm abstraction).
ML was essentially finished in the late 1980s [1]. Every responsible programming language designer could have taken ML's lessons on board since then, and it's an interesting historical question why this did not happen.
Thanks for that, I wasn’t fully aware of the history of ML! Very interesting (and very impressive that this is just one of Milner’s contributions).
I started with Haskell, where the main influence besides ML was Miranda. (If I remember correctly, Haskell was only created because Miranda was proprietary; similar to BitKeeper and Git.) I guess Miranda’s main innovation was lazy evaluation. That has certainly been influential but outside Haskell I don’t think it’s ever had widespread adoption in the same way as ML-style typing and type inference.
> Every responsible programming language designer could have taken ML's lessons on board since then, and it's an interesting historical question why this did not happen.
Agreed! But maybe it is happening, it’s just that it’s taken 30 years instead of the 5 or 10 one might have expected?
Yes, Miranda was very expensive, had an onerous license, and only ran on Unix; that's why researchers felt the need to create a free and open alternative.
Lazy evaluation was invented multiple times: first in theoretical studies of the lambda calculus by Wadsworth in 1971. Later, Friedman & Wise, and independently Henderson & Morris, proposed a lazy Lisp, and Turner (who later did Miranda) gave SASL a lazy semantics (SASL was eager initially). All three were from 1976, so it was clearly an idea in the air. SASL later evolved into Miranda. Another of Miranda's innovations was to use combinators for graph reduction to implement the Miranda run-time.
Which is double “damning” because plentiful memory means more objects, and often deeper interactions, which means the likelihood of getting object lifetime exactly right declines. Not only does manual management become less necessary, it also becomes less tenable.
Hmm, I’m not sure you’re right about the historical trends. As I see it, the static typing battle has been going on forever. Each style has been popular in certain areas but neither has ever been completely dominant.
Javascript, Python and Ruby are the big dynamic languages today. But you can go back as far as you like, to when languages like Perl, TCL, Lisp and BASIC were big. There have always been dynamic languages.
Static languages have gotten more robustly type-safe over time (even today’s C is much safer than K&R C). But apart from that shift, it seems to me that the static vs dynamic landscape has been much the same for decades.
I'm not sure why you're including BASIC in with the dynamic languages -- in its classic microcomputer form, it's pretty well limited to int or float, and string, and arrays of those things. When you make a variable, you always define its type (A is a double, but A$ is a string).
Some versions are a little bit richer in that you can have both ints AND doubles (woot, woot!).
>So if anyone wants to claim that his observation was wrong, they would need to explain why his prediction turned out to be true
Well, who said his prediction turned out to be true? Who actually measured the "impact to productivity" in any quantifiable terms? And who re-did that measurement after "No Silver Bullet, Refired" (which was 25+ years ago)?
You don't need to measure carefully to know that no single development has led to a 10x boost. As someone who has been working on large software since the late 90s, I can say we're building things today at a similar pace, with the exception that there are many more pre-built components we can just grab. Can you point to a technique/tool that has led to a 10x boost?
Precision is less valuable when the measurement is larger.
The very first piece of software I optimized, I started with “One Mississippi”, moved to a stopwatch app on my PDA, before I ever bothered with instrumentation. Because any idiot can tell 2 seconds from 3, that a boat won’t fit in the garage, or that more than double is way less than ten.
Ironically, if Brooks had said 3x instead of 10x, he might still be right and we’d have invested more in measuring things.
I would like to see the evidence for any of your claims. As far as I'm aware there is very little evidence for any productivity improvements related to programming practice, mostly due to lack of high quality studies. It's my understanding that CS doesn't really do that kind of study, for a variety of reasons (primarily, I believe, coming from a mathematics not social science culture).
First of all, empirical verification is not only done through studies. Software is not particle physics, but one of the world's large industries, practiced by millions of practitioners every day. Some techniques/methodologies/practices have spread through the industry like wildfire -- the three I mentioned -- which is at least consistent with the hypothesis that they deliver value; so, to the extent that anything has had a positive effect, I believe these three are pretty much the only contenders. It is possible that even I am too optimistic, but it is very unlikely at this point that I'm too pessimistic.
Also, the claim that X has not had a large positive effect requires no evidence. It must be assumed true until proven otherwise; that's just the scientific method. The larger the claimed effect and the longer it goes undetected -- especially in a selective environment -- the less likely it is real. And the claim that something has had a big positive bottom-line effect, yet that effect goes undetected in a selective environment known to have selected beneficial techniques all along, is false with very high likelihood, to the point of being self-contradictory.
I am sorry, but the first para of your comment does not make sense. You say "believers" found prediction "too pessimistic". And then you say "except" as if they were wrong, but follow it with "it was too pessimistic, not too optimistic".
This is a really interesting take on the Silver Bullet argument - Brooks is right that big gains in productivity are only gained by reducing accidental complexity but he was wrong about how much accidental complexity there still is to reduce.
I think most software engineers probably suffer from some cognitive bias in trying to estimate exactly how much accidental complexity exists that could eventually be removed. We tend to think about incremental improvements to tools and processes rather than thinking at a higher level about improving the overall process of translating business requirements into working software. Even with much better tooling and languages (e.g. F#, Git, AWS, etc.) there's still a lot of fat that could be trimmed.
I'm excited to see what the world of low-code will do to our current assumptions about how much accidental complexity there actually is. Maybe projects like Dark[1] could actually achieve the order-of-magnitude gains that Brooks was convinced wouldn't be possible. Sure, there's no panacea, but maybe we're round the corner from a genuine "Silver Bullet" in the sense that Brooks actually meant.
Along the lines of your "no code" reference, I'm enamored of the potential for well-architected and -implemented design systems and their component libraries to virtually eliminate the need for non-experts to write CSS in constructing good, maintainable UI. That's a profoundly impactful change vs the status quo.
I'm in Brooks' camp. I see mostly essential complexity. There are problems we've been pounding on for a decade that have solutions that are mostly essential complexity. Then people come in and create pointless requirements that turn that fairly simple essential complexity... extremely complex.
I think the modern problem isn't accidental complexity, it's essential complexity that isn't actually necessary for project success.
Another way of looking at this argument is that the majority of hard problems in software engineering aren't computer science problems, they're people management problems. It's not merely enough to create a programming language with an order-of-magnitude improvement in productivity, you also need to convince all the stakeholders to let the project be developed in that programming language, which includes both helping them get over their unfounded fears ("it won't work") and doing the technical work (or the recursive people work) to get them over their well-founded fears ("how will we hire/train programmers," "what do we do with all our existing code in this other language," "is there a vibrant language ecosystem," "where's the development tooling," etc.).
Same applies to other sorts of possible silver bullets. The Linux kernel has been using git for well over a decade, but even GCC (a free software project which has no managers adding requirements to make themselves look good) only switched to git last week.
In other words, if you have a 10x technical silver bullet against accidental complexity, it won't look like a silver bullet / it'll appear to get stopped at essential complexity until you do the people work too. Without that, you won't be able to point at very many projects that have actually seen the 10x improvement.
>There are problems we've been pounding on for a decade that have solutions that are mostly essential complexity. Then people come in and create pointless requirements that turn that fairly simple essential complexity... extremely complex. I think the modern problem isn't accidental complexity, it's essential complexity that isn't actually necessary for project success.
That would be a third category of complexity.
Essential complexity
Accidental complexity
and a new one: Complexity-imposed-as-"essential"-by-marketing-customer-etc.
In the context of any given project, that's, if anything, a subcategory of Essential. The software is a tool to do something, and the functional requirements are what define the purpose of embarking on the project. You can't escape the business logic that needs to be enabled. It's like the old service industry joke "this job would be great...if it weren't for all the customers". There would be no software if there weren't a set of requirements.
>In the context of any given project, that's, if anything, a subcategory of Essential. The software is a tool to do something, and the functional requirements are what define the purpose of embarking on the project.
It's not that clear cut.
For me "Essential" is what captures the essence of the problem domain and the requirements in actual use.
On top of that, there are lots of "requirements" that are there due to bribes, idiotic "brainstorming" sessions where everyone felt obliged to chime in, what the CEO's spouse thinks should certainly be there, recent fads, over-thinking from some execs, etc.
The distinction between these and the truly "essential" is of utmost importance.
First, because this "required but not essential" area is where an engineer can look for tradeoffs (to the benefit of essential areas).
Second, because those items might hamper the essential functionality (a requirement to build a website in C, because the CEO heard "it's the fastest language", can hurt security, maintainability, and other aspects; a requirement to give a banking webpage small, fancy, grey-on-white fonts because someone thought the design was "cool" hurts readability).
Third, because engineers should not just operate on a "follow orders" basis. They also have an ethical responsibility towards the final users (society at large for public software, customers, the customer's employees, etc).
And, of course, because not being able to make that distinction means the engineer can't tell good from bad, useful from useless; it's all the same to them. Whether they can convince the customer is another thing, but engineers should absolutely be able to make the distinction.
>There would be no software if there weren't a set of requirements.
Which is neither here nor there. There would also be a hell of a lot of better, faster, more reliable software if bogus requirements got pointed out and removed.
I see accidental complexity every time I step the debugger into library code. That’s where it’s hiding. In the internal and external impedance mismatches of library code.
I had to step into papi the other day. Most Node libraries for http have been designed by someone who loathes code duplication so much that they don’t care if the user-baby goes out with the code duplication bathwater.
I pray for a day where code is measured not by lines of code or code coverage but by call stack metrics.
Brooks defines a "silver bullet" as a single technology or tool that yields a tenfold increase in productivity across the board. In other words, a "silver bullet" has to eliminate 90% of all the work a developer does during a workday.
But a typical developer is not even coding half of the time! They are discussing with the product owner to clarify requirements, reading specifications, writing specifications, thinking, surfing Jira, etc. A lot of the work of a developer is to take vague requirements and transform them into unambiguous language. These are effects of the inherent complexity of software development. And then we have things like researching and evaluating which framework or library is the best fit for a given task.
Haskell is cool and all, but it will not eliminate 90% of what a developer does in a day. No single programming language, however perfect, will.
I disliked that Twitter thread. It's rife with rose-colored views of the past, and misunderstanding of what they are referring to. The two main ones are: a) library computers weren't always fast, and they definitely weren't checking as much data -- we're a few orders of magnitude higher in what things are searching, and Amazon is still really fast; and b) Google Maps is a very specific use case of mapping, not analogous to the general-purpose map you would get previously. There are plenty of online trip-planning services that do most or all of the interesting stuff that was mentioned. I've used a few of them.
The whole thread is analogous to a person who grew up on a farm complaining that their Honda Civic doesn't handle rough terrain as well as the tractor on the farm they grew up on did, to which the only response is either "duh" or "no shit, Sherlock", depending on how charitable you're feeling.
I think Brooks is still right, and what the author mentions is not one thing but a set of practices that became proven enough to turn into almost a standard approach in software development. And they do provide some productivity gains if we compare with how software was built 50 years ago. Yes, the difference is big, but we forget that all those things were introduced incrementally, over the course of decades. So yes, together they make a huge difference, but they didn't appear in one day.
Git and Mercurial were both originally developed over the course of a month or so. If DVCS really results in an order-of-magnitude improvement (which I do find credible), then I think it qualifies as a silver bullet by Brooks's definition.
Of all Git users, only an extremely small percentage use Git as a DVCS (Linux kernel, etc). GitHub is a centralized VCS. In fact, I think you could argue that the emergence of GitHub is a demonstration that DVCS is nearly useless for the vast majority of software development.
I think you don't understand what makes centralized VCS different from distributed. GitHub is just another clone of the repo.
For example, I can commit in my Git clone, create a new branch, merge a branch, and push it to the server, all without the GitHub server being available. If GitHub went down or closed tomorrow, it would not significantly affect my work on the repository.
You can do all of this because Git repos are self-contained complete historical records.
You could do this with a traditional VCS as long as you took frequent backups. Git is an improvement in this regard, but not a revolution.
Furthermore, you are describing a workflow that is inconsistent with many "industry best practice" recommendations. If GitHub went down, a very large number of Git users would not be able to run tests or deploy their code to production -- their CI/CD pipelines don't work without GitHub. Their historical record of issues goes away when GitHub goes down, etc.
With a traditional VCS, I would lose all the history and the ability to work (to commit my changes, for example) if the server went down.
And I understand that issues and CI/CD would also stop working, but neither of those is part of the VCS itself. I'm not aware of distributed issue tracking (maybe FOSSIL) or CI/CD.
Still it doesn't mean that GitHub makes Git centralized.
DVCS results in an order-of-magnitude improvement? In writing software? I highly doubt it. For that to even be possible, DVCS would have to take on the tasks that take 90% of your time as a programmer. I don't think it can.
To quibble, I've long been frustrated that we have no way to distinguish between natural and decimal orders of magnitude. I'm convinced the former is nearly always a more useful measure, but we're sort of stuck with the latter.
Even so, no one prior to Git spent 50% of their time wrestling with the VCS, or solving problems caused by a lack of one.
Agreed. There is still "no silver bullet", though experience has shown some practices that seem to be moderately successful.
I think the author of this article is overselling almost every one of these advances. Don't software projects still keep failing or being cancelled at an alarming rate, even when adopting some or all of these improvements? And there's no compelling evidence that projects using, say, TDD are significantly more successful, is there?
I maintain an Emacs package[1] that focuses on user interaction and hooks deeply into the internals of Emacs in potentially fragile ways (e.g. it sometimes needs to actually inspect the call stack in order to determine the correct behavior). For a long time, I had no way to test it in an automated fashion because it was an interactive package, so each release required me to laboriously test its functionality to make sure I hadn't broken anything. Beyond just the time required for manual testing, the mere need for manual testing discouraged me from making releases in a timely fashion. The manual testing also meant that my package was only tested on a single Emacs version (i.e. whatever I was running at the time).
Then I finally figured out how to simulate user interaction using Elisp[2], and used this to add a test suite to my package. My tests have caught a number of regressions, especially in older versions of Emacs that I no longer use but still want to support. In addition to being faster, development has been much easier and less stressful for me because I know I can experiment and rely on my test suite to catch almost any regression I introduce. And the stress factor is important, since this is an open source package that I maintain for free, and if it was too stressful, I would probably just stop working on it.
Was adding tests to this project a 10x improvement? I don't know, but it sure felt qualitatively like a silver bullet to me.
You have a point and automated tests are definitely a big deal. However, as Wikipedia summarizes about "No Silver Bullet":
> "While Brooks insists that there is no one silver bullet, he believes that a series of innovations attacking essential complexity could lead to significant improvements."
So I'd say your experience doesn't contradict his point. Automated testing is not an orders of magnitude improvement, and software projects continue to fail or be flawed at an alarming rate, but it definitely helps! (like Brooks also said of high-level languages).
Automated testing existed at the time Brooks wrote his essay. In "The Soul of a New Machine", published 5 years prior, engineers arrived in the morning and examined the results of the automated tests that ran overnight, looking for regressions and improvements in the microcode for the new processor. The author treats this sort of behavior as standard engineering work, something all serious engineering professionals do and have been doing for as long as such a thing was possible. Not new, not novel.
What we have now is the result of refinements and improvements, perhaps it is a series of innovations. You are right - automated testing doesn't refute Brooks.
This is off-topic in a discussion of silver bullets, but what advantages do you see in ido-style completion over icomplete? They feel very similar to me and icomplete is built-in, so I went with that.
I feel like ido is just a more fully-explored version of the same concept. For example, if you use C-. and C-, to rotate the list in icomplete, that rotation is not preserved if you then continue typing, whereas in ido it is preserved, so that continuing to type characters that match the currently-selected completion isn't going to switch you to another matching completion.
Software projects don't fail or get cancelled because of the technology, but they do for "technical" reasons - many of which get amalgamated into the politics bucket.
The difference between accidental and essential complexity cannot be underscored enough. Removing accidental complexity, even if it flies in the face of "best practices," can be very powerful and appropriate as long as it's understood where you are intentionally coloring outside the lines and the conditions under which you'll need to revert to best practices for scaling up.
An example: I attended a conference presentation in which the presenter discussed dissecting the implementation of Identity Server into a dozen sub-services with half a dozen backing data stores with Redis caching layers and an Event Source model for propagating inserts and updates of the underlying data. This would be a prime example of accidental complexity gone wild if you built this just to have a multi-container microservice -- unless your single-signon service as a whole needs to support 100MM+ users, in which case this is essential complexity, not accidental.
Reducing accidental complexity but being mindful of how it could become essential complexity under certain conditions in the future is the mark of a wise architect, IMO.
The progress in tooling and methodology is paralleled by ever increasing complexity of software and problems it is solving. It's a zero-sum game. Still no silver bullet.
If a 10x improvement in productivity is paralleled by a 10x increase in expectations for the complexity of problems we can solve, that's not a zero-sum game, that's a silver bullet.
I think the points about accidental vs essential complexity are well made. However...
The title/s are about Silver Bullets, and I strongly do not believe in them.
I've been programming for an embarrassing number of years, and I've seen methodologies come and go. Increasing productivity is about developing a good team culture with a small number of simple tools, not about following the latest Silver Bullet.
In fact, it's very easy to increase the level of accidental complexity by adopting an overly prescriptive or complex tool/methodology (Jira anyone?).
Right there with you! I've also been programming a while now, and the biggest improvements to productivity, code quality, and overall success are indeed building a good team and keeping things simple. That hasn't changed since the 80's/90's when I first started out.
It's also true that overly prescriptive tooling can be such a pain. This also includes automated testing in my opinion (something the author counts as a big advance). It's fantastic for certain classes of software code and terrible for others. I've seen half my team spend multiple days just fixing integration or e2e tests instead of working on features that add value. You can grind to a halt with this stuff just as much as you can grind to a halt because of so many bugs to fix...
You have to find the balance, and I think that's where senior engineers really add value -- they've been round the block a few times and have a better idea where that balance lies.
The only way we'll see real 10x improvements take hold is for our discipline to mature. As long as we are experiencing high rates of growth, where all it takes is 3-5 years to become "senior", while the voices of the real senior developers are swamped by hordes of newer devs, we will wallow in inefficiency and fail to improve much.
Most of the accidental complexity I see day to day comes from the slow accumulation of errors in craft. Need to validate that an ID is at least plausibly valid? "I'll just match it with a regex here" -- and in 27 other places in the code, each one added in a different sprint. Little things like referring to data that is passed into an API with the variable name "json" when it is really a data structure that was parsed from the body of an API request. Is this thing I am manipulating a "tab", a "filter", a "type selector", or a "group"? Methods and variables using a mix of conflicting names for the same concept.
Until we learn, as a profession, that shipping working code is not enough, that our code must clearly, concisely, and consistently communicate the core concepts embodied in the program, accidental complexity will bury us.
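To make that concrete, here's roughly the shape of the fix I have in mind for the ID example above -- a sketch with made-up names (Haskell only because it's compact; the idea is language-agnostic): one place decides what an at-least-plausible ID looks like, and everything else goes through it.

  import Data.Char (isDigit)

  -- The single source of truth for "is this ID at least plausible?"
  newtype OrderId = OrderId String
    deriving (Eq, Show)

  parseOrderId :: String -> Maybe OrderId
  parseOrderId s
    | not (null s) && all isDigit s = Just (OrderId s)
    | otherwise                     = Nothing

  main :: IO ()
  main = print (parseOrderId "12345", parseOrderId "12a45")   -- (Just ..., Nothing)

Every caller that needs an ID takes an OrderId, so the regex (or whatever the real rule is) lives in exactly one place instead of 27.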
I would argue that a good chunk of programming tools don't reduce complexity at all. They just shift it to well-understood "accidental" complexity.
Your program needs a way to store and retrieve key:value pairs. It's an inherent complexity your program requires. You solve it by using an industry-standard product like Redis. Now your code doesn't contain any key:value logic itself! Complexity avoided, right? Nope, just shifted, and doubled or tripled in size to boot. No KV logic, but now you have to have db libraries, parameter parsers and sanitizers, sockets between services, and so on and so forth.
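To be concrete about "shifted": the essential key:value logic itself is a handful of lines -- here's a sketch in Haskell with illustrative names. Everything an external store layers on top of something like this (client library, connection handling, serialization, timeouts, another process to deploy and monitor) is the complexity that got moved, not removed.

  import Data.IORef
  import qualified Data.Map.Strict as Map

  type Store = IORef (Map.Map String String)

  newStore :: IO Store
  newStore = newIORef Map.empty

  put :: Store -> String -> String -> IO ()
  put store k v = modifyIORef' store (Map.insert k v)

  get :: Store -> String -> IO (Maybe String)
  get store k = Map.lookup k <$> readIORef store

  main :: IO ()
  main = do
    s <- newStore
    put s "answer" "42"
    get s "answer" >>= print   -- Just "42"

Obviously an in-process map is no substitute for Redis when you need persistence or sharing across machines; the point is only that those needs are where the extra moving parts come from.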
It's well-trodden ground; your IDE will generate 90% of that type of code for you. You throw a library or template or whatever at it and go about your day.
Software isn't just your code. Now your application that needed K:V storage has the entire bug and exploit surface area of your code plus Redis. Do this 20, 30, 100 times with other libraries and packages. Each time your code gets a little smaller and a little slicker looking, and the actual running software becomes an unholy monster spread across an entire server cluster. Oh well, at least your code is clean and tidy.
What I dislike about all the Haskell examples on every blog is that the writer happily assumes that I'm familiar enough with Haskell syntax to understand the code.
In the ordinary method, there's a function name, it takes one input, and it returns a result; easy-peasy. Yes, what the method does isn't clear, but at least I know the method's name!
Now let's look at the Haskell function.
tryAccept :: Int -> Reservation -> MaybeT ReservationsProgram Int
I've read enough of these Haskell enthusiast blogs to know that the name of the function is tryAccept (it helps that it's the same as the C# method), and that it takes two parameters, an int and a Reservation, and returns a Maybe of a ReservationProgram. And it's somehow an int, so it's like an enum? That's weird; clearly I don't understand Haskell syntax.
And then later, I'm told that it's simple to figure out what a ReservationsProgram is because there's some enum that doesn't include the word "ReservationProgram" in it.
So, the author here is arguing that between ready reference/help on the web and TDD, that programmers today can be _100x_ more productive than they were in the 80s? Add a dash of strongly-typed functional programming and we can be 1000x more productive?
I agree that these tools can be productivity enhancers, but even a 10x factor would be a huge claim, let alone 1000x.
Especially if we break this down into working days. There are only about 200-250 per year. Could you really build, in a single day, a piece of software that used to take 4 years to develop? Even 100x would mean a year's work in three days.
It's a shame that the author blows his credibility asserting that garbage collection reduces accidental complexity, because he is right that the amount of the latter really has been much larger than Brooks estimated.
If you have ever watched the hoops somebody jumped through to plug a memory leak in a Java program, you will be forced to agree.
A fundamentally very similar process appears as a functional programmer unravels a program to fix a performance bottleneck. The reason it is similar is that, just as ballooning memory usage indicates a memory leak, a ballooning runtime indicates a time leak.
And, just as GC languages provide inadequate facilities to manage resource use, functional languages provide minimal facilities to manage time use.
You could argue that memory and time management are on the accidental side of the ledger, and while the scientist in us would be tempted to agree, the engineer knows that managing limited resources is the essence of engineering, and taking away the tools needed to manage resources makes resource problems balloon out of control, breeding extremes of accidental complexity.
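A textbook Haskell illustration of the kind of leak I mean: the lazy foldl quietly builds a million-element chain of thunks before any addition happens, while the strict foldl' runs in constant space. Same answer either way; only the resource behavior differs, and the language gives you little help seeing which one you wrote.

  import Data.List (foldl')

  leaky :: Integer
  leaky = foldl  (+) 0 [1 .. 1000000]   -- builds a huge thunk chain first

  fine :: Integer
  fine  = foldl' (+) 0 [1 .. 1000000]   -- evaluates as it goes

  main :: IO ()
  main = print (leaky, fine)            -- both 500000500000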
> If you have ever watched the hoops somebody jumped through to plug a memory leak in a Java program, you will be forced to agree.
In my observation, a lot of “improvements” in software development are of this kind: they make it quicker and easier to get started, but make troubleshooting a lot more complex specifically because they’ve hidden what’s actually going on in an attempt to simplify things. Not that managed memory is a bad thing! But ideally it would be wielded by people who’d spent a fair amount of time manually managing memory so that they know what to look for.
Managed memory is more of a superfluous thing. When you have resource management in general, memory is among the easier resources to manage, and GC looks like a solution without a problem.
This might be just me, but Brooks explicitly put a decade-long time limit on his 1986 prediction, complicating the author's comparisons from ~three decades later. I guess the way to tell would be to have people or teams build examples of 1986 level-of-complexity software and see how long it takes them.
That's fine for Brooks, but the OP is entirely correct that people still routinely cite the "no silver bullet" doctrine 3 decades later. So it's still worth revisiting even if it's technically 2 decades past its expiration date.
Brooks revisits NSB 20 years after the fact and makes pretty strong arguments that specific intervening technologies have not disproved his original conclusions. I think we could do the same today.
The author of this post conflates "better" with a 10x improvement, based on a few anecdotal experiences that are themselves extremely accidental to the act of creating software. He may have a point about the large volume of accidental-improvement areas remaining, but I think this is because, like the JS ecosystem, they are growing faster than we solve them.
I think the reality is likely the author is themselves a 10x improvement over their decade-past self.
Closest thing I could find was a report [0] on a 2007 panel discussion with Brooks at OOPSLA, titled “No Silver Bullet” Reloaded – A Retrospective on “Essence and Accidents of Software Engineering”.
Here's [1] a link to his 1995 update, an excerpt from an anniversary edition of 'The Mythical Man Month'
> Ostensibly in the tradition of Aristotle, Brooks distinguishes between essential and accidental complexity. This distinction is central to his argument, so it's worth discussing for a minute.
> Software development problems are complex, i.e. made up of many interacting sub-problems. Some of that complexity is accidental. This doesn't imply randomness or sloppiness, but only that the complexity isn't inherent to the problem; that it's only the result of our (human) failure to achieve perfection.
> If you imagine that you could whittle away all the accidental complexity, you'd ultimately reach a point where, in the words of Saint Exupéry, there is nothing more to remove. What's left is the essential complexity.
Okay, that might be a useful way of thinking of things...
But the author then goes on to talk about how he thinks that Fred Brooks underestimated the percentage of accidental complexity in the average project.
However, he then starts to go into things he thinks are "silver bullets", but most of them are tools that, in my opinion, address essential complexity:
* The World Wide Web: in his description, this solves a problem of the complexity of finding how to do things. Can we imagine a solution where finding out how to do things isn't part of the system? No? Under the definition, that's essential complexity.
* Git: in his description this solves the problem of tracking changing source. Can we imagine programming where we don't need to keep track of what source changes happened? I can, maybe, in a distant future where programming is done an entirely different way, but I'd argue that at that point it's not really the same problem. If we're writing code to solve a problem, then tracking changes to that source is essential complexity within that solution.
* Garbage collection: again, maybe there will be a computer that works completely differently (quantum computers?) but under current architecture, the physical machinery of a computer has limited space, so we need to reallocate memory that is no longer being used. That's not accidental complexity, that's essential complexity. There are other solutions to this, but every program solves it in some way, whether it's by reference counting, manual allocation, borrow checking, preallocating everything, or just allocating everything on the stack and copying everything, and each of these has complexity: that complexity is essential.
I think what this all points to is that essential complexity isn't intractable: you can make it someone else's problem. In these cases, we're taking some part of the essential complexity of solving a problem, and letting our ISP and websites solve it, or letting code that we don't maintain solve it. Some of the most powerful tools are ones that solve a form of complexity that is essential to a wide variety of problems.