The interesting thing is that the programmer's tendency to abstract and future-proof their code mirrors the high modernist movements of the late 19th and early 20th centuries, as James C. Scott analyzes in Seeing Like A State and Michel Foucault discusses in depth in Discipline and Punish and elsewhere.
"Modernism" is basically characterized by attempts to predict and control the future, to rationalize and systematize chaos. In the past this was done with things like top-down centrally planned economies and from-scratch redesigns of entire cities (e.g. Le Corbusier's Radiant City), even rearchitectures of time itself (the French Revolution decimal time). The same kinds of "grand plans" are repeated in today's ambitious software engineering abstractions, which almost never survive contact with reality.
> The same kinds of "grand plans" are repeated in today's ambitious software engineering abstractions, which almost never survive contact with reality.
Yonks ago, i was reading an introduction to metaphysics at the same time as an introduction to Smalltalk, and it struck me that the metaphysicians' pursuit of ontology was quite similar to the Smalltalker's construction of their class hierarchy. The crucial difference, it seemed to me, was that the metaphysicians thought they were pursuing a truth that existed beyond them, whereas the Smalltalkers were aware that they were creating something from within them.
It's very likely that my understanding of one or both sides was wrong. But ever since then, i've always seen the process of abstraction in software as creating something useful rather than discovering something true. A consequence of that is being much more willing to throw away abstractions that aren't working, but also to accept that it's okay for an abstraction to be imperfect, if it's still useful.
I, probably arrogantly, speculate that metaphysicians would benefit from adopting the Smalltalkers' worldview.
I, perhaps incorrectly, think that the ontologists' delusion is endemic in the academic functional programming [1] community.
> But ever since then, i've always seen the process of abstraction in software as creating something useful rather than discovering something true.
Software, in my personal experience, is closest to the study of mathematics: there is an arbitrary part (the choice of axioms)—but, once that part is in place, you must obey those axioms and discover what facts are true within them.
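A tiny worked instance of that, with the monoid axioms as my own (arbitrary) pick of example, nothing from the thread itself: choose the axioms, then discover a fact that's forced by them.

```latex
% Chosen (arbitrary) axioms: a set M with operation \cdot and element e.
\[
  \text{(A1)}\; (a \cdot b) \cdot c = a \cdot (b \cdot c),
  \qquad
  \text{(A2)}\; e \cdot a = a \cdot e = a.
\]
% Discovered fact, forced by the axioms: the identity is unique.
% If e' also satisfies (A2), then
\[
  e' = e \cdot e' = e.
\]
% The first step uses (A2) for e; the second uses (A2) for e'.
```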
If you don't obey your own chosen axioms, the system you create will be incoherent. In math, this means it simply fails to prove anything. In software, it might still be useful, but it fails to obey the (stronger) Principle of Least Surprise.
The regular PoLS is just about familiarity, effectively.
The strong PoLS is more interesting. It goes something like: "you should be able to predict what the entire system will look like by learning any arbitrary subset of it."
The nice thing that obeying the strong PoLS gets you is that anyone can come in, learn the guiding axioms of the system from exposure, and then, when they add new components, those components will fit naturally into the system, because they'll be relying on the same axioms.
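To make that concrete, here's a minimal sketch of what strong-PoLS uniformity can look like in code; every name here (`Resource`, `makeResource`, `users`, `invoices`) is hypothetical, invented just for illustration:

```typescript
// The "axioms": every resource exposes the same four verbs with the same
// signatures and the same conventions.
interface Resource<T> {
  get(id: string): T | undefined;
  list(): T[];
  create(item: T): string;     // always returns the new item's id
  remove(id: string): boolean; // always returns whether anything was removed
}

// One generic constructor backs every resource, so the axioms can't drift
// from module to module.
function makeResource<T>(): Resource<T> {
  const store = new Map<string, T>();
  let nextId = 0;
  return {
    get: (id) => store.get(id),
    list: () => [...store.values()],
    create: (item) => {
      const id = String(nextId++);
      store.set(id, item);
      return id;
    },
    remove: (id) => store.delete(id),
  };
}

// Having learned `users`, a newcomer can predict `invoices` sight unseen.
const users = makeResource<{ name: string }>();
const invoices = makeResource<{ total: number }>();
const id = users.create({ name: "Ada" });
console.log(users.get(id));   // { name: 'Ada' }
console.log(invoices.list()); // []
```

The point is that `users` and `invoices` obey the same axioms by construction, so anyone who has learned one module can predict the rest of the system.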
> there is an arbitrary part (the choice of axioms)—but, once that part is in place, you must obey those axioms and discover what facts are true within them.
However, that's almost never the way math is originally developed. As a student one gets this impression, but usually from a topic that has been distilled and iterated on again and again, with people spending a lot of time on how to lay out the "storyline" of a subfield.
More commonly, some special case is encountered first, and then someone tries to isolate the most difficult core problem involved, stripping away the irrelevant distractions. The axioms don't come out of the blue. If a certain set of axioms doesn't pan out as expected (doesn't produce the behavior you want to model with them, but, for example, "collapses" into a trivial and boring theory), then the axioms are tweaked. Indeed, most math was first developed without having (or feeling the need for) very precise axioms, including geometry, calculus, and linear algebra.
This isn't addressed at you specifically, but I see how similar views of math in education make people believe it's mechanistic rule-following, a very rigid activity. I think it would help if more of this "design" aspect of math were shown.
Even when mathematicians feel they are discovering something, it rarely feels like discovering axioms; it's more like discovering kinds of "complexity" or interesting behavior in abstract systems. That complexity still has to be deliberately expressed as formal axioms and theory, but I'd say that part is more like design or engineering, like a writer's exact word choice versus the plot or the overall story.
And if these abstractions do survive contact with reality, it's often reality that has to change to adapt to the abstraction. You find yourself saying, "the computer can't do that", not for any fundamental reason, but because the abstraction wasn't designed with that in mind. The unfortunate users have to change the way they work to match the abstraction.
Seeing Like a State describes a similar situation, where people had to give themselves last names so they could be taxed reliably. I wonder how many people have had to enter their Facebook name differently for it to be considered a "real name"? Software that "sees" the world in a particular way is a lot like a state bureaucracy that "sees" the world in a particular way, especially when this software is given power.
https://www.deconstructconf.com/2017/brian-marick-patterns-f... is an interesting talk on this topic, which is where I learned about Seeing Like a State. I've just finished reading the book and found the parallels between software engineering and the high modernist statecraft and agroeconomic policy discussed in the book to be quite striking. I thought the middle chapters were a tad repetitive, but overall it was a fascinating read and I would highly recommend it.
The French were 99% successful with their decimals everywhere (everything except time) in 99% of all countries (everywhere except the US and some freak dictatorships).
> The same kinds of "grand plans" are repeated in today's ambitious software engineering abstractions, which almost never survive contact with reality.
Unless these "grand plans" standardize the right things and make the right compromises. Then they conquer the world (PC, USB, TCP, HTML, ...)
Seeing Like A State actually goes into a tremendous amount of detail about what a monumental effort it was to introduce decimal measures in France. Progress was slow and took generations. It is only in hindsight that it looks successful.
> The state could insist on the exclusive use of its units in the courts, in the state school system, and in such documents as property deeds, legal contracts, and tax codes. Outside these official spheres, the metric system made its way only very slowly. In spite of a decree for confiscating toise sticks in shops and replacing them with meter sticks, the populace continued to use the older system, often marking their meter sticks with the old measures. Even as late as 1828 the new measures were more a part of le pays legal than of le pays reel. As Chateaubriand remarked, "Whenever you meet a fellow who, instead of talking arpents, toises, and pieds, refers to hectares, meters, and centimeters, rest assured, the man is a prefect."
I would imagine likewise for the other standards that you mentioned. And in a lot of ways they succeeded exactly because they weren't grand plans (thinking specifically about how HTTP/HTML succeeded because they were less comprehensive than, say, Xanadu).
But calling the "decimal time" a failure is just unfair, given that the rest of the program was a huge success at getting rid of what we would today call "legacy" (in hindsight, of course. When else?)
Given how long it took to transition even in France, it sounds pretty expensive. Given that the rest of Europe wouldn't have bothered with it if the French hadn't gone on a crazy conquering spree, it sounds pretty expensive.
Given that the imperial system actually works fine for those who use it, and given that we are stuck with lots of discordant units of measure anyway, with new ones being invented every day, it doesn't look like the gain was worth the cost.
Feels different when "the world" is other people with their own lives, hopes, and dreams. In that case, bending it to your will is called authoritarianism.
Actually, I've thought, for a few years now, that software engineering principles, like loose coupling, have a natural application to just the sorts of problems _Seeing Like a State_ talks about.
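For instance, a minimal sketch of the loose coupling I mean (hypothetical names, just to illustrate the principle): the consumer depends only on a narrow interface, never on any one way of "seeing" the data, so different ways of seeing can be swapped in without rewriting it.

```typescript
// The consumer depends only on this narrow interface, never on how the
// numbers are actually produced. (Hypothetical names, for illustration.)
interface PopulationSource {
  countIn(region: string): number;
}

// Report code is written once, against the interface alone.
function report(source: PopulationSource, region: string): string {
  return `${region}: ${source.countIn(region)} people`;
}

// Wildly different "ways of seeing" can then be swapped in freely.
const census: PopulationSource = { countIn: (_region) => 1000 };
const estimate: PopulationSource = { countIn: (region) => region.length * 37 };

console.log(report(census, "Alsace"));   // Alsace: 1000 people
console.log(report(estimate, "Alsace")); // Alsace: 222 people
```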
"Modernism" is basically characterized by attempts to predict and control the future, to rationalize and systematize chaos. In the past this was done with things like top-down centrally planned economies and from-scratch redesigns of entire cities (e.g. Le Corbusier's Radiant City), even rearchitectures of time itself (the French Revolution decimal time). The same kinds of "grand plans" are repeated in today's ambitious software engineering abstractions, which almost never survive contact with reality.