Back in the 90s, OOP was touted as revolutionary: the next big thing, the one that would completely change programming. If something wasn't object oriented, it was looked down upon. SQL even got on the bandwagon. It was said that very complex inheritance structures, operator overloading, and all this other stuff would (somehow) make it far easier to write and understand complex projects. Many seemed to have taken this on faith, and repeated it, without much else behind it. I never saw any real justification for these assertions; it was never explained to me with sound reasoning, let alone actual data, why those things would perform as claimed.
Were there ever any real objective [hah] studies done about how much it improved software development? And did they show a significant improvement? Even if you're still pro-OOP today, you'd have to admit it fell vastly short of its promises, even if it does help a little.
Today it seems like there's been very little accountability or learning from all this. Some people have sheepishly climbed down off the bandwagon, but there's been very little overall reflection. I'm not talking about witch hunts -- there will always be more snake oil salesmen -- I mean learning, as individuals and as an industry, to demand data and reason rather than handwaving and assertions. The sad thing is that a lot of the baseless hype came out of academia too (microkernels are another one that comes to mind).
I still see this today. The new languages and language features. New database concepts like NoSQL. "AI". Blockchain. All the way down to the CPU (transactional memory, various "security" features, etc). Proponents can make extremely compelling-sounding cases for these things, and make it sound like they'll solve all the world's problems. And some may well turn out to be a net win in the end. But the only thing that actually matters is the real world results, and you can only evaluate that by studying the data.
In general, if something sounds too good to be true, it usually is. Maybe the incredible trajectory of the computing industry has dulled people's common sense when it comes to detecting this kind of hype. It's absolutely rife in the computing industry and academia.
Wait, what’s wrong with NoSQL? It’s not good for shoving relational paradigms into, but it’s basically infinitely horizontally scalable, which, as far as I’m aware, isn’t possible with relational DBs, not at the same performance at massive scale, anyways.
A bit annoying when people shove a relational DB into a NoSQL schema though.
Why do you think it's impossible to scale relations (aka tables) infinitely? It is totally possible; just look at various analytical SQL-ish DB-likes (Apache Hive, Presto, BigQuery, Snowflake, etc.).
Now, what's harder is to provide some of the stronger ACID guarantees, say, fully atomic distributed commits. Most of the time it's just a question of the time it takes to reach full consensus in a distributed context.
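As a rough sketch of why that cost is coordination time rather than anything about tables, here is a bare-bones two-phase commit loop in C++ (hypothetical names, a simplification of the real protocol, not any particular database's implementation):

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-in for a remote shard holding part of a table.
struct Participant {
    std::string name;
    bool prepare() { return true; }  // vote yes/no -- one network round trip
    void commit() {}                 // second round trip
    void abort() {}
};

// Classic two-phase commit: every participant adds round trips, so the
// price of atomicity is coordination latency, not the relational model.
bool two_phase_commit(std::vector<Participant>& shards) {
    for (auto& p : shards)
        if (!p.prepare()) {                 // phase 1: collect votes
            for (auto& q : shards) q.abort();
            return false;
        }
    for (auto& p : shards) p.commit();      // phase 2: everyone commits
    return true;
}

int main() {
    std::vector<Participant> shards{{"us-east"}, {"eu-west"}};
    std::cout << (two_phase_commit(shards) ? "committed\n" : "aborted\n");
}
```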
But this has nothing to do with the relational data model itself, which is just tables of uniform rows referencing each other. Say what you like about SQL, but the core model is perfectly fine.
For a few years back there, it was going to take over the world and we were all going to throw away 'old fashioned' DBMSs because they were slow, clunky and overcomplicated.
Like many of these overhyped technologies, when the dust cleared about 5 years down the line, we were left with something useful that definitely has its place, but isn't like wow huge it's taken over everything maaaaan. Meanwhile SQL is still with us and still good at what it does too.
I don't believe I said anything was wrong with it or anything else there. Most of the things I listed have their uses. It was not my intention at all to say they're bad; I hope that's not the point people are taking from my post.
The point is how uncritically some of these things get taken, and how easily people will believe fantastic, unfounded claims. And not just a few gullible idiots, but huge swaths of academia and industry.
Nothing is wrong with NoSQL except for how it (often) gets used. NoSQL is just a dumping ground for less-structured data that allows startups to accumulate tech debt more rapidly, while providing enough functionality to be useful.
Where I've seen it used is to delay the decision of adding structure to data, or as a prototype database before you're certain what your application's needs are. For simple, disconnected data in low-performance applications, it provides a low barrier to entry. But eventually people start embedding foreign keys into documents and the whole thing goes south.
> A bit annoying when people shove a relational DB into a NoSQL schema though
This is what's annoying about NoSQL, the same as it was with OOP and now with FP.
People learn it as the new, better way of doing things, mostly because they heard a FAANG dev share it at a conference, and then suddenly everything has to be built with it.
I saw a lot of projects where the developer(s) used NoSQL just because it was available, or it was hot, or it was what they learned in a bootcamp/article. But then they added relations, so now a User has Projects, and each Project has Categories, with constraints on the relations and more... and everything is glued together with NoSQL, and suddenly they are reimplementing relational DB logic in code, with NoSQL being only a pure data store.
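To make that concrete, here is a minimal, entirely hypothetical sketch of the pattern: documents carrying hand-rolled foreign keys, with the join and the referential-integrity check reimplemented in application code, i.e. work a relational DB would normally do for you:

```cpp
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical documents as they might come back from a document store.
struct User    { std::string id; std::string name; };
struct Project { std::string id; std::string userId; std::string title; };  // hand-rolled foreign key

int main() {
    std::unordered_map<std::string, User> users = {
        {"u1", {"u1", "Alice"}},
    };
    std::vector<Project> projects = {
        {"p1", "u1", "Migration"},
        {"p2", "u2", "Orphaned"},  // dangling reference no constraint will ever catch
    };

    // The application doing the join and the integrity check itself.
    for (const auto& p : projects) {
        auto it = users.find(p.userId);
        if (it == users.end()) {
            std::cout << "orphaned project: " << p.title << '\n';
            continue;
        }
        std::cout << it->second.name << " owns " << p.title << '\n';
    }
}
```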
> Were there ever any real objective [hah] studies done about how much it improved software development? And did they show a significant improvement?
I think years of hard experience across the industry found out that, for example, multiple inheritance and operator overloading caused more problems than they solved. Both features were taught and advocated back in the day, and now "there be dragons" signs have sprung up and most of the literature today warns the journeyman programmer to avoid them.
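The canonical multiple-inheritance pitfall those warning signs point at is the diamond problem; a minimal C++ sketch with made-up class names:

```cpp
#include <iostream>

struct Device           { int id = 0; };
struct Scanner : Device {};
struct Printer : Device {};

// Without virtual inheritance, Copier contains *two* Device subobjects,
// so a plain `copier.id` is ambiguous and won't compile.
struct Copier : Scanner, Printer {};

int main() {
    Copier c;
    // std::cout << c.id;               // error: ambiguous
    std::cout << c.Scanner::id << '\n'; // must disambiguate by hand
}
```

Virtual inheritance works around it, but that's exactly the kind of subtlety the "avoid it" advice is trying to spare people.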
Bullshit, it was the weakness in those developers' minds that screwed their use of these perfectly fine language features.
This is a propaganda war, people. We are being told we are too dumb to handle knives. And the truth is, our industry lets incompetents play our roles, and we (those smart enough to use knives) must suffer the ramifications of those who stab themselves repeatedly and then cry out "it's the language!"
Right. What I want to know is, what was the basis for claiming all this would be so great in the first place? It appears to have been almost entirely free of any evidence, as far as I've been able to tell.
It's mind-boggling to me, considering the kinds of people in this industry and their demands for data and evidence when it comes to other subjects.
> operator overloading caused more problems than they solved
[citation needed]
Just because operator overloading can be abused doesn't mean that it isn't a massive boon in certain problem spaces (e.g. math libraries, SIMD libraries, etc.)
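For instance, in math-heavy code, overloads let the source read like the formulas it implements. A tiny, hypothetical vector type (not from any particular library):

```cpp
#include <iostream>

struct Vec3 { float x, y, z; };

// With overloads, the update rule below reads like the physics it encodes.
Vec3 operator+(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

int main() {
    Vec3 velocity{1, 2, 3}, gravity{0, -9.8f, 0};
    float dt = 0.016f;
    Vec3 next = velocity + dt * gravity;  // vs. add(velocity, scale(dt, gravity))
    std::cout << next.y << '\n';
}
```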
I mean, the standard way to do IO in C++ involves spamming the left shift operator (<<), I can only assume because it looks like an arrow? This is definitely a shallow thing, but I'd argue that for this reason alone operator overloading causes more problems than it solves (in C++), due to things like:
1. translating messages into other languages is extremely difficult, because the position of each interpolated expression is fixed by the order of the << chain.
2. modifying how things are printed requires modifying global state, and it's easy to forget to reset the flags on std::cout after setting the precision of floats or something (a quick sketch of this is below).
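To illustrate that second point, a minimal sketch using nothing but the standard setprecision manipulator on std::cout:

```cpp
#include <iomanip>
#include <iostream>

int main() {
    std::cout << std::setprecision(2) << 3.14159 << '\n';  // prints 3.1
    // ...much later, in unrelated code that never touched precision:
    std::cout << 2.71828 << '\n';       // still prints 2.7 -- the flag stuck around
    std::cout << std::setprecision(6);  // the reset that's easy to forget
}
```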
There's also the famous "what does the multiplication operator do on vectors?" problem, but that's something that could be solved by simply having a standard "vector" interface that defines it in a particular way. Overall I don't fully disagree, but seeing as it happened once with C++, I can imagine it happening again in some equally widespread language (JavaScript with its + operator on strings, maybe?).
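To spell out the ambiguity: the same `a * b` has at least three defensible readings, which is exactly what such a standard interface would have to pin down. A contrived sketch with plain functions standing in for the candidate meanings:

```cpp
#include <array>

using Vec3 = std::array<float, 3>;

// Any one of these could plausibly hide behind operator*, and a reader
// of `a * b` can't tell which without checking the overload's definition.
float dot(Vec3 a, Vec3 b)      { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
Vec3  hadamard(Vec3 a, Vec3 b) { return {a[0]*b[0], a[1]*b[1], a[2]*b[2]}; }
Vec3  cross(Vec3 a, Vec3 b)    { return {a[1]*b[2] - a[2]*b[1],
                                         a[2]*b[0] - a[0]*b[2],
                                         a[0]*b[1] - a[1]*b[0]}; }

int main() {
    Vec3 a{1, 0, 0}, b{0, 1, 0};
    (void)dot(a, b); (void)hadamard(a, b); (void)cross(a, b);
}
```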
> 2. modifying how things are printed requires modifying global state
This only applies to std::cout and std::cerr. Other stream interfaces, like std::fstream or std::stringstream, don't have this problem. Also, it is orthogonal to operator overloading.
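For illustration, a locally owned stream keeps its formatting flags to itself, so nothing leaks into std::cout; a small sketch:

```cpp
#include <iomanip>
#include <iostream>
#include <sstream>

int main() {
    std::ostringstream out;  // local stream, local formatting state
    out << std::fixed << std::setprecision(2) << 3.14159;
    std::cout << out.str() << '\n';  // "3.14"
    std::cout << 3.14159 << '\n';    // std::cout's defaults untouched: "3.14159"
}
```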