I’m Danish, so my opinion on this will be coloured by the fact that most developers here have some form of formal CS education. But we teach people to overthink and abstract things, so I think it’s only natural that people go on to do exactly what they’ve been taught.
I have a side-gig as an external examiner for CS students, and a lot of the stuff we grade students on is exactly what I’ve had to “unlearn” over my career, because complexity and abstraction usually don’t hold up over the long life of an IT system. This obviously isn’t universally true. I tend to work in non-tech enterprise organisations (or startups transitioning into enterprise), and in that space a lot of what is considered good computer science just doesn’t work.

It’s everything from Agile, where your project processes will trip you up as you try to juggle developers who need to build and maintain things at the same time, to how we handle complexity with abstractions, where those abstractions sometimes lead to “generic” functions that need to be told 9001 different things before they know what to do, “because it just evolved that way”. It’s in everything, really. We teach students to decouple their architecture, and it’s absolutely a good thing to teach CS students, but the result is that a lot of them “overengineer” their architecture so that you can easily swap which database the system uses (and similar), in a world where I’ve never actually seen anyone do that. Anecdotal, sure, but I worked in the public sector, where we bought more than 300 different systems from basically every professional supplier in our country, and you’re frankly far more likely to replace the entire system than just parts of it.
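To make the “generic function” point concrete, here’s a toy sketch (made up for this comment, not taken from any real system) of the kind of signature that tends to evolve, versus what most callers actually needed:

    # Hypothetical example: a helper that grew a flag for every new caller,
    # instead of anyone writing the few lines they actually needed.
    def export_report(data, fmt="pdf", locale="da_DK", currency="DKK",
                      anonymise=False, legacy_layout=False, retries=3,
                      storage_backend=None, on_error=None):
        ...  # branches on nearly every parameter

    # What most of the code base actually wanted:
    def export_pdf_report(data):
        ...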
But how are you going to know that when all you’ve been taught is the “academic” approach to computer science by teachers who might have never experienced the real world? Being self-taught isn’t really going to help you either. I can’t imagine how you would even begin to learn good practices in the ocean of “how to” tutorials and books which are essentially little more than the official documentation on a language and maybe a couple of frameworks.
> The developer's incentive to maximize their own lock-in factor and billable hours are powerful forces
This part, however, I disagree with. Again, this is very likely coloured by the fact that I’ve mainly worked in spaces where developers both build and maintain multiple things, often at the same time, but I’ve never met developers who wanted to do this. In fact, I only meet developers who genuinely want their code to be as easily maintainable by others as possible, because we all absolutely hate breaking off from actual development to fix a problem. That said, I do think there is a natural risk of ending up there accidentally if you haven’t “unlearned” a lot of the academic CS practices you’ve been taught, especially because there is a very good chance you didn’t really “learn them right” in the first place.