If your style of programming, or the way you approach designing a solution to a problem, has been profoundly (positively) affected by learning a programming paradigm, which one is it? Why, and how do you think so?
Functional programming. I learned OOP in school and always thought it overcomplicated things. Why can't this comparison operator that takes a page of Java just be a function, I thought? How do you make the state of objects manageable, especially if it's private? How do you test without opening up too much of the API? Couldn't we just make this a set of functions with explicit state so it's easier to understand? How do people understand a method that is spread over a hierarchy of 15 layers of inheritance?
For years I thought there must be something I didn't grok about OOP, some point where those design patterns click and make sense. Until I set out to learn OCaml, switched to F# because I was on Windows, and found https://fsharpforfunandprofit.com. All these concepts had really foreign-sounding names, but they clicked right away. I finally found that there are people who think like me.
I'm happy functional programming is slowly taking off, influencing almost every modern language. So I guess I should have just trusted my intuition.
I feel the same way about the OOP paradigm, but I've found it's usually hard to argue against it because it seems so ingrained in people's minds. After all, that's what we are taught in universities and what's presented as the first go-to toolbox, so it's no wonder.
I'm guilty of that defensive attitude too, but after probably several hundred kLoC of OOP-style code and the use of who knows how many design patterns, I've come to the realization that it's mostly great for overcomplicating things for no tangible advantage.
I think the problem is that OOP is mostly taught as a separate paradigm, when it should be seen as a convenience layer over procedural imperative programming. It is about bundling procedures together in groups, associating them with mutable state, and protecting and encapsulating that state from other pieces of code.
Yes, I agree. My experience is also that OOP is taught as something orthogonal to procedural imperative programming, whereas it should be taught as an extension of it. But even with that said, I honestly cannot think of many examples where OOP provides a more elegant solution to a problem. Quite the contrary: any larger, more complex OOP-style codebase is an unreadable mess, despite all the standard promises about readability, maintainability, "testability", clean abstractions, etc.
I think the biggest shift in my thinking came when I set myself the goal of digesting "Elements of Programming" by A. Stepanov and "Modern C++ Design" by A. Alexandrescu. Don't let the "modern" in the title trick you; it's a book from 2001.
But regardless of the programming language, which in my case happens to be C++, these two books prompted me to start challenging the tunnel vision I had up until then. It took me some time and a lot of experimentation to basically unlearn the OOP way of approaching problems and to embrace a totally different, I'd say almost mathematical, way of looking at things. It felt like a relief, and it had a huge impact not only on my code style but on the way I think about problems.
Since then I think I've been producing much simpler, less over-engineered code of better quality, without a lot of unnecessary cruft.
The bigger problem is that OOP isn't even consistently defined. Kay (who is often credited with coining the term) built his definition around message passing, which undercuts the idea of OOP being a layer over procedural imperative languages, since Erlang fits his model of OOP.
The problem, I think, is that OOP is taught with lots of inheritance. So people think it's all about inheritance, and that inheritance is the only valid tool.
Used correctly (sparingly) it's fine, but 99% of the time it's overused. That's the "everything has to inherit from something" illness.
Another thing that is overused is hiding: everything has to be private, everything accessed through getters/setters, everything hidden as deeply as possible...
OOP has good ideas, but they have their place. They are way, way too overused.
Agreed. OOP is a valid tool, and we use tools to solve problems, but it becomes the wrong tool the moment you find yourself working hard to retrofit the original problem just so it fits your abstract OOP puzzle. That's just asking for more problems, in my experience.
As a C# developer, F# is something I've long wanted to learn. I hadn't really heard about F# until I had to adjust some build script, which I thought was elegant [1]. I did try some F# tutorials in my free time, but nothing that made me comfortable or made writing functional code "click".
But I long to bring in some improvements to code correctness that can be statically checked: embracing immutability, pattern matching with discriminated unions where the compiler ensures you have handled all cases, a type system that makes illegal states unrepresentable... those are just some of the features I've spotted that could be very useful. On top of that, F# code seems so much shorter.
One of the things that made F# "click" for me was Wlaschin's "Domain Modeling Made Functional". It's largely a focus on how to think about the underlying data, and getting in that mindset helped tremendously in feeling more comfortable with F#.
Learning functional programming well is like learning algebra; it really pays off to take a theoretical course from first principles, by a teacher who can also teach you the idioms (parameter accumulators, currying, monads...)
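Two of the idioms named above can be sketched briefly in Python (the function names are illustrative):

```python
# Parameter accumulator: a recursive reverse that carries its partial
# result along instead of building it on the way back up.
def reverse(xs, acc=None):
    if acc is None:
        acc = []
    if not xs:
        return acc
    return reverse(xs[1:], [xs[0]] + acc)

# Currying: a two-argument function expressed as a chain of
# one-argument functions, so partial application is free.
def add(a):
    return lambda b: a + b

increment = add(1)  # a new function, obtained by supplying one argument
```

In a language like Haskell or SML the currying is built into the function syntax; here it has to be spelled out with a lambda.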
By order of impact, not chronological introduction:
0) Macro programming: I learned TCL as a kid, so that made the impact stronger; writing programs that write programs is insanely powerful. Learning macro programming via TCL incidentally let me skip learning FP at the same time, which I would've with a Lisp.
1) Functional programming: correctness and beauty. Standard ML, Haskell.
2) Erlang: extreme concurrency via message passing. Erlang is functional, too.
3) C: manual memory management: Understanding how programs allocate memory.
After that it gets more obscure. (Prolog, linear type systems, hardware DSLs, etc.)
I feel like TCL made me numb to the Lisp bug. I might go into Clojure at some point.
1 - Smalltalk. I've done Python, Kotlin, Swift, C++, and Objective-C, all in various forms of real production code. The kind of OOP you do in Smalltalk is not the same as what you do in other "OOP" languages, which are really just Algol derivatives with some weak-sauce OO smashed on top of them like lipstick on a pig.
2 - C, actually. In particular, pointer-rich C. Working in domains where we did lots with pointers (VMs and the like) gave me a practical understanding of the computation model of the machine in general. After I had worked with pointers a lot, assembler, though arcane, suddenly just clicked.
3 - Currently Elixir. Its pragmatic approach to functional programming fascinates me. I don't have to become a type theologian to participate. I just get to try to solve problems with function pipelines, ubiquitous immutability, no return statements, and everything as an expression.
And though each of these paradigms is very different, they have all pushed my mind the most. It's interesting that, while wholly different, they're each "simple" but powerful paradigms maxed out in the interest of consistency.
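The pipeline style mentioned for Elixir can be approximated in Python, with the caveat that Python has no `|>` operator, so a small helper (illustrative, not a standard library function) has to thread the value through:

```python
from functools import reduce

def pipe(value, *fns):
    """Feed value through fns left to right, Elixir |> style."""
    return reduce(lambda acc, fn: fn(acc), fns, value)

# Each step is a pure function; nothing is mutated in place.
result = pipe(
    "  Hello, Pipelines  ",
    str.strip,
    str.lower,
    lambda s: s.replace(",", ""),
    str.split,
)
# result is ["hello", "pipelines"]
```

The appeal is the same as in Elixir: the data's journey reads top to bottom, and every intermediate value is a fresh, immutable result.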
I agree with 1 and 2, so now you got me really interested in 3 ;-)
By the way, concerning Smalltalk (Pharo), we're hiring. Honestly, it's the best way to learn Smalltalk, the tutorials on the internet are nowhere near the level of what I'm encountering in practice.
Also: be on the lookout for the Pharo Days [2] and ESUG [3], and similar conferences. I've only been to the Pharo Days, but man, it's awesome! A lot of people with 10+ years of Smalltalk experience gather there and will bestow their wisdom upon everyone who's interested in Smalltalk and/or Pharo. I had known Pharo for less than a month, and hanging out with them for a few days boosted my skills tremendously.
3. Yes, it's the nice little touches here and there that add up to a huge defense even without type inference. Pattern matching and guards: just those two things make my programs, hmm, "correct"? I mean, I barely debug because there's often no bug. If something is missing, it immediately blows up at the pattern match. Plus, destructuring assignment is a real convenience; that sort of in-place matching is clean (akin to type signatures in some functional languages).
Interesting that you consider Objective-C to be just an Algol derivative when it is so heavily inspired by Smalltalk and is basically Smalltalk bolted onto C. The C half of Objective-C is Algol derived, granted.
A huge amount of what you do in any program is logic (branching), math (add, subtract, etc.), and enumeration (various forms of looping). Algol/C does all of these with built-in language syntax. Smalltalk, though, does not. It does these very elemental things, which might make up maybe 80% of a method body, with messages (or at least the facade thereof). It couldn't do logic and enumeration without closures. Objective-C did not have these, so you still did that core 80% of the work with old-fashioned C.
Definitely object oriented design, in a negative way. It taught me how I do not want to think about a computer system. All its ideas sound so neat and if you have a well designed system, which never needs to change, it might even work. But someday, somewhere you might realize that some functionality means none of your abstractions made any sense.
The idea of combining functionality with state (classes) was definitely a useful one, but you shouldn't use a hammer as a screwdriver.
I have a lot of positive things to say about procedural programming. It is perhaps the easiest and most straightforward way to think about computation. The more abstract the problem gets, the more I start to think about it in functional terms; that lets you construct the flow of data in a very manageable and efficient way.
Anti-service oriented architecture: continually asking myself in what way could I solve a problem with less complexity, fewer dependencies, and permitting shitty solutions as long as they are vaguely adequate to customer need.
Beyond functional programming as shared by other people on this thread, the programming style that has most affected me was separating major layers of a SW stack - in particular, the data model, the business logic and presentation.
This sounds kind of trivial, but it entails three very useful techniques:
0. Passing strictly typed immutable data structures (i.e., DTOs) that formalize and enforce the pre- and post- conditions at each such layer of the code;
1. Dependency injecting useful resources, as opposed to keeping them inside god objects/"managers";
2. Allowing each layer to use a different programming style, quality bar, logistics etc.
One caveat: I don't think that modularizing code causes it to require fewer changes when introducing new behaviors. For example, if you add a new data field, you'll have to go through all layers. However it definitely can make each of the piecemeal changes quicker and cheaper.
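The three techniques above can be sketched in a few lines of Python (every class and field name here is invented for illustration): an immutable DTO crosses the layer boundary, and the storage resource is injected rather than fetched from a god object.

```python
from dataclasses import dataclass

# 0. A strictly typed, immutable DTO formalizes what crosses the boundary.
@dataclass(frozen=True)
class UserDto:
    user_id: int
    email: str

# A trivial storage layer; a real one might wrap a database.
class InMemoryUserStore:
    def __init__(self):
        self._rows = {}

    def save(self, dto: UserDto) -> None:
        self._rows[dto.user_id] = dto

    def load(self, user_id: int) -> UserDto:
        return self._rows[user_id]

# 1. The business-logic layer receives its store via constructor
#    injection instead of reaching for a global "manager".
class RegistrationService:
    def __init__(self, store):
        self._store = store

    def register(self, user_id: int, email: str) -> UserDto:
        dto = UserDto(user_id, email.strip().lower())  # enforce invariant here
        self._store.save(dto)
        return dto
```

Point 2 falls out for free: the store and the service can be written, tested, and held to different quality bars independently, since they meet only at the DTO.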
Minimizing and isolating state. Not eliminating it, since keeping state is often useful, but not everything needs statefulness, and keeping track of state is _hard_ (hi, yes, I'm backend, have fun FE folks). The loose functional-programming pillar of keeping state and behavior separate means the problem space for objects and functions shrinks quite a bit, and reasoning about things becomes much easier.
I don't think it'd be anything you could obviously point at in my code, but it's a day-to-day consideration that I have.
Go down the rabbit hole of the actor model and eventual consistency; it will make you a better developer. After reading everything you can on the topic, there are a couple of frameworks that implement it (the Orleans framework, Akka, and others). I think it will make a comeback, as it is a natural fit for multicore CPUs/GPUs.
It is also fairly easy to create, persist and resume state machines, which can help a ton for complex workflows. It also forces you to think in a bottom-up approach instead of a top-down approach like most languages/frameworks.
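A toy sketch of the idea in Python (real frameworks like Orleans or Akka add distribution, supervision, and async delivery; this is only the mailbox-and-state skeleton, with invented names):

```python
from collections import deque

class CounterActor:
    """A minimal actor: state is touched only by processing messages,
    which makes persisting and resuming the state machine trivial."""

    def __init__(self, state=0):
        self.state = state          # resumable: pass a saved state back in
        self.mailbox = deque()

    def send(self, msg):
        self.mailbox.append(msg)    # callers never mutate state directly

    def run(self):
        while self.mailbox:
            msg = self.mailbox.popleft()
            if msg == "inc":
                self.state += 1
            elif msg == "dec":
                self.state -= 1
```

Because the entire state is a plain value, "persist and resume" is just saving `actor.state` and handing it to a fresh actor later, which is exactly the property that makes the model pleasant for long-running workflows.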
Another interesting thing to dig into is "code calisthenics" and "tiny types".
Just dropping `binding.pry` anywhere in the code, running it, and then getting a full REPL with all the variables, where you can introspect everything with `ls`.
This is why I keep coming back to Ruby and writing everything in it. In most other languages you spend a lot of time just trying to find a way to debug properly.
Happy to hear recommendations on how other languages do it. (I know that there are debuggers :P)
For C#/Visual Studio you can attach to a process on a remote computer and step through the code. Lately I learned that the executing cursor can actually be moved back and forward, and code can be written while the debugger is active (and execution then continued). Neat.
As for JavaScript, you can hit F12 right away and dive into the debugger on a third-party site. :)
Functional programming, the idea of state machines, and finally how Rust extends structs with functionality. Also, experiencing an overengineered internal system that had been microserviced really made me reconsider the whole "microservices are epic" mantra. 'Cause, oh boy, was that annoying and difficult to debug.
When I learned functional programming, classes in my head suddenly became slightly worse versions of partially applied functions.
When I learned about state machines, classes likewise came to seem like a failed attempt at achieving the same thing.
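The "classes as partially applied functions" comparison can be made concrete in a few lines of Python (the greeter example is invented for illustration):

```python
from functools import partial

# The class version: state (the greeting) bundled with one behavior.
class Greeter:
    def __init__(self, greeting):
        self.greeting = greeting

    def greet(self, name):
        return f"{self.greeting}, {name}!"

# The functional version: the same closure over a greeting, obtained
# by partially applying an ordinary two-argument function.
def greet(greeting, name):
    return f"{greeting}, {name}!"

hello = partial(greet, "Hello")
```

Both `Greeter("Hello").greet` and `hello` are "functions that remember a greeting"; the class adds ceremony that pays off mainly when there are several methods sharing that state.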
OOP, hands down. Though I never learned Functional (and never learned Smalltalk, for that matter. I think the OOP I learned wasn't half of what Smalltalk offered.)
I learned to program on punch cards, and the options were 'top down' ("The best way"), and bottom up ("The bad way").
Top down was hideous, because the next level down always ended up decomposed as (1) initialize things, (2) do what's needed. There was no clear path to subdividing the functionality.
Bottom up was worse. Ya' know you're gonna need a routine to do such-and-such, so go ahead and write it. You get to feel productive. The huge down-side was that you couldn't figure out in advance precisely what was needed, so those routines always got re-written to be amenable to what was needed at a higher level once the higher levels were written.
OOP gave me a way to think about the middle layers of a program in a way that isolated some element of what was needed and let me code it 'usably'. I could sit and think about what kind of interface I wanted to provide, and get it right. It was also a big win to realize that even C (NOT C++) could be used in this paradigm, in terms of a boundary around the interior of some piece of functionality.
The ability to freely mix programming paradigms and to bend the language towards the problem instead of the other way around. It's the thing that Common Lisp is kind of famous for.
Depending on the programming sub-sub-task that I'm working on right now, I can have simple state machines written with explicit GOTOs, algorithms working on immutable state and written in functional style, minor operations in these algorithms implemented with a functional shell but imperative core, the ability to do declarative programming and define my own syntax for the declarations while I'm at it, and the ability to always branch off to a task-specialized DSL whenever it brings direct benefits to the table.
YAGNI, unlearning OOP, embracing simple languages, and accepting that the requirements and hypotheses about the external world are the most important signal.
Before: I was endlessly creating useless abstractions and going for cute and "elegant".
After: Simplest possible code.
The downsides:
1) One has to know good code from bad to evaluate if "simplest possible code" is not too simplistic and won't cause you problems a month, a year or 5 years later;
2) One is always bombarded by "cute", "elegant", or plain meandering and unclear code. Unless one is at the top of the technical pecking order of the company, all attempts to keep one's sanity will be in vain.
For me, more than any paradigms, the following outlooks were the most useful -
1. YAGNI - plan for the future if required but only implement once it arrives.
2. KISS - Keep things as simple as possible. For example, if you have to generate 50 forms for 50 different use cases, build 50 forms. Don't try to create a generic dynamic form.
3. Tests - Not necessarily TDD, but there should be tests for when you think your module or function is done. This will ensure your edge cases are considered. Try NOT to fix a bug without adding to your tests.
For me, the biggest impact on my everyday programming has come from building up a library of utility functions, like lodash or underscore.js, for both JS and PHP.
I have these two huge files with hundreds of small, unit-tested functions that do everything under the sun, from sorting a JS hash by key, to making Ajax requests, to all the things I've needed over the years. This has reduced my time to code by more than I can describe, and I also feel confident I'm not making silly mistakes, since the functions are tested and have been reused many times. Thanks to tree shaking in JS, it also doesn't increase the bundle size.
One other thing that has helped me a lot with programming is learning Zen CSS and Zen HTML, i.e. `div.test>span>b` type stuff. Such macros really help avoid mistakes and speed things up.
First pure functional programming (not just functional programming) and then type driven development with a typesystem that supports dependent types.
In combination they are a huge productivity boost that you can't really imagine without having tried it. I still remember the days when I wrote test after test and ran the system to check if it worked. But "if it compiles, it works" is just a sooo much smoother workflow.
Scala as a mainstream language probably comes closest, but it lacks a bit on the dependent types part. Idris or F* would be languages that fully support both but they are rather academic.
Perhaps that's already pretty far for practical purposes for a lot of people.
Idris does actually try to be a programming language and not just a proof system.
F* tries to be like Standard ML with proofs. It really is ergonomic for an ML. But if you haven't written Standard ML (or OCaml), this feels very much like you're a mathematician trying to prove facts about programs rather than run programs.
From my experience they're very far apart. I know there is Liquid Haskell, but that still does not make it a real value-dependent type system, even though in many cases it allows similar expressiveness. If we're talking in the context of programming paradigms, I don't think I would count Haskell in.
Not so much a paradigm, but learning TDD from the engineers at Pivotal Labs really changed the way I approach problems.
I spent a few weeks constantly pairing and doing TDD with them, and while I don't think I could pair forever (maybe 2 days/week), I was amazed at how TDD could shift my approach to a problem. These days I'll still use it, even on personal projects (usually only integration tests), as a great way to reduce unnecessary complexity and increase my confidence in my own code.
The Turbo Pascal IDE. At first I thought it was stupid to have to compile a program instead of just writing it in BASIC, which was what I was used to. Then I learned to love that almost instant compile and insane speed compared to BASIC.
When I was forced to move to Windows, Delphi was there to make it damned easy to build GUI apps, which was about a similar step in productivity as Turbo Pascal. They used the perfect amount of OOP to get the job done, but not bury you in abstraction.
I think it would have to be machine language, assembly language, and C, used to program machines at the hardware level. This was on an Atari 8-bit, except for the C, which was cross-compiled to the M68k. In all cases I was programming to hardware interfaces: video games and laser printer firmware. What it taught me was a sense of 'you can understand it all down to the metal'. That hardly seems valuable now, but it means I've always had the curiosity and desire to know how things work, which leads to deeper understanding in our field.
Learning recursive programming was an eye-opener because it clearly illustrated the correspondence of programming and math (proof by induction). [Dynamic programming was meh when I learned it--memoization seems a more fitting term--as I expected to be learning metaprogramming.]
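The recursion-as-induction point, with memoization layered on top, fits in a few lines of Python (Fibonacci is the stock example, not anything from the original comment):

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # memoization: each subproblem solved once
def fib(n: int) -> int:
    if n < 2:              # base cases of the induction
        return n
    # Inductive step: correctness for n follows from n-1 and n-2,
    # exactly mirroring a proof by induction.
    return fib(n - 1) + fib(n - 2)
```

Without the cache this recursion is exponential; with it, `fib(50)` is instant, which is the "dynamic programming as memoization" observation in practice.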
The thing I wish I'd been taught, or had learned much earlier, is functional programming. And maybe data-oriented design, if that's the name of the approach that prioritizes the data schema and layers operations on top of it. I'm always surprised by code-first developers who can't put fields in the appropriate place considering cardinalities, or who never even think it's important to get this right.
Put the data all the way over there. Put the view all the way on the other side over there. And then start tying it together in the middle. I was pretty young when I learnt about it, but it's been by far the most influential idea for me.
Also, I'm not sure if this is a programming paradigm or just good practice, but strongly typed parameters plus throwing errors if they're not the right type. I thought TypeScript was the bee's knees when it came along.
> Put the data all the way over there. Put the view all the way on the other side over there. And then start tying it together in the middle.
The fun part is that what you're describing is not actually MVC. Just like RESTful APIs, the original MVC concept has been mostly forgotten and its name reused for different flavors of the model-view-presenter pattern.
There have been some interesting discussions and submissions on MVC here on HN.
"Reactive" stuff. Bad impact. I've been a programmer for 30 years, and while it theoretically makes sense, practically it's all a shitshow, so I made a very simple JS framework that's all "imperative", like Android (findView, setText, etc.), just for myself.
I've been making a living migrating overengineered React and Vue projects that were supposed to be small over to this, with much succe$$.
Only paradigm isn’t enough. Language design, ecosystem and pragmatism is essential as well and Clojure has all that.
After a couple of years of writing Java in my late teens and early 20s, I observed considerable overhead and inefficiency while working on large Java projects.
I thought we shouldn’t have to recompile the whole project upon every single change.
Classes and private public fields, methods felt like they were solving the wrong problem.
Other fancy words that contained fairly simple ideas felt like the output of clever language designers who enjoy the sound of their own voices.
Those were the issues I observed as a newbie, but still, I started looking for alternatives and found Clojure fairly soon. My sense of satisfaction steadily grew as I dug deeper into the language; it was addressing so many of the design decisions I had questioned before. At some point I started re-implementing the core library, and it was a joy to see how simple a programming language can be.
I wrote software in many different programming languages over the years but writing Clojure at work is fun, multiplied.
Easily this was the idea that maybe OOP wasn't the solution to everything. It's not exactly a paradigm, but more of a repudiation of one. I still write a majority of my code in classes. But you can see a lot of other influences as well: the command pattern (which is more procedural I would say), a lot of LINQ expressions (which are pretty functional), and a whole lot of dumb records being passed to dumb functions (which I got from data oriented thinkers). Anyway the point is, it's not just one paradigm which has made a difference for me. It was realizing that there isn't one paradigm. I can freely import influences from all of them, use them where appropriate, and get a benefit from all of them. Including OOP, but not only OOP.
Working at a company that cared deeply about correctness and reading code over writing it was critical to my current coding style and beliefs. I've written about some of this at https://programsareproofs.com/articles/singleuse_functions.h..., but I developed an aversion to single-use functions, premature abstractions, and "self documenting" code. The code I write these days is drastically more understandable and modifiable than back when I targeted terseness, minimizing duplicated code, and future proofing.
As far as languages go, learning SML and functional programming was probably the biggest change for me.
Not a proper paradigm, but I will say test-driven-development. They didn't teach that to us at University, or I was not paying attention when they did. I had to learn it afterwards.
I struggled with it for months. Only started seeing the benefits way after. An exercise on delayed gratification. I think its main difficulty is that it requires "learned intuition" - knowing which tests to write first, and which ones to build afterwards. It's a bit like solving integral equations - apply the "right" transformation and you get to the result in a few steps, fail to see the pattern and you'll have a hard time.
Perhaps I didn't find the right resource. I won't say that I use it every day, but it is certainly at the top of my toolbox.
test driven development definitely changes things. It's not just a way to reduce bugs but it also changes the way the production code is written in terms of design/architecture.
I suggest you to also look into type-driven development (it is orthogonal to test-driven development). Essentially it does the same but on the type-level not on the implementation-level.
Object-oriented programming, though not the Java BeanCakeFactoryFactoryInterfaceProvider approach. It isn't exactly Java's fault; it is just common there, though there are exceptions (see Swing vs SWT, with the latter having a much simpler API, or LWJGL, which has a very simple API itself). Also, I don't really do the whole patterns thing (except accidentally), nor all the acronyms (I don't care about things like LSP, SOLID, etc.); I mainly keep related things together and use inheritance for specialization.
Of course I don't do OOP for the sake of OOP; I stick with simple procedural code where it makes sense. But for some things (e.g. GUIs), other approaches feel like trying to shove a square peg through a round hole.
The imperative paradigm. Because most or all of my code is imperative. Of course most of my code is also object oriented but it's more of an add-on and in practice classes function more like modules. I don't think much about inheritance and polymorphism.
In a negative way: the current JS ecosystem and the way we build web apps in general. I started programming in late 80s and was very scrupulous in my work. I learned how to profile and optimize my code; I basically knew what each byte was doing. When I discovered Linux it was such a joy as I could basically trace the whole execution path and optimize everything even more.
Also, I love order. I enjoy clean and elastic design that keeps things in order but gives you the flexibility to extend the functionality indefinitely. In spite of some reservations, I enjoyed building apps with GUIs in the 90s, the process was quite pleasant and neat.
Never brought any Erlang project to production, I've just toyed around with the language, but I do miss the pattern matching philosophy in the other languages that I most frequently use, that is Python and JS.
That's what I am planning to do as well. Do you have any strategies you've developed? Do you write tests for the order of assertions (entering temporal logic)?
Honestly, I don't write that many tests these days other than integration and functional tests. The assertions are incredibly good at breaking those if anything goes wrong, so you need fewer (unit) test cases. Unit test cases are always brittle anyway.
As for strategies, I pepper asserts on inputs and outputs of functions to remove any assumptions about scale and state which are not enforceable by the contract at compile time. Additionally I will tend to assert conditions which need to be in place within the functions. Rarely does anything need any more than that.
Assertions are left in production and any identifiable errors or throws are turned into defects and resolved as priority 1 cases.
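A small Python sketch of the style described above, with preconditions on inputs and a postcondition on the output, left enabled in production (the function itself is an invented example):

```python
def normalize(weights):
    """Scale a list of non-negative weights so they sum to 1."""
    # Preconditions: assumptions about scale and state made explicit.
    assert weights, "precondition: at least one weight"
    assert all(w >= 0 for w in weights), "precondition: non-negative weights"
    total = sum(weights)
    assert total > 0, "precondition: weights must not all be zero"

    result = [w / total for w in weights]

    # Postcondition: the contract the caller may rely on.
    assert abs(sum(result) - 1.0) < 1e-9, "postcondition: sums to 1"
    return result
```

When a violation fires in production it pinpoints the broken assumption immediately, which is what lets the author treat each one as a priority-1 defect rather than chasing a corrupted state downstream.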
Perhaps the single responsibility principle. Before that, it was always hard to find a good point between one extreme, where everything is a single compact block of code doing everything, and the other, where everything is separated out into many different abstractions and subsystems seemingly just for the sake of it.
The single responsibility principle gives a good rule of thumb for why and where to split things up, and if you follow it, you will by design get a system with about enough abstractions for change to be reasonably easy.
Paul Graham's book Hackers & Painters resonated with me as a teenager, more specifically what he wrote about how code is like paint, or clay. Writing software is like writing essays: you make drafts, iterate, kill your darlings, and even though you may have a single goal in mind, a point to make, things seldom have a definite form. It's an art. It's more of a hacker's mindset, perhaps, but I keep it close to heart still because it's how I stay sane in this business.
Uncle Bob's "Extract Till You Drop", which suggests breaking down large chunks of code into multiple single-task functions (i.e., when describing a function, you should rarely if ever be able to say it does this *and* that).
I went from creating messy, frustrating code to well-organized, easy-to-debug modules that have run without issue for years. At first it seemed like overkill, but the sheer speed of bug fixes 10x'd overnight for me. I haven't looked back in 5+ years.
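A tiny Python illustration of the extraction style (the log-report example is invented): one do-everything function becomes three single-task functions plus a top level that reads like a description.

```python
def parse_lines(text):
    """Split raw text into non-empty, stripped lines."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def keep_errors(lines):
    """Keep only the lines that report an error."""
    return [line for line in lines if line.startswith("ERROR")]

def format_report(lines):
    """Render the error lines as a short report."""
    return f"{len(lines)} error(s):\n" + "\n".join(lines)

def error_report(text):
    # The top level now says *what* happens; each helper says *how*.
    return format_report(keep_errors(parse_lines(text)))
```

When a bug surfaces, it lives in exactly one of the small functions, which is where the "10x faster bug fixes" effect comes from.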
Ironically enough, I'd say extremely old-school imperative code has made a large impact on me. I learned OOP and, to some extent, FP as my defaults. Doing some OpenGL and other rendering work was refreshing and let me explore much simpler ways to think about problems. Letting go of abstractions is tougher than it sounds. Killing-your-darlings type stuff.
I never really got into systems languages. I guess that's the main way other programmers get the same insights.
It's so simple that for something like a decade I was using it with great success without having a clue about it.
Then, when I went looking for my hobby programming language, it was the one that popped up as the easiest to implement with the greatest potential for expressivity. That is when I truly paid attention to the actual theory.
And the shocking part is that the theory is so small that you can fit it on a single piece of paper.
No premature optimization. Don't wrap code in complex patterns unless you're sure you're gonna need it within the next hour.
E.g. if you're building some kind of file system access and know you want different "drivers" (e.g. local, Dropbox, Google Drive), using an adapter pattern is fine. If you just need local access right now, don't waste your time. Refactor later if needed.
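The driver scenario above can be sketched in Python (all names are illustrative, and the "local" driver is faked with a dict so the example is self-contained):

```python
class LocalDriver:
    """Stands in for local filesystem access. A DropboxDriver or
    GoogleDriveDriver would later expose the same two methods."""

    def __init__(self):
        self._files = {}

    def read(self, path):
        return self._files[path]

    def write(self, path, data):
        self._files[path] = data

class FileSystem:
    """The adapter seam: callers talk to FileSystem, never to a driver,
    so drivers can be swapped without touching calling code."""

    def __init__(self, driver):
        self._driver = driver

    def copy(self, src, dst):
        self._driver.write(dst, self._driver.read(src))
```

The point of the comment stands, though: if `LocalDriver` is the only driver you need right now, calling it directly and refactoring to this shape later is a perfectly fine plan.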
From these, the most important is DI. Separating the set up of runtime dependencies between parts of the software from the logical dependencies is a total game changer.
I'm curious about Erlang, which I haven't tried yet, since it seems to capture almost everything in one language: pure functions, immutability, and FP idioms at the micro level, with strong OOP and the separation of logical from runtime dependencies at the macro level.
If I have to pick only one it procedural low level programming, like good old C. Taught me a lot about how memory is laid out, how software works under the hood, about lower level concepts.
Going from C to higher-level languages like C++, C#, JS, or Python means that I have a glimpse into why things work the way they do, and having that understanding is helpful.
I would call it "dataflow-oriented imperative programming". Think about your data structures first. Then think about the manipulations you need to apply to them. The program then becomes a collection of procedures, with locally scoped "const" variables, in an almost SSA-like form.
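A tiny sketch of that style, with invented example data: each step binds a fresh local that is never reassigned, so the procedure reads like a dataflow pipeline, almost in SSA form.

```python
def summarize(orders: list[dict]) -> dict:
    # Each binding is assigned exactly once, SSA-style.
    paid = [o for o in orders if o["status"] == "paid"]     # filter
    totals = [o["qty"] * o["price"] for o in paid]          # map
    revenue = sum(totals)                                   # reduce
    count = len(paid)
    return {"revenue": revenue, "count": count}

orders = [
    {"status": "paid", "qty": 2, "price": 5.0},
    {"status": "open", "qty": 1, "price": 9.0},
]
result = summarize(orders)  # {'revenue': 10.0, 'count': 1}
```

Because no local is ever mutated after binding, you can read any line in isolation and know its value for the rest of the procedure.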
A lot of things that have been mentioned, like FP and TDD, can bring new perspectives because they force you to break your existing programming patterns, but I also think that is the main benefit, not the paradigm itself. This is also the benefit of (the increasingly contentious) Clean Code. It's basically a bunch of rules that force you to break habits, even though it's easy to conflate that with benefits from the paradigm change.
It's difficult to improve when you're basically just building the same program over and over. That's what I'll argue is the benefit from switching paradigms; you can't keep doing that, so you're nearly forced to learn new things.
It's interesting to see how many people in this thread have had negative experiences with OOP. As much as I don't do OOP that much anymore, I think it had a positive impact on me. But as with every popular practice, there are a lot of bad patterns you have to navigate through.
Things like depending on interfaces as opposed to concrete implementations, or preferring message passing over direct data access, are practices I learned in OOP that I still value. The "Smalltalk crowd" from the first team I worked in and influential authors on the topic (especially Sandi Metz) still have a dear place in my heart for how they improved the way I view software design.
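The "message passing over direct data access" practice is often summarized as "tell, don't ask". A minimal sketch, with invented names:

```python
class Account:
    def __init__(self, balance: int) -> None:
        self._balance = balance  # internal state; callers shouldn't poke it

    def withdraw(self, amount: int) -> bool:
        """The rule lives next to the data it protects."""
        if amount > self._balance:
            return False
        self._balance -= amount
        return True

# Direct data access (avoided):
#     if acct._balance >= 50: acct._balance -= 50
# Message passing (preferred): send the object a message and let it
# enforce its own invariants.
acct = Account(100)
ok = acct.withdraw(50)
```

The invariant (no overdraft) is enforced in exactly one place, instead of at every call site that touches the balance.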
I agree. Hating OOP is somewhat the mode du jour, but if you started out with languages like FORTRAN and C as I did, you organised your code around abstract data types and the operations that work on them (including mechanisms like passing 'this' as the first arg, composition of structures, and common operations). OOP was a completely natural extension.
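That pre-OOP idiom can be sketched in Python too (names invented): a plain record plus free functions that take the "object" as their explicit first argument, which is exactly the hand-rolled 'this' that C programmers passed by convention.

```python
from dataclasses import dataclass

@dataclass
class Counter:
    """The abstract data type: just data, no behaviour."""
    value: int = 0

def counter_incr(self: Counter, step: int = 1) -> None:
    # 'self' is passed by hand, as C code would pass a struct pointer.
    self.value += step

def counter_read(self: Counter) -> int:
    return self.value

c = Counter()
counter_incr(c)
counter_incr(c, 4)
# A method call `c.incr(4)` is essentially syntactic sugar
# for this same call shape.
```

Seen this way, a class is mostly a namespace that binds the functions to the data they operate on, which is why the jump from C-style ADTs to OOP felt natural.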
Pure functions. It totally changed my game when I realised that state was "evil" and that it should be treated like the dirty little possum that it is.
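Impure versus pure, in miniature (a toy example with invented numbers):

```python
# Impure: the result depends on hidden, mutable state.
_discount = 0.1

def price_impure(amount: float) -> float:
    return amount * (1 - _discount)  # answer changes if _discount changes

# Pure: everything the function needs comes in through its arguments,
# so the same inputs always give the same output.
def price_pure(amount: float, discount: float) -> float:
    return amount * (1 - discount)
```

The pure version is trivially testable and cacheable, and you can reason about any call site without knowing the history of the program.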
The whole functional programming paradigm made a big change for me. Coming from the 90s, you know, we had the object-oriented paradigm forced down our throats up to the mid-2000s. If you didn't have OO on your resume, there was no point applying.
Then slowly the whole industry started looking at global state and considered any kind of shared state bug-prone (which it is). So Object Oriented took a big hit in the face. It wasn't the dream of reusability we were sold.
Some went ape in the functional direction and became obsessed with it. Some, like me, arrived at a more pragmatic understanding of what state is and how to keep it as simple and compartmentalised as possible. So we still use OO in parts, completely avoiding inheritance if possible, using composable objects instead, with pure functions. That's another big shift I took from game development: choose composition over inheritance.
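A game-flavoured sketch of composition over inheritance (all names invented): instead of a `Monster -> FlyingMonster -> FlyingFireMonster` tree, an entity is assembled from small capability objects.

```python
class Position:
    def __init__(self, x: float, y: float) -> None:
        self.x, self.y = x, y

class Mover:
    """A capability object: knows how to move a Position."""
    def step(self, pos: Position, dx: float, dy: float) -> None:
        pos.x += dx
        pos.y += dy

class Entity:
    def __init__(self) -> None:
        self.pos = Position(0.0, 0.0)  # has-a, not is-a
        self.mover = Mover()

    def update(self) -> None:
        self.mover.step(self.pos, 1.0, 0.0)

e = Entity()
e.update()  # entity moved one unit along x
```

Adding a new capability means adding a new component, not inserting another layer into an inheritance hierarchy.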
Code layout-wise, we went from hellish nightmare, to MVC, to MVP (presenters) and finally MVVM (thanks Silverlight).
Then came the reactive UIs. That's also been a big shift, but in the same direction. State is state, and the UI flows from there. As long as you don't get into a spaghetti fest of system-wide events, you're probably going to be OK.
I think we're becoming closer and closer to Model - View, with a bunch of render methods and utility methods on the side.
I'm not sure what the "next" thing is, but we seem to be getting this whole UI thing finally under control. Except Tailwind; anyone in their right mind can see what a clusterf that is. There definitely needs to be more work done on the design-systems and CSS-frameworks side; it still feels like cowboys and indians in this age. You'll see: in a year or two, monolithic CSS will make a comeback, probably partially server-rendered. Anyway, this is only for the web.
As for the native stuff, I'm really digging the SwiftUI thing. It took them a while, but it's undeniably amazing to be dealing with a language that is both a rendering DSL and a statically strongly compiled language close to Rust, but very terse. If only they could fix the compiler errors... I could finally drop my deck of tarot cards.
But as far as impressive language goes, I have to give it to Swift now. As a polyglot of most (yes most) languages, Swift takes the cake. Sadly it's biased...
So to sum up? Pure functions, composition over inheritance, reactive rendering, and strongly, statically typed languages used for both rendering and logic that are as powerful as Rust and as terse as Python.
> I'm not sure what the "next" thing is, but we seem to start to have this whole UI thing finally under control. Except tailwind, anyone in their right mind needs to see what a clusterf that is.
> composition over inheritance
I agree here, so I view the use of utility classes (like Tailwind) as the composition approach and the traditional CSS approach of using complex cascading rules as the inheritance approach. It took a while for people to see that big inheritance trees don't scale well in OOP languages. I'm seeing the same thing repeat with CSS here where people cling to the old way because it's what people have been told is best practice.
When you review code, you notice that people who dislike OOP are sometimes writing a lot of messy code. Procedural code, or procedural code hidden in an object, is not OOP, and it's horrible to review!
'Fail early' and 'throw exceptions in exceptional circumstances'. You need good top-level error handling and logging, of course. I still work with engineers who try/catch exceptions and return some sort of more 'graceful' error for problems that can't really be handled in any way other than letting the system fail. Failing early makes it clear when problems arise and where they come from.
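"Fail early" in miniature (a hypothetical config loader, names invented): validate at the boundary and raise, instead of swallowing the error and returning a vague "graceful" value.

```python
def load_port(config: dict) -> int:
    # Anti-pattern (avoided):
    #     try:
    #         return int(config["port"])
    #     except Exception:
    #         return -1   # hides *what* went wrong, and where
    if "port" not in config:
        raise KeyError("config is missing required key 'port'")
    return int(config["port"])  # a malformed value raises here, loudly

# Top-level error handling then catches and reports once, near main(),
# rather than every call site inventing its own error code.
```

The stack trace from the early failure points at the real cause; the `-1` sentinel would surface much later as a mysterious bug somewhere else.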
Functional programming combined with the process dependency tree of Erlang.
Up to a point, functional programming in PHP or Python was always littered with small stateful time bombs which could change at any point in time. Only after thinking about state explicitly in Erlang-based programming did it become clear how the state enclosed in OOP is a total digression from reasoning about state and behavior separately.
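The Erlang style of explicit state can be sketched in Python (a toy message handler; the shape mirrors a gen_server-like receive loop, but the names are invented): each step takes the current state and returns a new one, so state and behaviour can be reasoned about separately.

```python
def handle(state: dict, msg: tuple) -> dict:
    """A pure 'receive loop' body: (state, message) -> new state."""
    kind, value = msg
    if kind == "add":
        return {**state, "total": state["total"] + value}
    if kind == "reset":
        return {**state, "total": 0}
    return state  # unknown messages leave state untouched

state = {"total": 0}
for msg in [("add", 3), ("add", 4), ("reset", None), ("add", 1)]:
    state = handle(state, msg)
# state["total"] is now 1
```

Because `handle` is pure, every intermediate state is inspectable and replayable; there is no hidden field mutating behind a method call.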
Clean Architecture. I’ve only tried to implement it once; then I realized I was over-engineering. But I still use it as a template to design software, even if what I implement is something cruder.
Laravel’s use of conventions and facades. It was quite nice to see how a carefully designed developer experience can improve productivity. For most apps, the code basically writes itself.
TDD. It gave me a framework to write easier to understand code and solved my verification issues. Not sure I would still be a coder if it wasn't for that.