Code-reuse via "Implementation Inheritance" is completely unnecessary for specialisation or polymorphism.
When someone says that "inheritance is bad for code reuse" they're not talking about interfaces, or using inheritance for polymorphism. They're strictly talking about sharing code using implementation inheritance, which is the thing that has been widely criticised for more than 30 years now.
One can argue that even the "Template Method Pattern" doesn't fall into "implementation inheritance", since the implementation lives in the subclass.
If you read the posts, the discussion is far more nuanced than "inheritance bad vs. inheritance good".
Here is, more specifically, what I meant by my comment about specialization above: there is a class with four methods, three of which are exactly what you need, but the fourth you need to modify.
Solving this with inheritance is trivial (extend and override).
Solving this with any other paradigm is... much harder and requires a lot more boilerplate.
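To make that concrete, here is a minimal Java sketch of the extend-and-override approach; all names (ReportGenerator, footer(), and so on) are made up for illustration:

    class ReportGenerator {
        String header()  { return "=== Report ==="; }
        String body()    { return "...data..."; }
        String summary() { return "Totals: ..."; }
        String footer()  { return "Generated by v1"; }

        String render() {
            return header() + "\n" + body() + "\n" + summary() + "\n" + footer();
        }
    }

    // Only the fourth method changes; the other three are reused as-is.
    class CustomReportGenerator extends ReportGenerator {
        @Override
        String footer() { return "Generated for ACME Corp"; }
    }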
In functional programming you'd just create a new function that takes a value of the same type as those other three methods take.
But anyway, creating a new method without changing any of the methods of the super class is, I think, generally ok. The problems arise from modifying methods that the super class already implemented.
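For comparison, here is a rough sketch of that functional style, written in Java to stay consistent with the other snippets in this thread; all names are invented:

    record Report(String data) {}

    final class ReportFunctions {
        static String header(Report r)  { return "=== Report ==="; }
        static String body(Report r)    { return r.data(); }
        static String summary(Report r) { return "Totals: ..."; }
        static String footer(Report r)  { return "Generated by v1"; }

        // The "modified fourth method" is just one more function over the same
        // type; none of the existing functions above are touched.
        static String customFooter(Report r) { return "Generated for ACME Corp"; }
    }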
But code that uses the old function wouldn't magically start invoking your new function instead, something that inheritance and polymorphism give you for free.
Nothing will magically start calling your new function. If you are defining a new function that didn't exist before, then you'll have to actively call it somewhere. What you're describing instead is overriding an inherited function. However, that is full of pitfalls; I would not call that "for free" by any means. There are examples of the problems in this very thread. Anyway, that's distinct from polymorphism, which is present in functional programming.
You may be missing the point that's being made, though. No one is arguing against interfaces; the argument is against overriding concrete methods from a concrete class. Those need to be well thought out as extension points for you to have any chance of having stable software. Not quite "for free".
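A concrete version of that pitfall, essentially the well-known "instrumented set" example: in the JDK, HashSet inherits addAll() from AbstractCollection, which adds elements one at a time via add(), and the subclass below has no way of knowing that.

    import java.util.Collection;
    import java.util.HashSet;

    class CountingHashSet<E> extends HashSet<E> {
        private int addCount = 0;

        @Override
        public boolean add(E e) {
            addCount++;
            return super.add(e);
        }

        @Override
        public boolean addAll(Collection<? extends E> c) {
            addCount += c.size();    // counts the elements once here...
            return super.addAll(c);  // ...and addAll() calls the overridden add(),
                                     // so the same elements are counted again.
        }

        public int getAddCount() { return addCount; }
    }

    // new CountingHashSet<String>().addAll(List.of("a", "b", "c")) leaves
    // addCount at 6, not 3 -- the overridden method was never designed as an
    // extension point.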
Somewhere deep in the code, something is calling a.foo(); but when you pass a subclass of A that overrides foo(), that code "magically" calls the new implementation.
This is where specialization shines; no other paradigm allows this so elegantly and so simply.
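In Java terms, a minimal sketch of what is being described; A and foo() come from the comment above, everything else is made up:

    class A {
        String foo() { return "base behaviour"; }
    }

    class SpecialA extends A {
        @Override
        String foo() { return "specialised behaviour"; }
    }

    class DeepInTheCode {
        // Written against A and never modified...
        static String process(A a) {
            return a.foo();
        }
    }

    // ...yet DeepInTheCode.process(new SpecialA()) returns "specialised
    // behaviour": the override is picked up through dynamic dispatch, without
    // touching any existing call site.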
That is not exclusive to OO, and it wasn't invented by OO either. However, the pitfall of changing the implementation of a method that was not designed to be changed is, I would argue, more common in OO than in other paradigms. That's a common pitfall, though, not a place where OO shines. To be fair, it's a problem in Java, Python and other popular OO languages rather than inherently a problem of the paradigm per se: C++, for instance, avoids it by not making methods virtual by default.
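For contrast, Java has the opposite default: every non-private instance method is overridable unless it is explicitly marked final. A hypothetical sketch of declaring that intent per method:

    class PaymentProcessor {
        // Deliberately NOT an extension point: final methods cannot be overridden.
        final void charge(int cents) {
            validate(cents);
            // ... actual charging logic ...
        }

        // Deliberately an extension point: subclasses may override validation.
        void validate(int cents) {
            if (cents <= 0) throw new IllegalArgumentException("amount must be positive");
        }
    }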