They're theoretically orthogonal but practically not. You can deduplicate code without abstraction per se, but the result is generally unreadable and unmaintainable. As such, all reasonable code deduplication relies upon abstractions. However, not all abstractions involve code deduplication; some serve other goals instead (such as making it easier to reason about local state, invariants, etc.)
> If you write a new function like `fun addThreeTimes(i) = i + i + i`, I don't see that as a new abstraction at all.
If you only call it once, it's not code deduplication either.
What differentiates addThreeTimes(i) from sqrt(x) or average(x,y) or pow(x,y) or multiply(x,y)? Not how many call sites it has, nor whether the language gives it a dedicated operator. Instead, I'd say: the function's reusability, composability, commonness, ... or to put it another way: addThreeTimes is an "abstraction" - it's just a poor garbage unreusable unremarkable unmemorable abstraction with no expressive power.
However, poor abstractions aren't the only result of overeager code deduplication. Sometimes you end up with "good" abstractions misapplied to the wrong situations - e.g. they solve issues your current problem doesn't actually have. As an example, turning your list of game entities into a list of (id, aabb_f32) tuples might be exactly what you want for a renderer culling or broad-phase physics pass - but completely counterproductive for implementing the gameplay logic of a turn-based game! If you've already got a list of tuples, you have a few choices (sketched in code after this list):
1. Modify the tuple (add tile position information that's useless to the renderer/physics, muddying the abstraction)
2. De-abstract (e.g. perhaps change several function signatures to pass in the original entity list instead of the AABB list)
3. Re-abstract (perhaps your gameplay logic should take something else that accounts for things like the fog of war instead of a raw list of entities?)
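To make that concrete, here's a rough Python sketch - the entity fields and helper names are made up purely for illustration:

```python
from dataclasses import dataclass

AABB = tuple[float, float, float, float]  # (min_x, min_y, max_x, max_y)

@dataclass
class Entity:
    id: int
    aabb: AABB
    tile: tuple[int, int]  # gameplay-only state the renderer never needs

def extract_aabbs(entities: list[Entity]) -> list[tuple[int, AABB]]:
    # Exactly the shape a culling or broad-phase pass wants: just (id, box) pairs.
    return [(e.id, e.aabb) for e in entities]

def overlaps(a: AABB, b: AABB) -> bool:
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def cull(pairs: list[tuple[int, AABB]], view: AABB) -> list[int]:
    # The renderer only cares about boxes, so the (id, aabb) abstraction fits.
    return [eid for eid, box in pairs if overlaps(box, view)]

# ...but the turn-based gameplay logic wants tile positions, fog of war, etc.,
# and the (id, aabb) list has already thrown that information away.
```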
> What differentiates addThreeTimes(i) from sqrt(x) or average(x,y) or pow(x,y) or multiply(x,y)? Not how many call sites it has, nor whether the language gives it a dedicated operator. Instead, I'd say: the function's reusability, composability, commonness, ... or to put it another way: addThreeTimes is an "abstraction" - it's just a poor garbage unreusable unremarkable unmemorable abstraction with no expressive power.
I agree that call sites or presence of language operators is not the defining distinction here. But I disagree that reusability, composability, or commonness (is that not "call sites"?) are somehow defining features of an abstraction, either. Obviously, those are good qualities for code to have, but that's not related to what I'm thinking about.
The difference in my example is specific to the ladder of abstraction from addition to multiplication. When I was taught multiplication in early grade school, I was taught it as basically just another way to write addition. When I first learned it, I would do exercises that involved taking an expression like "3 * 5", translating it to "3 + 3 + 3 + 3 + 3", and then evaluating that. Over time, though, I've stopped thinking about multiplication as addition. In my mind, I just think of multiplication as its own thing. I've fully internalized the "abstraction" because I don't even think about addition anymore when I see multiplication.
So, when we take a Year, Make, Model, and Color and group them together and call it "Car", we're making an abstraction and it has little to do with code duplication. It has much more to do with wanting to think about higher-order constructs. You and I agree here, as per your first paragraph.
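In code, that grouping is about as small as an abstraction gets - a Python sketch, purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Car:
    year: int
    make: str
    model: str
    color: str

# Nothing here is deduplicated; the win is that the rest of the program can
# now talk about a "Car" rather than four loose values travelling together.
```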
If I have some kind of rendering engine and I find myself often rotating, then shifting a shape, I can write a `rotateThenShift(Shape, angle, distance) -> Shape` function and not feel like I've abstracted anything. I'm still "talking" about a shape and manually moving it around. Even if I just rename that function to `foobinate(Shape, angle, distance)`, I feel like I'm closer (but not quite) to a new level of abstraction because now I'm talking about some higher-order concept in my domain (assuming "foobinate" would be some kind of term from geometry that a domain expert might know).
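A sketch of what I mean, in Python - treating a Shape as a plain list of points just to have something runnable:

```python
import math

Shape = list[tuple[float, float]]  # a shape as a list of (x, y) points

def rotate(shape: Shape, angle: float) -> Shape:
    c, s = math.cos(angle), math.sin(angle)
    return [(x * c - y * s, x * s + y * c) for x, y in shape]

def shift(shape: Shape, distance: float) -> Shape:
    return [(x + distance, y) for x, y in shape]  # shift along x, for simplicity

def rotate_then_shift(shape: Shape, angle: float, distance: float) -> Shape:
    # Just gluing two existing operations together; I'm still "talking"
    # about rotating and shifting, so it doesn't feel like a new abstraction.
    return shift(rotate(shape, angle), distance)

def foobinate(shape: Shape, angle: float, distance: float) -> Shape:
    # Same body, different name: callers now think in terms of a domain
    # concept rather than the two steps it happens to be made of.
    return shift(rotate(shape, angle), distance)
```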
All other points about good or bad abstractions apply. I just don't think every single function we write is a new abstraction.
I realize it's been 8 days, but I've mulled over the distinction and figured out the point I'm trying to make - and it's a matter of concept reuse vs code reuse. I might write a once-off, project specific, completely nonreusable function, with exactly one call site, but it still might be named after and based off of reusable concepts.
A concrete example that comes to mind: I often write a "main" function, even in scripting languages that don't require it. This lets me place the core logic at the start of the script for ease of reading/browsing without having all its dependency functions defined yet. I then invoke this main function exactly once, at the bottom of the script.
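For example, in Python (the helpers here are invented just to show the shape of it):

```python
def main() -> None:
    # Core logic reads first, before any of its helpers are defined.
    numbers = load_numbers("input.txt")
    print(summarize(numbers))

def load_numbers(path: str) -> list[float]:
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def summarize(numbers: list[float]) -> str:
    if not numbers:
        return "no values"
    return f"{len(numbers)} values, mean {sum(numbers) / len(numbers):.2f}"

main()  # invoked exactly once, at the very bottom of the script
```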
This is clearly not code reuse nor code deduplication - but it is concept reuse, the concept being "the main entry point of an executable process."
I might write a mathematical function like "abs" or "distance" as a quick local lambda function without intending to reuse it, either. I might later refactor to reuse/deduplicate that code by moving it into a common shared library of some sort. I might then later undo that refactoring to make a script nice and self-contained / standalone / decoupled / to shield it from upstream version churn / to improve build times / ???
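For instance (Python, purely illustrative):

```python
# Quick local helper with exactly one intended use - not written for reuse:
dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

print(dist((0.0, 0.0), (3.0, 4.0)))  # 5.0

# It might later move into a shared geometry module (code reuse), and later
# still move back here to keep the script standalone - the concept of
# "distance" stays the same throughout; only the code's location changes.
```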
> multiplication
If you'd only used multiplication exactly once, it wouldn't have had much staying power as a useful abstraction. That it's a repeating, common, reusable pattern that can be useful in your day to day life is part of what makes it a useful abstraction worth internalizing.