>> Experience from languages that support those features has shown that what we gain in the very few situations where those extensions make sense
Nope, this is just incorrect. C++ is completely dominant in large swaths of the software industry largely because operator overloading allows the writing of generic algorithms (which lets you write large-scale software without losing C-like performance). Without operator overloading, it is much harder to write a function that can be specialized on types that weren't specifically designed for such use.
In Python alone (NumPy, the Decimal class), I could give examples for days of cases where operator overloading is essential. Go doesn't have it; Go doesn't have a lot of things, and Go will always be an also-ran language that isn't adopted outside of a very narrow domain.
The fact that built in types are 'special' and only they can support operators such as [] is enough for me to avoid the language. The complete lack of generic programming is more than enough.
Without operator overloading, it's hard to make generic numeric algorithms palatable.
Compare (fake Go-with-Swift-generics syntax):
func Distance<N>(x N, y N) N where N Number {
    return x.Mul(x).Add(y.Mul(y)).Sqrt()
}
Versus:
func Distance<N>(x N, y N) N where N Number {
    return Sqrt(x * x + y * y)
}
If your reaction is "well, Distance doesn't look too bad like that", I can replace it with matrix multiplication or point-inside-triangle testing. Not having overloading quickly fails to scale.
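To make the scaling point concrete, here is a hedged sketch in Python (mentioned elsewhere in this thread), where operators dispatch through dunder methods: a bilinear-interpolation routine written once with infix operators runs unchanged on floats and on exact rationals, with no per-type rewrite.

```python
from fractions import Fraction

def bilerp(a, b, c, d, s, t):
    """Bilinear interpolation, written once with infix operators.

    Works on any type supporting * and + and mixing with ints:
    float, Fraction, Decimal, NumPy arrays, ...
    """
    return (a * (1 - s) * (1 - t) + b * s * (1 - t)
            + c * (1 - s) * t + d * s * t)

# The same source works on plain floats...
print(bilerp(0.0, 1.0, 2.0, 3.0, 0.5, 0.5))   # 1.5
# ...and on exact rationals, with no changes.
print(bilerp(Fraction(0), Fraction(1), Fraction(2), Fraction(3),
             Fraction(1, 2), Fraction(1, 2)))  # 3/2
```

Spelling the same expression as chained .mul()/.add() calls is where the readability cost shows up.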
Of course, you frequently don't want operator overloading for matrix multiplication. Avoiding allocations is a big deal for speed, and so c.Mul(a,b) is often the right answer even with operator overloading.
I'm not aware of a language that allows operator overloading for matrix operations, but denies you access to c.Mul(a,b) or the equivalent when you need it. I find operator overloading pretty crucial for prototyping this sort of algorithm even if I'm later going to refactor it for performance.
edit: rereading the earlier comments, this may be orthogonal to the GP's claim that operator overloading is crucial for generics. Ooops. :)
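The two styles really can coexist. A minimal Python sketch (names hypothetical) of a vector type that offers both an allocating operator form and an allocation-free, write-into-destination form:

```python
class Vec:
    def __init__(self, data):
        self.data = list(data)

    def __add__(self, other):
        # Convenient operator form: allocates a fresh Vec.
        return Vec(a + b for a, b in zip(self.data, other.data))

    @staticmethod
    def add_into(out, a, b):
        # Allocation-free form: writes the result into `out`,
        # the moral equivalent of c.Mul(a, b) in the comment above.
        for i in range(len(out.data)):
            out.data[i] = a.data[i] + b.data[i]
        return out

a, b = Vec([1, 2]), Vec([3, 4])
c = a + b                  # prototype with operators
Vec.add_into(c, a, b)      # refactor hot paths later
print(c.data)              # [4, 6]
```

Nothing about overloading forbids the explicit method; you prototype with one and ship the other.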
For the distance formula? That's about the simplest graphics routine in the world. If you can't readably write distance without temporaries, the language isn't really usable for (generic) graphics programming. Replace distance with bilerp or point-triangle intersection tests (as I did in a sibling comment) and you'll see what I mean.
(It's totally fine for a language to be not interested in that domain. But that doesn't mean operator overloading is bad. Overloaded operators are essential for some domains.)
In the same way that we can decompose our code, we can also decompose the concept of "operator overloading": it gives you the ability to use one-letter (1), fixed-arity and precedence-following (P), infix (I) operators for your own or someone else's operations (G).
In languages that support the 1PI properties, you often overlook such decompositions because it's quick and easy to write sqrt(x * x + y * y). It's easy to read, too, but then you find yourself doing more and more complex calculations in-line, and readability suffers. You may end up with something that's worse than the corresponding code in a language that encourages defining small functions instead. (Lisp, of course, lets you use any combination of these properties, but the latter style is the one normally used.)
Yes, this is a Go thread... but I'll leave it here anyway.
And you're suggesting that nowhere in what used to be called the C++ Standard Template Library did they have to come up with a naming convention for functions?
I think you may have the causality swapped here; I think <algorithm> looks the way it does because they limited themselves to things that could be easily expressed with operators --- for a long time, to the detriment of the language; see: STL associative containers, operator<, and the longstanding lack of a standard hash table.
How was <algorithm> preventing the introduction of a hash table? The hash function is a template parameter, predefined for standard types, no operator needed.
Josuttis claims that hash tables didn't make it into C++98 due to lack of time.
I think we're talking about different things. Can you give an example of how operator overloading improves one's ability to write a function that can be specialized on types that weren't specifically designed for such use?
Now, implement a function 'average' that can work on any of these three types.
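As a concrete sketch of the challenge (in Python, where + and / dispatch through __add__ and __truediv__), one average covers int, float, Decimal, and Fraction without any of those types having been designed for it:

```python
from decimal import Decimal
from fractions import Fraction

def average(values):
    # Relies only on `+` and `/`; any type implementing
    # __add__ and __truediv__ qualifies, including types
    # written long after this function was.
    it = iter(values)
    total = next(it)
    n = 1
    for v in it:
        total = total + v
        n += 1
    return total / n

print(average([1, 2, 3, 4]))                      # 2.5
print(average([Decimal("0.1")] * 3))              # 0.1
print(average([Fraction(1, 3), Fraction(2, 3)]))  # 1/2
```

Note the Decimal case stays exact, which a float-only version cannot offer.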
If you can get everyone in the world to agree on a convention for how to express 'addition', then there is no difference, except we have already had a convention for 500 years: the '+' operator. Why you'd think the '+' operator is confusing but .plus() is not, that is what confuses me.
Or how about 'minimum'? In C, people end up writing a minimum macro, because there's not even a way to write one function that works on int, long, and float! What a sad world that is, where you have to metaprogram to implement min(x, y).
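Compare the situation in a language where < is overloadable. A Python sketch, in which one hand-written minimum covers every ordered type via __lt__:

```python
from decimal import Decimal
from fractions import Fraction

def minimum(x, y):
    # One definition for int, float, Decimal, Fraction, str, ...
    # anything implementing __lt__. No macros, no metaprogramming.
    return x if x < y else y

print(minimum(3, 7))                            # 3
print(minimum(2.5, 1.5))                        # 1.5
print(minimum(Decimal("1.1"), Decimal("2")))    # 1.1
print(minimum(Fraction(1, 3), Fraction(1, 4)))  # 1/4
```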
Operator overloading is often problematic because the operators come with semantic baggage, such as properties they are expected to maintain, and implementations of those operators often don't maintain those properties. For instance, addition is associative and has no side effects beyond the value it produces, but an overloaded operator won't necessarily preserve either. Abstractly there's nothing wrong with that, but in practice it has proved difficult. Operators also often come with an order-of-operations hierarchy designed for mathematical operations that maps poorly or confusingly onto what the overloaded operator is doing, which causes further practical mismatch.
Operator overloading works best in domains where the original constraints hold; for instance, adding a matrix to a matrix is mostly the same as adding two numbers, though if your type system can't enforce that the matrices are the same size at compile time, you have still added the ability for + to throw an exception, which it will never do with ints. Operator overloading got its bad name from cases where people overloaded the operators to do something entirely unlike what the original operator did, causing a mismatch between the user's expectations and what it actually did, and therefore bugs.
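A minimal sketch of that size-mismatch point in Python (class name hypothetical): on well-shaped matrices, + behaves just like numeric +, but without compile-time shape checking the operator gains a failure mode plain ints never have.

```python
class Matrix:
    def __init__(self, rows):
        self.rows = [list(r) for r in rows]

    def __add__(self, other):
        # Element-wise addition, just like numeric `+` ...
        if (len(self.rows) != len(other.rows)
                or len(self.rows[0]) != len(other.rows[0])):
            # ... except that, with shapes only known at runtime,
            # `+` can now raise, which `int + int` never does.
            raise ValueError("shape mismatch")
        return Matrix(
            [a + b for a, b in zip(r1, r2)]
            for r1, r2 in zip(self.rows, other.rows)
        )

m = Matrix([[1, 2], [3, 4]]) + Matrix([[5, 6], [7, 8]])
print(m.rows)  # [[6, 8], [10, 12]]
try:
    Matrix([[1, 2]]) + Matrix([[1], [2]])
except ValueError as e:
    print("raised:", e)  # raised: shape mismatch
```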
Operator overloading isn't really "right" or "wrong" per se, but it's probably a bad idea for anything that isn't able to fully implement the contract of the operator, including "associativity", "no side effects", whether exceptions can be thrown, etc.
If you read carefully, you'll generally see operator overloading arguments have two groups talking past each other, one cursing things like C++ streams that basically overloaded the operators in a meaningless way for nominal convenience that causes a lot of long-term headaches, and the other praising the benefits of overloading for math, since math is the big case where it works correctly.
Back on topic, Go correctly does not have operator overloading because Go's authors, as near as I can tell, have no intention of Go being good for mathematics.
> Operator overloading isn't really "right" or "wrong" per se, but it's probably a bad idea for anything that isn't able to fully implement the contract of the operator, including "associativity", "no side effects", whether exceptions can be thrown, etc.
Operators are not special here. What you are saying is basically "don't claim to implement the interface if you didn't implement it".
This exact problem exists widely in, for example, Java. How often do you see a Java object where someone overrode equals() but not hashCode()? I've seen that problem vastly more often than anyone doing something crazy with operators.
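For what it's worth, the same trap exists outside Java: in Python, defining __eq__ without __hash__ silently makes instances unhashable, the direct analogue of overriding equals() without hashCode(). A small sketch:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    # Overriding __eq__ alone: Python 3 sets __hash__ to None,
    # so instances can no longer go in sets or be dict keys.
    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

p = Point(1, 2)
print(p == Point(1, 2))  # True
try:
    {p}
except TypeError:
    print("unhashable")  # unhashable
```

The contract-violation problem is about partially implemented interfaces, not about operators per se.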
"Not quite true, the language could implement checked over- or under-flows (I believe Swift does)"
I am one of the apparently about ten people who thinks that should be the universal default. The vast bulk of people disagree, and I was trying not to poke the sleeping dog. :)
Well, that's that then; now I just need a Rust project and I'll start learning it. (I'm only an early adopter of languages, not a bleeding-edge adopter.)
Ah, I see what you mean. You can see this issue in action in Go's math package, where "Min" is defined on only float64.
I was lumping this in under the general category of the "Go has no generics" issue; without generics, Go can't do this anyway (the closest solution would be to define an interface Algebraic that specifies the functions algebraic types need to support, then implement operations like min and average atop that interface). I'm still of the personal opinion that (as other commenters noted) what you gain in being able to define operator+, you lose to Boost developers getting clever and implementing operator/= on paths. Even if we had generics, I'd personally prefer the Go-like solution of declaring, via an interface, the functions an algebraic type has to support.
I think that if you begin with the idea that multiple conventions are unavoidable, then you won't solve the problem with the "+" operator either.
You have just one more convention.
Type 1 defines: .add(x),
Type 2 defines: .plus(x),
Type 3 defines: operator+,
...
If you assume that you can convince people to adopt a convention, you can use .add(x) and avoid the problem in the first place.
Go, for example, tries to always have one obvious way to write things. The Go standard library is the idiomatic Go bible.
> Now, implement a function 'average' that can work on any of these three types.
I'm not convinced this is such a large problem that its solution is worth the myriad downsides that operator overloading is chained to. There are lots of schoolbook examples like this, but I've almost never seen operator overloading used well in practice. There are a few exceptions, like Boost shared pointers being somewhat easier to read, but they're few and far between.
As someone above was mentioning, I don't think that Go was ever intended for heavy math programming. It's a great language for writing network applications that is more performant and easier to deploy than scripting languages, yet easier to use than low-level languages. Use Rust or C.
Please elaborate on the myriad downsides that operator overloading is chained to. I am honestly not aware of any downsides that are unique to operator overloading.
Hint: Things like "they can be abused" or "you can do crazy things like have + return a dot product" are not unique to operators. I can very easily define .clone() in Java to return a dot product as well, or have .equals() do in-place addition.
At this point, operator overloading would simply be another convention, would it not? Your type would not work with mine if I used .add instead. You did not solve the problem of having to get everyone to agree on a convention.