But better than this would be if my Go tooling didn't default to trying to pull codebases through a proxy.
Forgive me for being cynical, but I don't like that out of the box the toolchain tries to pull all the code at `example.com/business/logic` through Google servers.
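For what it's worth, assuming a module-aware Go 1.13+ toolchain, the defaults can be overridden per machine; the module path below is just the placeholder from above:

```sh
# Treat matching module paths as private: fetched directly, and not looked
# up in the public proxy or checksum database.
go env -w GOPRIVATE='example.com/business/*'

# Or bypass the public proxy entirely and always fetch from the origin VCS.
go env -w GOPROXY=direct
```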
As it happens, we considered sum types quite seriously in the early days. In the end we decided that they were too similar to interface types, and that it would not help the language to have two concepts that were very similar but not quite the same. https://groups.google.com/g/golang-nuts/c/-94Fmnz9L6k/m/4BUx...
When I did research on this topic ages ago I read both of these links, and I'm very familiar with the arguments.
I also vehemently disagree with them, and I think that the way code is actually written in practice is on my side: people who propose sum types commonly refer to the Option<T> or Result<S,E> types in Rust. These are types which are almost exclusively used as return types.
Interface types are the opposite. They're used as input types, and almost never to distinguish between a concrete, finite list of alternatives. They are used to describe a contract, to ensure that an input type fulfills a minimal set of requirements, while keeping the API open enough that other people can define their own instances.
The fact that Go does not actually use interface types for its error handling is a pretty good argument in favor of that, I'd say.
The thing is, at this point it doesn't matter, sadly. Adding sum-types to the language now would be unsatisfying. You would really need proper integration as the primary tool for error handling in the standard library (and the ecosystem), and that's unlikely to happen, even less likely than a Go 2.0.
EDIT: Just to make it clear, I think not wanting to add sum types to the language is understandable at this point. The real shame is that they were not in the language from the beginning.
Go made a deliberate decision to use multiple results rather than Option<T> or Result<S, E>. It's of course entirely reasonable to disagree with that decision, but it's not the case that the Go language designers were unaware of programming language theory or unaware of sum types. Although you suggest that you want sum types as the primary tool for error handling, Go made a different decision, not due to accident or lack of knowledge, but intentionally.
(Separately, I'm not entirely sure what you mean when you say that Go doesn't use interface types for its error handling, since the language-defined error type is in fact an interface type.)
Fwiw, I didn't mean to imply that they don't know any language theory, just that the language doesn't seem to reflect it. I don't think this itself should be a controversial statement, by the way, Go aims to be a simple language, and the last thing it needs is monads.
Frankly, I'm just the type of person who doesn't understand why it is possible to silently drop error values in Go (even accidentally), while the language is simultaneously very strict about e.g. unused imports.
It seems like a pretty severe flaw for a language that takes pride in its explicit error handling, and a deep dive into why this flaw was acceptable to the creators would be really interesting.
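A minimal illustration of the asymmetry (the specific call below is just one I picked, not anything referenced above): the dropped error compiles cleanly, while an unused import or variable does not.

```go
package main

import "os"

func main() {
	// The error returned by os.Chdir is silently discarded; the compiler
	// accepts this without complaint.
	os.Chdir("/no/such/directory")

	// By contrast, the compiler rejects unused imports and unused local
	// variables outright, e.g.:
	//     import "fmt"   // error: imported and not used
	//     x := 42        // error: x declared and not used
}
```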
For now though, instead of sum types we ended up with features such as named returns (???). I imagine some of the complexity here was about not wanting to introduce an explicit tuple-type, since a Result<S, E>-type doesn't compose with multiple returns. (I feel like there should be some workaround here. Maybe anonymous structs + some sort of struct/tuple unpacking, but I could see it getting gnarly.)
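For illustration only, here is a rough sketch of what a Result-style type could look like with generics; the `Result`, `Ok`, `Err`, and `Unwrap` names are made up for this example, and nothing like this exists in the standard library:

```go
package main

import (
	"errors"
	"fmt"
)

// Result is a made-up sketch of a Result<S, E>-like type using generics;
// it is not part of the standard library.
type Result[T any] struct {
	val T
	err error
}

func Ok[T any](v T) Result[T]        { return Result[T]{val: v} }
func Err[T any](err error) Result[T] { return Result[T]{err: err} }

// Unwrap hands back the familiar (value, error) pair, so callers end up
// writing much the same code they would with multiple return values anyway.
func (r Result[T]) Unwrap() (T, error) { return r.val, r.err }

func parsePort(s string) Result[int] {
	if s != "8080" {
		return Err[int](errors.New("unsupported port"))
	}
	return Ok(8080)
}

func main() {
	port, err := parsePort("8080").Unwrap()
	fmt.Println(port, err)
}
```

Even with something like this, it composes awkwardly with the existing ecosystem of APIs that return (value, error) pairs, which is part of the friction I mean.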
> I'm not entirely sure what you mean when you say that Go doesn't use interface types for its error handling, since the language-defined error type is in fact an interface type.
What I meant is that this specific use case of sum types (i.e. error unwrapping) is not something that interfaces in Go are used for. Error handling in Go is done via multiple return values. This goes against the common claim that "sum types and interfaces are too similar/have the same uses", and should count for something, considering that explicit error handling is a big component of Go.
I'm not going to claim that Go has the ideal approach to whether an error can be ignored. In general, in Go, some errors can be ignored, and some can't. For example, fmt.Fprintf to a strings.Builder can never result in a non-nil error. It's fine to ignore the error returned by fmt.Fprintf in that case. On the other hand, fmt.Fprintf to an os.File can return a meaningful error, and for some programs it's appropriate to check for that error. (Though the issue is complicated by the fact that complex programs probably use bufio which does its own error handling.)
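As a concrete sketch of that distinction (the temp-file handling here is purely illustrative):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Writing to a strings.Builder cannot fail, so ignoring the error is fine.
	var b strings.Builder
	fmt.Fprintf(&b, "hello %s\n", "builder")

	// Writing to an *os.File can fail (full disk, closed descriptor, ...),
	// so programs that care should check the returned error.
	f, err := os.CreateTemp("", "example")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()

	if _, err := fmt.Fprintf(f, "hello %s\n", "file"); err != nil {
		fmt.Fprintln(os.Stderr, "write failed:", err)
	}
}
```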
I'm not personally concerned about examples like os.Open, where the result must be used. Sure, you can ignore the error. But a failure will show up very quickly. I'm not saying that this is not an issue at all, but I believe it's a less important one.
Part of the reason for Go's behavior is the idea that "errors are values" (https://go.dev/blog/errors-are-values). And part of it is that the simple fmt.Println("Hello, world") doesn't require any error handling. And part of it is the desire to make error handling clear and explicit in the program, even if the result is verbose.
Having worked on large Go codebases, I've seen quite quickly how bad the error handling is: the example you mentioned is one problem, and there are others that aren't detected by the compiler or linter. Using multiple return values to model errors is fundamentally broken, like other choices they made with the language, whether deliberate or not.
There isn't a code review for the changes on the dev.go2go branch (though you could construct one using git diff).
The dev.go2go branch will not be merged into the main Go development tree. The branch exists mainly to support the translation tool, which is for experimenting with. Any work that flows into the main Go development will go through the code review process as usual.
Remember that they are only pinned to an old language version. They will still work fine with new releases of Go, they will just be built with the old language semantics. So what's the harm?
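For concreteness, the pin lives in the module's go.mod; a hypothetical example (module path and version chosen arbitrarily):

```
module example.com/business/logic

// The go directive records the language version the module was written for.
// A newer toolchain still compiles the module, but applies that version's
// language semantics.
go 1.13
```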
I agree that unmaintained libraries that don't adapt to modules could be a problem. We'll have to see what happens.
Go could introduce a moving GC without requiring that map keys not be interface types. The simplest approach would be to use read barriers as we already use write barriers, and forward pointers during the moving phase of the GC.
(Efficiency costs might prohibit that, but it could be done.)
Another approach would be to use two different map implementations, and use a less efficient one for older code that used an interface type as a map key.
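Just to ground what "an interface type as a map key" means in user code (a trivial made-up example):

```go
package main

import "fmt"

func main() {
	// A map whose key type is an interface: keys of different dynamic types
	// can coexist, and the runtime hashes the dynamic value stored in the
	// interface. Maps like this are what the shim for older code would cover.
	m := map[interface{}]string{
		42:      "int key",
		"hello": "string key",
	}
	fmt.Println(m[42], m["hello"])
}
```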
It's an interesting example, though. You're right: if there is an old language feature that cannot be supported by a newer runtime, then there needs to be some sort of shim to let the older code keep working. That necessity may prevent us from making certain sorts of language changes.