Next steps toward Go 2 (golang.org)
334 points by rom16384 on June 26, 2019 | 214 comments



Go so far has always prioritized the things I care most about:

- performance & high concurrency
- a small language feature set so I can hire & train awesome programmers with a different background
- practical solutions, with a focus on getting things done

I feel confident that with Go 2 they will continue to go down this road. The `try` syntax looks kinda confusing with the implicit return. On the other hand, nothing too crazy.


The "Go is performant" gospel seems to come from people who moved to Go from Python and JavaScript. I don't think Go prioritizes performance more than most other popular statically typed languages like Java [1], Haskell, or Swift (or less popular ones like OCaml).

And Go obviously doesn't prioritize performance as much as C++ or Rust. Case in point: the very same error handling proposal discussed here. You're going to pay a performance price for using a handler, since the Go compiler not only avoids inlining defer blocks (if I understand correctly) but even allocates dynamic memory for each block. Meanwhile, equivalent RAII in C++ has no overhead over manual code.

Yes, the Go compiler team is now working on fixing this issue, but defer is not a new language construct and they should have started fixing it long ago, considering how often it's used. What's worse, they still haven't made it a stated goal to eliminate the overhead of defer completely; they only talk about reducing it in common cases.

This approach makes me sad, because Go does have the engineering power and stated design goals to be more serious about performance. Yeah, I feel confident Go will keep improving, but just the same as Java does (without the same hype).

[1] Although until recently they did make radically different choices about which kind of performance to optimize: Java went for throughput (JIT tricks for the fastest execution after warmup), while Go went for low worst-case latency and faster startup time.


I don't think anybody argues that Go is the most performant language available.

If performance is your main requirement, at the expense of everything else, then Go is definitely not a good choice.

Many consider it a good compromise between performance, readability, syntax, ecosystem, concurrency support, etc., and I believe that is what its designers aimed for originally.

And I'm not saying that other languages do not offer similar compromises, it's nice to have options.


I'm not GP, but I like Go because it prioritizes performance in addition to compile times, ease of use, safety, etc.

I don't think anyone is claiming that Go will outperform well-optimized C++, if you're willing to write your server software in C++.

Of the languages you mentioned, Java would be the alternative most typically considered for server software. You mentioned Go's low latency, but there are also some other strengths: more control of memory layout, slices making it easy to avoid copying in APIs, goroutines being lighter weight than Java threads, etc. But as long as Go is roughly comparable, I think it's fair to consider Go performant, since Java itself is one of the most mature and optimized languages for server software.


An interesting question is how compile times in Go compare to those of Java and C#. If the language is somewhat easier, but not very much easier, and compiles faster, but not really that much faster, it might not be a compelling way forward for Java and C# developers.

I've always assumed that the compile time comparisons were made against C++ since Google had a lot of large systems written in C++ that they said took a long, long time to compile.

Today I think C# using .NET Core is a good general-purpose language. It is easy to learn, fast, and compiles pretty fast. It also handles concurrency with async/await or threads. Like Java, it still has a longer startup time than Go and might use a bit more memory, but it should work for most systems.


I totally agree with you on C#; I don't think there's a really strong reason to choose Go over C# for a lot of applications.

Some of the core advantages of Go over Java are less pronounced with C# since C# has async/await, value types, easier C interop, unsafe, etc. The tooling for C# is great too.

C# also has bindings through Xamarin/Mono for iOS/Android/Windows/Mac development that the Go ecosystem doesn't really have an equivalent for. And of course there's Unity for 3D stuff.

There's some minor things like Go being a little less verbose or having a little easier learning curve, but I didn't really find those to be significant problems when using C#.

The main reason I'm using Go right now is mostly the GC properties. It's easier to avoid allocating memory, but also the pause times are much lower than in most other GC languages, including C# [1]. For anything where latency is important, Go is an interesting choice these days, since it's hard to go any lower without adopting some form of non-GC memory management.

https://cdn-images-1.medium.com/max/1600/1*_Nom6vNYqIAqozgK0...


Download the community version of Delphi or Eiffel.

Then check their compile times, language features vs Go.

Or even better, get Turbo Pascal 7 for MS-DOS and run it on DosBox.

It is only impressive for developers that never used such languages.


JIT time disappears after a warm-up, but the cold start of a .NET application is massive.


AOT has always been an option on .NET, even if until Windows 8 it lacked some love.

Additionally it has been available in Mono and other runtimes as well.


In HEAD, I believe, defer already allocates on the stack in the case where it is only ever called once in a function.

Anyway, cherry-picking a single feature of Go and claiming it's an example of why Go doesn't care about performance feels unfair. There's definitely been a huge amount of engineering effort spent on Go performance, especially latency in the GC. Defer is "slow", but it is probably rarely a bottleneck for anyone's use case. Try/catch is pretty slow in many languages too.


> Yes, the Go compiler team is now working on fixing this issue, but defer is not a new language construct and they should have started fixing it long ago, considering how often it's used. What's worse, they still haven't made it a stated goal to eliminate the overhead of defer completely; they only talk about reducing it in common cases.

They constantly improve defer's performance. If that's all you have against Go, then I guess it's good enough for you.


I get the feeling that Go is quite well rounded: good overall, but best at none.


I have to say, the more time goes on, the more I like contracts. Go concrete types can satisfy Go interfaces. Go concrete types and interfaces can satisfy contracts.

I would actually feel better if the proposed generics were a little less powerful. Composing generics can get ugly. This could be countered by community norms, however.


Yes there are languages that are easier to work with, and yes there are languages that are faster. If you want to support hundreds of millions of users and have a relatively small dev team, Go is one of the best options out there since it's both performant and productive to work with.


So wait, they're introducing a magic "function" that has a variable number of generic return arguments? Huh. In Rust the try! macro makes sense because A. there's a precedent of macros and B. there's a precedent of generics. But in a language with neither macros nor generics, this just seems odd. Kinda like making the Justice League movie without any of the requisite standalone movies.


In fairness to Go, there is a precedent for built-ins to be "special" in ways like this. So e.g. while there are no user-defined generic data types, the built-in collection types like array are generic.

The Zen of Rust is very much in the direction that standard operators shouldn't be more "special" than necessary, and a lot of fundamentals become "library-able" or at least library-extensible. Go simply doesn't take this approach. It's a more opinionated language, and in some cases that means you get the tool it makes for you, rather than getting to craft your own tool for yourself.


Isn't "generics for me but not for thee" already a core part of the Go philosophy?


No, you are trying to justify an odd choice of built-in functions in retrospect. Nobody but you has ever said that Go's "secret generics" are a core part of the Go philosophy.

The reason they exist is because Go language designers could not do without them. The same people that have claimed for years that Go didn't need generics.


I don't think "for me but not for thee" is an attempt at justification, fyi.


I can confirm. It was not an attempt at justification.


> The same people that have claimed for years that Go didn't need generics.

Considering all the successful software that's been written in Go, they weren't exactly wrong.


I don’t think that’s a valid argument. Firstly, successful software has been written in languages that we would call atrocious nowadays.

Also, even if go were better than all other languages, it still might not be perfect.

Let’s say there are 10 ‘objectively good’ programming language features.

If most languages have 7, but go has 8, you still can complain about wanting the two other ones.


> Considering all the successful software that's been written in Go, they weren't exactly wrong.

Considering all the successful software written in PHP...? Or Javascript...? Java...? Your argument absolutely has nothing to do with my point.


I think you're confusing "want" with "need"


Successful software has been written in every mainstream language. That does not mean that these languages all have no room for improvement.

"Languages do not need function calls, successful software has been written in assembly" is a pretty weak argument.


Seems like the infrastructure built with it requires it at a certain point.

https://medium.com/@arschles/go-experience-report-generics-i...


>Considering all the successful software that's been written in Go, they weren't exactly wrong.

How much of that software doesn't use Go's internal generics?


Once upon a time I was also writing successful software in Assembly, Clipper, Turbo Pascal, Turbo Basic, C, ....

It is such a lousy metric "if it is used it is good".

Well PHP and JavaScript are also used, a lot, even more than Go, most likely bringing home much more revenue as well.


By that logic, why update Go with _any_ new features? Just because something can be used as-is doesn't mean that it can't be improved


Go already has "append". It would be similar in that sense.


append, close, len, …

I guess the new bit is the variable-arity result from a function, not just a language feature (e.g. <- or range).


panic() and recover() are built-in functions, so there is already some precedent for doing this.


Also append(), copy(), new(), etc.

Technically ("well, acksually"), Go doesn't "lack generics". It lacks user-defined generics. The language implementation has several generics in it, you just can't add your own. So if you hear "Go doesn't have generics" that can give you a slightly incorrect impression about the language itself, but of course what people do generally mean is that it lacks user-defined generics and that that is bad, so there's still certainly a valid criticism there. It just may not quite be what you thought it was.


The problem with built-in generics is that they are quite ad-hoc and hard to reason about. Each generic built-in function is re-implemented by hand in the compiler.

If we call this type of ad-hoc implementation "generics", we're going down a very deep rabbit hole, since then almost any programming language can be said to have generics. For instance, Write[Ln] in Pascal, sizeof() in C, all array functions in Java pre-1.5. In other words, technically when people say "Language X lacks generics" they always mean "Language X lacks orthogonal, predictable, non ad-hoc, user defined generics".


I don't think this is a useful perspective. Every language with a static type system has some "built in generic types". Even Pascal and C have array types.

So there's no point in requiring the "user-defined" qualifier. For "generics" to mean anything useful, there must be some languages that the word applies to and some that it doesn't. If you treat "user-defined" as implicit, then that works. If you don't, then it says nothing.


> Technically ("well, acksually"), Go doesn't "lack generics".

Yes it does; generics are types. You can bet all these append(T[A],A)T[A] tricks are hard-coded in the compiler in an ad-hoc fashion, and not part of Go's type system.


I don't think I'd bet that way...



I've heard this "Go has generics, it just doesn't let you access them" argument so many times; it would be really good if someone could actually point to the piece of compiler code that defines those generic capabilities.

My guess is they don't have anything usable by the end user anyway; otherwise they wouldn't spend so much time thinking about a design for Go 2. It probably means whatever is in the compiler isn't that elegant, nor reusable.


Implementation choices are just that. Arguments about whether the language does or does not have generics should refer to properties of the language, not properties of the compiler.


The argument indeed isn't about the language, but about the reason why the language doesn't have generics. The fact that some predefined functions are developed using something looking like a generic type system, or not, is an interesting factor in this conversation.


I dislike the try implementation. One of Go's strengths for me, after working with Scala, is the way it promotes error handling to a first-class citizen when writing code; this feels like it's heading toward pushing it back to an afterthought, as tends to be the case with monadic operations.


It's just syntactic-sugar. From the proposal (https://github.com/golang/go/issues/32437):

    f, err := os.Open(filename)
    if err != nil {
        return …, err
    }
is simplified to:

    f := try(os.Open(filename))
> I dislike the try implementation

What would you suggest instead?


One downside to this is that it doesn't work for functions whose return values aren't disjoint.

io.Reader is an example where a function can return io.EOF and still return valid data, together with a count, in its first return value. If it returns a count > 0, the buffer has valid data. A lot of Go projects get this wrong because it's not obvious -- it goes against the behaviour of almost all Go functions.

Go is unfortunately full of little surprising warts like this where you have to tread super carefully. Nil channels are one of my favorites.


What a thorn the design of that Read method is, because it has caused perennial FUD against the validity of sum typing in Go circles.


Has it come up a lot in Go proposals discussions?


Dunno, I meant reddit flamewars etc.


It doesn't matter what we suggest. There are 100s of suggestions on that Git ticket, and it's a fait accompli masquerading as an RFC.

Down-vote if you want, but Go modules package management was handled in the same way. It's not without precedent.


And hopefully try will have the same outcome, i.e. something significantly better than existed in the ecosystem before.

Basically I agree that the process for modules may not have been ideal, but the outcome was pretty good. We could do a lot worse than getting the same again.


I mean, everyone is entitled to their own opinion, but I actually liked dep and never had a problem with it. It has been a massive problem converting our microservices over to Go modules, especially because some dependencies like Resty caused a lot of headaches, and I lost several man-weeks going through the process. Dep worked, and modules doesn't seem to have brought us any real benefit. But again, my use case is my own, and perhaps for you modules was a better experience.


Go modules have some (minor, in my opinion) issues, but Dep was a total nightmare.

Dep was made by one of the authors of Glide, which coincidentally suffered from many of the exact same bugs, maybe because it reused some of the same code. Our entire team constantly had "dep ensure" failing in unpredictable ways (locally or on the CI server), usually caused by the solver not understanding something. These failures were completely incomprehensible and not fixable by the user. Dep's solver was also very slow.

From what I could tell, it was a combination of shoddy engineering and trying to do too much; if I remember correctly, Dep and Glide both tried to automatically detect and convert dependencies that used competing tools like Godep, which didn't work very well. For a long time, Dep didn't work at all with the Kubernetes client library, for example.

We've had almost zero problems with Go modules. I've encountered some minor bugs, but unlike Dep, nothing worth tearing my hair out.

The drama around Dep vs. Russ Cox shouldn't have happened, of course.


Any good documentation on using go modules day to day?

I have found the package management story around Go so confusing that it's really stopped me from trying to use the language.

For context, I'm used to how composer/npm install modules into a local folder. I just can't seem to figure out the Go way of doing modules without getting completely confused.


The official wiki is good: https://github.com/golang/go/wiki/Modules

This article looks like a good intro: https://roberto.selbach.ca/intro-to-go-modules/

The equivalent of "npm install" is "go get" (optionally "-u"). You can also edit go.mod manually. Like NPM, this must be done within an application's root.

A point of confusion might be the difference between a module and a package. A package is just a folder that declares "package foo" at the top in all its Go files. A module is the closest analogue to an NPM package. Similar to NPM, a single Git repo can contain many nested modules. Unlike NPM, modules can be imported with Git -- no need to publish to a special registry.

When a Go file imports a package, the referenced package might live in a module outside your own. It knows it's outside your module because go.mod defines the full root path (e.g. github.com/foo/bar/baz) of your module.

You might experience some confusion when you look at how the new module system interacts with the old GOPATH way. Go currently supports both modes.
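
To make the module/package distinction concrete, a minimal go.mod might look like this (module path and dependency are hypothetical):

```
module github.com/example/myapp

go 1.12

require github.com/pkg/errors v0.8.1
```

Running "go get github.com/pkg/errors" from the module root would add or update that require line.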


Indeed, I had exactly the reverse experience; dep never worked well, it was always moaning about obtuse errors ("project is not part of any GOPATH" etc) and was pretty slow even when it did, whereas go mod just worked almost flawlessly. I guess our mileages vary :)


Why did you go through it? You didn't have to, you could have waited a couple of years. dep still works. A custom script to populate a GOPATH with specific versions or forks of all needed modules still works.


Yes, in fact, I like Go modules. But that's not the point, the point is that the community had precisely zero say in how that feature works.

Likewise, the community doesn't have any say in how try works, and that ticket asking for feedback is window dressing.

You can tell that this is window dressing by looking at the timestamps on when that ticket was opened, comparing it to when this blog post was released, and then considering how the hundreds of comments on that ticket were completely ignored.


I feel like there's also some "not invented here" syndrome going on.

Dep was pretty good, and there was absolutely the possibility of just making that official (and maybe tidying away a few of the points raised during the module discussion). But no, that wasn't good enough.

Dave Cheney's errors package (pkg/errors) is a great contribution to Go error handling, used by a lot of gophers. Adopting that would have been simple, backward-compatible, and given us better error handling. But no, that wasn't good enough.

I'm so used to "if err != nil" that it's punctuation for me. Having some code use "try" because it doesn't need to be wrapped, and other code use "if err != nil" is going to break that. I'll have to actually read the error handler now, in case it does something unexpected.

Or not use "try" and stick with errors.Wrap. That seems way more sensible to me.


I disagree. I think Go modules are the result of listening to the community. GOPATH, for example, would still be there had the Go team not listened to the community.


Why must “the community” necessarily have any say at all in how this new feature is designed?


simply, because "the community" are a set of people who have spent (collectively) millions of hours using the language in a massive variety of use cases and environments.

it's not democracy, but they'd be stupid to just ignore all that expertise.


Modules as a way to download stuff, and to remove the dependency on GOPATH?

Sure.

Semantic import versioning and the rush of shit to the head in their attempts to break vendoring, on the other hand...


It is different from the original proposal and takes user feedback into account. At the end of the day one has to make the call.

I think I can live with it.


The thing is, they don't have to make a call. This is a non-problem where a change to `go fmt` would suffice (i.e. changing fmt to eliminate the line breaks in `if err != nil { return err }`).

This is the C# 9.0-ification of a language. Let's add more constructs because we're compiler developers and that's what we do.


Choosing nothing is still making a call.


The problem with this, with Go being a pragmatic language that encourages people to "just write simple code that works," is that people will inevitably use `try()` more often than needed without properly handling, inspecting, and/or wrapping errors.

I'm saying this as someone who has helped a large organization of primarily Python programmers onboard with Go. Proper error handling is already something people don't do well (until we enforced lint rules throughout the company, there were already way too many `f, _ := os.Open(...)` types of error dropping).

I'd much prefer something more strongly typed with respect to error handling, rather than less. Like if Java had disallowed extending RuntimeException so all exceptions would be part of function signatures (and therefore have to be caught).


Also, using the same word for a completely different implementation isn't great. People coming from the try/catch world are going to be confused by this, purely because it makes "try" do something completely different.


It's syntactic sugar masquerading as a function call. At the very least, I would use new syntax to make it clear that try does not have the semantics of a function call.


This I can agree with. Rust, appropriately, has special syntax for similar semantics.


I like the idea of throw better. The whole point is reducing boilerplate so that a conditional return can be made. Something like this:

    f, err := os.Open(filename)
    throw err

This allows alternate signatures easily. I also think the syntax makes a little more sense, as you're "throwing" the error up the chain unceremoniously.


I'd make it a keyword at least (things would be fine with lots of warning, just call it Go 2 and accept the incredibly minor breakage), so you don't spray nested brackets everywhere.

f := try os.Open(filename)

Or perhaps better, leaving the working code alone and concentrating on the bits people actually don't like (the error handling boilerplate).

f, err := os.Open(filename)

check(err)

There is no need to attempt to put the error handling on the same line as the function, and if you do it just gets in the way of reading the actual function call, permits nesting etc. This approach would have the advantage of leaving normal code alone and only touching the error handling.

I'd even be fine with check as a new builtin function, since it would typically sit alone anyway, I don't think it needs to be a keyword, whereas with try I think all the brackets will make it harder to read.

In fact I've just found a variant of this proposed here:

https://github.com/golang/go/issues/32811


Do nothing!


To me, that sounds a lot like, when try() is introduced in Go 2, don't use it.

(Though I guess it's a bit harder because you'd have to have your whole team not use it...)


And also all libraries, and community coding habits (which educate new users of the language), and....

"don't use it" is definitely an option, but it's nowhere near "it doesn't exist". It's opting into a constant struggle against the tide.


Code reviews can help. To me, all example uses of try would be better with properly contextualized error messages ("could not open file" as opposed to "could not read file": they are two very different operations, after all!)

At the same time we are surviving the type aliases quite fine (who has ever seen them used? Not me...)


> At the same time we are surviving the type aliases quite fine (who has ever seen them used? Not me...)

You might not have used them directly, but you might have benefited indirectly from type aliases. For example, when the "context" package was moved to the stdlib, aliases allowed existing libraries that still used the old "x" context package to be used together with code that used the new package.


> What would you suggest instead?

Implement exceptions. I'm honestly at a loss as to why the Go community hates them so much.


There are a number of good reasons not to like exceptions. And yes I'm fully aware that good coding practices and some implementations can mitigate these problems.

* Exceptions create a type of non locality that is hard to reason about.

In a language with exceptions, every call to a function might result in the calling function exiting. Flow control might jump up several stack frames unexpectedly. This works against the principles of structured programming and the normal expectation that a function, when called, will return a value.

This means the programmer, the compiler, and the runtime environment must always take this into account. This can create unexpected behaviour and a whole host of special cases for the compiler and runtime.

* Uncaught exceptions cause crashes

Even when the error is not something that would cause a problem, if an exception isn't caught, the process will crash.

* Exceptions are often too heavy for what they do.

In many situations an error is an expected/frequent outcome. Consider a disk-based cache of some sort. The cache would attempt to open a file containing the cached result, and if the cached result was not found, it would then do the processing to create that result.

In this case, in most typical exception systems, the exception would propagate up the call stack to the exception handler, capturing the stack in the process. Once the handler is reached, all this work is thrown away.

* Exceptions create bad error messages

Too often I've seen a buggy Java process spew stack traces into the logs where a single-line error message saying what happened would have made the operator's 2am emergency much more manageable.


The Go community is big and divided, and has completely different opinions on "exceptions in Go", ranging from "we have exceptions already, it is called panic()" to "exceptions are evil".

Oh, and by the way, agreeing on what an exception IS brings a whole other dimension into this discussion too.

So, I wouldn't go as far as "the Go community hates them". People have asked for them more than once. No, we did not get them.


I think it's going to be fine.

It doesn't encourage skipping error handling; it just allows skipping annotation where it doesn't add much. Sometimes, in a chain of several calls, a function in the middle can't really add much context, and it is better just to bubble the error up one level without annotation -- that is what try is for.


Once the linters get involved, everything that is `return nil, err` will need a special comment to silence recommendations to "use try here". That constant, nagging social pressure to reduce linter warnings will result in skipped error handling.


Try doesn't allow skipped error handling, the error still needs to be handled, just one level up.

I disagree that linters should or will recommend it except in cases where there is no annotation and there is therefore no functional difference in using try.


We'll see.


That sounds like a personal problem. Nothing nags me when I write Go. Don't use those linters.

There is a reason Go doesn't have compiler warnings.


Do you avoid `go fmt` as well?


What complaint do you currently see from go fmt that is closest to "you should use try here"?


Why are you surprised? lint, vet, and fmt are common parts of the ecosystem tooling. Many projects require they be run as part of the contributor workflow.

These tools are so common and so normal that while they aren't part of the compiler, they're part of the "social" toolchain.

So, it's odd that the author doesn't use a linter when the general tendency is to use a linter, a licensing checker (glice), a security scanner (gosec), a formatter (fmt), a static analysis tool (go vet), etc in your CI/CD pipeline.

It's so odd that it raises the question: does this author avoid common tooling? And by extension, do I care what they think about how Go should be written if they aren't using the standard tools?


I didn't state I don't use linters. I use go fmt, go vet and golint constantly. I don't consider fmt a linter so much as an auto-formatter. I don't consider the output of go vet or golint to be warnings, more like errors -- I don't ignore them.

Even if the try proposal were to result in eventual go vet errors, then surely go fix would be able to rewrite on your behalf, so this would be a non-issue, just as go fmt is a non-issue. But I don't think this will happen any time soon.

> everything that is `return nil, err` will need a special comment to silence recommendations

I don't use linters that can be ignored with special comments. I don't recommend them. This is a problem in other languages/ecosystems, but in my experience it is not a problem in Go, where it's considered an anti-pattern. This feels like FUD to me.

> nagging social pressure to reduce linter warnings will result in skipped error handling.

Yes and no? Social pressure to reduce linter warnings is a good thing; why use a linter if you are going to ignore its output? This problem only exists if you have linter warnings.

> a licensing checker (glice)

I've never encountered this. I don't think a linter that hits the Github API is a good idea.

> a security scanner (gosec)

This looks like a pretty opinionated linter. Some example rules:

> G103: Audit the use of unsafe block

> G104: Audit errors not checked

> G201: SQL query construction using format string

> G105: Audit the use of math/big.Int.Exp

> G501: Import blacklist: crypto/md5

Are these actionable? Probably not. There are perfectly valid reasons to do all of these things. Maybe this is useful to paint a picture of a code base you're evaluating, or to flag new changes as suspect, but I don't know what purpose this would serve in CI.

Don't use linters which just create noise and non-actionable output. They're not valuable, they're a waste of energy. Make them manual, or maybe run them against new changes only with the expectation that they're informative but often ignored.

These are all personal problems. Most Go users don't have these, I suggest you choose to not have them also.


This. Fmt or imports or whatever will bug you about stylistic things “don’t use snake case!” “Why are you using caps?!” “Don’t touch me!” But I’ve never seen it recommend using built in functions. Hope it stays that way.


In what way does Go do errors better than Scala though? Error handling is _more_ first class in Scala, not less.

I don't think applicative/monadic error handling is an afterthought in any way. There are combinators to recover, for instance. But it's optimized in such a way that you can focus on the happy-path of your code and you get a sensible default for error-handling (short-circuit..just like Go's idiom!)


I don't know why you're being downvoted. Error handling in Scala is strictly superior to golang's.


To me, the `try` implementation is brilliant.

It achieves what a real monadic Either / Result implementation would achieve in error handling, it stays simple, and it's backwards compatible.

While I'm not a fan of Go in general, I can't help but admire at achieving all this so simply and cleanly (as much as the underlying return-two-values thing can be considered clean).


Looks like they really plan to move forward with the `try` function... I'm personally not a fan of it, I hope they listen to the feedback they received on the GitHub Issue and not just push it through like the error inspection [0].

[0]: https://github.com/golang/go/issues/29934#issuecomment-48968...


This looks pretty much exactly like the old try!(expr) macro in Rust, which has since been replaced with the postfix `?` operator.

One crucial difference is that the proposal uses a named `err` return value that can be manipulated in defer. This is supposed to allow cleanup and wrapping of the error type.

Rust solves wrapping by allowing auto-conversion between error types (if they implement it).

My first intuition is that it would be an improvement over omni-present `if err != nil {}`, but it feels somewhat awkward and tacked on. Especially the mutable `err` return value.

Of course there also was a reason why try! was replaced with `?`: awkward nesting and chaining. Go would have the same problem.


> Of course there also was a reason why try! was replaced with `?`: awkward nesting and chaining. Go would have the same problem.

Thanks to tuple returns being a bolted-on afterthought on the language, Go already features awkward and cumbersome nesting and chaining!


To be clear, you can name the return value whatever you want. People just typically name their error values `err`. Also, the mechanisms to mess with named return values in defer already exist: https://play.golang.org/p/7nFyiuAa3Ra


Just to clarify, I was aware that named returns already exist in Go.

My main point is that this somewhat weird pattern is encouraged by the proposal since no other wrapping mechanism exists. And IMO wrapping is really essential for debuggable code.


I totally agree that good error wrapping is essential. In fact, I've been using this exact system for wrapping my errors for years, and it's been great (https://godoc.org/github.com/zeebo/errs#WrapP).

I suspect it's only "somewhat weird" due to lack of familiarity, and that adding an additional mechanism to wrap errors when one already exists is not in the spirit of a language built from small orthogonal components.


I don't like that it's mutable state that is easy to shadow accidentally. (tooling can warn you about it, but it's suboptimal).

Also, in a function with a couple of different error types, do you end up checking the error type manually and reacting accordingly, all in one final defer? That seems error prone. And it doesn't work at all if multiple statements can produce the same error - you won't know which statement caused it.

And, last but not least, this would lose proper backtrace information: the backtraces all point to the defer line rather than separate lines for each error.


You can’t naked return with a shadowed named return variable. The compiler disallows it. Thus, at any return site, you locally know that you’re either returning the only err in scope (the named return), or the specific value listed in the return will be assigned to the named return variable.

I don’t know what error types has to do with it. If you need to annotate differently for every exit point, sure, a defer doesn’t work, but neither would any other proposals I’ve seen. In those cases, do the more verbose thing because the verbosity is apparently warranted. In my experience, it is rarely warranted, and a stack trace with annotation about the package/general operation is sufficient.

The stack traces contained still include what line the return happened on when queried from inside the defer. They retain the information about which return executed. My, or any, helper could, if desired, explicitly skip the defer stack frame, and it would be indistinguishable from capturing at the return itself.


> I don't like that it's mutable state that is easy to shadow accidentally.

It can't happen. The compiler forbids it: https://play.golang.org/p/65bFHrgGblb

> Also, in a function with a couple of different error types, do you end up checking the error type manually and reacting accordingly, all in one final defer?

If a function needs to check the error returned by another function and act accordingly, then don't use try() and use a if statement.

If we just need to decorate the errors with proper context before returning them to the caller, then we use try everywhere and a single defer statement for the decoration. We can use a single defer statement because we expect that the error context is the same in the whole function. See this comment by Russ Cox: https://github.com/golang/go/issues/32437#issuecomment-50329...

> this would lose proper backtrace information: the backtraces all point to the defer line rather than separate lines for each error

No. In the deferred function, you can get the line of the actual return: https://play.golang.org/p/7MVZupCLh5F


In Rust you can also do arbitrary operations on the error before it is returned with `Result.map_err()`.


It looks like instead of making a better error handling system, they just made it easier to not type `if err != nil` everywhere. Then there's all that handler stuff that looks very much like Java's `try ... catch` in reverse order.

Pretty underwhelming for what's supposed to be a modern language.


"Then there's all that handler stuff that looks very much like Java's `try ... catch` in reverse order."

The key characteristic of Go's error handling is that you have to handle errors in the scope in which they occur, vs. exception handing which is designed around throwing errors up the stack until something finally handles it, often quite distant from the point of the error and lacking context.

"try" just re-spells that. It isn't a step towards exception handling; it's exactly as "exception-handle-y" as if err != nil { return err } already is, whatever value you may consider that to be. Part of the goal is to make correct handling where you actually do something with the error that much easier, instead of having to do something essentially unrefactorable for every error, through a combination of allowing error handling to be factorable in this new scheme, and some other changes to errors to add more structure by convention to create official ways of composing them together and examining these composed errors in sensible ways.


The confusing part is the underlying assumption that there exists a way to "handle" errors, understood as a way to workaround the error and somehow continue execution. This is most of the times futile, there is nothing to "handle", other than abort the execution and report the condition.

For example, consider a bug that causes a data structure invariant to be violated. The correct "handling" of the situation is to fix the bug and rerun the code, not add layers and layers of error "handling" code ahead of time.


One of the things the Go community has encountered is chaining together too many "if err != nil { return err }" basically returns you to the exceptions problem of having no context around the error anymore, except now you don't even have a stack trace to help you out. (This is what I meant by the bare "if err != nil { return err }" being "exception-y", only, if you like, unambiguously worse if the code base is full of that literal block.)

"Handling" an error includes further annotating it with information about why the code in question couldn't fix it, and this is probably the most common case. (I hedge only because we humans are actually really bad at judging such things, with our availability heuristic bias and other biases. I'm fairly confident this would indeed be the #1 case, but I've been wrong about this sort of thing in the past when I actually went to check.) We don't mean "fix" the error, just... "handle". Ideally you end up with a composite error object (as I said, in conjunction with a few other library-level things that are likely to get pulled up to official support and culture) that contains much more information about the error, and that if you do end up flinging it up to higher level code, you're leaving it with more options for understanding the resulting problems and dealing with them.


The task of an error "handling" mechanism could be then described as:

A. Create an error object, enter "error mode".

B. Annotate an error object with contextual information at each call point.

C. Return the error object up the call stack.

D. Translate from "error mode" into "value mode", producing the annotated error object as a value.

For languages with exceptions, B is automatic but limited to stack traces, and C is automatic. A and D are obtained via special language constructs, for example "raise ... / try: ... except E as ex: ...".

For Go [please excuse my almost total ignorance], B is manual, but arbitrarily expressive, C is manual, but terse via "try(...)", whereas A and D are done using a combination of standard language constructs and style conventions.

Assuming the above is correct, perhaps there is some reasonable design that automates B for most common use-cases. In particular, logging the invocation arguments at D makes it trivial to re-run the offending code in a debugger, with full stack trace and invocation arguments. Wrapping most function calls with "try(...)" is annoying, but manageable, whereas thinking what information should be carried by the error object on a case by case basis is a waste of brain cycles.


That's literally the case that "panic" is there to handle.


The example was a worst case example. There are weaker versions thereof, for example the familiar:

    def handle(request):
      try:
          return Response(200, process(request))
      except UserError as ex:
          return Response(400, ex.message)
      except InternalError as ex:
          return Response(500)
The principle stays the same, there is not much to "handle" and no option to recover without external help, either by providing a well-formed request for 400 errors or by providing well-behaving code for 500 errors.


If an error is unrecoverable, panic. If all you're trying to do is generate a 500 error, net/http even handles the panic for you. If you're a library author and unsure of whether your callers want a panic or not, provide a normal and an (idiomatic) "Must" version that panics on the error.

There are legitimate criticisms to be made of how Go's error handling works, but I think the language already handles the case you're talking about.


Is Go “supposed to be a modern language”? I'm not sure even the language designers would agree with that characterization.


What is a modern language anyway?


One that takes into account features that have made into the mainstream during the last 20 years, instead of feeling like an Algol-68 subset.


Hey, that's not fair to ALGOL 68. It had sum types, pattern matching and everything-is-an-expression [1].

[1] https://dl.acm.org/citation.cfm?id=356671


I guess that's why Go is a subset :)

Although to be fair to Go, I don't think ALGOL 68 had a (slightly broken) implementation of CSP and HTTP/2.0 support in the standard library.


I see JavaScript made it mainstream for all sorts of clients and servers, and Electron desktop apps made it to the mainstream. So I remain skeptical of the argument that just because new features have been invented, they are good or need to be implemented everywhere.


JavaScript enjoyed being the only option with regards to browser as platform.

Go is only inevitable for those that need to deal with Docker and Kubernetes, the NoSQL hype successors.


Been a fan of JS since before The Good Parts book. Used it server-side in Classic ASP, Netscape Server and a couple more obscure runtimes before Node.js.

Personally, I find Rust more approachable and easier to wrap my head around than Go, though I don't like some of its syntax changes as much. Waiting on async/await to land in a couple months though.


No surprise there. Rust is the most loved language in so many surveys, and the one most devs plan to learn in the near future. Many say it will be polished and ready by next year. I personally feel it may be ready for general developers the same way the Linux desktop will be ready for general users next year.


I think it's already ready for a lot of use cases... though, I think baked in async/await will carry it across the line for many. It's good for low-level duties and even has some decent/good web server frameworks. The webassembly flow is better than a lot of tools as well. I think that there will be some work around nicer UI/UX tooling for graphical apps, but it's still very usable as it is.


I'm learning it now. The async / await stuff is great to have in the core, but isn't something that was otherwise holding me up from learning it.

I believe that after I fully integrate the Zen of Rust, that I will be writing multi-threaded programs with fewer bugs than I would in my alternative languages (C, C++, Go, Python, Lua).

The annoying things in Go that have accumulated in my mind over the last few years are all dealt with in a superior fashion in Rust today.

I'll still be using Go for some stuff at work, but I won't be starting personal projects in it, like I might have in the past.


I don't care about the features of the last 20 years if I'm able to do my job efficiently, which Go as a language provides. Ever wonder why those great academic languages with a ton of features never get adopted?


So now languages like Java, Swift, Kotlin are academic languages?


Java is like 25 years old, you know.


I know, but apparently it has too many academic features not worthy of Go's adoption.


If you don't care about literally decades of progress you're just a bad software developer.


Well, for one, it handles text as being UTF-8 by default, even at the lowest level (and in no case assumes that byte = character !)


I would say it is a modern language or attempts to be one, as it was designed recently, and was able to take lessons from a variety of older languages. Rust and Go are probably the most popular general purpose, good performance modern languages. Swift and TypeScript are also pretty modern.


Go is new, not modern.


It still needs to catch up with CLU, released in 1973.


In what way?


Generics to start with.


Which can be solved with a snippet bound to `e`.


The problem `try` is addressing isn't typing the boilerplate, it is that it clutters up the code making it harder to read.


it's not as if `if err != nil` doesn't clutter up the code!


Sorry, I said that poorly. I mean that the problem `try` is fixing is that all the `if err != nil ..` clutters up the code and they are introducing `try` to clean that up.


They aren't really moving forward with it, they are implementing an experimental version of it so people can try it in real code. Many things are added temporarily when a new version is in development. Eg. the error handling changes they mentioned were implemented, and then later most of it was removed before the feature freeze.

So there's still plenty of time to stop the proposal.


My main beef with this and other proposals is that they don't clear up a fundamental flaw in Go: Multi-value returns with errors as a poor man's sum type.

Most functions in Go have a contract that the return values are disjoint, or mutually exclusive — they either return a valid value or an error:

    s, err := getString()
    if err != nil {
      // It returned an error, but "s" is of no use
    }
    // s is valid
This pattern is so ingrained that there's hardly a single Go doc comment on the planet that says "returns value or error". The same goes for the "v, ok := ..." pattern.

But this is just a pattern and not universally true. A commonly misunderstood contract is that of the `Read` method of `io.Reader`, which says that when `io.EOF` is returned, the returned count must be honoured. This is an outlier, but because the convention of disjointness is so widely adopted, many developers make this assumption (it's trivial to find repos in the wild [1] that make this mistake), and so this is, in my opinion, bad API design.

(As an aside, it's also true that multi-value returns beyond two values almost always become cumbersome and impractical, especially if said values are _also_ mutually exclusive. Structs, having named fields, are almost always better than > 2 return values.)

This kind of careless wart is typical of Go, just like other surprising edge cases like nil channels (or indeed nil anything).

I would much rather see a serious stab made at supporting real sum types, or at least mutually exclusive return values. For example, I could easily see this as being a practical syntax:

    func Get() Result | error {
      ...
    }
This union syntax showed up in the Ceylon language, and it's a neat pattern for a conservative language that doesn't want to venture into full-blown GADTs.

Such a syntax would be a much better match for a try() function, since there's no longer any doubt about the flow of data — there's never a result returned with an error, it's always either a result or an error:

    result := try(Get())
or simply support existing mechanisms for checking:

    if err, ok := Get().(error); ok {
      ...
    }
    if result, ok := Get().(Result); ok {
      ...
    }
    switch t := Get().(type) {
      case Result:
        // ...
      case error:
        // ...
    }
I'd love to see a `case` syntax that allows real local variable names:

    switch Get().(type) {
      case result := Result:
        log.Printf("got %d results", len(result.Items))
      case err := error:
        log.Fatal(err)
    }
And of course, you could have more than two values:

    switch Get().(type) {
      case ParentNode:
        // ...
      case ChildNode:
        // ...
      case error:
        // ...
    }
The Go compiler can be strict here and require that every branch be satisfied or that there's a default fallback, although some might prefer that to be a "go vet" check.

A full-blown sum type syntax would be awesome, though I know it's been discussed before, and been shot down, partly for performance reasons. Personally, I think it's solveable. I'd love to be able to do things like:

    type Expression Plus | Minus | Integer
    type Plus struct { L, R Expression }
    type Minus struct { L, R Expression }
    type Integer struct { V int }
[1] https://github.com/search?q=%22read%28%22+%22if+err+io.EOF%2...


If you haven't yet, take a look at rust... ;-)

https://doc.rust-lang.org/rust-by-example/custom_types/enum....


I like Rust a lot, and I'm doing a couple of projects in it.

But Rust is an advanced language. In the company I work for, Rust would be a no-go simply because some developers would struggle with it too much.

Go hits a nice sweet spot. You don't need to worry too much about whether something is on the heap or stack, or what the overhead of copying a struct is. It's conducive to incremental "sculpting": You can write a broad, naive implementation where you even wing it a bit with types first, and then slowly fill in detail, refining types (to the extent Go lets you), locking down performance, and so on.

My feeling with Rust is that you start and end with just one level of granularity: You can't really defer the implementation of lifetimes and copying semantics and so on until later; you have to add clone() at the beginning, whereas Go is all copy by default (with the exception of interfaces).

But yes, Rust's enums are much nicer than what Go has.


It looks like a monad for Either or Result to me, and I think it's an improvement.

The good news is if you hate it you don't have to use it, but since it'll be in the language (if they take it) you'll have to know how it works.

I don't think this is worse than things like "for:else" in Python.


Interesting to go with Monad "by convention". They won't build in an explicit type but rather use convention (placing err as the last return value and giving it a certain shape) to replicate that behavior.


Yep. Being practical and simple with easy to remember rules seems to be more of a value to the designers than to build a language with “features from Ivory Towers” (in the Haskell sense anyway).

The designers hope to give Go users “what’s needed to write good code” without having the learning curve of other languages.

I suspect this balance is hard to strike at times and I rather admire their efforts.


The statement looks like an assignment, and unless I was really used to seeing it, I would never expect an implicit return.


What the hell happened to the catch/handle draft proposal. That looked a lot more useful


It was abandoned in favor of the try proposal, mostly because the handle statement was becoming partly redundant with the defer statement, but with subtle differences.


I use golang quite a lot. The way this should be used is being ignored by all of the examples I have seen so far.

When an error occurs people should be doing what they already should be doing. That is to say, immediately log/wrap the error with context (file + line information) so that the error can be immediately found. Then if there are any callers of this failing function, they are the ones who should use try.

  func A() error {
    if err := do(); err != nil {
      log(err)
      return err
    }
    return nil
  }

  func B() error {
    try(A())
    return nil
  }
To be honest though, I don't personally like this proposal.


The proposal for `try` made my eyes bleed. It combines a number of features from metaprogramming and generic types but in a hard-coded manner and doing a number of assumptions about functions signatures, etc.

I guess that's the idea of Go, to restrict the language power by giving just the functionality the designers think the programmer should have, but to not allow anything too fancy.

However, if you put some thought into it, it sounds rather silly once you realize that this only limits your ability to use the language without really "simplifying" anything, since it adds to the general cognitive load of the language: in this case, one more exceptional case.


One of the most common issues (from surveys, etc) that Go users want to see addressed is verbose error checking, and this proposal is to add some syntactic sugar to make one of the more common error checking idioms less verbose, in order to address that specific issue.

You are suggesting that they implement a metaprogramming system (and then build this solution on top of that) instead? On the grounds that it will reduce cognitive load overall?

Surely that is a drastically more difficult solution to the problem this is meant to address. I think it might be difficult to implement that in a way that maintains key properties of Go (like consistently fast compile times), and it might add a lot more work to tooling that needs to understand code, like editors. And although that would allow each Go project to invent its own error handling mechanisms, and some projects might benefit from that, there are clear downsides to that as well.


Having read a decent amount of go code I'm really quite convinced the restrictions are beneficial. Personally I find it easier to read than just about any other language.


Can I politely suggest that the fact that you have read a decent amount of Go code is directly related to the fact that you find it easy to read? I've read and programmed in many languages, and apart from some outliers, the more I've read of a language the more readable it is.


I'm of the opinion that making a language too simple results in the worst code. Sure it's easier to learn, but you end up with crap when you're trying to do something complex and there's no good way to represent it.

BASIC and VBA are extreme examples but they support my opinion :).

Go has a ton of shortcomings. No good popular package management system with versioning, hacky JSON parsing, no generics. Rust is becoming a good counter example, they listen to the community and pump out features that everyone is excited about. Like async recently.

Google has a habit of ignoring the community. Angular is a great example. I've been following a feature request for close to 3 years that has over 100 comments asking for it. A simple thing too. Many people have opened PRs and begged for it in vain, even as recently as a week ago.


it's much less magic than defer, panic and recover - it's basically just a macro. if that irks you, all of the above should make you hate the language.


My biggest concern is how they will implement generics and whether it will lead to generics being abused and making codebases worse. FWIW, even though Go is less expressive than certain languages, it wins in clarity and readability.


> Go is less expressive than certain languages, it wins in clarity and readability.

Not having to deal with inheritance is clear and readable enough. Generics wouldn't make the language less clear and readable. Furthermore, it's not an all-or-nothing case: generics could be restricted to generic functions, for instance, not types per se. Go already makes a lot of trade-offs; I don't see why this trade-off would be controversial.

But to me generics are the least of Go's problems. A lot could be done to make the language easier to use, like covariant interfaces for instance; tons of other stuff could be added in a relatively "invisible" fashion that would not impact the language's syntax.



Some don't like the magic, and some are worried about losing annotated errors. If you can tolerate just a little bit more magic, you can solve the error annotation problem:

    f := try(os.Open(filename), "opening config file")
Place the context string just after the last (error) parameter. That is, make try also allow the pattern: try(a1, a2, ..., error, string). Bonus points if the last arg could be a func(error) error; that way you can pass a local closure to wrap the error in a scoped context and also avoid pre-computing an expensive error string even in the non-error case.

It's just a language feature, you can make up any interface you want. We're already on the magic train, might as well ride it.


All of these have already been discussed in the GH issue. It is all being taken into consideration. Check out https://swtch.com/try.html for a thematic grouping of the issue.


That's a heck of a reference, thanks for sharing! And there it is https://swtch.com/try.html#args under Alternative: Add arguments to try. Seems I have some reading to do.


An interesting Go2 proposal that didn't make the cut is "change int to be arbitrary precision", i.e. make the builtin int type a bigint to avoid integer overflow and conversion bugs:

https://github.com/golang/go/issues/19623


Hmmm, so Go is getting monadic behavior with "try" to avoid examining every single error with "if".

I think this is a rather large improvement to error handling. I need to read more about how it works with "go" and "defer" (it's all in there).


An improvement, really? It seems to me that it reinforces a bad pattern (returning an error directly without annotation) and will make return analysis of functions in code review a lot harder. For what, to reduce typing? Doesn't strike me as a good tradeoff, and certainly feels at odds with Go's principles so far.


I'm not sure I'm getting what you're saying.

Try can only be used in functions that use error types as the last returned type. The compiler should be rejecting uses outside of that. So why is this any harder than thinking about "go" or "defer"?


Currently the only way out of a function is via the `return` keyword, or, exceptionally, panic. Those are easy to spot and easy to understand. With try, especially if it's nested in a complex statement e.g.

    fmt.Fprintf(os.Stdout, "%s: %d\n", try(getID()), try(getCount()))
a function may return in surprising places.


I'm not sure that they are easy to spot.

   fmt.Fprintf(os.Stdout, "%s: %d\n", getID(), getCount())
Could have hidden panics in it and you would never know.


Yes, but panics are exceptional, and not something you need to worry about when doing a normal code review. Try would be commonplace.


It looks like a big improvement but it's definitely not monadic. It's a hidden return.


If you sequence (>>=) a series of Maybe's in Haskell you have hidden returns as well.

Since we're talking about the application of "bind" you could think of the "invisible semicolon" in Go as the same kind of bind between "try" statements in Go, and get very similar behaviors - conceptually.

At least, that's how I think I've understood "try" to work.

(pseudocode...)

    f := try(open(...))
    num_bytes := try(write_some_bytes(f, ...))

I hope my function bailed out early at "f := try(open(...))", because the "num_bytes" line literally makes no sense at all if f hasn't even been declared ("f :=" vs "f =").

In the end, if I don't like it, I don't have to use it, but I still have to understand it if it's in the language and I'm reading other people's code.


There's a lot of talk in these comments about the new 'try' built-in and error handling in general which gives me the sense that errors are more controversial and in need of criticism than the proposed generics design.

I see it the other way around. I'm in favor of generics for Go, but I can't support this design and I think it will have a negative effect on library stability.

Rather than narrowing the scope of contracts to remain mostly orthogonal to existing features, they have it totally open-ended. They clearly didn't want to put too much restraint on what can be stipulated by a contract, and consequently it just feels like a solution looking for a problem (or like they didn't want to make any hard decisions for which they might be criticized). Contracts as they stand are incredibly powerful and stomp all over the type system and interfaces in terms of their practical utility. This is not orthogonal design, or really any sort of design; it's a blank page. They feel like a complete abdication of the designers' responsibility to make Go a simple language that pulls in minimal features while reducing overlap. It's context.Values again, but far worse. Please stop trying to make everyone happy; it's impossible (and I realize the irony here).

I liked generics when they were going to be super strict (and consequently super simple from the user's perspective), no upper or lower bounds on the type, just essentially a glorified gogenerate. This feels like you all are trying to answer to the critics rather than to the code.


Not a huge fan of the try built-in spec as proposed but I struggle to understand the reasoning behind people arguing that the current error checking/handling paradigm "at least makes the code readable/understandable". It doesn't. The whole issue with golang error checking boilerplate is that your brain will start seeing it as noise, which may obfuscate issues within the code. This is especially true for code you have not worked on yourself.


Yeah, and this new change doesn't alter how explicit the code is. The new "try" just means "return error from here without setting the LHS if the RHS returns an error". It is the standard boilerplate captured.

When this is in common use, it will greatly improve the visibility of non boilerplate error handling.


Go 1.14 will include the following "smaller" proposals:

- A built-in Go error check function, “try”.

- Allow embedding overlapping interfaces.

- Diagnose string(int) conversion in go vet.

- Adopt crypto principles.


> - Adopt crypto principles.

Turn a profit by pumping crappy tech to the masses?


If you are actually interested in the proposal: https://github.com/golang/go/issues/32466


It has nothing to do with cryptocurrencies?


Here's a short example comparing existing error handling, the check proposal, and the experimental try proposal

https://gist.github.com/kyleconroy/e48c83425349f2d3954d4e764...


Why is there a defer in the "try" example? To just show wrapping even though the "existing" one doesn't wrap? Also, the "try" example doesn't defer-close "w" in case copy fails. Seems that they aren't the same example.


try() implicitly returns when there's an error, and sets the error in the case of an error. Then the defer statement would handle the error, as the return is happening.


I'm saying it doesn't match what "existing" is doing which doesn't wrap the error. If you want comparable examples, they need to be doing the same thing, not one wrapping w/ an Errorf and another not.


LOC reduction of 10% with check/handle statements and 20% with try()/defer along with somewhat reduced clarity of what will happen on error.

I'm not a fan of the verbosity at present either, but seems like the check/handle syntax is less direct and the try()/defer func syntax is less flexible.


Funny how it's also wrong.

  handle err {
      w.Close()
      os.Remove(dst) // (only if a check fails)
  }
the error returned from w.Close() is not handled.


That `Close` call only happens in the case that an error happened during either copying or closing. The point of checking errors on `Close` is so that you catch problems during flushing of any buffered writes. In this case, the `Close` is not to ensure the data is flushed, but to ensure that the file descriptor is released in error scenarios, justifying that the error need not be checked.

On the other hand,

  w := try(s.Create(dst))
  try(io.Copy(w, r))
  try(w.Close())
  return nil
this code has the problem where if there were an error in the `Copy`, the file descriptor would be leaked.

So, yes, one of the examples is wrong.


Funny, I thought I would like the ”try”, but looking at these examples I prefer the Go1 version.


I am not a big fan of "magic" happening behind the scenes.

I understand that the current error handling is a bit too verbose but at least it makes the code readable.

I am not sure if this is really the best way to fix this issue TBH.


I'm not familiar with golang, but I did read about a concern with variable mutability and race conditions when communicating across channels: https://bluxte.net/musings/2018/04/10/go-good-bad-ugly/#muta...

""" As we saw above there is no way in Go to have immutable data structures. This means that once we send a pointer on a channel, it's game over: we share mutable data between concurrent processes. Of course a channel of structures (and not pointers) copies the values sent on the channel, but as we saw above, this doesn't deep-copy references, including slices and maps, which are intrinsically mutable. Same goes with struct fields of an interface type: they are pointers, and any mutation method defined by the interface is an open door to race conditions.

So although channels apparently make concurrent programming easy, they don't prevent race conditions on shared data. And the intrinsic mutability of slices and maps makes them even more likely to happen. """

Is this still a concern, and if it is, would it be addressed in Go 2?


That isn't really the sort of thing Golang is trying to address, at least, not directly. Go is very unlikely to add immutable data structures at this point. And Go is not built to prevent race conditions. There is a race detector that can detect many race conditions in running programs, but nothing in the type system.


IMHO type systems are pretty much impossible to change after the fact (see: Python's `str` type and the whole 2/3 migration fiasco), which is why I thought to bring it up since it's a major version change. I think the lack of support for efficient immutable types gets in the way of golang being a concurrency-first language, given that the language endorses CSP with native support for goroutines and channels in the first place.

How can you write simple, reliable, and efficient user code (https://golang.org) if using your favored concurrency model results in race conditions?


> How can you write simple, reliable, and efficient user code (https://golang.org) if using your favored concurrency model results in race conditions?

You can't. And the language is not expressive enough to be able to write generic immutable data structures as you have in Scala/Java/Kotlin/etc.


Dry answer: In practice, you try to not mutate values you receive unless you know you own them. Usually it's clear from the type signature. If ints are sent over the channel, or structs each containing an int and string, you know you can do whatever you want with them. If it's ambiguous, the sender should document whether the receiver has ownership.

I think with 1 sender and 1 receiver the common solution in Rust is the same, to copy (or maybe move) the values sent... the type system provides some safety guarantees but I doubt it would be much different for performance. You wouldn't be able to send an immutable reference over a channel in Rust, since the compiler wouldn't know where the reference's lifetime ends. You could resort to refcounting, but you would probably only put in the effort if the data structure was large enough that copying was prohibitively expensive.


99% of languages don't have race condition prevention and yet they're doing fine. Go has a pretty good race condition detector.


Indeed, I believe it's impossible to statically prevent race conditions in a turing complete language. Data races can be prevented, but that's just a subset of all possible race conditions.


I think Erlang does? I don't really know Erlang, but AFAIK there's no way to share anything mutable between processes. Any data which is shared is (semantically) copied when it's sent to a different process. (Ignoring FFI, and such.)

Same thing applies to the STM "subset" of Haskell -- conformance to which is statically verified by the type checker[1].

[1] There's no particular magic going on wrt. checking -- STM is a monad. It just happens to have "magic" runtime support.


Those disallow data races, not race conditions. I find that this blog post explains the distinction well: https://blog.regehr.org/archives/490


Ah, yes. My bad... I missed that crucial detail. Still, maybe a non-TC protocol language could fulfill the criteria?


The purpose of channels is to provide synchronisation and messaging between goroutines, not to prevent race conditions on shared data.

Go also provides mutexes that are more suited for that.

Sure, passing a pointer through a channel does not in itself provide any protection to the data it references but such protection is not always required. There's a separation of concerns that allows more flexibility.


> such protection is not always required

Safety should be the default, with a way to opt out for people who have thought really hard about the consequences. If I have to opt into the safe version, I need tooling to complain about everywhere I didn't do so.


A pointer is neither safe nor unsafe and in my view people should be expected to know what they are doing when passing pointers across concurrent threads of execution.

It's not rocket science.

Goroutines are supposed to be lightweight so it makes sense to allow lightweight communication.


On the other hand how often have you read in documentation “Do not mutate this buffer” or the equivalent? It is very possible to create traps for people who pick up your code later on. Having the compiler detect those problems rather then it causing a runtime error is something highly desirable.


> This means that once we send a pointer on a channel, it's game over

Yes, as long as you keep sharing the references. So just don't reference that stuff you sent across a channel. Usually there is little reason to use pointers in Go anyway [1] except when building methods to mutate the associated type (think: struct).

That said, it's fairly easy to build cascaded structures that are automatically deep-copied during assignment by not using pointers. So that's a pretty safe way to go, moreover one should keep in mind that Go doesn't try to be functional in any way, so immutable data is not such a thing there.

[1] https://medium.com/@vCabbage/go-are-pointers-a-performance-o...


Until you blow out the stack.


Does someone know if there is a good reason why try uses parenthesis? I mean, wouldn't it look even cleaner without them?

  f := try os.Open(filename)


They did not want to create a new "keyword" that would possibly conflict with some existing variable name. So try() is a built-in function, similar to len(), and if you already have a variable named "try" (or "len") then it shadows the built-in (takes priority in that scope).


It's mentioned in the design doc under "Properties of the proposed design": https://go.googlesource.com/proposal/+/master/design/32437-t...


One possible reason is that funcall semantics will probably change (Go 2 is likely to have generics), and so simple funcalls keep the language orthogonal; new operators are forever.


> Go 2 is likely to have generics

Big if true. It's one main complication keeping me from adopting Go for more things.


It's mentioned in the post we're commenting on.


I sure am glad that no one did a "Go 2 considered harmful" gag.


came here just to make that joke :(


I'd rather have nicer command output, which could use a lot more love than this try() stuff, and maybe ditch the weird go.mod format. It doesn't really add value; if anything it hurts, because you can't manipulate those files with standard encoders/decoders.

When it comes to error context it would be nice to facilitate structured logging, rather than arbitrarily formatted strings.


Argh ... check/handle is so much better


Ah, contracts seem ugly. If only operators could be method names we would not need contracts, we could just define generic types like an interfaces:

    type T generic {
      +(T) (T)
      ==(T) (bool)
    }
Dunno where to suggest this.


(User Defined) Generics support.

edit: clarify.


Why would I switch from Perl 6?


I have been using it for a year. It's a glorified bash script. Let's hope it will become something more similar to a programming language.



