Go Concurrency Patterns: Context (golang.org)
117 points by campoy on July 29, 2014 | 17 comments



From the article:

> At Google, we require that Go programmers pass a Context parameter as the first argument to every function on the call path between incoming and outgoing requests.

This pattern highlights a fundamental difference between Go and languages that encourage thread-local state. The argument for thread-local state is that functions need not pass a context variable explicitly, since it is (hopefully) already populated in the thread's state. The "hopefully" part is where thread-local state becomes unpleasant: if you want to do concurrent work, you must copy one thread's state over to the other. The other thread has no guarantee that the state is there, so the programmer must either hope it is or verify it, which is annoying.

The context-passing style doesn't require that kind of hope, but it does require each function to pass the context on to the next. When I was working on a medium-sized Go project, I would often need a context deep in the call stack and be forced to add it to many functions up the tree. Hence, a standard that all functions take the context seems excellent: it is no longer something to think or worry about, and it doesn't increase code length by an unreasonable amount.

I heartily agree with this recommendation and will be using it going forward in my Go projects.


>Hence, a standard that all functions take the context seems excellent: it is no longer something to think or worry about, and it doesn't increase code length by an unreasonable amount.

How about a new paradigm - context-oriented programming :) - where, like "this/self" in OOP, ctx is always present? And looking through the posted article, Google is somewhat there already with its instance-level kind of context inheritance (the context tree).


> How about a new paradigm - context-oriented programming :) - where, like "this/self" in OOP, ctx is always present?

This is how OOP in C is done, though usually it's done with the context being an opaque object.

Having functions with multiple arguments that depend on other functions with multiple arguments, which in turn depend on yet other functions with multiple arguments, sooner or later results in function dependencies that are very hard to manage; "context-oriented programming" solves this problem very nicely.

A function that takes a single argument, a pointer to a struct, can be extended indefinitely without breaking the function's API. A function that takes its arguments individually will break every time it needs to be extended with an additional argument.
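A small illustration of that point, with invented Options/dial names: adding the Timeout field later doesn't break older call sites, which simply leave it at its zero value.

```go
package main

import "fmt"

// Options gathers what would otherwise be a growing list of positional
// arguments. New fields can be added later without breaking callers.
type Options struct {
	Host string
	Port int
	// Timeout was added later; old call sites keep compiling unchanged.
	Timeout int
}

func dial(opts *Options) string {
	if opts.Timeout == 0 {
		opts.Timeout = 30 // default for callers written before the field existed
	}
	return fmt.Sprintf("%s:%d (timeout %ds)", opts.Host, opts.Port, opts.Timeout)
}

func main() {
	// A caller written before Timeout existed still works.
	fmt.Println(dial(&Options{Host: "localhost", Port: 8080}))
	// localhost:8080 (timeout 30s)
}
```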


>The hopefully part is where thread-state becomes unpleasant, because if you want to do concurrent work, you must pass one thread's state to the other. The other thread has no guarantee that it is there, so the programmer must either hope it is or verify it, which is annoying.

Right, so then you are in a situation where you can always rely on it implicitly, until you can't.

Much better to be consistent and force the inclusion everywhere as you mention in your second paragraph.


How is this Context supposed to work with middleware packages that build on the standard Go http interface? It seems to me that you have to declare Context as a parameter to your functions to use it. Am I misreading something?


At Google, we use two approaches:

1) add an explicit Context parameter to each function that needs one; typically this is the first parameter and is named "ctx". This makes it obvious how to cancel that function and pass stuff through it, but it's a lot of work. We are developing static analysis and refactoring tools to help automate tasks like this.

2) use a package to map http.Requests to Contexts. This requires that whatever server handler you're using register a Context in a map for each http.Request and remove it when the request completes. You could do this using the gorilla context package, for example.

My personal preference is the explicit ctx parameter, since then libraries are agnostic to the framework being used. Different frameworks can provide their own Context implementations; middleware should not care.


Probably something like:

    // ctx is captured by the closure below, so the wrapped handler can
    // consult it before and after delegating to h.
    func MyHandler(ctx context.Context, h http.Handler) http.Handler {
        return http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
            // do stuff before handler
            h.ServeHTTP(rw, req)
            // do stuff after handler
        })
    }
Then you can just chain your handlers and pass a shared context through.


In Interpose, that's exactly the pattern I use to pull in methods that require context while still satisfying the http.Handler interface (e.g., https://github.com/carbocation/interpose/blob/master/example... ), so it seems that this pattern would work well with Google's Context.


Based on a cursory glance I'd say those middleware packages could also use that unless they've rebuilt net/http entirely. Some of the examples demonstrate that.


I am not too sure about all of this. I like the idea of some sort of format for specifying when long-running potentially blocking function calls might end, but creating an entire external package for that purpose feels to me a bit architecture-astronaut-like.

I did think it seemed nice as a pattern, but not as a library... then I got to the idea of a black box for context values. That really concerns me. If a function and a function called by that function both depend on some sort of parameter, why not just specify it as an argument like everything else? If it were somehow possible to only have request handling related functions for a specific application require a context value, that would make sense, but requiring every function called in the process of handling a request to have a ctx value as its first parameter? That seems dangerously infectious, and like using a rake to scratch an itch.

I'll have to wait and see how libraries implement this in practice, but I hope most libraries will continue to just follow the simple, elegant route - if a function may block, document it, and let the caller handle putting it in a goroutine, instead of creating yet more boilerplate.


It's not type-safe of course. Each function taking a Context has an implicit interface (the required key-value pairs that the caller must provide in the context) and failing to provide an expected binding will likely result in a runtime error.

This is why dependency injection frameworks like Guice and Dagger went with a different approach: they provide environment values to a class's constructor using reflection. Then the framework can do type checking at startup (for Guice) or compile time (for Dagger).

However, that has different costs: it requires a class per component instead of a function per component, and the framework is more complicated. This is an engineering tradeoff and apparently the developers of Context decided on simplicity.


This isn't exactly dependency injection; this is dynamic scope. It does have some functional overlap with request scoping in Guice, but you could imagine a world in which you have both context propagation and dependency injection.

However, even on moderately complex Go projects, I have not yet found a need for dependency injection. Its major value in Java was testing, which is usually done another way in Go. (Fewer mocks, fewer fakes, more access to package-private parts in tests.)

Context propagation, on the other hand, has proven valuable. We get cancellation and timeouts that work reliably across RPC boundaries.


I read that they have static analysis tools to track the flow of Contexts, making it much easier to verify that Contexts are threaded through correctly, which can make things a bit safer statically. It seems Go-like, I guess: instead of language support for that sort of type system, a simple enough language plus good enough tooling to implement the analysis externally.


Yes, we are working on static analysis and automated refactoring tools. We will make them available publicly when they are ready, but that won't be very soon. We wanted to publicize Context now to encourage people to start using it and incorporating it into new code and frameworks.


What about composing together the various environments? Suppose we have a stack of three wrappers, A (the base provided by the framework), wrapped by B, wrapped by C. A declares an interface for what it provides, and provides a context object that implements it. B declares the additional interface it wants to provide, and composes in the object from A. (B's interface should probably compose in the A interface.) C does the same thing to B's interface. (Or, possibly, merely composes in A's interface.) In the end you end up with an object that is fully type-safe and statically checked, and has all the bits you want, with just a bit of management in the framework.

I'm not sure why none of the frameworks seem to be pursuing this, so I'm interested in why this doesn't work. (And yes, it's not completely as slick as a dynamic language... I'm comfortable with it taking a bit of manual, explicit work, since none of the other options are completely automatically working with no negative tradeoffs either.)

Go definitely doesn't have the machinery in place to dynamically create an interface for C that might compose A or might compose B, such as Haskell might do, but it seems like it could still work to me.
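A sketch of that layering in Go, with invented EnvA/EnvB names: B's interface embeds A's, the concrete environment satisfies both, and the compiler checks everything statically.

```go
package main

import "fmt"

// EnvA is the base environment the framework provides.
type EnvA interface {
	Logger() string
}

// EnvB composes in A's interface and adds its own requirement,
// mirroring the B-wraps-A layering described above.
type EnvB interface {
	EnvA
	DB() string
}

// env is the concrete value built by the outermost wrapper; because it
// implements EnvB, it satisfies EnvA anywhere only the base is needed.
type env struct{}

func (env) Logger() string { return "stdout" }
func (env) DB() string     { return "postgres" }

func needsA(e EnvA) string { return e.Logger() }
func needsB(e EnvB) string { return e.DB() + " via " + e.Logger() }

func main() {
	var e EnvB = env{}
	fmt.Println(needsA(e)) // stdout (EnvB satisfies EnvA statically)
	fmt.Println(needsB(e)) // postgres via stdout
}
```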


I didn't know this existed until now. I believe this might be exactly what I need to finish my personal project https://github.com/pothibo/irrigation/tree/master/lib/osmosi...


Does this mean new packages/bridges that support context.Context will get released for back-end datastores like Elasticsearch, AWS, etc.?



