Ask HN: How are you using Go to write production-grade back end services?
52 points by llegnaf on April 16, 2020 | 24 comments
As the title suggests. Interested to see how (and for what) companies are using Go to build production-grade backend services.

- What are you using for tests?

- Do you use dependency injection or any mocking frameworks?

- Are you using any routing frameworks?

- HTTP web frameworks?

Would love to know the ins and outs! What is good, what do you not like about Go, any pain points? Anything you would want to improve?

Thanks!




For our backend[1],

- We are using the default test framework that comes with Go: go test.

- We usually follow Postel's law, which translated to Go would be: `Accept interfaces, return concrete types`[2]. This enables us to pass in fakes that satisfy those interfaces during tests. I haven't checked what kind of performance cost (if any) we may be paying by passing around interfaces instead of concrete implementations, but performance has not been a problem so far, so we are happy with our approach.

- We do not use any http web frameworks, we are just using stdlib's net/http. We pair that with certmagic[3] for automated TLS certificate issuance and renewal.

I like the performance of Go, it is easy to pick up and it comes with a pretty great stdlib.

What I do not like is the fact that it has nil pointers, and you tend to run into one or two nil pointer dereference errors once in a while.

1. https://errorship.com/

2. https://blog.chewxy.com/2018/03/18/golang-interfaces/

3. https://github.com/caddyserver/certmagic


- Web: No framework, I mostly use https://goswagger.io but for basic stuff then just the standard http library + https://github.com/julienschmidt/httprouter

- Testing: https://golang.org/pkg/testing/ + https://pkg.go.dev/mod/github.com/stretchr/testify

- Mocks: https://github.com/golang/mock

- Dependency Injection: none that I'd recommend; I'm currently a user of https://github.com/uber-go/dig and I regret it.


I've built a pretty complex and high-performance Go-based microservice architecture (~12 services) as a backend, with testify and gomock for testing. Pretty happy with those choices; never had any issues. The backend doesn't use HTTP, so no opinion there.

I recently started migrating some of the services to Rust for performance reasons. I would say that Go's biggest strength is perhaps its biggest weakness as well: It "just works". Types are loose (no generics), concurrency is extremely easy. This means I can write working Go code really quickly, but as project complexity grows, code tends to become kind of a mess.

For example, channel-based concurrency can become hard to reason about if you have a complex service. A few times I ended up putting mutexes at various places just to make it work despite knowing that it's not the "right" thing to do. Mutexes then come with their own issues. Once you have a deadlock or race condition, good luck debugging it. There exist multiple packages and tools to detect race conditions and deadlocks, so this seems to be a common problem. I must've spent days or weeks worth of time looking at pprof output. You may say it's my fault as a developer to write sloppy code, and that may be true, but the Go language encourages such code with the decisions it made.

The same goes for types. I never needed generics, but the fact that typing is so loose means you can get away with a lot of sloppy code without being punished for it. This can be great for moving fast, but may come to bite you later on.


My work mainly involves setting up internal enterprise applications.

- The standard testing package

- Manually set up dependency injection, generally in a flow like config => databases => models => routes => server.

- https://github.com/gorilla/mux for routing and https://github.com/swaggo/swag for documentation

- As for HTTP frameworks I just use the standard package and I really enjoy https://github.com/sirupsen/logrus for logging

We use IIS. I really wanted to use Go, so I found a way to run Go applications on top of it. That was not fun to figure out, but it lets me make use of the Windows authentication underneath IIS. It required a custom module that forwards some headers to my Go applications. Also, the version of IIS we use doesn't even support HTTP/2, which sucks.


I remember setting up Python to run under IIS.. what a ballache that was.


Do you have a writeup/reference for Go under IIS?


- Ginkgo and Gomega for tests, though sometimes just stdlib on smaller projects. I like Gomega very much, great matchers, lots of useful helpers.

- Manual DI, passing in values and objects. We've hand mocked a handful of components: just what's essential for testing.

- Mostly not using HTTP. But a few projects do, some simply use basic stdlib, others use Echo.


At a previous job we did a 'tech experiment' to write a Go REST service. As far as I'm aware it's still in use in a production-level capacity. It used the following:

- gotest

- No DI, but in hindsight this would have been the right thing to do. The dev team is 100% Python, so mocking was more the talk of the office than DI/IoC.

- For routing and HTTP the service used httprouter https://github.com/julienschmidt/httprouter

I think it's a fantastic language, gofmt and gotest are both great utilities, and the short time to create an executable made development turnaround a breeze.

However, I think for the purposes of a simple REST service I would probably use Python/Flask. Less boilerplate and for a Python team, it would have made more sense ...


Testify for assert and require. GoMock for mocking, but only when it really adds value over a handwritten fake. Table-driven tests wherever possible. No need for heavyweight test frameworks.

Plain HTTP/JSON encouraged for edge-facing services only, typically not in Go. Between services, GRPC, integrated with our metrics, distributed tracing, and auth. When plain HTTP does happen, gorilla/mux.

Wire or FX for DI.

Implicit satisfaction of interfaces is the core of the language. The type system will feel maddeningly obtuse until you learn to use interfaces effectively. It took me a few days to learn the language's structure and start shipping code, but a few years to grok the implications of that simple structure and design software in harmony with it. Interfaces are the key. Think Haskell typeclasses, not Java.


I'm using Go with Redis for various aggressive caching needs. I batter Redis and Go performs very well. A few other languages will work mostly fine for my purposes, however I like working in Go and have always had a great experience with its performance.

No traditional testing. Standard library and Redigo. That's it.

No pain points for what I'm using it for. I usually try to avoid complexity in anything I build. This is a rather simple system that is only meant to take a high volume beating, cache to Redis (content later retrieved & presented by another part of the application via another language) and be reliable.


I've been using standard router and Gorilla Mux for about 5 years now and have so many snippets that I can compose apps with.

I recently built an LRU-based rate limiter [0] that is compatible with both - it might be useful! Obviously it would need love for multi-host, but PRs are welcome.

[0] https://github.com/17twenty/gorillimiter


I'm using gotest for testing, my testing is really primitive right now, no mocking frameworks. The only non-standard library I'm using is gorilla/mux for routing. Go feels a little verbose and restrictive coming from Python, but after being acclimated, I love it. It's extremely productive, performance is nice, deployment is so easy. The pain points for me were understanding the package system and structuring my project, which I worked through eventually.


I've been wanting to do a blog series on how we use Go. Here is a short version.

Tests:

We use the testing package for unit tests and (maybe too much) use interfaces as arguments so we can create test fakes that behave the way we want so we can validate error paths, logs (yes, we assert on logs), metrics, and, of course, green/good expected behavior.

We then have acceptance-level testing. These tests ensure the system works as expected. We leverage docker-compose to spin up all our dependencies (or, in some cases, stubs - but only rarely). We then have a custom testing package built atop the stdlib one. It behaves very similarly, but allows for registering test suites, pre- and post-suite methods, pre- and post-test methods, and it generates reports in JSON/XML for QA to keep track of test cases, when they ran, pass rates, etc. As part of our SOC 2 compliance, we have these to back up our thoroughness in testing. Tests can also have labels, so we can run only the tests for a given feature or a given suite. These tests hit the running binary of the service under test, so if it works here, it will work when deployed.

Before a service makes it to prod, it lands in staging. There, a final suite of tests goes through user features and ensures that things are OK one last time. Total black box.

Dependency Injection / Mocking:

I am very, very much against mocking. I did write a blog post about that; though I think the thing it highlighted most is that I need to write more :). You can google "When Writing Unit Tests, Don't Use Mocks" if you want to read it.

When you mock, you create brittle tests that are tied to the assumptions in your mock. Instead, we use "fakes": test structs that match an interface and allow us to control their behavior. You might ask how that is different from a mock. Mocks carry assumptions and make your tests more brittle and subject to change when you update the code (which is what Martin Fowler concluded in Mocks Aren't Stubs). People tend to write "thing called 4 times, with arguments foo and bar, and will return x, y, z... blah blah blah." When you use a fake struct that matches an interface, you can make it as simple or complex as needed, and usually simpler is better. Return a result or an error. Validate the code does what you need.

We also avoid functions-as-parameters that exist only for testing, i.e. test code that passes a custom function that is not the function used in prod. These are easy sources of nil panics and are kludgy. Fakes get us what we need 99 times out of 100.

Routing frameworks:

We have folks who don't use them, others who use gorilla/mux, and others who use chi (my fav). They are convenient and make passing URL parameters easier. You can, of course, do this without a custom router. I like chi because it is stdlib compatible.

HTTP Frameworks:

Nope. I could see it, maaaayyyybee, if we were writing a bunch of CRUD apps, but we don't. The services my team makes tend to have few routes and not all the CRUD stuff. Even when we do a lot of CRUD work, it only eats up a few days. What we do have, however, is a project skeleton generator, so our projects all start out with the same basic directory structure and entry points. Everyone knows that your app starts in cmd/appname/main.go, for example.

Logging (and errors):

The other thing we leverage in place of HTTP frameworks is a custom logger, plus an experiment we are doing with custom error types. We have logging requirements to play nice in our ecosystem at work: logs are all structured JSON and have some expected keys, and the logger generates all of that. We looked at all the log packages and none matched exactly what we needed. We can store key-value pairs on the logger and pass that logger around (so you only have to do logger.Add("userid", userID) once, and all logs going forward in a request will have it). You get timestamps, app name, and a few other fields for free. You can create a child logger that has its own context kv pairs so you don't pollute its parent (helpful when you go into a function and want to add more detail to logs based on errors specific to that function).

The other thing we are playing with now, on our new project, is a custom error type that stores a map of key-value pairs. We can bubble up our custom error type, wrap it with more kv pairs at each level, and only log at the top. When the error is logged, our logger extracts the kv pairs and bingo: structured logs with context from each point the error bubbled through, including kv pairs that are only known at deep levels.

BuildPipe:

We run our tests locally usually. But when we create a PR, a build is kicked off using BuildKite (plugin system is really nice). A PR cannot be merged to master until the test suite passes, which includes the acceptance tests from earlier. After merging to master, a fresh build is run again and that creates artifacts that are then used by ArgoCD so we can roll our code out to our kube cluster.

I love Go. It is my favorite language I've worked in. There are warts, for sure. You can get that list anywhere. There are some oddities when assigning to structs in maps, nil interfaces, shadow variables, non-auto-checked discarded errors, and others. The biggest wart right now is the module system. I think that will improve over time.


Appreciate the thoroughness of your comment (if you expand further in blog posts, I’ll read em). How did you settle on your standard project structure and what does it look like?


Here is a fairly typical project structure you would find in one of our projects

    /.buildkite
    /.github
    /acceptance # for housing the acceptance level tests
      /bin # scripts to help
      /config # an anti-pattern, but we like a config package
        /config.go
      /tests
        / # misc go files for testing
      /$foo # any other packages directly related to tests or the environment
    /bin # any scripts to help with anything
    /cmd
      /$appname
        /main.go
      /$bar # any other installable binaries
    /config # an anti-pattern, but we like a config package
      /config.go
    /$raz # packages that support our service
    /internal # stuff we want to import but not let others import. Seldom used.
    /server # or $appname, our main service code
    .gitignore
    .Dockerfile
    README.md
    docker-compose.yml
    go.mod
    go.sum
    makefile


- Test -> stdlib + testify/assert + vektra/mockery

- Dependency injection -> manual DI passing values and objects down

- Routing framework -> gorilla/mux

- HTTP web frameworks -> stdlib + go-kit for the general architecture

We then have a pretty large internal library for all things shared


Heh, production grade depends more on the people in charge than the tech choices. The lack of exceptions in Go is the biggest stupid thing in software. Exceptions have always been a much less risky feature than multiple return values. In Go every single "function" has multiple return values, and the code is littered everywhere with if err != nil... garbage. It will never change, because it is apparent that exceptions are to Go as collection literals and operator overloading are to Java.


I've worked in exception-based code bases such as Python (Twisted specifically) and in Go. Unlike many, I've used both in the same organization, working on similar problems for nearly a decade, while the number of engineers working in the code and the number of requests it serves have gone up considerably. Hundreds of contributors. Billions of daily requests.

I humbly suggest that indexing on the lack of exceptions as a quality measure is a poor criterion. Ditching exceptions in Twisted Python and porting code to Go has produced vastly more readable, maintainable, and performant code in multiple cases for us.


I understand golang was an attempt to fix C's shortcomings and not to create the next C++ or Java. But I also understand programmers get attached to ways of doing things that are bad. Case in point: people who do println("entering function doit()"); println("exiting function doit()"); for every function in the entire codebase, because that's how they learned to "debug" and they never tried anything better. Error returns in golang are like that: they have to be there for every function in the codebase, and they balloon the code by a non-trivial percentage.

As for performant, as of at least ten years ago it became impossible for a person to perceive a performance increase by removing exception handling. A human can't perceive the microseconds.


> As for performant, as of at least ten years ago it became impossible for a person to perceive a performance increase by removing exception handling.

You are really hung up on the exception handling. I never claimed that the error handling made Go more performant. I wrote "performant" as in "there are other things that are important and should be taken into consideration aside from use of exceptions". It might not be as apparent to others now, since you edited your original response, which was along the lines of "not using exceptions is the worst software decision of all time" (note I don't recall the exact wording, but that feels close).

As for the new edit you have:

> litered everywhere with if err != nil... garbage

I just grepped one of my Go code bases. It is 40,636 lines of code and has 326 "if err !=" lines. Less than 1% of the lines are error-handling preamble, and we take our error handling very seriously. In fact, a lot of those error checks don't directly bubble up: they perform fallback logic, logging and/or metrics, or set sane default values. Some do bubble up, but it's less than half; grep suggests 134 bubble up.

What I value about local error checking is that everything you need is right in front of you as a reader of the code. When I worked in Twisted Python, one particularly bad case of exception handling made it so I literally could not use ErrBacks in one part of the code because in some other module someone used an exception for a non-exceptional error.


Does go actually generate machine code that continually checks for err != nil? Because that can be more expensive than generating an unwind table and ignoring it until an exception actually happens.


Exceptions aren't the only way to handle errors. Rust leverages algebraic types to good effect, for example.


sounds like someone is too focused on aesthetics instead of maintainable code.

exceptions have far worse impacts than multiple returns and if statements to check. particularly that they are basically new-age gotos: throw an exception, and up and up and up it goes; where it stops, who knows!


That's funny, because Java devs for decades said that multiple returns and tons of if-statement trees were the codings of satan. I am doing golang code right now and all I see are pages and pages of "if err != nil {" or "if err := doSomething(); err != nil {". Oops, I forgot to check err; guess my code is not as maintainable and my process is going to panic and die in the middle of a request. Guess I will write a defer-recover for every function in the call stack. Sure wish there was some handy notation to make it easier to try that and catch the failure, wink wink.

Every generation has their plaid pants. For c programmers it was ASCII art box headers for every file and function. For C# it was IMySuperLongClassInterfaceTypeOLEActiveXInterface. For java it is no collection literals, operator overloading, and fights over lambdas for a decade. For golang it is error return values EDIT: and also no generics/templates.




