Well yeah, the default stdlib HTTP handlers are very basic. The cool thing is that the standard library is replete enough to build tons on top of that, though. CORS headers using nothing but stdlib for example:
You end up writing a ton of boilerplate like this if you're sticking with the standard library, but keeping awareness of and control over every aspect of what you're doing is easy in this language. If I don't understand something in the standard library, I just go to its definition in the Go source code and usually end up figuring out what I need. It's really impressive how much you can get done without using any packages if you're writing anything net-facing that just needs to do one thing and do it well.
This is a good example of how standard libraries should be. They are fine for simple use cases, and they provide the basic primitives for more complex third-party libraries to build on top of. Stuff like basic HTTP functionality.
Anything more complex than the example can be streamlined or abstracted away by a third-party library. People get very opinionated about how those kinds of abstractions should work, so it's probably better for the core Go team not to pick a model and spend time building it.
I like the way Axum for Rust handles this. All response handlers return something that implements IntoResponse. Lots of types implement IntoResponse, thus making the simple cases really streamlined.
async fn create_user(
    Json(payload): Json<CreateUser>,
) -> impl IntoResponse {
    let user = User {
        id: 1337,
        username: payload.username,
    };
    // this will be converted into a JSON response
    // with a status code of `201 Created`
    (StatusCode::CREATED, Json(user))
}
If the handler is complex enough that branches can't all return the same type, you can just add .into_response() to each return site, and now you're just returning a response, which also implements IntoResponse (as a no-op).
This avoids the problem discussed in this article while also avoiding much of the boilerplate of manually building responses.
I’ve not yet gotten to use it, but I understand Axum also implements IntoResponse for Result<A: IntoResponse, B: IntoResponse>, which is very convenient for the sort of error handling described above.
It’s one of my biggest annoyances with Warp: it only implements Reply for `Result<T, http::error::Error>`, so if you have a fallible handler with more complicated error types (e.g. with a JSON body, or even a 200 response for RPC handlers) you’re in the awkward position of having to join the paths back together (usually by converting everything to a Response by hand), which makes mistakes easier.
Correct, I spent about half an hour implementing IntoResponse for my own project's error type, but now I can just use normal error handling and I get presented with an error page on any issues. It's quite magic (not really, very streamlined).
Thank you! I ran into this branching problem the other day, ended up trying to return a complicated boxed future and gave up.
In retrospect it makes sense to do it the way you mentioned. I think because I’m not as confident with traits and rust’s brand of async I tried to get it to work the same way I would in other languages, which is to mess with the function’s return type myself, rather than let the compiler figure it out.
No, that's exactly the opposite of what it does. The article above explicitly criticizes gin for not returning a value.
Echo is closer to the pattern described above, but:
1. Echo lets you return an error response, but doesn't let you customize how the error is rendered _in a modular way_. You can only have a single global custom error handler that decides how to serialize the errors, and it needs to have knowledge of how to properly serialize every kind of error in your app. I have to admit this is generally good enough for most of the apps, and I don't think you can do any better in Go, considering it doesn't have type classes like Rust.
The best that you can do is to let errors implement a CustomHttpRenderer interface on their own and try to cast into it — but this is already doable with a custom error renderer in Echo.
2. Echo doesn't let you return a non-error Response from the handler - instead it's written directly to the ResponseWriter. This approach makes writing unit tests several orders of magnitude harder and also means you can forget to write a successful response in some cases. You can also both write a response directly and return an error, which can never be valid behavior (Echo takes the safe route and just ignores the error code and does nothing).
A lot of HTTP frameworks let you violate the HTTP spec to a surprising degree. Though with Rust you can just double check the return data with a middleware for Axum.
The author says that because we have a complex function, and the HTTP handler is designed the way it is, we cannot avoid running into problems. I disagree. These are two separate things.
You could take a more functional approach and separate the main logic from the side effect, which is writing the response.
func aMoreAdvancedHandler(w http.ResponseWriter, r *http.Request) {
    res, err := helper()
    if err != nil {
        http.Error(w, err.Error(), err.Status())
    }
    w.Write(res)
}

func helper() ([]byte, error) {
    // todo
}
Now the helper function has the problem of dealing with multiple operations that can potentially fail, which is exactly the problem in the example, not the HTTP handler itself...
For example, we could use a deferred function call with a named error return to check if something fails and guarantee that we always return an appropriate error, similar to the pattern used to free resources... I don't know what this pattern is commonly called. But again, the problem is the helper function, not the handler.
I don't want to use the FP terminology, probably most people are familiar with this pseudo-code:
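Roughly this, sketched in Go rather than pseudo-code (stepOne and stepTwo are hypothetical fallible operations):

```go
package main

import (
	"errors"
	"fmt"
)

// hypothetical fallible operations standing in for real work
func stepOne() ([]byte, error) { return []byte("partial"), nil }
func stepTwo([]byte) error     { return errors.New("boom") }

// helper names its error return so a single deferred check can
// decorate every failure path in one place, much like defer is
// used to guarantee resources are freed.
func helper() (res []byte, err error) {
	defer func() {
		if err != nil {
			err = fmt.Errorf("helper failed: %w", err)
		}
	}()
	if res, err = stepOne(); err != nil {
		return nil, err
	}
	if err = stepTwo(res); err != nil {
		return nil, err
	}
	return res, nil
}

func main() {
	_, err := helper()
	fmt.Println(err) // helper failed: boom
}
```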
It's very funny that you're trying to invalidate the problem OP is explaining and made the exact same mistake in 10 lines of code :) You forgot to return from the error-handling branch.
I agree, I wouldn't have been against a required return, but the mitigation is relatively straightforward by using custom handlers like mentioned at the end, if needed.
I really like Go's default HTTP handlers because they offer that straightforward flexibility. Nothing rocket-sciencey.
The beauty is in the simplicity. `http.Handler` is all about what is needed to handle HTTP and nothing else. It doesn't intend to make things pretty, or safe, or anything really, except to express the minimum surface area required to interact with HTTP while abstracting the commonalities. Beyond that, as the article mentions, it's quite easy to abstract and implement your own sugar layer.
Actually a bunch of router/web helper libraries do this in different ways which makes it a huge pain to compose middleware from different libraries.
Even Mux's middleware library is not consistent in how it does this for different middlewares so they have to be applied different ways or even have wrapper functions written
This. Middlewares typically expect to wrap an http.Handler so you lose all the middleware composition ergonomics by adding a return value to your custom handler signature
There are definitely edge cases here (like what if the handler already wrote a body and it was sent on the wire, etc), but this is absolutely the pattern to use, and one of my favorite parts of a type system that allows methods on function types.
Well yeah, I think it’s important to let you write bytes to the socket and close it when you want to, and if people are paranoid about making obvious, easy to spot bugs, they can use some abstraction to protect themselves.
So the problem with these "good" examples is that they expose the end user to error messages likely not written by you or your team, and not intended for general consumption. You need to be really careful about doing that, because an unexpected error could expose troubling things like credentials and general information about your system you'd rather keep private.
We've got a general rule of thumb never to expose the message of an Error to the end user.
Yep, that's why you'd want the error handler to try and type assert the error to something like an HTTPError where that's an interface you control. If it is an HTTPError, then you can "trust" the code/message and write that back to the user. Otherwise, 500 with a generic response body.
It's not something I even thought about, but I guess I see the point, I just don't agree. In the HTTP handlers it makes sense that you don't have return values, because: What would you do with that value exactly?
The HTTP handler should always ensure that "something" is returned, at the very least a return code. Once something has been returned to the client, can you really argue that there's an error?
There's a point to the risk of a missing return value, but you run the risk of complicating the request flow by adding one. I'd argue that most people would view the code path as having ended once the client receives a response. The request shouldn't really continue on the server end after that. There can be background processing, goroutines or something like that, but that's a bit different, as it doesn't involve the original request-response.
"In the HTTP handlers it makes sense that you don't have return values, because: What would you do with that value exactly?"
I agree that the baseline net/http implementation is correct. There is no appropriate default error (return) handler that the framework could implement, at the level it lives at, that would be correct and wouldn't be limiting.
However I very often in my own Go code immediately create an abstraction that does exactly this, because it is a good default for everything I'm doing. It just isn't suitable for everything (e.g., what do you do with this in the case of a protocol upgrade to websockets? the HTTP handlers need to be able to just terminate gracefully without being forced to "return" something).
What I would say is, net/http is a little too low level to directly use. You don't need a full framework necessarily, though if you want to use one that does this more power to you. But if you understand HTTP, it doesn't take much wrapping to make net/http useful for "normal" APIs or websites. It takes a little, though.
I agree with you that an error return would be superfluous and probably misused, but middleware with the default handler could be a lot more usable if you could just query a ResponseWriter's written status code.
Instead everyone tries to do it by putting in their own implementation of the ResponseWriter interface proxying back to the 'real' one, which works perfectly but with a lot of extra code about 80% of the time, sort of works but often takes some performance hit 19% of the time, and blows up utterly when it composes poorly with someone else's 1% of the time.
I agree totally, and it's worth highlighting part of the problem for those who don't write Go. http.ResponseWriter superficially looks relatively simple, but it has a number of optional interfaces attached to it that allow for things like "hijacking" the connection (for things like websockets, that initiate an HTTP connection but use it to transform into something else). Writing a wrapper around the easy and declared use case is not that hard, but blows up in weird ways if you don't also handle all the optional interfaces, and that gets even more fun if you start having frameworks trying to layer multiple of these together.
Go interfaces are fun and all but optional interfaces can get a bit spiky when you start trying to use the decorator pattern on them. And in Go, decorator is a fundamental pattern in the toolkit used quite frequently, not an obscure novelty to pull out once every few years.
"net/http" doesn't seem to come with a lot of great defaults that almost every web app would use such as named path parameters which you'd need to write a bunch of code and regex to extract, middleware, routing based on HTTP method etc. This is all stuff you'd have to implement anyway
I'd much prefer using something that's an abstraction of it like Gin, Chi, Echo and the like. They're fairly performant and add very little overhead on top of the standard package.
I use Gin and enjoy the conveniences. Part of the reason is that it is opinionated. However, for the standard library, I think a strong, but minimal base is more important. One could argue about how extensible net/http is, but it seems reasonable as a foundation and it has been extended in a variety of opinionated (to varying degrees) ways!
Request handlers are handed a request struct that is similar. The handler is a function that maps a request to a response (it doesn't actually write to streams itself).
I like this style for an http library for a couple of reasons:
1. HTTP resources can be viewed as functions whose domain is the request, and range is the response. Having the abstraction match that makes for really nice code.
2. If you model the request/response structs symmetrically and expose your http client as a handler itself, you can write proxies very easily. For an example of this, see http4k (https://www.http4k.org/documentation/) (a kotlin library).
I won't argue that the net/http abstraction should change, but I do agree with the author's take on the desired shape of an http library. I'd probably just write my own abstraction on top to fill the gap.
There are major performance advantages, especially once you're past HTTP/1.1, to passing a mutable response (i.e. in Go a ResponseWriter) to the handler rather than expecting the handler to return something created whole-cloth.
What advantages are you thinking of? The obvious one is that you might want to stream the response rather than construct it in-memory at once. But as I read the parent (and as I've used similar interfaces in Rust), you don't have to have constructed the whole response with this interface. You can return something whose body is a stream.
If you return something whose body is a stream you still have to construct that thing, including the stream. And if you return something whose body is a stream you didn't fill in yet, you need to create entire async thunks or threads to fill that data in. You have also gained ~nothing.
You didn't outline the performance advantage. You've got "you need to create entire async thunks or threads to fill that data in", but that absolutely isn't true.
Let's look at how we might stream a file in response in go with the current model vs this new model, and you can point out where the performance difference is:
In the happy path, there's no difference in number of goroutines or anything, right? In both cases, this all runs in the goroutine or thread or whatever that the caller _already_ created for the request handler. What difference does it make if the request handler calls `io.Copy(response, body)` or if your function does, in both case it's the same goroutine/thread.
> You have also gained ~nothing.
You were the one that claimed you lose massive performance, and that's what we're asking about. The ergonomics are a separate thing from performance
Now do a body that requires literally any transformation or generation at all. Say, a series of DB query results as a CSV. (Oh, and make sure to cancel work when the request disconnects!)
Or heck, just try adding some error handling from the `io.Copy` result to what you have.
Just return an io.Reader that wraps the DB cursor and outputs CSV data... I fail to see what makes this so difficult. If the "mutable" approach involves passing down a mutable *http.ResponseWriter, then the dual approach of passing an io.Reader upstream is equivalent.
Try to write this, really. It’s such an unergonomic mess compared to a for loop in the handler. It will also allocate a very complex reader that the actual version doesn’t need.
Maybe this is a subtlety of Go I'm not aware of, but at least in most languages, the issue with the explicit return value is that you must first return before you can send the value.
By being able to call into the response object, the handler is able to send to the client immediately, and then the handler writer can do other bookkeeping before exiting the handler.
With a return value, work would need to be fired off asynchronously before the return value is sent.
To me this reads as a positive to the default handlers, as it shows how easy it was to make it work into a format that works for the programmer's preference. I appreciated the solution to the complaint
Not necessarily? The type used for Response can include a "body" type that supports streaming. E.g., body: AsyncRead in Rust. In Go I think it would resemble a channel that emitted the contents of the body.
(As a library, though, you'd also want to make the simple non-streaming case of "just return this blob of bytes" simple for the consumer, too.)
I agree that they are awkward. I've solved it similarly by defining a custom handleFunc [1] and a custom errHTTP struct [2] which contains the HTTP response code and even more detailed error codes. It is very nice to work with.
Like so:
if err == util.ErrLimitReached {
    return errHTTPEntityTooLargeAttachmentTooLarge
}
Or so:
if err != nil {
    return wrapErrHTTP(errHTTPBadRequestActionsInvalid, err.Error())
}
IMHO this boils down to purity vs side effects. Given a black box function, it's a lot easier to reason about what it did if it behaves in a pure manner returning some data structure that represents some form of state, than passing some mutable complex real world thing into it and then trying to inspect what the thing did after the fact.
But the matter of purity vs side effects is completely orthogonal to the default HTTP handler design. The handler provides the inputs and the output sinks that represent the "real world". But it's entirely up to you how you manage the data therein. You can continue to procedurally consume these input/output entities turtles all the way down, or you can work in a functional style until you are actually ready to flush the result of the computation down the writer.
The low level HTTP library is phenomenal for building more friendly packages on top, think of it as assembly language for HTTP in go. What isn't obvious about that Go http design is that it's very parallel. Your callback gets invoked in a goroutine for each request, so you can write easy to understand code, linearly, without worrying about things like promises or callbacks or message queues.
I'm opinionated here, because I also think all API's should be built starting from a spec, such as OpenAPI, and to that end, I've written a library to do just that, generate models and echo handlers from API specs, and other contributors have added gin, chi, etc [1]
This isn't so much a critique of the HTTP handlers themselves but relying on (and stuffing a ton of shit into) context as a way of dealing with complex request/response patterns was a major headache.
Trying to implement a circuit breaker against multiple upstreams for a single request in Go was a nightmare.
People put way too much stuff in the context (whether that's the req.Context() or the gin.Context or whatever). It should only be for things that are request-scoped and need to cross middlewares that aren't aware of them. Most such middlewares (logging, metrics, last-ditch error handling) get put at the front of the chain anyway.
A circuit-breaker is basically by definition not request-scoped, so should not go in the context.
I usually create an abstraction like this in an API to require all returned errors to come with some sort of machine readable code. You either have to classify this response with an existing code in the list or add a new one. That list then gets generated as part of the documentation so consumers are getting possible error states straight from the code (without having to look at said code).
I do similar: I create a `handleError` function that deals with the error (logs it, reports it, redirects to the appropriate page), and call that in a return statement.
It's like 5 minutes' work to code up, and I get to set whatever logging destination and redirect logic works best for my project.
I am writing a large Go web application right now and I disagree with this. I do not want to return an error. I want to write an error page or an error JSON message depending on the handler. By the time you are deep in a web application you are copy and pasting all your error blocks anyway. This [1] is a sample of one of my functions. I honestly love Go for how consistent it is.
I wrote a rudimentary static checker [2] to make sure that any function does not call both the jsonError() and errorPage() functions. This was quite a problem given how much I reuse code. I disagree with the author's take because it is incredibly obvious and quite easy to track down when you forget a return. You will have a big block of garbage in the middle of your page and JSON, and your unique error message should lead you directly to the line.
My problem is actually in executing the templates at the end of the functions [1], when you finally write out the page. If that has an error, you can't really back track. I've been thinking about changing that to write to a temporary buffer which then writes to the http.ResponseWriter but haven't made the change yet.
I can see the problem, but, I don't know, it's just not something that's ever bothered me. How big are his handlers anyway!? They should be pretty small IMO...
I've managed to write some pretty complicated REST services with just the standard HTTP library and Mux for routing.
In fact I've never used any of the bigger libraries like Gin or Echo.
While all of my Node JS colleagues are drowning in dependabot PRs we're just cruising by in the Golang team with our small handful of dependencies. Heaven!
The article points out a "pro" of returning a response in handlers, but fails to mention any cons.
However there are a few of them. One particularly important one is that the "return response" variants fall short on the observability front, which in the end leads to services that are hard to run in production.
E.g. let's say I want to measure in my handler what "last byte latency" is, or at least when the handler pushed the last byte into a TCP socket. With the Go approach it's not a problem: we can wrap the handler in another handler which emits metrics.
With the "return response object" approach it won't work easily anymore. At the time the handler returns, nothing has actually happened yet.
The same thing applies to e.g. tracking the amount of concurrently executing requests, to understand resource utilization (and starvation). No issue with the Go variant, hard to do with "function as a service".
It's actually rather unfortunate that all Rust frameworks went down the "response object" route and made it hard to add proper observability.
So, the issue is that there’s an implicit contract that says something like “a handler must write to the ResponseWriter or call http.Error before returning”, but the compiler doesn’t enforce that contract.
The proposed improvement makes some ways to accidentally not adhere to the contract more obvious to humans, but still doesn’t enforce the contract.
I wonder whether there are languages that allow one to write a library that enforces such contracts at compile time.
(One way to do that would be to pass the Request and the ResponseWriter together with a state machine that is hard-coded in the API as a single object to the handler, with the equivalents of w.Write and http.Error declaring their state transitions, and having a language functionality that requires such object to be in an end state of that state machine whenever the function returns)
Well, it's a bit of an issue with escape analysis to know for sure that something is or isn't called before the handler returns. Imagine a scenario where multiple threads get involved: that would be a lot to track.
I don't think its a great pattern but just speaking hypothetically, if you ensure that only valid Response instances exist (because you organized the constructors to only make valid instances and nulls are invalid) you can force the user to return a valid instance.
Firstly, Java requires that the compiler be able to prove variables are initialized before first use, and there’s a precisely described, fairly restricted algorithm that must be able to do that.
Similarly, Rust can’t correctly accept all code without ownership issues, so it supports only a subset of such code (in this case, AFAIK, that set is defined by the compiler, and growing over time)
In both cases the language designers on purpose limited what programs are valid so that the compiler can enforce certain constraints, eradicating a class of bugs.
I wonder whether there’s a language that works similar for this kind of constraints.
C++ destructors can do it for simple things such as “you must close this file before returning”, but that’s because the compiler will insert the call where needed.
Enforcing “you must call f before you ever call g” also is doable in simple cases: have f return a value of a type that must be passed to g or an object foo on which one g is implemented as a member: foo.g().
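In Go, the simple "f before g" case can be sketched with an unexported token type, although, as noted, nothing forces the caller to eventually reach a stopping state:

```go
package main

import "fmt"

// token has an unexported field, so code outside this package cannot
// construct one; the only way for external code to obtain a token is
// to call f.
type token struct{ _ struct{} }

func f() token {
	fmt.Println("f ran")
	return token{}
}

// g demands a token, so (from outside the package) the compiler
// guarantees f ran before g.
func g(t token) {
	fmt.Println("g ran")
}

func main() {
	g(f())
}
```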
I’m not aware of any language that allows for more complex state transitions, though.
Edit: I think Rust can do (¿most of?) this by having a function take ownership of an object passed in, thus preventing further calls to it, and returning a new object on which only the then permissible calls can be made. I’m not sure one can require that to end with a ‘stopping state’ object, though.
... Warning ${function} took too long to check and was skipped; consider refactoring if possible or add UNSAFE.NoEscapeTracking to the function signature.
Maybe something like that to let the humans know and make higher level choices?
if api.HandleOnError(w, err, http.StatusInternalServerError, "Getting stuff for things") {
return
}
This keeps the logging consistent and ensures the handling is always properly taken care of; the error handling in handlers stays explicit at the call site but abstract enough not to make up the majority of the handler's body.
I dont necessarily agree they should return a value, but I do have some issues with the api:
- by default, you can't tell if a handler has already written to the body or emitted a status code.
- by default, you can't read the body more than once
- r.PostForm vs r.Form ...that they both exist, and that they're only available after a manual call to r.ParseForm
- the http server accepts an unbounded number of connections. it'll just keep making goroutines as they come in. there's no way that I know of to apply back pressure.
there's probably more that I grump about, but those are the ones at the top of my mind. For the most part it works well, and I can work around my gripes.
- by default, you can't read the body more than once
This is a reasonable default - probably the only reasonable default. `TeeReader` and `MultiReader` are easily available if you want to spare the memory. (But you're right that the converse on the ResponseWriter isn't true, it's much more difficult to get your own implementation right.)
- the http server accepts an unbounded number of connections. it'll just keep making goroutines as they come in. there's no way that I know of to apply back pressure.
`net.Listener` is an interface you may implement as you choose. But in most cases a CPU limit is more than sufficient backpressure.
> This is a reasonable default - probably the only reasonable default. `TeeReader` and `MultiReader` are easily available if you want to spare the memory. (But you're right that the converse on the ResponseWriter isn't true, it's much more difficult to get your own implementation right.)
Fair enough
> `net.Listener` is an interface you may implement as you choose
Good point -- thanks, I hadn't considered to try and bound connections using a listener implementation.
> But in most cases a CPU limit is more than sufficient backpressure.
In many cases it's probably fine - but not all.
The issue with letting CPU be the back pressure is that it'll happen _after_ the connection is established. Unless I'm mistaken, if a load balancer can't connect, it'll usually try another host, but if it connects and times out...there may have been some side effects, so it can't try another host.
Whether the lb can safely retry should depend on the HTTP method, in practice you can configure most lbs appropriately based on application knowledge as well. (I.e. a repeat POST might be "safe enough" on some endpoints.)
> The issue with letting CPU be the back pressure is that it'll happen _after_ the connection is established.
But this is true of anything the http.Server could have done. If you want an application-agnostic client to be sure it's safe to re-issue a POST, you have to stop it before the connection is open because as soon as it's open that data is coming through. Conversely, for anything (general or application-specific) the http.Server could have done after pickup, you could do in an early-stage handler for little to no additional overhead.
It feels like you're asking for something HTTP can't really do, so of course http.Server can't do it.
> It feels like you're asking for something HTTP can't really do, so of course http.Server can't do it.
This is a fair enough point. I wasn't being discerning enough with my critique of http.Server when writing my initial comment, and neglected that a custom net.Listener could be supplied.
> Whether the lb can safely retry should depend on the HTTP method
to be more precise with this statement whether the lb can safely retry _after an initial connection has been made_ should depend on the http method (e.g., GET and HEAD can be considered idempotent...whether they actually are is up to the server)
> But this is true of anything the http.Server could have done.
Yes - which is why I saw value in your mentioning that a customer listener may be supplied.
> If you want an application-agnostic client to be sure it's safe to re-issue a POST, you have to stop it before the connection is open because as soon as it's open that data is coming through.
This was exactly my point.
> Conversely, for anything (general or application-specific) the http.Server could have done after pickup, you could do in an early-stage handler for little to no additional overhead.
Yes, but since the listener accepted the connection, it can't be retried against a different host.
to recap:
I'm suggesting that there are times when it's appropriate to have a listener that circuit-breaks and stops accepting connections. To your point, while the need exists, that's a concern of the TCP layer rather than the HTTP layer, so it's not a fair criticism of http.Server.
There's a subtle problem in the proposed solution: if an error happens after you already started writing the response, you can't change the http status code.
So, what should you do with an error that happens while writing the HTTP response itself? Maybe keep statistics on how often it happens, in case happens often enough that it seems relevant for debugging. But there's no good way to report an error to the client because the network connection is likely broken.
If you're not going to keep statistics, dropping errors from writing an http response is reasonable and arguably correct.
In my team we used Gin and migrated to Echo (because Gin didn't support conflicting routes at the time, like /foo/{variable} and /foo/bar) and we got to the same conclusion. Forgetting to return with Gin (and with net/http) is an issue that actually occurs, and we've been bitten by it more times than we care to admit.
Of course it's not worth migrating to Echo just for that but it's good to know that some routers implement it differently, if it's something that bothers you.
To me it seems like a problem induced by the way Go handles errors.
With exceptions it would be something like
try
{
    // Some dangerous code which may throw/raise
    w.Write([]byte("Hello, World!"))
}
catch( FileException )
{
    w.Write([]byte("Screw your file!"))
}
catch( ... )
{
    w.Write([]byte("Screw you in general!"))
}
Putting aside discussions about Go's error handling and opinions about try and catch blocks, for this specific problem (wrt the HTTP handler) you can emulate the same code you've got above using a defer block:
var err error
defer func() {
    if err != nil {
        w.Write([]byte(err.Error()))
    }
}()
For fairness's sake, I'd like to point out that Nodejs has the same behaviour. Helping a new developer create a web server, we fell into the bugs the author is talking about a few times. I was always quick to spot the problem with the benefit of experience, but a newbie debugging a "response was already sent" error was not the simplest thing in the world.
With Rust you could write a function that consumes the HTTP client, not allowing you to use it afterwards, resulting in a compiler error for your example.
In Go you could write a function that takes the HTTP client reference and sets it to nil. This would at least produce a runtime error if that path is ever hit, rather than doing something else that is unexpected.
Why is it http.Error(w, ...) and not w.Write(http.Error(...))? The latter would be way more clear in showing that nothing magical happens and a return (if wanted) is still necessary (I don't write Go, so I don't know the conventions)
`http.Error` is a convenience for writing an error message and setting the code. You can just use `w.Write` if you want. The reason `Error` isn't part of the interface signature for `ResponseWriter` is presumably that everything that implements `ResponseWriter` would have to duplicate the logic.
Errors important to the server should not be conflated with http responses to the client. There is sometimes a correlation between the two, but often not.
Seems a rather theoretical problem to me, because usually an external library is used. Also, in the underlying HTTP response the status code is written before the body. This matters a lot when, for instance, copying a buffer into the response.