Sample cloud-native application with microservices (github.com/googlecloudplatform)
231 points by zdw on Feb 15, 2019 | 127 comments



Completely ridiculous. I have been in the Kubernetes community for a couple of years now, and there is a crazy amount of ridiculous, unneeded micro-optimization for problems that nobody except the top 1% of users actually has.

Istio mesh is a good example.


Why is everyone freaking out? This is a demo of the technology, its complexity is contrived to demonstrate various facets.

This kind of architecture is not unreasonable for larger companies with many teams, which is where the technology itself becomes useful as well. So in that context, this architecture is entirely reasonable.


Because it's a showcase from Google, therefore it will get copied by believers, thinking "this is de wae."


There's a fine line between "this is possible" and "this is how it should be".


The line is fine, but this is already miles past it.


Those large companies don’t need this. Meanwhile many others will mistakenly use this as the normal way to do things.


What's the normal way to do things?


Monolithic apps that are simple to deploy, easy to run locally in their entirety, etc.


So is Kubernetes.


depends on the number of teams/services/languages you've got. istio solves a lot of problems in that space when teams/services/languages grow larger.

so, i would think the percentage is larger than 1%.


Kube Goldberg application


Awesome, I'm stealing this


How the heck did we get here as an industry. The complexity is ridiculous.


Not just complexity, but quantity of tools that do the same thing. Helm, Forge, Ksonnet, Operators, now this Skaffold (haven't heard of it before)... Sure, there are some differences, but still... I'm currently moving one app from GAE to GKE and saw recipes for using all of those to describe the infrastructure.

Quantity itself is not an issue - it's great to have options. Incompatibility, segmentation and uncertainty about the future are issues, though. E.g. ksonnet and Forge are dying. Some say Helm isn't particularly healthy either - is that just speculation, or will it die in the next year or two, leaving all those chart repos as dead code for archaeologists?

Maybe that's just me, but modern DevOps feels like JavaScript world from a few years ago. Things are born in abundance, promise lots, and die before they're even v1.0.


I count more than 63 different ways to do scaffolding. I started keeping track because I wanted to write a comparison for my 64th solution, but never did, because there are more than 64 competing products in this space.

By some people from k8s: https://docs.google.com/spreadsheets/d/1FCgqz1Ci7_VCz_wdh8vB...

By me: https://github.com/dhall-lang/dhall-kubernetes/issues/10


Pretty much all of these tools feel like the wrong abstraction level for app devs.

I have to imagine the optimal end state is some k8s controller/git server that enables git deployments to k8s with just a node-size config.

I’ve mentioned this before in related threads: app devs want heroku. They want a managed platform where they type “git push” and they have an application with all the fixings. Heroku got the UX right long ago, and the rest of the ecosystem is still playing catchup


Problem there is, this isn't easy and K8s doesn't do anything about that. You need a whole CI/CD pipeline to build Docker images first.

However, there is https://gitkube.sh/ that looks somewhat similar. Can't vouch for or against - haven't used it at all.


I have the opposite impression of Helm. I'm seeing vendors and other companies cranking out and sharing Charts at a rate I've never seen before.

There are some review issues in the main upstream Helm Chart repo, but that's due to volume (which is now higher than ever).


While true, I also read about Helm v3 (that's said to be something different) and that maybe Helm is not the way to go but CRDs and Operators are... Well, at the very least, my point was - it's all confusing.


Helm is fun for demoing complex things the first time, but in production use, it’s a mixed bag. Lots of great alternatives that feel more ‘clean’ to me.


What are the cleaner alternatives?


Cloud 66 Skycap is a good alternative without most of the issues of Helm


This seems like a full-fat Build and Release management hosted solution (good job by the way, this looks great and I'll try it out).

I was more specifically asking Jeff which similar tools are cleaner, and why? Are there any helm drop-in replacements that anyone would recommend? Or is it really a shift to other ways of deploying, like these hosted solutions, or is everyone building their own k8s operators now...? Or something else?


Fair points and questions. I think most of the past couple of years was spent on making Kubernetes operations-friendly. Now everyone has a managed k8s service, so there are many solutions coming up to deal with "Day 2" issues like deployments or secret management.



This would be a fantastic way to expand headcount. Of course any team supporting this many languages and services will need at least one person for each, and then a few extra to coordinate the chaos, and then a few more to clean up the damage from the previous few.

Only being slightly tongue-in-cheek.


Reminds me of the JavaScript craze of 2010-2015. With a new framework each week on HN.

Give it time; it'll mature and people will be burnt out from all the changes.


It’ll be something else then. We can’t tell motion from progress so we just keep moving and moving.


When were things not this complex? Did Google or Yahoo run out of some dude's laptop back in the day?

Things are simpler now, not more complex. The tools are better, more standardized, and way more broadly available. It's up to you whether you need or want them or not.


I am not sure that is true. I ran fleets of FreeBSD machines 20 years ago and we dd'd disks to spin up new hosts, PXE booted some, local and remote (NFS) storage. Even in the 90s, machines were cattle.

The pieces are largely the same, but in a lot of ways, the tooling has _enabled_ more complexity than is necessary. Things have become specialized in incongruent ways that make things harder. Simplicity is a constant struggle and a quality all of its own. Sometimes you have it by accident, where it then occupies the unknown-known quadrant.


> I ran fleets of FreeBSD machines 20 years ago and we dd'd disks to spin up new hosts, PXE booted some, local and remote (NFS) storage

To me all of this sounds like crazy complexity vs a basic Kubernetes cluster. It all depends on what you are comfortable with I guess.


> Did Google or Yahoo run out of some dude's laptop back in the day?

https://www.independent.co.uk/life-style/gadgets-and-tech/33...

Didn't Facebook also start in Harvard's dorms?


As someone who was writing analysis for my CEO when we were involved with the POSIX definition, let me say that POSIX was driven by the complaints of users, who insisted that they would stop buying product if the vendors didn't stop the arbitrary fragmentation. So POSIX was forced on vendors who then had to identify areas with mature features to standardize - so that every UNIX admin and UNIX programmer wasn't using a completely different environment. We need to do the same for Linux, the same for Kubernetes, and the same for any mature technology. The vendors are striving for lock-in, and we need to fight it.


For Kubernetes, please see the CNCF's Certified Kubernetes program: https://www.cncf.io/certification/software-conformance/

(Disclosure: I run it.)


It's the same as how people think the past didn't have as much crime. It's just that you didn't know about it.

All companies had their own way of hacking things together and making their systems work, but they didn't share it. Twenty years ago, what big company shared its infrastructure knowledge as open source?

Now things are being shared and learned together.


But microservices!


Could someone please point me in the direction of some solid documentation about deciding on when / how to split out microservices? So many of the cases I see them used are overkill and just make development and devops far more complicated for the scale of the application & amount of data/users being processed. I find myself usually comfortable with splitting out Auth (Cognito?), Payments (Stripe?), and not much else.


My favorite way to think about this is to apply a reverse Conway's law. Architect your microservices like your org chart.

One of the main benefits of decoupling a monolithic application is getting the freedom to release, scale and manage different components separately. Each team becomes empowered to define their own practices, procedures and policies that are appropriate to their business requirements.

Have a mission-critical component that needs 5 9s reliability (auth or billing)? Great, release that carefully over multiple days with canarying across failure zones. Working on an experimental API for a new mobile app? Awesome, ship every commit.


Architecting microservices like your org chart would be Conway's Law, correct?


Yeah, I don't think it's "reverse" either.

The common quote "if you have four groups working on a compiler, you’ll get a four-pass compiler" clearly has architecture following from organizational structure.


It's reverse because Conway's Law, as stated, says that code structure inadvertently arises from org structure, i.e. code = f(org). The reversal is to architect the org structure to ensure optimal code structure, i.e. org = f(code).


That is reverse, yes.

I think your original comment was unclear, if not misstated. I had read the "like" in "[a]rchitect your microservices like your org chart" as something like "to resemble", and the rest of the comment seems to follow just fine. Other readings seem possible but I don't see one that gives the sense you want without what feels like some serious contortion.

In any case, no worries, we're clearly on the same page now - just trying to figure out what went wrong.


The way I've generally thought about it and have seen it done successfully in practice is to create a microservice for each component that scales independently. Auth and payments make sense because they scale independently. You may get authorization requests and financial transactions at a different rate than traffic to your application itself.

Similarly, if you run a website that does batch processing of images, for example, the image processor application would be a microservice, since it scales independently of website load. It could be that you need to process 100 or even 1000 images for each user, on average, and it doesn't make sense to scale your whole application when the bulk of the processing is image processing.


That might be a good criterion, but it still depends on your application. Most web apps have hardly any overhead when under load, so it's essentially just as efficient to load the whole codebase as a monolith into each node as you scale up.


Correct, the hierarchical breakdown of services is orthogonal to the scaling unit of code. If every node in the cluster could execute every function, there is no need to split things out.

When deployment and coordination become an issue, that is when _deployment_ needs to get split up. But given our current RPC mechanisms, deployment and invocation are over-coupled so we have to consciously make these decisions when they could be made by the runtime.


You probably won't find much that will help you because there really is no "right time". I've done a lot of surveys, and what I've found is that if you're running microservices, about 25% of your engineering time will be spent managing the platform/DevOps/Not writing product code. That time can either be 25% of every engineer, or 25% of your engineering staff.

In either case, the best time to do it would be when you feel the increased speed and agility is outweighed by the added overhead. The speed and agility comes from small teams being able to operate mostly independently.

There are of course a ton of exceptions to this. For example, if you're using AWS Lambda or Google cloud functions, they do a lot of that 25% for you, as long as you're willing to do it their way, so now you have an incentive to go microservices sooner. Also, going microservices will probably allow you to scale up faster if you've done it right. So if you expect a huge spike in traffic, that's a good time to go microservices.

There is lots of good material on the pros and cons of microservices, but when to actually make the switch, or if you should start out at microservices, is a very situation specific question and relies on a lot of external factors.

The best I can say is look at the pros and cons and figure out what that costs or gains you in your particular situation.


Microservices solve a business organization need. It brings nothing to the table from a technical standpoint except complexity and overhead which might be required if you run a large company.

I would say, don't use Microservices unless you have 100 or more employees.


Look up domain-driven design. It's all about the data and understanding that microservices benefit from data redundancy.


Domain driven design from Eric Evans is a great resource for learning about how/where to separate out services.

You will learn how to model complex domains and how to decompose them into Aggregates. The Aggregate is the key to where to create new microservices, as each aggregate should only contain domain objects that relate to itself.

Payments is a great example of an Aggregate.


I'd say a bounded context is a better border. When doing REST you can then begin by implementing each of your aggregate roots within the context as a resource.


I've never been able to make sense from including more than one aggregate root in a bounded context. What are your thoughts?


I asked this once at a K8S presentation and the person answered that he found it was about a team size of 35-40 engineers that migrating to microservices started to make sense.

#anecdata


The boundaries are between logical components of your application in your business domain. The two you identified (auth and payments) are a good fit, especially because they also rely on external services a lot.

The other heuristic to follow is how your company is organized. If you have a separate team working on a major feature then perhaps that could be its own service so they can own the full-stack.

Otherwise, stick to the monolith for the best results.


I am reading "Building Microservices" by Sam Newman. He has got some really good insights into how to break up a monolith into micro-services.


I choose to split out a new service if I need

- independent deployment of this service

- independent scaling of this service

- independent implementation stack for this service


When you split off a team to work on something, you can consider splitting it off into a microservice.


The key is pre-emptively identifying cases where that will happen before the need actually arises, since splitting out microservices from a monolith retroactively can be exceedingly painful and complex. You have to strike a balance between the short-term and the long-term.


Simple test: is a component of your application, which you could logically separate into its own thing, going to be used by at least “the app” + 1 other app?


That's one heuristic, but not the only one you should use. It can make sense to break out microservices that only have one consumer for various reasons.



my main reasons to split are the following:

- security - minimize privilege escalation and access to infrastructure

- scalability - different workloads need different metrics to scale (CPU-bound vs connection-bound)

- dependencies - why burden devs with all the dependencies, especially ones that make dev environments more difficult to set up

- service per backend dbs/etc (overlaps with the above)

- domain - can place domain specific skills on a team


Security? Really? Too many moving parts, too many holes and places to exploit... fixing the same security problem in 30 different places doesn't sound like great security.


More like giving the web facing application full access to the payment database vs having a separate internal service with a very limited API. A problem with the frontend does not always immediately compromise the database.


split anything that can be expressed as

kafka ->process message -> kafka


I know this was probably well intentioned, but I can't shake the irony of how overly complex and over-engineered this is. If you're just starting out, please, please, please don't do this.


Man, this is a frustrating and shallow criticism. Obviously this is over-engineered. It's a demonstration of how these patterns would fit together in an application that required them.


Agree it's not the best criticism, but there's at least some validity to it. I think the main issue is that this is an extremely heavyweight architecture that will incur a disproportionate level of administrative overhead - you would need an ample team of competent full-time devs to build and maintain this system properly. It's like an advertisement for a giant excavator where the excavator is featured crushing a soda can. It's an impressive piece of machinery, but the task being used to demonstrate it is comically mismatched with its true capabilities.


True, true. That's a fairer way to formulate it. But like, it's supposed to be an extremely heavyweight architecture. The benefits of microservices are arguably only apparent when you have an ample team (or teams!) of competent full-time devs.

You're right they could have... beefed up the soda can, so to speak, but I don't blame the (presumably) DevRel folks who put this together for hand-waving it, "now imagine a mountain here".


> The benefits of microservices are arguably only apparent when you have an ample team (or teams!) of competent full-time devs.

I would very much agree with that. Although I'm not against microservices, they are by no means a quick or efficient way to get things done. Hedging against future theoretical scaling concerns comes at a high cost - a cost that's very much worth it if true scale is achieved, but a high cost nonetheless.


Depends on the scale of this online store: if it saw 0.1% of the traffic Amazon gets, then it would be justified.


Microservices solve a number-of-engineers scale problem, not a traffic scale problem. Tech scale is drastically simpler in a single application, because you don't introduce an unreliable network and its related complexity into all of your problems.


Interesting, in my experience we've never had major issues with our service mesh or internal networking. It definitely introduces a lot more complexities. I suppose we've always had a dedicated devops team so I'm probably unaware of a lot of these issues.

I'd be interested in some resources around this topic if you know of any.


I'll be honest - pretty hard to find objective literature on it, because most of the literature is just trying to tell you microservices will solve all of your problems. I think ultimately the best link between the two is Amazon's move to microservices and their focus on "2 pizza teams".

https://www.slideshare.net/TriNimbus/chris-munns-devops-amaz...


Microsoft has a similar reference application [1].

There are comments arguing these architectures create a ridiculous amount of overhead for what would be a simple application traditionally, and others countering that the point is to show the underlying tech in the context of a "simple" problem. I think there's a large amount of truth for both sides.

It feels like there's a great paradigm shift on the horizon, and hopefully a good set of abstractions to build with. It's as if we're still programming in machine-specific assembly languages, waiting for a high-level language to come along so we don't have to worry about things like calling conventions anymore.

[1] https://github.com/dotnet-architecture/eShopOnContainers


I don't have peers where I can bounce off my thoughts and get answers to solutions like these. I've been very curious about serverless microservices architectures and this repo has given me a pretty clear way of doing it while showcasing polyglot microservices and persistence. I get that it looks like it has a lot of overhead for what it tries to do but I can grok scaling this up for an enterprise application.


"Any intelligent fool can make things bigger and more complex... It takes a touch of genius - and a lot of courage to move in the opposite direction" - Einstein


Everyone can randomly quote Einstein.


I really despise the phrase "cloud-native", but this is a cool project, because it shows how you can have a bunch of different platforms (python, go, node, etc) running together, and how you can set them up locally without having to worry about how to install them. Also there's no worry about someone publishing a package to npm that overwrites your documents, because everything runs in containers.

I'd love to say this is overcomplicated and to just use Docker Compose, but I don't think Docker Compose is the way to go.

The next thing I'd like to see is how to get this integrated with vscode or Atom to provide autocomplete without installing everything locally.


How about "cloud-amenable"?


It's unfortunate that each project has a `genproto.sh` and that there's no tool that autogenerates protocol libraries/modules for any consuming language. It's a real pain point when trying to get people to look at gRPC. It'd be amazing if there were a simple way to have:

   .proto(s) -> language code -> language binary/package
Completely automated and ready for an import into a project.
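
For Go specifically, one minimal way to automate that step is a go:generate directive next to the generated package, so `go generate ./...` rebuilds the stubs. A hedged sketch - the proto file name, package name and protoc plugins here are assumptions, not taken from the repo:

    // Package hipstershop would hold the gRPC stubs generated from demo.proto.
    // Running `go generate ./...` regenerates them, assuming protoc plus the
    // protoc-gen-go and protoc-gen-go-grpc plugins are installed.
    package hipstershop

    //go:generate protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative demo.proto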


Bazel should be pretty good for that:

https://bazel.build/

https://docs.bazel.build/versions/master/be/protocol-buffer....

We use it at work to build a web app whose backend is written in Go and frontend in Typescript. All of the code gets built and placed in a Docker image using these rules:

https://github.com/bazelbuild/rules_docker


Hi, author here. We wanted to keep the demo app as simple as possible for readers. Adding Bazel would introduce another layer of complexity, whereas most devs can read bash scripts that are a couple of lines long.


What sucks is that this is all generated at compile time. I lose out on autocomplete from my IDE if codegen is done during build.


Tried gRPC in a production application where one microservice had to make over a million gRPC connections to another microservice. We experienced a ton of memory leaks and switched over to HTTP/JSON, which has been working well. Implementation was done in Scala/Akka. Curious to know if others have had similar experiences, or if there is another best practice with gRPC that we're missing.


Obviously a million open sockets use more memory than a stateless HTTP backend. However, having worked on the very largest RPC service deployment ever fielded, I feel it is safe to say that there is no reason to have a million open gRPC channels.
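
For what it's worth, the usual pattern in Go is to dial once and share the channel; gRPC multiplexes concurrent RPCs over a single HTTP/2 connection. A rough sketch (the service address is made up, and a real deployment would use TLS credentials rather than plaintext):

    package main

    import (
        "log"
        "sync"

        "google.golang.org/grpc"
    )

    var (
        dialOnce sync.Once
        conn     *grpc.ClientConn
    )

    // sharedConn returns one long-lived channel to the downstream service.
    // All handlers/goroutines reuse it instead of dialing per request.
    func sharedConn(addr string) *grpc.ClientConn {
        dialOnce.Do(func() {
            c, err := grpc.Dial(addr, grpc.WithInsecure()) // plaintext only for the sketch
            if err != nil {
                log.Fatalf("dial %s: %v", addr, err)
            }
            conn = c
        })
        return conn
    }

    func main() {
        // Generated stub clients would wrap this single shared connection.
        _ = sharedConn("currencyservice:7000")
    }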


The problem that gRPC solves for you is versioning your messages between your services.

As your json payloads evolve, you're going to encounter pain trying to keep your services in sync, whether it comes in the form of writing parsing code to crack open payloads and do conditional error checking based on the version (and expected fields), or whether it comes operationally in how you actually deploy updates to running services.


That's solved by protobuf, not grpc.


...why did one service need to make a million connections to another service?


I really appreciate projects like this.

It’s helpful for casual observers to understand all the pieces of the stack. You can get an immediate feel if the approach is right for you and your team.

And it's helpful for engineers to have examples to reference and copy when implementing the patterns themselves.

I’ve been working on project similar to this for go, gRPC and Envoy:

https://github.com/nzoschke/gomesh

My project doesn't go into deployment or K8s. If I needed to figure that out I'd look at the OP project.

I also have a project that demonstrates Go and Lambda:

https://github.com/nzoschke/gofaas

If these help even a few engineers learn to be successful with the tools it’s a win.


Is this satire?


You know how in most languages you can write "hello world" in a single line, or maybe a couple, but sometimes someone writes it with multiple classes that have some sort of inheritance, and maybe a factory?

That's not satire, that's trying to show you how to make classes and factories in that language.

This is showing you how to build a full website with microservices on GCE/Kube.

It's obviously overbuilt for what it does, that's not the point. The point is to show you how to build something complicated with simple examples.


Good architecture is about keeping complexity in check while ensuring a broad set of goals such as a maintainability, scalability, and reliability.

This is an example of an extreme opposite. Almost destructive.

Google has enormous mindshare amongst developers; and when they put something like this out there, people will actually see it as an example to follow.

Where is the blinking warning sign saying "do not try this at home"?

This is nothing like creating too many factories in a Java application.


If you think this is overly complex, I should show you how some real websites are run. Netflix has over 1000 microservices. I think AirBnB is up there too.

When I look at this I think, "Man what a cute little architecture, but at least it shows some of the basics of a large scale website".


So no one should ever see example of when things get complicated?


As an example not to follow?

Software evolves to spaghetti over time unless one makes an effort for it not to happen.

It makes no sense to start with spaghetti.


Certainly it was an attempt to be hip, not sure if it worked.


Yes.

I know HN doesn't like one-line questions like that, which may be read as an insult. But I honestly cannot tell.

It is obviously a lot of work. But the technology mix and design is just absurd. The choice of a web shop with hipster products is tongue-in-cheek, but so was the pet store of 2000.


I honestly thought it was myself until I saw that it was an official GCP repo. Unbelievable...

Sending emails is a one-liner in Django. Am I a sucker for not building my own email sending service?!

Of course being a Google demo they have to include the obligatory ad server service, in Java no less. Nonsense like this is why I'm steering my kids away from considering software development as a career.


When a company and platform grows it's very common to build out an email sending service since you can only send so many emails per second.

An example of an email sending service is Sendgrid, who have built an entire company around this service.


I'm aware, I've used Postmark for years. My point is that building your own microservice to send emails is ridiculous overkill.


Sendgrid is not alive because of mails/second; that part is not _that_ hard.

They are alive because they take the spam blame for you.


It’s intended as an example for deploying a complex app using all the available tooling in kubernetes. It’s not intended to be actually used.


Sorry to say mate but maybe... I’d usually pop that on a queue and get a separate service to pick it up... maybe that’s what you’re doing?


Yes, I send emails async using Django-RQ (previously Celery).

It's still a one-liner once you've configured your async task handler. Which makes the example custom email sending microservice even more ridiculous when it can be done trivially by configuring proven off-the-shelf components and third party email delivery services.


A bit off topic (sorry) but how are you finding django-rq compared to celery? Anything you miss? Anything you gained?

I wasn’t sure how ready rq was for prime time, but I’ve hit far too many celery bugs and the maintenance hasn’t been great for awhile.


I found Django-RQ far simpler to configure than Celery. Given that I'm using Redis for caching anyway, it also removed RabbitMQ as a dependency which Celery basically requires.

Django-RQ's job decorator is basically a drop-in replacement for Celery's task decorator. Django-RQ's built-in Django admin app is very nice.

My Celery usage was pretty simple (mostly background email sending and PDF generation) but for those use-cases Django-RQ has been a very good replacement. Your mileage may vary of course.


That works well in your current situation, but when you have multiple products and services that also need to send emails it makes sense to combine them into a single service.


I hope it is


Let me try to break this down from first principles, since I'm not sure I agree with the largely negative sentiment ITT around how complex this looks:

Most software projects are long lasting, and have unpredictable and constantly changing requirements.

Agile is currently the best software development methodology under these constraints.

Effective Agile teams should not be larger than 10 people total; minus the PO and Scrum Master, that leaves 8 devs.

It's not possible to run a large-scale, high-traffic web application with only 8 devs.

Therefore, it makes sense to split large applications up into chunks small enough for a 10 person team to manage autonomously.

Since the application had to be split up, we now have to solve the communication issue. Now we need networking, DNS, TLS, and have to consider latency and bandwidth, etc... We also have other issues if we're running at scale: redundancy, monitoring, separate environments for testing and production, having a local dev environment similar to the production environment. There is a huge list of things that are not important for an early-stage startup to think about, but are very important and very difficult for most large enterprises to get right if they want to consistently and reliably deliver good software.

Google is a large enterprise that operates large scale web services that has proven they know how to get this stuff right.

This repo is a reference architecture, from Google, on how to run micro services at scale using modern tools and methodologies. If you think this is over engineered I think you're just not the target audience, and something like Heroku is much better suited for your scale.


Why both a ClusterIP and a LoadBalancer service to expose frontend 8080 as 80?

Also, no persistent data storage (no data to the store owner)? (I guess for easier example)


The ClusterIP and LoadBalancer services are on separate network interfaces. The ClusterIP exposes the service on an internal interface, whereas the LoadBalancer exposes it on a public interface.

https://kubernetes.io/docs/concepts/services-networking/serv...


Hi, the author here! The clusterip one is used by Istio (if you’re following the Istio instructions) but it’s harmless to have an internal LB. The other one is exposing an external LB, which is useful both if you’re using GKE (or any cloud provider) or GKE+Istio.

We deliberately wanted to leave state out as it complicates things by quite a bit, but doesn’t add much value in terms of the technologies we wanted to showcase.


ClusterIP would be to access the frontend from within the cluster. LoadBalancer would of course be for external access.
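
Expressed as objects rather than YAML, the only difference between the two is the Service type; a hedged client-go sketch (the service names, selector and ports are assumed from the question, not checked against the repo's manifests):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // frontendService builds a Service selecting the frontend pods.
    // ClusterIP gives an in-cluster virtual IP; LoadBalancer additionally
    // provisions an external load balancer forwarding port 80 to 8080.
    func frontendService(name string, svcType corev1.ServiceType) *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.ServiceSpec{
                Type:     svcType,
                Selector: map[string]string{"app": "frontend"},
                Ports: []corev1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(8080),
                }},
            },
        }
    }

    func main() {
        _ = frontendService("frontend", corev1.ServiceTypeClusterIP)             // internal access
        _ = frontendService("frontend-external", corev1.ServiceTypeLoadBalancer) // external access
    }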


One thing that I still don't get with microservices is where to put the central control. Usually, a service requires: authentication, authorization, execution, (errors), response.

Now, there should be a reverse proxy that first calls authentication, then if that succeeds calls authorization, then if that succeeds calls execution. But how do people do that?
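
Not the only way to do it, but one common shape is exactly that chain as middleware in a small gateway in front of the handler that does the work. A rough Go sketch with placeholder checks (a service mesh, as mentioned below, moves these checks out of application code):

    package main

    import (
        "log"
        "net/http"
    )

    // authenticate rejects requests without credentials before anything
    // downstream runs. The header check stands in for real token validation.
    func authenticate(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.Header.Get("Authorization") == "" {
                http.Error(w, "unauthenticated", http.StatusUnauthorized)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    // authorize is where a policy check (e.g. a call to an authz service)
    // would go; here it simply passes the request through.
    func authorize(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        execute := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("executed\n"))
        })
        // Gateway chain: authenticate -> authorize -> execute.
        log.Fatal(http.ListenAndServe(":8080", authenticate(authorize(execute))))
    }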


Istio (part of that demo) or Spire (SPIFFE) do that for you.


If this was just for demonstration, they could have done it with just a couple of microservices.

Why get the full web stack in the picture?


Hi, author of the demo here. We went with a complex app to show a realistic complex scenario. This way we get non-trivial trace graphs and metrics from monitoring tools.


Calm down. It's a showcase of what is possible, not of what the ideal solution is.


They should change the name "Go" to "if err != nil"


I don't understand why people don't like that. If you hate it so much, why not write a plugin for your text editor that collapses it? That would make reading the Go code much faster, and you'd only peek in if you are curious. The proposed `check` keyword will not remove the cognitive load this generates either.

The way I see it, I'd rather have it. Took maybe 30 seconds to write, eliminated a whole class of bugs, and if I don't like it, I will write that plugin to hide it.


What do you mean by "eliminated a whole class of bugs"?


I think he meant that it changed the nature of error handling, and thus changed the way developers think about bugs and the way we encounter them. Errors are returned from function calls, and callers are expected to check the value of that error. The error must be assigned into a variable (or intentionally ignored, which should never pass a code review), and then must be checked. This forces errors to be handled on the spot, or returned up the call stack intentionally, usually with some extra info/annotation. You don't get try/catch scenarios where an error is caught 10 calls above where it occurred.

Imagine writing Java, and wrapping every function call in a try/catch block, and inspecting the exception if one was caught, and then handling it or re-throwing it. That's kind of what we do in Go. There are no "unexpected errors" because it is clear where all errors originate and how they are to be handled.

Go 2 will improve on this `if err != nil` syntax, possibly with a function-scoped error handler and a new language-level handle/check concept. Check it out:

https://dev.to/deanveloper/go-2-draft-error-handling-3loo https://github.com/golang/go/wiki/Go2ErrorHandlingFeedback
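
Concretely, the pattern described above looks something like this minimal sketch (file name and config path are made up):

    package main

    import (
        "fmt"
        "os"
    )

    // readConfig shows the explicit style: the failing call returns an error
    // value, the caller checks it, and either handles it on the spot or
    // annotates it and returns it up the stack.
    func readConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            // %w wraps the underlying error so callers can still inspect it.
            return nil, fmt.Errorf("reading config %q: %w", path, err)
        }
        return data, nil
    }

    func main() {
        if _, err := readConfig("app.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }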


Catching all exceptions is what we used to do in Java - using checked exceptions. It turns out that in many cases the caller cannot do anything sensible with most exceptions, except to let them bubble up to a higher layer. Eventually you reach a point where something can be done - rolling back and retrying a whole transaction, for example.

Forcing every intermediate layer in the call stack to catch and rethrow that exception (check and propagate an error code) sounds like a good thing for explicit error handling, but actually in practice introduces a lot of boilerplate code that just provides more opportunities for mistakes (like errors being silently swallowed, or logged multiple times creating confusing log files).

How many times in a code review do you see `if err != nil { return err }` and ask yourself if what it is doing is actually appropriate? Most people just mentally mask out that kind of boilerplate over time.


It's really easy to let exceptions raise and pass through in languages like Python. In Go you'll typically return and handle errors explicitly. This makes it easier to reason about potential failure modes and behaviors in some cases.

But it is very verbose error handling.


I think it removes bugs caused by the try/catch pattern which can make it very difficult to follow the path of execution. In a sense exceptions have the same issues as "goto".

It's not that they are inherently bad so much as it requiring discipline to stop things from getting out of hand, and that's just something you want to avoid in larger projects. If you need to be disciplined you might as well just check the return error value.


Exceptions seem less unstructured than "goto". With exceptions, at least everything is strictly nested.

And there's a new class of bugs: forgetting to check for an error. In Rust, the compiler won't let you forget. With exceptions, forgetting to handle an exception will result in a somewhat controlled termination process. In Go, execution will just continue in a bad state.


lmaoo

Go2 will fix this btw



