Ten years of “Go: The good, the bad, and the meh” (carlmjohnson.net)
193 points by jo_beef on July 18, 2023 | 298 comments



> Again, it shows how things have changed that I praised Go’s type inference as an advance in the state of the art, but now the Hacker News crowd considers Go’s type inference to be very limited compared to other languages. "You either die a language nobody uses or live long enough to be one people complain about."

This rubs me the wrong way. Even back when Go first came out, anyone who knew anything about programming languages rolled their eyes at pretty much everything about Go's type system, including the inference. Just because Sun couldn't figure out how to do it in the 90s doesn't mean that type inference wasn't mostly solved in the 70s. Well before many people were using it or lived any real length of time, Go's always been a language people have - rightly - complained about.

That said...nothing in the original post says anything along the lines of "Go's type inference [is] an advance in the state of the art", so I might be misunderstanding the author here.


> anyone who knew anything about programming languages rolled their eyes at pretty much everything about Go's type system

And Go has succeeded despite these condescending diatribes on how a language needs to have a Hindley-Milner type system with ADTs and type classes to be useful. Go made me truly realize how insufferable the PLT community is, and why they are so absolutely lost when it comes to creating successful languages.

In under a decade Go swept up entire markets with a simple, down-to-earth language you can learn in a day and keep in your head. It optimized for the masses and the common cases and has absolutely eaten the lunch of these languages with lauded type systems that take several courses in formal logic to even get started with.


Go really is great. There was a big argument around a new project we were starting whether to do Go or Clojure.

What we did to settle the debate was to ask an entry-level dev who was just starting if he was interested in writing a sample application in each language, having known nothing of either. Nothing complicated, but they touched enough points (HTTP endpoints, database interaction).

A full day later he was still trying to get the Clojure app working correctly.

He finished up the Go one in like an hour.

Since then we've brought new devs on with zero Go experience and they are up and writing good code in a day. I can't imagine where we would be if we had gone down the Clojure route.


As a method for choosing a language, doesn't this just inherently skew to the smaller and more simplistic one, which may not necessarily be the best for all tasks or over a longer run where the techniques of the more involved language could be learned?

It's like hiring a new farmhand and saying, "here, dig two 3 foot holes, and we'll see what takes you longer, this shovel or an excavator you have no training or experience with," and then concluding the shovel is superior for all digging tasks from that point on.


> As a method for choosing a language, doesn't this just inherently skew to the smaller and more simplistic one, which may not necessarily be the best for all tasks or over a longer run where the techniques of the more involved language could be learned?

pmuch.

I migrated a team off of an aging enterprise application to the product that was rapidly gaining popularity in industry. I had some expertise and believed in it. I won't name products (and it wouldn't mean anything to HN anyway), but this was a painfully obvious change. We were the pilot project at the company, and went through 6 months of most users struggling. It was about 18 months after adoption that my boss told me he felt like the new product was both better and that the slowdown had been worth it.

Life isn't long enough for too many of those experiments. Not everything takes that long. And the experiment could have failed. But the experience really put lots of decisions into perspective.

There's a saying that nobody uses technologies that their boss didn't learn in college. Factually inaccurate, but quite astute in message.


It skews towards languages that the developer is familiar with.

Kotlin, for example, is a decent bit more complex than java is. It has a lot of new concepts to learn.

Yet, most Java devs can be productive with kotlin in a day or two. Even with the added concepts, it's not that different of a language.

The same is true of C and C++. A C dev could jump into C++ pretty quickly (even if they are just writing C with classes to start).

Your analogy is more like "Dig a hole, here's a shovel and here's a post hole digger". The familiar tool will likely go faster than the unfamiliar tool. And, as it happens, most devs are highly familiar with imperative programming styles, not so much functional programming paradigms.


Potentially. Devs might get up and running with Scratch even faster than Go. But I'm assuming the OP already determined that Go or Clojure would work well for the task.

Go probably has a leg up on Clojure for reputation in production projects, considering Docker, K8s, Terraform, InfluxDB, and Geth are all written in Go. The biggest ones I can find written in Clojure are Puppet and CircleCI.


Go is basically C with training wheels, THAT CAN NEVER BE REMOVED.

It accelerates junior devs to production ready at the expense of everyone else.

It really feels like a language designed by people with utter contempt for those who actually write code.


If those people who 'actually write code' hadn't sprinkled their code with race conditions and buffer overflows, maybe C would still be used widely for business apps, but the reality is that C code, while highly efficient, carries more risks than languages with training wheels. It's not contempt, it's coping with a reality where you can't afford to disqualify half of the programmer population because they're 'too junior' while there is an extreme shortage of workers. There's even a shortage of programmers for languages _with_ training wheels here...


It's a language for white-collar sweatshops ... more so than Java ever was.

Ultimately this is a disservice to those with any talent as it leads to a work environment where they are relegated forever to the lower ranks (unless they give up coding and move into management).


I don't believe it's got anything to do with talent. The wiser programmers I know don't get emotional about 'training wheels' but try to prevent errors by systematically and automatically fixing weak spots. It's not about contempt but about making sure after you make a mistake nobody ever makes that same mistake again... Footguns are a waste of time regardless of how good you are at avoiding them.


I would say it skews towards languages that are easy to be immediately productive in, which is certainly one of Go's biggest strengths.


It feels to me more like giving them the choice of two excavators: one with many unnecessary extra options, and one with simple, easy-to-grok basic controls.


This seems good, except that it would always push for the languages that let you produce now and leave bugs for production.

Python would beat anything by this measure, but static typing is amazing for stopping bugs.


Your "entry level" dev finished a non-trivial Go app in about an hour without knowing anything about Go? Literally how?

Also he learned enough Clojure to mess around in, again with 0 knowledge about it. Again impressive. Or is this industry standard competency in some fields?


A data dip via a single HTTP endpoint is very easy to implement in Go. Like 6-8 lines of code easy, with tons of examples available online.
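Something along these lines (a minimal sketch, with a made-up route and data):

    package main

    import (
        "encoding/json"
        "net/http"
    )

    func main() {
        http.HandleFunc("/items", func(w http.ResponseWriter, r *http.Request) {
            // In a real app this would be the result of a database query.
            items := []string{"foo", "bar"}
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(items)
        })
        http.ListenAndServe(":8080", nil)
    }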


That's such an interesting experiment, but I can't help but think that whichever language's project was tackled first was playing at a serious disadvantage, even if the projects were somewhat different...


I definitely agree that learning curve is important and Go is decent IMO but that still seems like a crazy way to choose a language. You don't hire a dev for one day and then fire them.

It would be like deciding to dig the channel tunnel with a spade because TBMs are quite complicated to set up.


I mean if you have ever written a for loop before you know enough go to get started. I like go fine but this seems like a very short sighted thing to optimize for.


golang and Clojure are not in the same space, so it doesn't make sense to compare them. And I question this litmus test of using a junior dev anyway.


> And I question this litmus test of using a junior dev anyway.

As you churn employees, anyone new will inevitably be "junior"[1] when it comes to your code base - Go is (mostly)[2] ridiculously easy to read because you can't (easily) do things[3] that make reading - and more importantly, maintaining - it hard.

[1] Sometimes literally a junior too.

[2] There's a few things I think might trip up someone brand new to Go.

[3] Looking at you, Perl and Ruby.


Junior as in non-expert in the field. Using simplistic languages means optimizing for quickly pumping out code (at the expense of maintainability in the case of golang, as I've personally experienced at an employer that has one of the largest golang setups on the planet). Not to mention that golang comes with its own set of gotchas and idiosyncrasies that need to be understood, especially in more involved programs.

There's a good middle ground that can and has been achieved by other languages.


What does this actually mean? They are both programming languages made for solving general problems, what makes the comparison inappropriate? I've used both and pretty much all of their cousins, so I'm confused why you would think they aren't comparable.

The "test" itself I agree is a bit shortsighted. I think Go might be a fine choice over time anyway, but the experiment itself doesn't necessarily highlight the things I'd want to highlight for medium- to long-term viability.


> And Go has succeeded despite these condescending diatribes on how a language needs to have a Hindley-Milner type system with ADTs and type classes to be useful

Fine. But it's beside your parent comment's point. The article claims that everyone thought Go's type system was an advance on the state of the art. This isn't even close to true.


Building something that can be successfully adopted and operationalized at mass scale is an element of the state of the art.


Amen. I'll give Microsoft crap any day, but the most popular language on Earth is now routinely developed with a sane type checker that eliminates entire error categories.


golang mainly ended up competing with and replacing the likes of Python and Ruby, whereas it was intended to compete with C and C++, where it didn't really change anything.

It makes sense in retrospect of course: Python and Ruby are slow, dynamically typed languages, and any improvement in performance and typing is welcome. It doesn't mean that golang is somehow inherently better than other offerings. I still maintain that Java and C# are superior languages and ecosystems.


Good point. I know I'm just being bitter, but I still can't stand Python after dealing with the 2->3 transition. I code mostly Java and use Go where I might have reached for Python before. The Go built in libs make it so easy to build small CLI apps where I may have used scripting before. And the cross compile situation makes it easy to ship wherever. I'm not a PLT person, and Go is easy for me to simply solve problems with. Probably also why I'm fine using modern Java.


Why C#? It definitely has its own niche, and is good for building userspace Windows apps and games, however beyond Windows I don't think it has either an established presence or an ecosystem. If anything, C# attempts to be more of a Java replacement than to address Go's niche.

Performance-wise, C# and Go are head-to-head: https://programming-language-benchmarks.vercel.app/go-vs-csh... I also would say Go has been much more adopted for cross-platform use than C# ever was. Regarding language features, Go's devotion to staying intentionally bare-bones is worth a lot. C# may be more elegant in a lot of ways, however it seems to be in the process of being choked by Microsoft's continuous feature-creep.


C# has way, way more market share than Go as well as a bigger ecosystem and it's not even close. I would hardly call a Java replacement a "niche" since that's everywhere.

There are, of course, other benchmarks that rank C# above Go, but benchmarks are flawed. I imagine people are comparing C# to Go because it's got a pretty solid type system.


I would beg to differ.

On Github[0], Go currently sits at #3 for pull request volume (C# is at 10), #3 for stars (C# is at 8), #6 for pushes/commits (C# is at 10) and #6 for issues opened (C# is at 9). By each of those metrics, Go has a significantly more vibrant ecosystem than C#.

[0]: https://madnight.github.io/githut/#/pull_requests/2023/2


If you like arbitrary metrics, here is another:

https://survey.stackoverflow.co/2023/#most-popular-technolog...


I would think that data is only about the publicly visible part of GitHub, and guess C# has relatively more activity in the dark part of GitHub.

Now, whether that would move C# over golang, I wouldn’t dare guess.


https://redmonk.com/sogrady/2023/05/16/language-rankings-1-2...

This is one of the better rankings. But I would add that these will significantly underrepresent enterprise-type projects, where C# is often used. Some say that job listings give a more accurate picture, and while I didn’t look it up, I do believe that C# has more positions.


> however beyond Windows I don't think it has either an established presence or an ecosystem

I'd respectfully disagree with this statement based on my personal experience. Ever since .NET Core was introduced, I've noticed a significant shift in the hosting of ASP.NET apps. Many developers, including myself, now prefer hosting applications in containers on Linux systems rather than relying solely on Windows. This change reflects a broader trend among C# developers.

While I understand that my perspective might not encompass the entire developer community, I strongly believe that the adoption of Linux-based hosting for ASP.NET applications has grown considerably. It demonstrates the expanding reach and influence of .NET Core beyond the Windows ecosystem, proving its establishment in other platforms.

Please note that this is solely my viewpoint based on my experiences and interactions with other developers. Other opinions may vary, but I remain confident in the growing prominence of .NET Core outside of the traditional Windows environment.


> and why they are so absolutely lost when it comes to creating successful languages

They aren't lost—they're just more interested in actually good ideas than in popularity. Popular languages must appeal to all kinds of programmers with varying backgrounds, so they are heavily constrained. Your argument is basically that mathematicians don't know what they're doing because their most advanced theories aren't used by mechanical engineers.


> And Go has succeeded despite these condescending diatribes on how a language needs to have a Hindley-Milner type system with ADTs and type classes to be useful.

And nothing says that Go wouldn't have been more successful had they added those features. In the final analysis, the relationship between the success of a language and any intrinsic qualities is very hard to qualify. But IMO, success is not a good measure of whether or not the criticisms of Go were/are valid.

> Go made me truly realize how insufferable the PLT community is

Agreed that PLT folks can be a passionate bunch, but I am not sure they are any worse than any other online community.

> and why they are so absolutely lost when it comes to creating successful languages.

Depends on who you include in the PLT group:

- C# and TypeScript were designed by Anders Hejlsberg, arguably the most successful language designer
- Scala is also pretty successful and really tied to the PLT community
- Kotlin is by JetBrains, and Dart started with Gilad Bracha

Not to mention the wide range of features seen in most recent languages (async/await, reactive programming from MSFT Research), etc.

The fight between pragmatic, simple languages and complex, expressive languages is not happening outside of PLT; we have proponents of both ways of thinking inside the community. Not everyone in PLT is pushing for overly complex theoretical approaches.

But more importantly, let's not forget the thousands of engineers quietly implementing the compilers, libraries, etc. that make Go, or any other language, possible.

Back to Go: my personal gripe with Go wasn't the decisions they made, but the rationale given for those decisions.

Take the most famous example of not including generics. Designing a good generic type system is a very complicated task, and if the team had come out and said they didn't want generics because they didn't have the bandwidth or the know-how to do so, I wouldn't have cared. But the rationale given, describing generics as borderline useless, or somehow too hard for the average programmer to grasp, not only flies against basically 25 years of programming language history, but was just plain ridiculous.


> In the final analysis, the relationship between the success of a language and any intrinsic qualities is very hard to qualify.

Not at all. It's the same as with literally any other product. If it's good, people will use it. Outliers are rare.

Now, you may disagree with what most people consider good, but that's another discussion ;)



I guess the marketing industry is a billion-dollar money sink then.

"Just get-good at making products, duh"


The thing about marketing is it can at best get people to check your product out. It can't make them keep using it.

That's why I didn't mention it, it's boring table-stakes stuff. Obviously if you don't tell anyone about your product, no one will use it. That's not the hard part.

The hard part is of course making a compelling enough product that people actively want to use it. The really good products even advertise themselves to a certain extent.


What do you mean by "Go has succeeded"? It has found its niche, yes, but so have those HM languages (maybe the Go niche is a bit bigger), and a bunch of other languages that people also like to hate.

Meanwhile, most code is still written in languages like Java, C and PHP.

That's no value judgement or anything, I don't think Java is an amazing language (neither do I think it's a terrible one), but it's not like Go has revolutionised anything. It's just become a new option among many.


The fact that you mention Go, an 11-year-old language, alongside a 27-year-old, a 51-year-old and a 28-year-old language proves that it has succeeded.


Then so have Ruby, Haskell, Erlang, and a bunch of other languages that espouse virtues completely contrary to Go's. So I'm not sure what your point is?

Also, you didn't even get my point, because I explicitly contrasted Go, as a niche language, with the big languages that most things are written in.


I'd phrase it as: Ruby, Python, and JavaScript created a generation of programmers that didn't know how to use a type system. When they needed something more lightweight and performant, they migrated to the language with the most primitive type system.

TypeScript and Swift have shown a better type system can appeal to the masses.


> you can learn in a day and keep in your head.

I'm not sure that this selling point is really that valuable. It sounds more valuable for those who want butts in seats than for the long-term satisfaction and survivability of your code base.


Being able to read your code you wrote 2 years ago does wonders for survivability, which is a gross simplification of course.

It's a great language for some problems, horrible for others. The problem is people tend to talk as though everyone works in their domain.


> Being able to read your code you wrote 2 years ago

It won't stop you from asking wtf did someone write this and then doing a git blame and seeing your name.


If “each individual line is easy” were enough, we would be reading and writing assembly. I don't accept the premise that Go is any better in this regard; if anything, it is worse, cluttering up the happy path with error “handling” noise and operations that are much harder than they should be.


Yeah, I can keep Brainfuck in my head easily. Doesn't mean I'm going to use it as anything other than a toy language.


Go has succeeded because it's not horrible and is supported and used by one of the biggest companies out there. That means you end up with a long list of decent libraries, which to me feels like the main factor of success for a language.


Why has Dart not taken off?


Because of politics. Google wanted to do a hostile takeover of JavaScript, it backfired, Chrome dropped the Dartium VM, and they only got rescued thanks to AdWords.

Key language designers left after this.

However, it seems to have enough management support that, since it found a home in Flutter, I would assert it is seeing more adoption than Xamarin, React Native or Cordova.


Dart is advertised as a client-optimized language. Optimized for UI is what the home page says. Say Goodbye to convincing people to use it on the server.


Part of that success may have to do with "Wow, Google invented this...let's use it."

Kubernetes adoption seems to be going up but does it really add value for most that adopt it? I would say, no.


Now you've got me curious. I like the type inference in Go. Can you tell me which languages do it better?


In ML-lineage languages (including Haskell) you almost never need any type annotations whatsoever, at least not unless you’re poking around at the fringes of those languages (GADTs, various GHC extensions).

Type annotations for top-level definitions are often encouraged for readability and better error messages, but the compiler can almost always figure everything out itself.


From my experience, such type inference systems are awful in practice. Rust designers tried to do something like that initially but quickly realized understandability suffered greatly. You really do want to specify types manually, at least at boundaries, e.g. in function definitions.

> Type annotations for top-level definitions are often encouraged for readability and better error messages, but the compiler can almost always figure everything out itself.

See? That's not a good thing at all. If the compiler's capability makes the code less understandable, then it's undesirable. Doesn't matter how fancy and cool, or state-of-the-art it may be.

You probably don't want to strap a jet engine to a car, no matter how cool you may think it would be.


Right, global inference turns out to be too much inference. Function boundaries are a convenient place to draw a line.
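In Go terms, a trivial sketch of what that line looks like: signatures stay explicit while locals are inferred.

    package main

    import "fmt"

    // Parameter and return types must be spelled out at the boundary...
    func sum(xs []int) int {
        total := 0 // ...while locals are inferred with :=
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        fmt.Println(sum([]int{1, 2, 3}))
    }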

As usual, C++ chose to do something much weirder and more dangerous. Instead of inference, C++ can deduce types; in some cases there's no way to write a type's name, so you have to deduce types. Types can be deduced at the edges of functions, but unlike inference it's not an error to have ambiguity; in some cases deduction may choose whichever possibility it liked better, even if that's astonishing to you.

Because Rust's functions must tell you their types explicitly, and because some types can't be named specifically, the result is that in Rust you have to write these functions polymorphically, even if in practice there's only one possibility. In C++ you can write the non-polymorphic function, despite not being able to say the name of the type. How do you document that? It's OK, C++ doesn't require you to provide even halfway usable documentation.


> You really do want to specify types manually, at least at boundaries, e.g. in function definitions

I think PureScript has the best compromise here, top-level type signatures are not enforced but if you don't include one you'll get a warning with the inferred signature. On the one hand, this is very helpful because sometimes I have a grasp for what expression I want to use, but am not sure about its type, so I simply comment out the desired signature and let the compiler tell me which direction I'm moving in, i.e. what's the type of what I just wrote. On the other hand, it being a warning also basically ensures nobody writes their top-level functions without the type signatures. Really the best of the two worlds. I think simply disallowing top-level functions without type signatures would hurt my workflow a lot.


Those are just artificial limits. You yourself mentioned Rust, which does the exact same type inference, with the usability limit of requiring typed signatures. There is your example of a language with an eons-better type system and type inference; Go's is really not a high mark.


Interesting. I feel much better having some kind of explicit indicator, especially with numeric types.


Nah mate, sorry. Go is a language already past its prime, which is laughable given how young it is, and how it was modeled on better, long-lived languages. Take a look at Google trends if you disbelieve me. There was a slight bump in interest when generics finally got pushed out, but that's died off, and Go is entering the same decline as other has-beens like Ruby. Good riddance to a language designed for people that aren't good at programming (you'll have to look up the quote yourself, I'm too lazy).


Most SWEs aren't particularly good at programming. It's not a shame to embrace that.


As proven by Oberon-2 and Limbo ancestors, it helps to have a good godfather.


> PLT community

What is that?


“Programming Language Theory”


Go has "swept up entire markets" because they have an unlimited marketing budget. It has nothing to do with the merits of the language itself.


Can you point to something Go has spent this hypothetical marketing budget on?


Docker, Kubernetes and key CNCF projects.


What was the marketing investment here..?


Google is using Go to implement Kubernetes.

Kubernetes is cool, I want to play with Kubernetes.

Maybe I should learn Go to play.


Ok, but calling this a marketing spend that has nothing to do with the language seems ridiculous.


> Even back when Go first came out, anyone who knew anything about programming languages rolled their eyes at pretty much everything about Go's type system, including the inference.

The Go team was populated by people who had created one of the most influential languages of all time, C. They created the new language based on theories about how to encourage good engineering practice. Theories that they were able to test through internal access to a very large and actively maintained codebase at Google. A codebase into which they had inserted several other languages that they had devised for various purposes.

I'm pretty sure that nobody in the external programming languages community had the same depth of experience in the practical use of programming languages that the Go team had. And so it was bizarre to them to see the Go team deliberately leave features out because of concerns about how those features are used in practice.

Few of the critics who "knew about programming languages" have created any language that ever made it into the top 10 programming languages in the world. It is therefore funny to me that they were complaining about exactly the kinds of choices that lead to Go becoming popular.

I'll generally take the design choices of a team with 2 popular languages under their belt over academics in the ivory tower.


C is also a very low-level language, close to the machine.

Just because you can build a microchip doesn't mean you can build a spaceship and vice versa.

So I'm not surprised that they, e.g., left out generics and said they did so because they didn't know a good or right way to add them to the language. At least they were honest, which I value a lot.

As to the success of Go that you mention. Well, let's be honest: it targets junior developers, or at least that was originally a major goal. It is backed by Google and is marketed.

There are just currently way more junior developers due to the demand and the development of the field.

However, you can already see that a lot of junior developers that started with Go are not so junior anymore, and now that they got more experienced, they demand language features that make them more productive - like generics. And they will be added, and in the end Go will be a language that is not simple anymore; it will be the new Python.

Go is a very practical and pragmatic language, no doubt. It's one of its strengths. But it is not by any means an advanced high-level language in any sense that I would know of.


> Well, let's be honest: it targets junior developers, or at least that was originally a major goal. It is backed by Google and is marketed.

It actually targeted Google developers. It was a 20% project that was not backed by the company officially. Official backing only came *AFTER* others were convinced internally that it was worthy. Their marketing budget started at a grand total of $0.

So it started with NONE of the factors that you cite as its advantages. Though, to be fair, its core team includes people who a lot of programmers respect.

> ... they demand language features that make them more productive - like generics. And they will be added...

You mean they already were added, about a year ago. See https://go.dev/blog/go1.18.

> ... it will be the new Python.

Interestingly, internally at Google it replaced Python. As a reasonably fast to develop in language, with substantially lower maintenance costs. And let me assure you, Google has done a lot of work on looking at what both development and maintenance cost them internally.

> But it is not by any means an advanced high-level language in any sense that I would know of.

Can you provide a reason why we should care about a language being "an advanced high-level language"?


A programming language's "start" usually covers a longer period of time, at least by my definition. Not just the first year or so.

Also, that Go targeted junior developers were the author's words, not mine.

Last:

> Can you provide a reason why we should care about a language being "an advanced high-level language"?

Because it makes a certain group of developers more productive. Whether you care about that or not is your decision. I'm not saying you should.


> Because it makes a certain group of developers more productive. Whether you care about that or not is your decision. I'm not saying you should.

I would qualify that further.

It makes a certain group of developers more productive for certain kinds of problems. In particular they are more productive for rapidly creating relatively small prototypes. Particularly if performance is not a significant concern.

By the data that I've encountered (private and proprietary), using the features that make for high-level languages also makes development in the large harder. And increases the cost of maintenance. You also get the issue that different parts of the code become likely to follow different styles. And the points where they meet are likely to become problematic.

Whether you care about that or not is your decision. I'm not saying you should. But these were the issues that the golang designers were attempting to address.


I think you are quite mistaken. It is pretty commonly agreed that powerful type systems are not great for prototypes; rather the opposite, they allow easier maintenance of bigger codebases.


I have seen much in the way of opining that this should be true.

I have seen zero in the way of real-world maintainability data saying that it is true. Mind you, there is very little good real-world public maintainability data. And therefore I will say that, when I was at Google, I saw some of their private data, and it wasn't true there.

Specifically, most of the win from a type system is simply having one.

Mind you, the data that I saw includes some of the same data that informed the Go team's decisions. And therefore it is no surprise that it fits their ideas. Also that data was rather lacking on, say, real world examples of Hindley-Milner type systems. So maybe those actually work well in practice.

I also have reason to suspect that Rust's type system actually is a big win. Though I haven't seen actual data on it. But since Rust was in early development back when Go was started, it isn't a shock to me that Go didn't incorporate a lot of lessons from Rust.

So I am inclined to think that I'm basically right. However I would also classify my knowledge as only moderately well-informed. (But in a subject where I think that most people who chatter about it are essentially uninformed.)


> Also that data was rather lacking on, say, real world examples of Hindley-Milner type systems. So maybe those actually work well in practice.

That is of course a bit disappointing, because anything below HM is _certainly_ not what I would call advanced. But Google must be using some languages that use HM or a variant, no? How come there is no data about those?


When I was there, the three most widely used languages were C++, Java and Python. So that type of language had the most data. And so the analysis that I read focused on them.

Within those languages, projects that did a good job of KISS fared better than those which didn't. This led to a discouragement from using advanced features. You'll see that bias in style guides like https://google.github.io/styleguide/cppguide.html.


Well, none of those 3 languages are what I would consider "an advanced high-level language". C++ is not high-level by my definition, and Python and Java are certainly not advanced - well, maybe Java has an advanced type system compared to Go, but not compared to the state of the art.

So I'm not surprised by the results, if the majority of the data came from those languages. That doesn't invalidate anything, it just changes the scope and can't really be used to back up your argument imho.


Not advanced or high-level enough for Kubernetes, eh?

Most senior programmers I know love Golang. It is easy to teach, easy to read, easy to understand, and easy to be productive in. And it is difficult to write unclear, complicated, or extremely bad code. A trade-off is that it is more verbose than other languages (if err != nil is a meme for a reason), but I think most people wind up preferring the clarity and correctness over hiding error handling.
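The pattern in question, for the uninitiated (a made-up example; readConfig and the file name are invented):

    package main

    import (
        "fmt"
        "os"
    )

    // readConfig shows the explicit style: every fallible call is followed
    // by an if err != nil check, right where the error happens.
    func readConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("reading config: %w", err)
        }
        return data, nil
    }

    func main() {
        if _, err := readConfig("config.json"); err != nil {
            fmt.Println("error:", err)
        }
    }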


> Not advanced or high-level enough for Kubernetes, eh?

Let me ask you with two counter questions.

1.: which one do you think is more complex: kubernetes or the linux kernel?

2.: which language do you think is more "advanced" or "high-level" by our definition: C or Go?

I think you can see what I'm getting at.

Otherwise, I don't disagree with the rest. Go is the new cool python.


Reframing a bit based on my own friends: the senior people I know like it because it's really easy to manage subordinates who are using it. And based on a few experiences I've had, it doesn't lend itself to programmers wasting time on half-baked mystery abstractions and broken internal APIs.


> Just because you can build a microchip doesn't mean you can build a spaceship and vice versa.

A recent incident of a submersible built by someone with aerospace experience comes to mind.


Ha! I was about to write submarine but then I restrained myself from doing so. :X


> Few of the critics who "knew about programming languages" have created any language that ever made it into the top 10 programming languages in the world.

I often encounter people in programming who I think lack humility. It's fine to question dogmas and the "big heads", but one has to look at the caliber of the people you're up against and maybe give them the benefit of the doubt, if only a little bit.

I certainly wouldn't read a few PLT books and lambda-the-ultimate.org and then point to Ken Thompson and Rob Pike and say "your language has no <Type System thing>, you don't know what you're doing". These are not amateurs.

It's also doubtful they liked everything they put in, or disliked something they left out. Even Rust's creator didn't like some of the direction his language took.

If we were to ask other famous language designers they probably have a fonder feeling towards Go than people in this thread, knowing all of the hard design decisions one has to make to create an impactful language with an identity.


Right? So many complaints about "well but it's hard to make N-leaf tries in golang so we need generics!" It's like people believe the success of a programming language is correlated to its ability to express obscure computer science concepts unrelated to most peoples' jobs.

Golang works extremely well in practice, which is what I really care about.


And here I thought it was because:

1. repeating the same function definition and type declarations over and over, varying only one type parameter and the name, was a great way to introduce bugs if one of the copies was missed in an update.

2. working with reflection is slow and complex and easy to get wrong

Both of which are solved with something simple like generics.
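For instance (a toy sketch with a hand-rolled constraint), one generic Min instead of a MinInt, MinFloat64 and MinString copy each:

    package main

    import "fmt"

    // Ordered is a hand-rolled constraint just for this sketch.
    type Ordered interface {
        ~int | ~int64 | ~float64 | ~string
    }

    // Min is written once instead of once per type.
    func Min[T Ordered](a, b T) T {
        if a < b {
            return a
        }
        return b
    }

    func main() {
        fmt.Println(Min(3, 5), Min("a", "b"))
    }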


If a goddamn list/vector requires built-ins from your language, you have already lost.


Golang is responsible for some of the biggest open-source projects out there and has a huge share of developers working in it. Lost… what exactly?


> The Go team was populated by people who had created one of the most influential languages of all time, C

Which is absolutely terrible from a programming language point of view; there were already better languages at its inception. Ken Thompson has a huge legacy in the CS world, but he is frankly not a good language designer at all.

Also, appeal to authority. If Go were so good, it should be able to be praised on its own merits.


Have you read Worse Is Better?

https://www.dreamsongs.com/WorseIsBetter.html

Go and C are good in a Worse Is Better way. They are not intended for writing the perfect and ideally engineered solution. They are designed to encourage pragmatic solutions to pragmatic problems, and therefore to become popular. And the features that make them suited to becoming popular are, in fact, tied to not trying to produce absolutely ideal solutions to the hard problems.

I am therefore praising them for being good at what they were designed to be good at. And pointing to their popularity is not an appeal to authority - it is a demonstration that they succeeded at their goal.


Those wouldn't have been as successful if AT&T had been allowed to sell UNIX at the same prices as VAX/VMS, instead of having source tapes available for a symbolic price.

It isn't as if C would have been a commercial success, had it come with a price tag.

Plan 9's and Inferno's commercial successes, and Limbo's adoption, are a clear example of how it would have gone instead.


One of the designers said that they had to dumb it down to the lowest common denominator Google engineer.


> Even back when Go first came out, anyone who knew anything about programming languages rolled their eyes at pretty much everything about Go's type system, including the inference.

The fact that it was even a point of concern shows how misguided the PL community is. Advancing the state of the art is not the goal, producing a tight, clean design is.

> Just because Sun couldn't figure out how to do it in the 90s doesn't mean that type inference wasn't mostly solved in the 70s.

Theory's only as useful as its implementation. If there's no sensible implementation of type inference out there, but then a new language comes out with it, is that not a significant improvement to the status quo, even if the theory behind it may be decades old?

Code autoformatters were not exactly dark PL magic when Go came out either. But somehow the impact gofmt has had on the PL space has been immense. Funny how that works.


Go comes out, some Go cheerleader describes its decidedly not-state-of-the-art type inference engine as "state-of-the-art", I point out that it was hardly state-of-the-art, the Go cheerleaders come along and say "why do you even care if it's state-of-the-art?". Am I getting this right?


> The fact that it was even a point of concern shows how misguided the PL community is. Advancing the state of the art is not the goal, producing a tight, clean design is.

Why should there be only one goal of "the PL community" (whatever that is)? Maybe you need people who advance the state of the art and others who make things ready for production.

> Theory's only as useful as its implementation. If there's no sensible implementation of type inference out there, but then a new language comes out with it, is that not a significant improvement to the status quo, even if the theory behind it may be decades old?

That would be a fine argument if Go had been the first major language with type inference, but even if you're willing to ignore ML languages (because you think they're too niche and/or weird), Scala is almost 10 years older than Go.


> The fact that it was even a point of concern shows how misguided the PL community is. Advancing the state of the art is not the goal, producing a tight, clean design is.

I don't see how having better type inference interferes with the "tight clean design"? If anything, I'd argue that not having it is a hindrance. Why do I need `x := make([]foo, 0)` or `var x []foo` when I could just write `x := []` and let the type system infer that it's `[]foo` from the fact that I keep `append`ing `foo`s to it?


> Advancing the state of the art is not the goal, producing a tight, clean design is.

Sure, but they failed to deliver that.


I am immensely thankful for Go.

The simplicity of the language and the stdlib is shockingly well thought out -- things like io.Reader are so obvious and yet not part of many other languages. The language has made me a better programmer. And the cross-compilation story is chef's kiss.
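A throwaway example of what I mean by io.Reader being obvious:

    package main

    import (
        "io"
        "os"
        "strings"
    )

    func main() {
        // Anything that produces bytes is an io.Reader: strings, files,
        // network connections, gzip streams wrapping other readers...
        var r io.Reader = strings.NewReader("hello, io.Reader\n")

        // ...and anything that consumes bytes can just accept an io.Reader.
        if _, err := io.Copy(os.Stdout, r); err != nil {
            panic(err)
        }
    }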

Working on a cross-platform project, in Go I write code and it just builds. In Java, I fight with Gradle. In Swift, I fight the type system and the way everything's constantly deprecated.

It's not all perfect. I wish the generated binaries were smaller. I wish the protobuf library wasn't awful. And better Cgo/FFI would be nice.

But overall, I've never been so productive.


It's the simplicity of the tooling for me. There's no config file besides a list of dependencies. I push my code to github and it can be a dependency because a dependency is just a URL. My code is automatically formatted and there's no config for that.
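Roughly, the whole project configuration is a go.mod like this (module path and dependency are made up):

    module example.com/myapp

    go 1.20

    require github.com/example/somelib v1.2.3 // a made-up dependency, pulled straight from its URL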


> Working on a cross-platform project where in Go, I write code and it just builds

I've worked on large golang code bases that had to build on bazel, and I had the same experience fighting it. It has nothing to do with the language.

> In Java, I fight with Gradle.

So gradle's issue, not Java's. See above.


Pretty much every golang project I've ever touched builds with `go build`. Most that have makefiles just call `go build`, and the makefile is more for building docker containers, or doing infra-related things. If a project is in go, 98% of the time `git clone XXX && cd XXX && go build` will work.

That is absolutely not the case with C, C++, Java, python.

> So gradle's issue, not Java's. See above.

The issue exists with maven too, but _not_ with go build, which is OP's point.


> If a project is in go, 98% of the time `git clone XXX && cd XXX && go build` will work.

I can’t speak for the other languages but I reckon this is the case for Java projects. Git clone, cd XXX, mvn clean package.

Potentially messing about with whether you have Java 8, 11 or 17 installed which saddens me to mention. But if you are into Java, you have them all installed already, and just need to make sure the right one is activated when you do the above steps.


> I've worked on large golang code bases that had to build on bazel, and I had the same experience fighting it. It has nothing to do with the language.

So you deliberately added complexity instead of using Go's own build system and it didn't work out well and that's somehow a proof that Go's cross-compilation story isn't as great as people say?


It wasn't my call on what they did, and they had their reasons (running one of the largest golang monorepos on the planet). My point was to show that this is a build tool issue, not a language issue.


The way you use the language has everything to do with the language. It is the language.

A language is a product, and the programmers writing in it are its users. The same way any other product works.

Would you be okay with buying a Tesla without the capability to charge at supercharger stations? You can only charge at home. The supercharger network is not literally part of the car after all!


> I wish the protobuf library wasn't awful

I’m curious what you don’t like about it? I haven’t used Go in anger, but I love protobufs, and it’s shocking that Go, of all languages, would have a substandard implementation.


My personal gripes are around the generated code not being idiomatic Go. It feels like code written by someone who doesn't really enjoy Go.

In particular, oneofs are *so* awful to work with that I'm often tempted to use an Any instead. For example:

    message Image {
      oneof kind {
        Bitmap bitmap = 1;
        Vector vector = 2;
      }
    }
Should, in my opinion, lead to code like this:

    img := &Image{Kind: &Bitmap{}}
But the reality looks more like this:

    img := &Image{Kind: &Image_Bitmap{Bitmap: &Bitmap{}}}

My other main gripe is that the generated structs embed a mutex, and so can't be copied, compared [ergonomically], or passed by value.

Sadly, both of these issues are explained away on the issue tracker.

(My use-case is primarily to share data structures across languages, so perhaps it's not totally aligned with what protobufs is trying to do. I just wish there was a better alternative.)


Been a few years since I've used the Go protobuf library, but it left a sour taste. First, memory allocations are awful and slow. At the time there was no way to reuse slices when serializing and deserializing. The library would often panic instead of returning an error (this is basically why we switched to an alternative library).


For me the biggest unsung hero is the _stability_

There's something so liberating about finding code samples or documentation from 5-10 years ago and it is still the correct way to solve a problem. The lack of churn in the ecosystem means you can learn the language and then focus on actually building stuff rather than focusing on the rat race of learning the latest hotness and restructuring your app constantly to account for dependencies that break things.


Go didn't try to be overly clever and abstract. That rubs quite a few people the wrong way, but it also means you are less likely to have to work with people who try to be clever.

My first reaction when seeing Go is that it smelled of "old fart". It looked straightforward and unexciting. I'm an old fart. I like straightforward and unexciting. It tends to lead to code that I can still read 6 months from now.

I want to make stuff and not endure people boring me with this week's clever language hack they came up with that expresses in one unreadable line what could have been expressed clearly in 3 lines.


I still don't know what people mean by "obvious" code.

Yes, there are people who create a mess with abstraction. This happens in every language, in Java people create FactoryFactories, in Haskell people play type tetris, in Ruby, people abuse metaprogramming and in Go, I assume some people go overboard with code-gen.

But that said, I suspect many people, when they say, "obvious code", they mean "I can easily understand what every line does". Which is a fine goal, but how does that help me with a project that has 100ks of lines of code? I can't read every single line, and even if I could, I can't keep them all in my head at once. And all the while, every one of these lines could be mutating some shared state or express some logic that I don't understand the reason for.

We need ways of structuring large code bases. There are a ton of ways for doing so (including just writing really good and thorough documentation), but just writing "obvious" code doesn't cut it. Large, complex projects are not "obvious" by their very nature.


> I still don't know what people mean by "obvious" code.

This is one of those subjective "you know it when you see it" qualities that are going to be a function of the code itself and how well it conforms to practices you are used to. I also think that we have a tendency to not notice as much when we read code and understand what it does without having to think about it too much.

And you can get lost in Go too. You don't need a lot of language features to help you complicate things.

For instance, I recently looked at some code that I had originally written, then someone else had "improved it". In my original version there was some minor duplication across half a dozen files - a deliberate tradeoff that enabled someone to read the code and understand what it did by looking in _one_ place. (This was code that runs only at startup and is executed once. It just needs to be clear and not clever).

The "improvement" involved defining a handful of new types which were then located in 3-4 different files across two new packages placed in a seemingly unrelated part of the source tree. A further layer of complexity was introduced through the use of init() functions to initialize things, which adds to the burden of figuring out which order things are going to happen in since init() functions sometimes have unfortunate pitfalls.

Yes, the code was now theoretically easier to maintain since it didn't repeat itself, but in practice: not really. Rather than look in one place to figure out what happens, you now had to visit a minimum of 3 files and 5 files in one case.

And remember those init() functions? Turns out that the new version was sensitive to which order they would get executed in. Which led to a hard-to-find bug. Now you could say that this is unrelated to complicating things by decomposing a lot of stuff into more types, but this isn't unusual when people get a bit obsessive about being clever.

> But that said, I suspect many people, when they say, "obvious code", they mean "I can easily understand what every line does". Which is a fine goal, but how does that help me with a project that has 100ks of lines of code?

These are related but different problems. At the micro-scale (what you can see in a screenful in your editor of choice), consistency in how you express yourself is key. In essence the opposite of the "there is more than one way to do it" mantra in Perl. This mantra is bad advice. You should ideally pick one way to express something and stick to it - unless there are compelling reasons to make an exception. (Don't be too afraid of making exceptions. There is a fine line between consistency and obsessiveness).

If you stick to this your brain can make better use of its pattern-matching machinery. You see a "shape" and you kind of know what is going on without actually reading every line of the code.

Also, how you name things is important. When I was writing a lot of Java you could ask me the name of classes, methods, variables, and I'd get it right 90% of the time without looking. Not because I'd remember, but because I had strict and consistent naming practices so I knew how I'd name something.

(I haven't succeeded in being as consistent when I write Go. Perhaps I can guess the name correctly 70% of the time. I'm not sure why).

Now let's look at "how does that help me with a project that has 100ks of lines of code".

At larger scales it is really about how you structure things so you can reason about large chunks of your code. Think in terms of layers and the APIs between them when you structure your code. Divide your code into layers and different functional domains. Describe them through clear APIs with doc comments that clearly document semantics, preconditions, postconditions etc. The trick is to try to identify things that can be structured as libraries or common abstractions and then pretend that those bits should be re-usable (without going overboard).

Say for instance you are implementing a server that speaks some protocol. You want to layer transport, protocol, and state management with clear APIs between each layer. Your business logic should deal with the implementation through an API that is as clear as possible. Put effort into refining these APIs. A good opportunity is when you are writing tests. You can often identify bad API design when you write tests. If something is awkward to test, it'll be awkward to use.

Also, like you would do when you write a library, give careful thought to public vs private types and functions. Hide as much as possible to avoid layer violations and to present a tighter and narrower API to the world. (Remember APIs are promises you make. You want to make as few promises as possible).

This also has the benefit that it gets easier to extend. APIs between layers are opportunities for composability. Need to add support for new transports? If you have structured things properly you already have usable interface types and unit tests that can operate on those. Need different state handling? Perhaps you can do it in the form of a decorator, or you can write an entirely new implementation.

(Look at how a lot of Go code does this. For instance how a lot of libraries, including the HTTP library in the standard library, allows you to inject your own transport. This enables you to do things the original authors probably didn't think of. I have some really cool examples of this if anyone is interested)
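A minimal sketch of that idea (the transport type and header name here are made up):

    package main

    import "net/http"

    // headerTransport injects behaviour via the standard library's
    // http.RoundTripper interface, without the http package caring.
    type headerTransport struct {
        base http.RoundTripper
    }

    func (t headerTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        req = req.Clone(req.Context())
        req.Header.Set("X-Example-Trace", "demo") // invented header, purely for illustration
        return t.base.RoundTrip(req)
    }

    func main() {
        client := &http.Client{Transport: headerTransport{base: http.DefaultTransport}}
        _, _ = client.Get("https://example.com/")
    }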

Over time you will probably see a lot of parts of your software that can be structured in similar ways. This allows you to develop habits for how you structure your ideas. The real benefit comes when you can do this at project or team scale. When people have a set of shared practices for how you chop systems into functional domains, layer things and design the APIs that present the functionality to the system.

So in summary: you deal with 100kLOC projects by having an understandable high level structure that makes it easy to navigate and understand how the parts fit together. When you navigate to a specific piece of code, your friends are consistency (express same thing the same way) and well documented (interface) and model types.

Years ago I came across a self-published book that taught me a lot about how important APIs are when building applications. The book was about how to write a web server (in Java). It started with Apache Tomcat (I think) and focused on the interface types.

Using the existing webserver as scaffolding it took the reader through the exercise of writing their own webserver from scratch, re-using the internal structure of an existing webserver. One part at a time.

The result was a webserver that shared none of the code with the original webserver, but had the same internal structure (same interface types). This also meant your new webserver could make use of bits and pieces from Tomcat if you wanted to. I found this approach to teaching brilliant because it taught several things at the same time: how Tomcat works, how to write your own webserver, and finally, the power of layering and having proper APIs between the different parts of your (large) applications.

I still think of the model types and the internal APIs of a given piece of software as the bone structure or a blueprint. You should be able to reimplement most of a well designed system by starting with the "bones" and then putting meat on them.

> I can't read every single line, and even if I could, I can't keep them all in my head at once.

Keeping 100kLOC in your head isn't useful. Nor is it possible for all but perhaps a handful of people on the planet. But if you are consistent and structured, you will know where you'd put a given piece of code and probably get there (open the right file) on the first attempt 70-80% of the time. I do. And I'm neither clever, nor do I have amazing memory that can hold 100kLOC. But I try to be consistent, and that pays off.

> And all the while, every one of these lines could be mutating some shared state or express some logic that I don't understand the reason for.

If you have 100kLOC of code where any line can mutate any state directly, you have two huge problems. One is the code base, the other is whoever designed the code base (you have to contain them so they won't do more damage). If you have gotten to that point and you have 100kLOC or more, you are really, really screwed.

I've turned down six figure gigs that involved working on codebases that were like that. It is that bad.

In Go, mutating shared state is bad practice. This is what you have channels for. Learn how to design using channels or even how to make use of immutability. There are legitimate situations where you need to mutate shared state, but try to avoid doing it if you can.

(I've written a lot of code in Go that would typically have depended on mutexes etc. in C, C++ or Java, but which uses channels and has no mutexes in Go. There is an example of this at the back of the book "The Go Programming Language" by Donovan and Kernighan, though this book is getting a bit long in the tooth.)
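
For a flavour of what that looks like, here is a minimal sketch (not the book's example): one goroutine owns the state, and everything else talks to it over channels, so there is nothing to lock.

    package main

    import "fmt"

    func main() {
        inc := make(chan struct{})
        read := make(chan int)

        // The only goroutine that ever touches count.
        go func() {
            count := 0
            for {
                select {
                case <-inc:
                    count++
                case read <- count:
                }
            }
        }()

        for i := 0; i < 100; i++ {
            inc <- struct{}{}
        }
        fmt.Println(<-read) // 100
    }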

If you do have to manage access to shared state be aware that this is potentially very hard. Especially if you can't get by with single mutexes or single atomic operations. As soon as you need to do any form of hierarchical locking you have to ask yourself if you really, really want to put in the work to ensure it'll work correctly. The number of people who think they can manage this is a lot larger than the number of people who actually can. I always assume I'm in the former group so I avoid trying to implement complex hierarchical locking.


I agree with most of what you say (including that not every minor duplication needs a refactoring) but I don't understand how this relates to using Go or some other language - and you definitely don't make it sound as if "write obvious code" is this easy fix that everyone knows how to do and that abstractions are always bad and if you don't use them, your code gets magically easy to understand.

It takes nuance and balancing tradeoffs to write good code, and that was IMHO missing from the comment I was replying to.


Languages are not just the language definition, but the language and how people use it. The established practices and idioms. The idiomatic approach to Go tends to be very pragmatic, minimalist and direct. And in some areas: highly opinionated.

For instance, it discourages the use of frameworks and prefers libraries. It can be hard to pin down what that means.

Frameworks tend to dictate both how you structure and how you express solutions. Your code is usually very tightly bound to a framework and it is often infeasible to switch to a different framework without a major rewrite. Your application, to a large degree "lives inside a framework".

Libraries imply a greater degree of decoupling and you should be able to rip them out and replace them with something else without having to re-architect your software. If some thought has gone into the design, the change can be as little as a few lines of initialization code. (Think well designed APIs for SQL drivers).
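
As a concrete sketch of that kind of decoupling (the DSN and table here are made up), database/sql is the stable API and the driver is just a swappable import plus one line of initialization:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/lib/pq" // the driver is an import; swap it for another one
    )

    func main() {
        // Only this initialization knows which driver is in use; the rest of
        // the program talks to database/sql.
        db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        var n int
        if err := db.QueryRow("SELECT count(*) FROM users").Scan(&n); err != nil {
            log.Fatal(err)
        }
        log.Println("users:", n)
    }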

It is important to note that this doesn't really have that much to do with the language. I wrote Java in much the same way I write Go. Prefer libraries, avoid frameworks, prefer writing concrete classes until you a) know you really need something that has to allow for abstraction/flexibility, b) know how to do it because you have already written at least one implementation of the functionality you might want to generalize.

There is nothing stopping you from writing huge frameworks in Go. And some people really want to. They can't help themselves. Thankfully, it hasn't caught on. At least not yet. Nothing in Go dictates it has to be a more "direct" language than Java. But how key people in the Go community practice programming and how idioms evolve has had that effect. It has led to a lot more code that is approachable.

(Be happy I didn't use C++ as an example, because there every imaginable approach from "C with classes", via "templates everything" to the more modern approach exists. All at the same time. Written by people who all think they are programming in the same language :-))


> Yes, there are people who create a mess with abstraction. This happens in every language, in Java people create FactoryFactories, in Haskell people play type tetris, in Ruby, people abuse metaprogramming and in Go, I assume some people go overboard with code-gen.

It's possible in any language and yet some languages' codebases are consistently worse than others ;)

If you create a culture of cleverness, implicitness and metaprogramming, that's what the programmers using your language will do. It's self-selection to an extent.

"I've suffered long from the Ruby ecosystem's mentality of 'look at what I can do!' of self-serving pointless DSL's and frameworks and solemnly swore to myself to stay away from cute languages that encourage bored devs to get 'creative'." [1]

"I worked at a Scala shop about 10 years ago. Everyone had their own preferred "dialect", kind of like C++, resulting in too much whining and complaining during code reviews. IMHO, the language is too complex." [2]

> And all the while, every one of these lines could be mutating some shared state

That's where the obvious code helps.

Let's circle back.

> I still don't know what people mean by "obvious" code.

The Zen of Python is a nice primer: [3]. A beautiful display of taste right there.

A few concrete examples:

- "Explicit is better than implicit."

Explicitly returning errors means we get to see every single point at which something could error out - explicit, as opposed to exceptions that could implicitly propagate from any line of code, with no way to tell.

Preferring pure functions - a pure function is a black box with a clearly drawn boundary line of input->output. Trivial to reason about in isolation.

No automatic type conversions.

No global state - any part of the code could change it.

No metaprogramming - you've learned Ruby but now some parts of the language have been changed to mean something completely different!

"The syntax has so many ways of doing things that it can be bewildering. As with Macro-based languages, you are always a little uncertain about what your code is really doing underneath." [4]

- "There should be one-- and preferably only one --obvious way to do it."

Uniform code. Iterating through an array always looks the same, so if the code you're looking at does it differently, you'll pay attention.
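
For instance, slices, maps and channels are all iterated with the same for ... range shape, so a loop that looks different immediately stands out:

    package main

    import "fmt"

    func main() {
        words := []string{"a", "b", "c"}
        for i, w := range words {
            fmt.Println(i, w)
        }

        ages := map[string]int{"ada": 36, "bob": 41}
        for name, age := range ages {
            fmt.Println(name, age)
        }
    }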

[1] https://news.ycombinator.com/item?id=13482459

[2] https://news.ycombinator.com/item?id=31219392

[3] https://peps.python.org/pep-0020/

[4] https://alarmingdevelopment.org/?p=562


I'm not sure why you think that quoting random HN users proves anything except personal opinions of specific people? These are fine, but other people have other opinions.

I don't oppose the "principles" you've quoted, but they would just as easily apply to e.g. Haskell (maybe except for the "there's only one way to do it", which, however, has never been true of any language, including Python).

I feel like you missed my larger point. It's not terribly difficult to write code that you can understand line by line. It's incredibly hard to write a huge code base in a way that you can reason about many code paths simultaneously, however. That's where the abstractions start to make sense.

I don't understand why people think that those facilities were created just to piss people off? People were facing real problems. Yes, sometimes the cure is worse than the disease. Use abstractions judiciously and by employing common sense. That doesn't mean you should never use them.

I've seen over- and underabstracted code (as well as just plain wrongly abstracted code). Both of these situations really suck.

Honestly, what annoys me a bit here is your smugness. It seems as if you feel you've figured out how to write good code, and all the other idiots who use Ruby, Java, etc. haven't. But I don't believe you. Nobody in this industry knows how to write "good" code. I don't even think we know what "good" code is. The most we can do is try our best, learn about better ways to do things, discuss approaches and use our judgment.


> I'm not sure why you think that quoting random HN users proves anything

Does it have to?

How do you prove that what someone expresses is something other than opinion when it comes to programming practices? This isn't a field that is easy to quantify or distill into unquestionable truth. To approach anything nearing proof we'd need data from which clear conclusions can be drawn. A look at scientific publishing on a lot of these topics suggests that "proof" is going to be hard to come by for a lot of the things discussed here.

> but other people have other opinions.

Not all opinions count equally to all people. In fact, most people's opinions don't matter. However we do tend to value the opinions of people who are able to properly articulate reasonable arguments based on demonstrable results or experience.


I'm not looking for rigorous proof, but these are just cherry-picked example quotes instead of a coherent argument. You could just as well quote people who have experience dealing with code that is "not clever enough", e.g. 1000 line files without any internal structure. What does that show?

Writing code is about tradeoffs and not about pithy truisms like "code should be obvious".


I'm afraid "writing code is about tradeoffs" is just as much of a truism. It's not going to help anybody write beter code. You need to get into the specifics.


The difference is that I didn't presume I could give anyone easy advice about how to write good code.

The only thing you can do is gain lots of experience, constantly question if there are better ways to do things, read about ideas others have had (without getting religiously attached), make your own mistakes, hopefully learn from them, overcorrect, then learn from your overcorrections and so on. I'm afraid there's not really an easier way.

On the other hand "write obvious code" makes it sound as if there's an easy way to do it, and everyone who doesn't write "obvious code" is just deluded, showing off, or something like that.


> On the other hand "write obvious code" makes it sound as if there's an easy way to do it, and everyone who doesn't write "obvious code" is just deluded, showing off, or something like that.

There is nothing easy about writing code that is obvious to other people. After all, the point of the exercise is how you communicate ideas clearly to people with the goal of amplifying their productivity. That takes time, empathy, and a willingness to study how other people understand things and accepting when you are not getting through to them.

Most programmers are not good at this. Worse yet, most programmers won't try to be good at this and instead go looking for simple truths. At best, most programmers obsess over "best practices" without really asking themselves if these are the best way to communicate a given idea or if there is a clearer way. To dare to do what's better, yet respect when "better" isn't necessarily better for other people.

(Erik Spiekermann, a designer of typefaces, made an observation that is useful with regard to the last statement. Erik has no love for the font Arial. He doesn't think it is a very readable font. However, he also points out that it is a font most people are so familiar with that even though he thinks its design is poor, it is actually a good functional choice because so many people are used to it that their familiarity makes it easy to read text typeset in Arial).

If your attitude is that it isn't worth trying because it isn't easy to do, then I'm sorry for you, but I think your value to an employer will be limited to only the output you can produce yourself. The biggest potential for a programmer is in their ability to amplify other programmers. And that starts with being able to communicate clearly. Both at the micro level, in code, and at the macro level, in terms of clear structure, abstraction and architecture.

The best way to ensure you don't evolve into a programmer who is able to amplify other programmers, and provide value beyond your own immediate output, is to not even try.


> If your attitude is that it isn't worth trying because it isn't easy to do

I haven't said that and I don't appreciate you insinuating that I don't care about writing good code.


Before you get too upset I think you should practice what you preach. I can only judge from what you write. I don't know you. And you have made assumptions about what I think or say, which I then politely, and at some length, have tried to clarify and explain to you. When you continue to argue in a manner that, at least to me, signals you are not interested in actually understanding what I'm saying: that's on you. You are obviously offended, and I would probably care, if it wasn't for your whiny, self-righteous attitude.


So if it isn't about trying to make things obvious, what do you think the goal is?


Huge code bases are never gonna be "obvious". It's all about optimising for a balance of tradeoffs.


So you claim that beyond some magic size limit a codebase cannot be easy to understand. Well, quantify it. At what number of lines of code does a codebase pass from the domain where it is possible to easily understand it to where it is impossible?

I suspect you didn't pay any attention to what I wrote about how people go about ensuring that large codebases maintain readability. That saddens me a bit because it means you probably aren't interested in learning anything.


This is just an incredibly bad faith reply, and an insulting one at that. I don't know why you felt the need to cross into personal territory, but I will make a mental note not to engage with you again.

PS, please check my comment history before claiming that "I'm not interested in learning anything".


Practice what you preach.


> I'm not sure why you think that quoting random HN users proves anything except personal opinions of specific people? These are fine, but other people have other opinions.

They support my argument so I included them. I don't see a practical difference between a "random HN user" and a random blog post or anything else. A good idea is a good idea, regardless of the medium.

---

> I don't oppose the "principles" you've quoted, but they would just as easily apply to e.g. Haskell

Possibly, I threw it together pretty quickly, so it's not very thought-out. I'm also not familiar enough with Haskell to know what you mean.

---

> (maybe except for the "there's only one way to do it" which however has never been true of any language, including Python).

It's not binary, but a spectrum. No language has literally just one way to express each concept. However, Python is definitely further towards the "one way to do it" end of the scale. Perhaps not as much these days as it used to be, but still far more than, for example, Ruby or Perl.

---

> I feel like you missed my larger point. It's not terribly difficult to write code that you can understand line by line. It's incredibly hard to write a huge code base in a way that you can reason about many code paths simultaneously, however. That's where the abstractions start to make sense.

I thought I addressed that; everything I mentioned - explicit error returns, pure functions, no global state, and no metaprogramming - all show their true worth in large and/or unfamiliar codebases.

---

> I don't understand why people think that those facilities were created just to piss people off? People were facing real problems. Yes, sometimes the cure is worse than the disease. Use abstractions judiciously and by employing common sense. That doesn't mean you should never use them.

I'm not sure what exactly you mean by abstractions, so this is hard to respond to.

---

> I've seen over- and underabstracted code (as well as just plain wrongly abstracted code). Both of these situations really suck.

Both suck, but as far as I can tell, erring on the side of underabstracting is better. Some random references (not HN comments ;>): [1] [2].

> Honestly, what annoys me a bit here is your smugness.

;) I can either post something quick but smug or spend hours polishing it until it's as bland as can be. I prefer the quick and authentic approach.

---

> It seems as if you feel you've figured out how to write good code,

Not even close.

---

> and all the other idiots who use Ruby, Java, etc. haven't.

I'm definitely not calling people who program in <x> language idiots. There are far too many factors at play to be able to make such a broad judgement.

---

> Nobody in this industry knows how to write "good" code. I don't even think we know what "good" code is.

I'd certainly hope that in the last half a century of programming we have at least learned something! A video (and book) you may enjoy: [3].

---

> The most we can do is try our best, learn about better ways to do things, discuss approaches and use our judgment.

I thought that's what we were doing.

---

[1] https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction

[2] https://programmingisterrible.com/post/176657481103/repeat-y...

[3] https://www.youtube.com/watch?v=bmSAYlu0NcY


> I thought I addressed that; everything I mentioned - explicit error returns, pure functions, no global state, and no metaprogramming - all show their true worth in large and/or unfamiliar codebases.

I agree with all of those. I try to avoid these things whenever I write code (in rare cases, there are valid reasons for using them, but I agree they are overused).

But that still leaves tons of room for different ways of writing code, and it's very hard to say which one of these is "obvious". Obvious to whom? Some people want to have large methods, so they can see everything at a glance; others prefer more, smaller functions so they can see the high-level picture before they see the details. Which one of them is right? I can't say - it depends on the person, on the problem, and on many other factors.

What is missing from your list is "don't use advanced language features". You can do all the things you mentioned and still use "advanced" language features. And the debate about Go is often about how (presumably) the absence of "complex" features (something which is IMHO a bit subjective) makes code more "obvious".

But you can have generics (or, more broadly, polymorphism), interfaces, higher order functions, proper algebraic data types, etc. and still uphold all the features you've quoted. That's what I meant with the Haskell comment (but supposedly the same would be true for e.g. Lisp).

Since you shared a talk with me (which I intend to watch, but I haven't had time yet), allow me to also share one (with which you may already be familiar): https://www.infoq.com/presentations/Simple-Made-Easy/

Here, Rich Hickey (the creator of Clojure) makes a point that resonates very well with me: namely that simplicity (which is what we desire) is not the same thing as easiness, and that pursuing the latter can often come at the expense of the former. In other words, if you use "advanced" features the right way, it can help create code that may require more concepts, but is still simpler to reason about.

> https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction

I've never taken this blog post to mean that abstractions are wrong. She does say that a bad abstraction is worse than no abstraction - but, IMHO, also that the right abstraction is even better than that. BTW, Sandi Metz is a Rubyist.

> Both suck, but as far as I can tell, erring on the side of underabstracting is better.

Maybe you haven't had the fortune of working with code written by non-developers (e.g. data scientists). But in any case, my most common experience is that most abstractions are simply wrong. Code that belongs together is spread out over 5 files, and these in turn don't have a single purpose, but end up doing too many things at once. Of course, I've also seen overabstracted "every class has an interface" nonsense. And I've seen huge functions that would have benefited from some internal structuring.

> ;) I can either post something quick but smug or spend hours polishing it until it's as bland as can be. I prefer the quick and authentic approach.

Well, this comment that I'm replying to doesn't seem bland to me. You elaborating your point of view makes for more interesting discussion - at least in my view.


Yeah this was my primary complaint with the inclusion of generics. People will try to be all clever and their code will wind up as an unreadable, unmaintainable disaster. It has definitely led to some cool stuff, but I prefer boring, verbose, and clear any day.


Problems are generic, such as having a tree collection. For solving generic problems, you can either use language generics, or reflection, or type erasure, or codegen. The latter three are about as far from 'boring and clear' as you can get. Still verbose, though, but I don't expect that's a benefit.


I would say 95% of people who are trying to solve a complicated problem with reflection, type erasure, or codegen are solving the problem the wrong way. Obviously I'm glad these tools exist, and some problems can indeed only be solved with them. But I think people reach for them as a first solution and make bad code as a result. Go strongly encourages you not to reach for bad solutions, which is one of its bigger advantages, in my opinion.


Collections obviously need to be generic though. Slices and maps were generic from day 1 in go so it's not like this is controversial.


For sure — a lot of language features need to be generic. But most people aren’t coding language features, and giving them the ability to do so can lead to, well, CodeFactoryFactoryMakerGenericMethodHelpersFactory, rather than good, clear, usable code.


There is no language feature for a tree map. Many problems require a tree map. Either inclusion of generics was a good thing, or it is worth sacrificing static typing or requiring codegen for these use cases. Repeat for numerous collections and APIs.

Disallowing someone from using easy statically typed tree maps is not accomplishing any of the simplicity virtues people trumpet Go for having. While the much-warned-of castles of inappropriately applied generics have yet to be found in any codebase I've worked with in any language, including Rust.


This is a pretty arbitrary benchmark. I would guess, again, that 95% of programmers have never and will never need to create a tree map. Repeat for numerous collections and APIs that are totally irrelevant to most programmers' actual experience of programming.

Go is optimized for use, not computer science edge cases. And as a result it is widely used, and some of the most complicated and widely-used open-source projects out there are built in it, even before it had generics. For example, Kubernetes.

This is because of Go's simplicity, not in spite of it.


    > rg BTreeMap monorepo/ --type rust --no-filename | awk NF | wc -l
    1656
You may not have ever encountered a use case for a tree map, but there is quite a wide gray area between 'computer science edge cases' and cookie-cutter CRUD apps. For example, deterministic map ordering, or sets for map keys, or fast map equality.

Funny that you mention Kubernetes. This is the tree map implementation Kubernetes depends on https://github.com/google/btree


It's not about creating a tree map, it's just about using one. Maybe even one provided by a library, doesn't matter. I'm sure that 95% of the developers will need a tree map at least once in their career.


I've been using Golang professionally for about a decade. In that time the only place I've ever needed to use a tree map was an interview that specifically called for it.

Whose usage are you trying to optimize the language for? Golang is not a good academic language. But it is extremely good for actually solving problems with code.


A tree map is just an example.

More compelling ones are sets, ordered maps, a generic sync.Map, optionals (nulls suck!), abstractions over channels - badly needed!

I wouldn't consider any of those to be academic.
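
A set is probably the smallest example of why people wanted this; a minimal sketch with type parameters (the Set type here is illustrative, not from any library):

    package main

    import "fmt"

    // Set is a minimal generic set built on a map with empty struct values.
    type Set[T comparable] struct {
        m map[T]struct{}
    }

    func NewSet[T comparable]() *Set[T] {
        return &Set[T]{m: make(map[T]struct{})}
    }

    func (s *Set[T]) Add(v T) { s.m[v] = struct{}{} }

    func (s *Set[T]) Contains(v T) bool {
        _, ok := s.m[v]
        return ok
    }

    func main() {
        ids := NewSet[int]()
        ids.Add(42)
        fmt.Println(ids.Contains(42), ids.Contains(7)) // true false
    }

Before generics this had to be either copied per element type, generated, or written against interface{} and type-asserted everywhere.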


See the example given from Kubernetes. Is that not a real codebase, or what?


I've spent a lot of time writing library-level C++, a reasonable amount of time writing application-level C++, and a reasonable amount of time writing both library and application-level golang.

> But most people aren’t coding language features

It's definitely a tough lever, but supporting generics for the people who are means that the people who aren't can write cleaner code. I think go got it right, eventually.


You are writing a Turing-complete language. If someone wants to be a stupid architecture astronaut, you can't help them, besides better filtering of the people who work on your project.


Some problems benefit more from a solution using generics than others. Collection types, as you note, are a good example because you will have heavy re-use and the way you use generics is simple. It is worth a little bit of extra effort and it isn't going to demand much from the programmer using it.

But in the hands of people who love to go bananas, generics can result in hellish code.

My favorite example of a problem where generics is not useful is the OpenSAML implementation (Java). For all I know it may have been sanitized since I tried to make use of it about a decade ago, but that thing was, for lack of a better word, a pointless exercise in type system wankery. It was so bad it took us less time to implement what we needed from scratch than to figure out how to use the library correctly. (The code I wrote a decade ago has been in production, for a decade on a system that is used by ~150-200M users. Not just because it works, but because it can be maintained by people who aren't really that interested in spending weeks understanding it)


Is writing the same function n times more readable and more maintainable? Is it more maintainable to entirely circumvent the type system with interface{}?
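
To make the contrast concrete, a sketch of the generic version (the local Ordered constraint stands in for cmp.Ordered / constraints.Ordered); the pre-generics alternatives were n near-identical copies per type, or an interface{} version that only fails at runtime:

    package main

    import "fmt"

    // Ordered is a local stand-in for cmp.Ordered / constraints.Ordered.
    type Ordered interface {
        ~int | ~int64 | ~float64 | ~string
    }

    // Max is one statically typed function instead of MaxInt, MaxFloat64,
    // MaxString, ... or an interface{} variant full of type assertions.
    func Max[T Ordered](a, b T) T {
        if a > b {
            return a
        }
        return b
    }

    func main() {
        fmt.Println(Max(3, 7), Max("a", "b")) // 7 b
    }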


Since when has "clever" become something negative? If someone is clever, that's a good thing! I also try to be clever when I do things, be it repairing something, planning my workout routine or programming.


I've heard the drum against clever code for over a decade. This sums it up well:

"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." - Brian W. Kernighan


"If code is written as clevery as possible, it's surely written in a way that it's easy to debug it"... is what I would reply, but I get the gist. :-)


It's very unclear what people mean by "clever". For some people, a map or a fold is already "clever", while for others, this is just very straightforward code.


Maintaining "clever" code is incredibly hard because the only "clever" person is the one that wrote it, everyone else is "dumb" to it. One can take hours debugging clever code. Obvious code won't wow anyone but will be immediately understandable, which is an advantage in a team setting.


Clever doesn't necessarily mean smart.

Back in the day when GCC 2.95 was a widely used version of GCC we had a codebase that was full of "clever" hacks to trick the compiler into generating the assembler code we wanted. A few of these hacks were used in tight inner loops that had a huge impact on performance. I can still remember the day someone removed what seemed to be a no-op in a piece of code, compiled it and pushed it to production - only to have everything grind to a halt and fall over because CPU use per request tripled.

Sure, it was a "clever" way to get the compiler to output what you wanted it to output, and it was common knowledge among some of the programmers on the project what that no-op'ish line would do - but not all.

The smart thing to do would have been to fix the compiler and upstream the fix (from our internal branch of the GCC toolchain). The "clever" solution was to just figure out a bunch of tricks to manipulate the compiler and call it a day.

In the context of this thread, "clever" is mostly taken to mean "not as straightforward and understandable as it can be".

Young me loved cleverness.

Older me knows how frustrating it is when something isn't as straightforward as it could be. Either because I have to figure out how something someone else wrote works or because I have to explain what my code does to people who have gotten confused.

When I write code other people can't understand I see that as a failure on my part. Because it is. Code isn't merely a mechanism to convey meaning to a compiler, it is a way to communicate with other human beings. Most of which aren't that interested in indulging my cleverness.


First time I used it, I called it 'C+-' after two weeks and a supervisord clone, but I really liked the language (except the module system). It was ~9 years ago, and other than that I only used it in coding interviews, as it's portable, simple, close enough to C that I'm not lost using it, yet simple enough that I don't introduce bugs carelessly every 30 loc.

But it's a bit boring and I'll never use it for personal projects.


Yeah, the original GOPATH stuff was atrocious.

Two questions:

1) how do you like the module system we have today? 2) can you expand on what you mean by boring, why it is important to you that a language not be boring, and give an example of a language that is not boring?


I'd categorize not being able to convert "unused thing" errors into warnings during development iterations as one of "The Bad".

Just today I had a couple blocks of code which were causing erratic issues. Wanted to see if it was the second one, so I quickly commented it out. This is just an exploratory development session, no need to comply with code quality guidelines. Still, the code failed to compile because now I had to worry about 8 lines with stuff that had become unused. The stubbornness about not adding an escape hatch for, again, exploratory intermediate development iterations is unnerving.

"But code should not leave unused stuff around". I agree. That's why after a hundred iterations, in the final compilation phase for production, this kind of errors-are-warnings flag would be disabled.


Honestly, I'm incredibly frustrated with this issue; it's just pure idiocy. Commenting out a part of the code and observing the effects is such a simple and useful debugging technique, yet this "feature" of Go prevents you from doing so effectively.

What's even more frustrating is that when you search for solutions, you come across two kinds of (pardon my language) completely brain-dead responses:

First, there are those who argue that unused variables/imports lead to bugs and worse performance in production, so they should always be fixed. But that's completely beside the point; I've never seen anyone argue that allowing unused variables is good for production. It's always been about facilitating the development process and debugging. Yes, I am aware there are now unused variables, but please just let me see what removing this part of the code does.

Secondly, people suggest using a dummy function like UNUSED or a blank variable _ to solve the problem. But again, these suggestions miss the mark entirely. Changing variable names or adding UNUSED calls to "disable" the rule is even worse than what we've been doing to temporarily "circumvent" the rule, which is simply commenting out the declarations, testing, and undoing afterwards. Not only does it involve more effort, but more crucially, you might actually forget to revert those changes and leave in unused variables.

Frankly, I believe this is just a bad design decision, and it seems like the Go team is stubbornly doubling down on this mistake due to ego.

(Sorry, I just have a very strong opinion on this topic, and I am deeply frustrated when the tools I am using think they know better than I do and are adamantly wrong.)
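
For readers who haven't seen them, the workarounds being criticized above look roughly like this (UNUSED and compute are made-up names):

    package main

    import "fmt"

    // UNUSED is the throwaway helper people suggest; it exists only to keep
    // the compiler quiet about variables you've temporarily stopped using.
    func UNUSED(_ ...any) {}

    func compute() int { return 42 }

    func main() {
        a := compute() // temporarily not needed while other code is commented out
        _ = a          // or UNUSED(a) - either silences "declared and not used"
        fmt.Println("debugging the other code path")
    }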


> I've never seen anyone argue that allowing unused variables is good for production. It's always been about facilitating the development process and debugging. Yes, I am aware now there are unused variables, but please just let me see what does removing this part of the code do.

Yes, but once you allow that, you'll inevitably end up with unused variables in production code (warnings are useless). That's the core of the issue and why the Go team made the decision.

In my opinion, the real solution is to have two build modes: --devel and --release. This would allow for not bothering devs with anal checks during development while preventing substandard code in production.

Though the real advantage would come from the reduced pressure on compilation speed in --release mode which would make room for more optimization passes, resulting in faster production runtime speed and lower binary size.


Fully agree on both of these messages. This is the only thing I don't agree:

> warnings are useless

Warnings are useful, to (no surprise here) warn me about possibly problematic stuff. That's why my C++ builds are usually full of warnings during development, but the final build for production won't contain any, because it is performed with -Werror (i.e. "treat all warnings as if they were compilation errors"), thus any unsolved warning would break the build.

A common reply to this argument, which I'd like to address in advance just in case, is: "but the end result will be that in order to save time, and to avoid having to fix all those pesky warnings, lazy devs will end up doing production/release builds without the -Werror flag (or equivalent for whatever compiler)". To which my response is: that's a social/political/behavioral problem, poor project management, or any combination of them, and you cannot even start to pretend that those can be solved with technology.


I will literally not touch any language that does that. Not too sad about missing out on Go, but Zig unfortunately also shares this braindead thing.

The solution is all so easy also — just add a debug and a production profile. Enable your strong linter in prod, for all I care, but this is a must-have development tool.


I completely agree. The wonderful speed of the compiler allows for quick small-change, test, small-change, test development. In theory. They then sabotage it by making "unused" warnings into errors, which forces you to waste the time the fast compiler could have saved, making temporary changes (ex: commenting out/in) that aren't needed for the test and won't be needed for production.


I use a technique I call "runtime comments":

  if false {
    ... stuff I don't want to run right now but the compiler still has to deal with it ...
  }


Well, for that specific case (it was a well defined block that could be all selected and commented out in order to disable it), your technique would have worked fine, true :)

But of course it can get tiring and not be very practical if the logic to disable is a little bit more spread out (I'd say having to "if false" anything more than 2 paragraphs or blocks of code would already start to feel annoying)


This is just an intractable problem: if there is any way for people to leave unused code, they will. If you want the code to be clean, you cannot have (for example) a debug mode that allows unused code, because the code will just be left in 'debug' format. Eventually the community shared code will have so much debug-but-it-works code that you'll never get a system of clean code.

And think of what the current strict rule has done: Go is the only(?!) language that has a consistently clean ecosystem. When's the last time you looked at Go code that had huge chunks commented out?

That said... it is annoying. More annoying the less important the script, and the faster you want to test something.


Make debug mode have runtime costs automatically (like race checks, less optimizations for faster compile). That way people are left alone while developing (plus getting cool tools for that as well), while are incentivized to build final prod builds for sane performance, for which they have to clean up after themselves.

Though personally, I dislike this paternalistic approach to handling devs.


The biggest go problem in practice is ironically omitted in both blog posts:

Verbosity of error handling!

There should be a shorthand for returning if the last return value is non-nil in one line. Otherwise all your code is littered with:

    if err != nil {
        return err
    }
and it makes it four times as long and way less readable as a result. This needless verbosity really reminds me of Java.

Also, go fmt is not opinionated enough! There really should be one way to line-break and a maximum width. Right now you can't rely on it to magically format the code as it "should be" and fire-and-forget while typing.


Your code shouldn't be littered with that though, those errors should be wrapped or have some kind of logging/handling associated with them. If you find yourself just returning err all the time, you're not doing it right, IMO.


If only a language could have a built in feature to propagate an error up the call stack, recording its context as it goes!

It's always surprised me how negative of a reception checked exceptions had, since they provide the forced handling (or explicit propagating) of (value, err) or Result<T, E>, but with an automatic stack trace and homogeneous handling across the ecosystem

I imagine some of the disdain in Java specifically came with how unergonomic they are with lambdas. Either you don't allow them at all, like in most standard library functional interfaces, or you do, but now every caller has to handle a generic Exception. I guess what was really needed was being able to propagate the "check" generically, e.g.

  <T, E> T higherOrder(Supplier<T throws E> fn) throws E {
    return fn.call();
  }
So a call site of higherOrder would only be checked as far as fn is

I'm unsure if that's even possible to do (and if other languages have done it) or if it leads to undecidability. I'm very rusty on PLT


Checked exceptions suck because they're implemented in Java, where you have to deal with Java. The moral equivalent in Rust of "Result" is great, because the language was designed to handle it nicely. You're right that lambdas are a part of it. I can chain together `.map`, `.and_then`, and `.transpose` nicely with closures even if there's Results and Options in the mix, but that would be godawful in Java.


In practice if you look into existing codebases, it is littered. defer is used for RAII-style clean up, so in 95% of the cases it's just return the error and that's it.


Real world code base developed over a decade. Handles billions of emails.

Searching our prod code, "naked" if-err-return-err showed up in about 5% of error handling cases, the rest did something more (attempted a fallback, added context, logged something, metric maybe, etc).

If you are doing a naked return you are gonna have a bad time.


As a factual matter, both blog posts mention the verbosity of error handling. The second one even specifically mentions wanting Zig's try, which is shorthand for returning early if the error value is not nil.


The stdlib is the killer feature of Go. Several times I wrote small, high-performance services/servers that people not familiar with Go were astounded by (mostly the speed of writing the code, and the memory usage). I made many Go converts this way. And I always only had to use the stdlib (many benefits when writing within a company).


I consider the stdlib to be Go's best feature too. As a somewhat contrived example, can you name any language that lets you write a HTTP/2 TLS endpoint that computes the HMAC of a PNG file's pixels without any dependencies? And if something is still missing, it's probably in golang.org/x (which is basically stdlib)!
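
Since people sometimes doubt that claim, here's roughly what it looks like with nothing but the standard library (file names, key and port are made up; net/http negotiates HTTP/2 automatically over TLS):

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "image/png"
        "log"
        "net/http"
        "os"
    )

    func main() {
        key := []byte("demo-key") // illustrative

        http.HandleFunc("/hmac", func(w http.ResponseWriter, req *http.Request) {
            f, err := os.Open("image.png") // illustrative file name
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            defer f.Close()

            img, err := png.Decode(f)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }

            // HMAC over every decoded pixel value.
            mac := hmac.New(sha256.New, key)
            b := img.Bounds()
            for y := b.Min.Y; y < b.Max.Y; y++ {
                for x := b.Min.X; x < b.Max.X; x++ {
                    cr, cg, cb, ca := img.At(x, y).RGBA()
                    fmt.Fprintf(mac, "%d %d %d %d ", cr, cg, cb, ca)
                }
            }
            fmt.Fprintln(w, hex.EncodeToString(mac.Sum(nil)))
        })

        // ListenAndServeTLS enables HTTP/2 for capable clients by default.
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
    }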

After that, I probably consider readability at 3am [1], defer statements, explicit error handling and fast compile time to be the most important.

[1] Readability of not just your code: being able to go-to-definition into stdlib and immediately understanding it without having to grok a million unrelated decorators/FactoryFactoryFactory/std::_Vector_iterator<std::_Vector_val<std::_Simple_types<<block>>> is incredible


Just cos I was curious and Java has a pretty good comprehensive library.

- PNG: https://docs.oracle.com/en/java/javase/14/docs/api/java.desk...

- HMAC - https://docs.oracle.com/en/java/javase/14/docs/api/java.xml....

You need Jetty for HTTP/2.


Java also has a comprehensive stdlib but the golang stdlib is designed so much better, and has a small number of simple and orthogonal concepts. (e.g io.Reader and io.Writer go a long way, compared to the 20 io interfaces in java).


Dang, if you didn't have the HTTP/2 requirement, Java could do it with this

https://docs.oracle.com/en/java/javase/20/docs/api/jdk.https...

I'll betcha HTTP/2 for the server comes soon.


Java 21 is now around the corner, and any Java EE (nowadays Jakarta EE) server framework supports HTTP/2, which is basically the Java standard library for servers.


Worked with Go over 6 years; my biggest annoyance at the beginning, the verbosity of error handling, has mostly gone away. In most cases the explicit error handling helped us harden the code.

What does still bother me is the lack of proper enum support. I remember when Java boosted their enum support and the way it impacted the quality of the code. Sure would love to see something similar in Go.


Fully agree with your points. In my experience if people flag error handling it almost always means they didn't spend enough time with the language yet, as in the day to day work it's a non-issue.


Java now has records, record patterns, pattern matching, and switch expressions. Things that the Go authors still don't seem to understand the need for (quite ironic for a language that claims it makes concurrency easy).


Is that more or less ironic than Java copying Go's goroutines and still struggling to add value types?


Java's designers have consistently mentioned the approach they're taking that Java has the last mover advantage. They cautiously see what features other languages applied, and take what gives them the highest value compared to the complexity introduced.

Java's virtual threads are already superior to golang's approach because they're working on structured concurrency from the start.

Value types are a huge proposition, but they seem to be coming along nicely. They have already baked in nullables/zero values into the design, something that is a pain in golang, and a big gotcha and source of bugs.

The interesting thing is that Java already beats golang because of its superior GCs, particularly in large programs, so it will be interesting to see what sort of performance improvements come out of value types.


> Java's designers have consistently mentioned the approach they're taking that Java has the last mover advantage.

There's nothing one can do to wiggle away from designing something. You can only make tradeoffs. In this case, Java's sacrificing time. Is that a good call? I don't know.

> They cautiously see what features other languages applied, and take what gives them the highest value compared to the complexity introduced.

They've historically shown poor taste in the features they've chosen, so this strategy doesn't seem to have worked out all that well. Waiting longer doesn't help if you don't know what you're doing.

> Java's virtual threads are already superior to golang's approach because they're working on structured concurrency from the start.

Already? Ten years later! Time matters.

We'll see how much actual adoption there'll be, Go was built with its concurrency solution in mind, so not only are the primitives ergonomic to use, but the entire ecosystem has grown around it. You can't match it in a day.

> Value types are a huge proposition, [...]

"Value types" is a misnomer anyway, it's just "types". We've had those since at least C, some 50 years ago.

> [...] but they seem to be coming along nicely.

I've been hearing this for a while, has it been a decade yet?

> They have already baked in nullables/zero values into the design, something that is a pain in golang, and a big gotcha and source of bugs.

You can't be serious. Complaining about another language's null problems from the perspective of Java?

> The interesting thing is that Java already beats golang because of its superior GCs, particularly in large programs, so it will be interesting to see what sort of performance improvements come out of value types.

It's a wash most of the time AFAIK. Unless we do the classic Java benchmark trick of ignoring memory usage.


> You can only make tradeoffs. In this case, Java's sacrificing time.

When you have billions of LOC, and take backward compatibility seriously, then that's a very valid approach. Furthermore, we've seen the Java team pick up cadence recently since switching to twice a year releases.

> They've historically shown poor taste in the features they've chosen,

Quite disagree. Which features are you talking about? They've shown very good taste in how records and pattern matching have been implemented, for example. And also in not jumping on the async bandwagon, opting for virtual threads instead.

> Already? Ten years later! Time matters.

Java had other approaches for dealing with heavily asynchronous code, but they weren't as ergonomic. And honestly speaking, the JVM already gives you access to native threads (something that neither golang (only "goroutines") nor python (GIL) for example offer), so unless you're doing heavy IO-bound work, it's a non-issue to begin with.

> Go was built with its concurrency solution in mind

That is the claim, but in practice it is quite error prone. I was following golang from pre-release, and I voiced concerns about not having a way to declare immutable values (or at least something similar to C++'s const), but they didn't seem to care. Of course it came to bite them back[1]. Passing channels everywhere is tedious and verbose, and doesn't make it clear what the code is doing at first glance. Futures/Tasks are an easier abstraction to deal with, but of course golang didn't have generics until recently, and they still don't offer a future package.

> I've been hearing this for a while, has it been a decade yet?

You can follow the valhalla project to see what they're up to. It's sufficient to say that the pace picked up significantly on these large projects since switching to a twice a year cadence.

> You can't be serious. Complaining about another language's null problems from the perspective of Java?

Having worked on large golang codebases, I've seen first hand the issues that zero values cause in practice. At least a nullable in Java would throw an NPE instead of silently passing through the system and ending up with completely arbitrary behavior that is challenging to track down. Not to mention that you can use annotations to denote @NotNull in Java, something that golang doesn't have.

> It's a wash most of the time AFAIK. Unless we do the classic Java benchmark trick of ignoring memory usage.

Memory usage should improve with value types, but it also seems that people don't configure their JVMs properly (things like Xmx). In any case, they've also modified G1GC (and probably one or two more) to more aggressively release unused memory back to the OS.

[1] https://www.uber.com/blog/data-race-patterns-in-go/


While not perfect, there are ways to generate enums automatically using go:generate, e.g. https://github.com/abice/go-enum


I almost included a section on enums in the post, but I figured it was long enough, and I don't really have strong opinions about it. Probably a normal enum system would be better than iota, but it's not worth switching.


"What I got right...Using capitalization for the public/private distinction in functions, methods, variables, and fields"

I disagree with this strongly. Due to this when you need to change one of these things to the opposite it involves changing every use site as well. This has far reaching implications for refactoring, wrapping external code when you really do need to expose its guts, etc. Any time you need to do this in a non-manual fashion you're required to parse all of the code exactly perfectly (ASTs and such). Even go itself has not figured out how to do this, rf is still experimental and not complete: https://pkg.go.dev/rsc.io/rf.

Example use case: https://github.com/golang/go/issues/46792

I much prefer the non-viral public/private attributes other languages use.


Short of changes within the module itself, if you switch a Public function to a private function in another language, how are you not having to go change everywhere it's being used?


I'd guess the typical situation is you had thought maybe a public API featuring enunciate_spools() was a good idea, but eventually you realised nobody actually wants to enunciate spools and few people know how; when you see other people's code that calls enunciate_spools, it's always either a bug (and they shouldn't have called it) or test code that is inappropriately testing your stuff, not their stuff.

So you make it private, in say Rust that's an ABI break, so you need a semver bump, but you aren't changing your code. Internally normal_operation does need to enunciate spools, not to mention the acrobatic use of it in complex_operation and fancy_coroutine but that's because it knows intimately what spools are and why it's enunciating them - it's an internal design element, not an API.

In Go, you have to rename it everywhere. Maybe your tooling helps with that. OK, but, not having to do it also helps with that and for everybody.


Agreed. First thing I frowned at when considering Go. (Plus it looks a bit messy. foo.Init() ?).

That and reserving the verb `make`..


Go has turned into an Awesome language.

I'm one of those that cannot use a PL that has no generics... It kills the DX of Algorithms and Data Structures.

Now it has Generics, soft RT GC, and it might even get official Arena Allocation.

I /LOVED/ the Matklad comment about |error handling converging|; indeed, it seems the PL community has evolved towards "any-error" + annotation-at-call-site.


It still doesn't have sum types. Maybe 10 more years and Go can catch up to SML (a language from the 1970s). That's a big weakness for a static language this millennium.


> the DX of Algorithms and Data Structures

sorry, what does this phrase mean?


DX is Developer Experience.


Thanks! I had vaguely imagined it might be related to differentials.


I'm guessing that DX means "developer experience" here.


Copilot and the like are good for Go, because you can generate all the boilerplate code the language makes you write.

As for the success, it's obviously the minimal set of language features, conformance to established paradigms, not being very broken and being backed by Google.

There's a ton of suboptimal choices in Go, but overall it can work for many applications.


Dart is backed by Google.


> Go’s error handling is more verbose than those other languages, but structurally, there’s a lot of commonality under the surface.

In Go the type system doesn't force you to check for errors, the same way as languages with null pointers don't force you to check pointers before dereferencing them.

That's the real problem with errors and null in Go, not the verbosity (though that doesn't help)


The typical answer is that Go doesn't allow you to not address values returned from a function. So, to ignore an error, you'd typically have to write:

result, _ := myfunc()

That being said, I most certainly prefer the approach that Rust takes to this problem.


  a, err := f()
  if err != nil {
      return
  } 
  a, err = f()
This will compile.


I consider use of the "errcheck" linter mandatory for a professional Go programmer, and honestly even the hobbyist really ought to be using it.

Yeah, it might be nice if it were integrated into the language but on the overall cost/benefits analysis of my actual costs & benefits rather than merely aesthetic ones, this one doesn't actually factor very high for me because using errcheck is easy. And I supplement all languages I use seriously with aftermarket checkers so it isn't like this is special pleading for Go, either. I don't trust any language out of the box any more.


It still doesn't catch everything.


I am not aware of an option that "catches everything", in any language.


Any language with exceptions, checked or unchecked, will not allow errors to unintentionally get swallowed unless you write explicit code to do so. Rust and Zig's error handling also has the same property.

This comes from experience working on large golang code bases, with error linters, and seeing errors silently and unintentionally ignored.


If you have an error being ignored accidentally and using errcheck, you need to file a bug for them with a test case for it.

If you mean that programmers caught the error and just made it "go away", convincing the checker that it was handled but in fact it was not, that's not something a language can solve. There is no amount of "forcing" a programmer to handle an error that they can't bypass.


Explicit code to do so in Java involves wrapping two error-throwing methods in one try/catch, which either have the same error class or require downcasting the exception. It's been a long while since I've used Java, but I ran into that in other people's code in production constantly.

For me it really stood out because that's when I first tried Go, and while I found the error handling annoying, I found it rather directly addressed that issue.


In terms of default, there are enough languages where all errors are detected and you are forced to handle them. But if you want a particular mean language, check Idris. No chance to ignore an error by accident.


I'm not at all defending the practice. I agree it's very easy to navigate around this. My answer is just the canonical one I've seen over and over again in books about Go.


It's not just that it's "very easy to navigate around this", it's that it's not true. The incorrect statement piggybacks on Go forbidding unused variables, however that has two absolutely major holes:

First, it requires having a variable in the first place, if you call a function for its side-effect and don't remember that it returns an error, Go won't tell you.

Second, Go only errors on dead variables, not dead stores, since conventionally the error variable is "err" you can easily forget to check one of them, and Go won't say anything because the err variable was read in one of the other checks.

It's even worse if you're one of the weirdos who use named return variables, because named return variables are always considered used:

    func foo() (v int, err error) {
        a, err := bar()
        b, err := baz()
        v = a + b
        return
    }
compiles just fine. But at least you're returning the second error. No such luck if you're using named return variables for documentation:

    func foo() (v int, err error) {
        a, err := bar()
        b, err := baz()

        return a + b, nil
    }
go also has nothing to say about this.


a isn't used so it won't ;p

Honestly, the thing that I think is missing most is macros. With macros it would be trivial to write

   a := Must!(f())
that

* assigns last return value to err

* calls return with that error if it is not nil
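
For comparison, a generic helper is the closest you can get today without macros (this Must is illustrative, not stdlib), and it shows the limitation: it can only panic, it cannot make the caller return early, which is exactly the part that would need language support.

    func Must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    // usage: b := Must(os.ReadFile(path))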


    b, err := os.ReadFile(path)
    if err != nil {
      return nil, fmt.Errorf("read %s: %w", path, err)
    }
is so much better than

    b := Must!(os.ReadFile(path))
because when things go wrong, I have exactly the right amount of information I want. Assigning to err magically (it's not even mentioned in the source code) is exactly the kind of thing that'll turn out to be the cause of a subtle bug 6 months later. Why not spend a couple of extra lines thinking about the error while the context is still in your head?

Additionally I like how error handling acts as a visual delimiter, especially when you use meaningful fmt.Errorfs as strings are usually highlighted with a different colour, making it easy to quickly jump through code.


> I have exactly the right amount of information I want.

Really? Because `ReadFile` already adds the path to the context, so what you actually get is

    read /tmp/foo: open /tmp/foo: no such file or directory
which is more confusing than "the right amount of information".

Furthermore nothing precludes `Must!` taking a prefix and wrapping automatically, does it?

> Assigning to err magically (it's not even mentioned in the source code) is exactly the kind of thing that'll turn out to be the cause of a subtle bug 6 months later.

What bug? It's assigning and returning, the only situation where you'd have "a subtle bug" is if you didn't check the previous call and overwrote its error, which is exactly what you get with the code you propose.


Macros are a lazy design cop-out. They subvert the entire point of having a language in the first place - a shared understanding.


If there is a single result, you can also just ignore the return value by calling myfunc().


I was very skeptical of Go, but after trying it, it quickly became my main and favorite language. I think the drawbacks of the language are easily offset by how powerful and simple it is.

Its biggest flaw imo, which I don't think was mentioned in the article, is that Go did not learn from The Billion Dollar Mistake in Java: null references. You have zero protection against nil pointers, and this is likely not something that can be changed now without breaking backwards compatibility.


The lack of sum types, and thus the impossibility of having a Result<T,E>-like type for returning errors, is the bigger one for me; I didn't get bitten by nils all that much in Go.

Although I guess those feed into each other; sum types would eliminate any need for null types in the first place.


I would have taken sum types over generics, personally. It kinda sucks because you can't create something like Rust's `?` operator without them or making serious kludgy compromises.


Yes, sum types would solve the problem - you are both identifying the same issue imo


The article has a link to the Wikipedia page on null titled "the billion dollar mistake."


Nil is a carefully chosen name in Go, and was a trade-off which was made. It’s not quite right to compare it to null in other languages.

I agree it is not as safe as a language like Rust, however it was the right trade-off to make in my opinion.

The main protection you have against nil pointers are nil receivers, and knowing when to use reference semantics vs value semantics.


What's the tradeoff? What advantage does having null pointers give?


default values for everything without significantly increasing language complexity


As far as I'm concerned, that's a drawback. Ubiquitous default values are an attractive nuisance. One which C# had already demonstrated 10 years prior.

The removal of nil leading to the removal of ubiquitous default values would have been positive.


I think that's why people in this thread are calling it a tradeoff. It's a very attractive option that seems like a great idea until it breaks. In the happy path, having default values is better; in mixed-path situations, having a separate and reserved way of saying something hasn't been touched is incredibly powerful.


I think default values are a major flaw, much worse than nil. An unintended default value causes data corruption. All types should be nillable in my opinion; using types that fall back to a default value is a source of nasty silent bugs. In Java, for example, I would never use a primitive data type.


I learned Go on and off in recent years on the side (my daily job does not need Go). I like its batteries-included stdlib and cross-platform support.

I do feel its binary size is large compared to C and C++, and multiple Go executables cannot share libraries as easily as C/C++ binaries use shared libs. When I have a few Go binaries they add up, and I do storage-constrained embedded development a lot.

On the desktop side, I really hope Go can have a GUI in its stdlib, something like what Flutter/Dart does: add a Skia-like engine and let me do cross-platform GUI. That would make Go mainstream like wildfire.


>multiple Go executable can not share libraries as easily as how c/c++ uses the shared lib

https://pkg.go.dev/cmd/go#hdr-Build_modes

you can actually build with shared libraries :)

I think most people I've seen use Go only build a single application, or have it turned into a Docker container, so for them this is pointless, but just FYI.

I personally dislike static linking but I see why it was used so heavily with go.
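
Rough sketch of what building with shared libraries looks like in practice (Linux only, historically smoothest in GOPATH mode; the ./cmd paths here are made up, see the build modes doc above for details):

    go install -buildmode=shared std          # build the standard library as a shared object once
    go build -linkshared -o app1 ./cmd/app1   # link each binary against it dynamically
    go build -linkshared -o app2 ./cmd/app2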


Not really; it's not something Golang cares much about, to say the least. To become 'mainstream', Golang actually has to embrace more use cases.

https://github.com/golang/go/issues/47788


Executable size is big or small depending on which system we are comparing it to. It's definitely bigger than C/C++ but considerably smaller than Node.js/Electron.

Also, having a single binary is good in lots of cases because then you don't have to install a runtime or separately install shared libraries.


https://github.com/flutter/engine/blob/main/impeller/docs/fa...

Impeller is the Skia replacement; it's written entirely in C++ and supports all platforms.

It would be great if the Go team could work with them (both are in Google) and make Impeller a render engine for Go.

With this, no more bloated Electron.js and no more Java/Swing or Qt. What a dream that day would be.


Go was successful because it came after decades of pretty much no language putting good networking tools in their stdlib.


Not just networking but focusing on concurrency.

Before, if you needed to make a highly concurrent network app you had to get into asynchronous programming, which generally makes code look like shit (or at least slightly worse) and is harder to debug.

With Go and goroutines taking IIRC around 8 KB to start, you can "just" spawn as many of them as there are connections and write your code as if it were serial. Add some half-decent concurrency primitives and it's pretty easy not to fuck up highly concurrent and highly parallel code.
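
For anyone who hasn't seen it, here's a minimal sketch of that goroutine-per-connection style (a toy echo server, nothing more):

    // Toy echo server: one goroutine per connection, written as plain serial code.
    package main

    import (
        "bufio"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            go handle(conn) // one cheap goroutine per connection
        }
    }

    func handle(conn net.Conn) {
        defer conn.Close()
        r := bufio.NewReader(conn)
        for {
            line, err := r.ReadString('\n') // blocking read, no callbacks
            if err != nil {
                return
            }
            if _, err := conn.Write([]byte(line)); err != nil { // echo it back
                return
            }
        }
    }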


Erlang's been around forever and is also a fairly simple language. I'd imagine concurrency's easier there than in Go.


Erlang had a better concurrency story than Go, and better error handling. But alas marketing wins.


Marketing, lol. Erlang is a functional language. Functional. That alone has prevented its widespread adoption.


Erlang needs a VM. Not statically typed. Poor compute performance. Poor strings.


Erlang is dynamically typed.


Erlang, frankly, has a lot of problems. The first major thing I used Go to do in the 1.4-1.6 era was to migrate my multi-year production Erlang system to Go, and I never looked back.

Somehow Erlang gets this special dispensation where people get to talk about it as if it's still 2005 and it's still this unique and interesting snowflake with virtually no competition. Which it was... back then. But having successfully convinced the world that there's an interesting space there, in 2023 there's a ton of options and the point in that space Erlang staked out isn't actually that interesting or unique when you measure it by 2023 instead of 2005.


I agree with you. Are there languages beyond Go that you would measure Erlang against in the dimension of concurrency ergonomics?


The process of bringing types to Elixir is officially underway:

https://elixir-lang.org/blog/2023/06/22/type-system-updates-...

It's not Erlang specifically, but it is a BEAM language, so tomato tomahto.


True, and Go has a much better story in some other areas, but “…if you needed to make a highly concurrent network app you had to get into asynchronous programming that generally makes code looks shit” is inaccurate.


This doesn't say much because people generally throw interface{} everywhere.

Go is used because Google and that's it.


No, they don't. They didn't before generics and they do even less now.

You can tell this is an accusation thrown around by non-Go programmers because frankly, throwing around a lot of interface{} was always really inconvenient. It's not something the language trains you to do... it's something you get punished for, really quite hard.


Not sure if that alone is what made Go successful. But yes, having great networking tools in the stdlib is very refreshing! One of the things Go got right.


The google brand name behind it also definitely played a role.


Very insightful. Re: the generics point —

As a Go programmer I always thought the generics complaint was kind of silly in practice — complicated code should be simplified and made more concrete, not more generic.

I’m glad generics were implemented, if only to silence the chorus of people who didn’t even use Go but whined about the lack of them. Their inclusion has simplified the stdlib and led to some cool new functions. Nevertheless I think people implementing them in their own projects is basically code smell and they are a symptom of poorly thought-out code rather than excellent code.

Anyway. This was a good initial post and a good follow-up. Go is my favorite language out there right now for its clarity, power, and ease of use. And with the loop variable capture fix coming (the biggest remaining footgun in the language, in my experience), the language is only getting better.
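
For anyone who hasn't hit it, the footgun looks roughly like this under the old for-loop semantics (the goroutines all share one i, so they typically all print 3; Go 1.22 changes this so each iteration gets a fresh i):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 3; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                fmt.Println(i) // pre-1.22: captures the single shared loop variable
            }()
        }
        wg.Wait()
    }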


Prior to generics we relied heavily on dynamic typing via interface{}, for example:

  BulkInsert(db *sql.DB, objs []interface{})
This unfortunately meant that any time you had []Foo.. you had to allocate a new []interface{} and copy over the items. Now a function like that can look like:

  BulkInsert[T any](db *sql.DB, objs []T)
And we're not wasting CPU cycles to copy the slice of []Foo. I'm struggling to see how that's code smell or less excellent than using []interface{} or duplicating the code for BulkInsert for every insertable type in our application.
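
To make the copy concrete, here's a self-contained sketch (BulkInsert is stubbed out instead of taking a *sql.DB, and Foo is just an illustrative type):

    package main

    import "fmt"

    type Foo struct{ ID int }

    // stand-in for the pre-generics BulkInsert(db, objs []interface{})
    func BulkInsert(objs []interface{}) { fmt.Println(len(objs), "rows") }

    func main() {
        foos := []Foo{{1}, {2}, {3}}
        objs := make([]interface{}, len(foos)) // extra allocation...
        for i, f := range foos {
            objs[i] = f // ...and a copy of every element, just to satisfy the signature
        }
        BulkInsert(objs)
    }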


Dynamic typing via interface{} is also huge Go code smell though, so...

Generics are definitely better for you. But I would say the overall pattern you're employing of bulk inserting different kinds of data structures with one function is the problem. Of course, I don't know your code so I'm sure you have a good reason for choosing what you did, but a BulkInsert of Any certainly made me raise my eyebrow.


The reason is the same reason that anyone ever writes a generic function (outside of writing a library) - to keep code DRY and avoid duplication. There is no upside to maintaining a large number of identical function implementations, and no scenario in which that is preferable to using generics.


But you probably will have to differentiate at some point between what you're inserting. Why not just do that, instead of artificially combining it into one function?

Nevertheless I feel like I'm getting lost in the weeds of this example. Obviously DRY and less duplication is good. However, you have to strike a balance between being clever and being clear. And frankly I would prefer 3 lines of clear code to 1 line of hard-to-read code.


> But you probably will have to differentiate at some point between what you're inserting. Why not just do that, instead of artificially combine it into one function?

Why prematurely optimize for differences that may not (and in practice aren't really likely to) happen?

Keep in mind that the tradeoff with generics is usually not 3 lines of clear code vs 1 line of hard-to-read code; it's one line of clear code (T -> T), one line of unclear code (interface{} -> interface{} + casts), or n lines of complex code (concretely reimplementing the function for your particular case).


I would say that a BulkInsert(Any) is a significant premature optimization over just inserting an object as would apply specifically to that object. Because it sounds like you'd have to do weird reflection stuff on the object to determine how and what to insert where.

If you are inserting an object, you should insert it, rather than create complicated generic insert methods that morph themselves based on the object being inserted. That is idiomatic Go.


My example was a bit simplified, but basically the function takes a SQL statement and a slice of structs that use `db:"column"` field tags. It's no more "weird reflection stuff" than, say, json.Marshal.


Writing and reading repetitive code leads to unintentional defects. This is why for loop syntax that directly iterates through a collection is less error prone than the equivalent loop built on indexed look-ups. "Clever" code, at least when it is shorter, is often clearer than "simple" code.


I entirely disagree; defects are created by complexity, not repetition.


> defects are created by complexity, not repetition.

Any reference for this ?

I thought there was a pretty strong connection between the total number of lines of code and the number of bugs... That's the whole point of DRY.


But why is it a problem (assuming the database in this example can handle any type)? If I'm writing a function that returns the 5th element of a list, I will always write that using generics, even if I know it's only used with one type (right now). Not only is it basically free (`<T> T getFifth(List<T>)` is really not any more complicated than `Foo getFifth(List<Foo>)`), it's also a separation of concerns. The logic is just about the container, so no need to complicate it with a "red herring" of a forced type


> As a Go programmer I always thought the generics complaint was kind of silly in practice — complicated code should be simplified and made more concrete, not more generic.

I guess you have never written a library. Generics are extremely useful there; stuff like "a generic function that runs a channel through X workers applying f() to it" is now easily possible with full type safety.

> Nevertheless I think people implementing them in their own projects is basically code smell and they are a symptom of poorly thought-out code rather than excellent code.

You can say that about literally any feature used by the incompetent.

But overall yes, they are far more useful for writing libraries than actual applications
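
A sketch of the kind of helper I mean (names made up), which simply wasn't expressible with type safety before generics:

    package main

    import (
        "fmt"
        "sync"
    )

    // Map fans values from in out to n workers applying f, and sends the
    // results to the returned channel (order is not preserved).
    func Map[T, U any](in <-chan T, n int, f func(T) U) <-chan U {
        out := make(chan U)
        var wg sync.WaitGroup
        for i := 0; i < n; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for v := range in {
                    out <- f(v)
                }
            }()
        }
        go func() {
            wg.Wait()
            close(out)
        }()
        return out
    }

    func main() {
        in := make(chan int)
        go func() {
            for i := 1; i <= 5; i++ {
                in <- i
            }
            close(in)
        }()
        for sq := range Map(in, 3, func(x int) int { return x * x }) {
            fmt.Println(sq)
        }
    }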


I have written a library actually. I found interfaces perfectly sufficient for allowing applications to consume the contracts of the library without needing generics. Of course I understand that they have a use and are useful, but it's not like there weren't excellent solutions for this in Go before generics existed.


> Nevertheless I think people implementing them in their own projects is basically code smell and they are a symptom of poorly thought-out code rather than excellent code.

Who needs data structures right?


Uh, generics are not data structures and generics are neither the only way nor the best way to interact with data structures.


Do you think it is somehow more elegant to have a specialised version of a data-structure for every single type that may need to be placed into it? Or just to ignore the type whatsoever and erase it via interface{}? Even Golang's designers obviously realise that generics are a better way to interact with data structures because the standard types map and array were always "generic", just special cased as such.
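
A small sketch of the difference (illustrative names): one generic Stack[T] instead of a copy per element type, or an interface{} version that needs a type assertion at every Pop:

    package main

    import "fmt"

    type Stack[T any] struct{ items []T }

    func (s *Stack[T]) Push(v T) { s.items = append(s.items, v) }

    func (s *Stack[T]) Pop() (T, bool) {
        var zero T
        if len(s.items) == 0 {
            return zero, false
        }
        v := s.items[len(s.items)-1]
        s.items = s.items[:len(s.items)-1]
        return v, true
    }

    func main() {
        var s Stack[string]
        s.Push("a")
        s.Push("b")
        if v, ok := s.Pop(); ok {
            fmt.Println(v) // "b", no type assertion needed
        }
    }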


Yeah, you must hate using the Go map type then. It is a generic data structure.


Comments like this are why I like to joke that the G in Golang stands for gaslighting.


Do you actually have anything of value to add to this discussion? Because this comment is pretty bad.


I have to apologize, I misread the context of "people whining..", which was in fact about those that don't even use the language. If this was not intended to be aggressive, sorry.

Funny though that I did get triggered by it. Out of the Go community I've heard way too often "you don't really need xyz", when they mean "we're not going to support xyz, here's why, and if you disagree, we respectfully ask you to look elsewhere".


A PL without Generics is just Terrible DX for me, let me abstract the types in my Algorithms and Data Structures Goddammit!

Now Golang is saved, with Generics it's actually an Awesome Incredible PL.


> Nevertheless I think people implementing them in their own projects is basically code smell and they are a symptom of poorly thought-out code rather than excellent code.

I can't think of a less confrontational way to ask this, but besides Go, which other languages have you used?

Because the above statement is so far removed from my experience and understanding of programming that I suspect we don't really use the same day-to-day tools/languages in general.


> power

What power?


The power of voodoo


Great article. Go may have generics, but because old Go code doesn't, and the standard library doesn't, it still feels like the language doesn't sometimes.


Basically learning the lessons of Java and C# all over again. Yes, there are features you can defer implementing until later, but their absence infects everything until you do. I still see cases where people have to drop down into ADO.NET code for C#, and the fact that you still see DBNull.Value instead of just a simple null value, much less a proper Option type, is infuriating.

Like, I write a lot of PowerShell code, and their stuff still returns DBNull.Value when returning a null value from the DB, even though nullable value types were introduced almost 20 years ago.


The documentation for DBNull [1] seems to have some idea that DBNull represents something entirely different.

> Do not confuse the notion of null in an object-oriented programming language with a DBNull object. In an object-oriented programming language, null means the absence of a reference to an object. DBNull represents an uninitialized variant or nonexistent database column.

For all the explanation though, I couldn't fathom what it's talking about.

[1]: https://learn.microsoft.com/en-us/dotnet/api/system.dbnull?v...


I don't know C#, but I'll take a guess:

I think it's exactly the same issue as null in Lisp and Lua — you sometimes want to differentiate between null as in "I returned no value", and null as in "I returned the fact that there is no value". Or null vs false vs empty list in the context of Lisp.

This distinction becomes very clear (and sometimes very annoying) when you realize that in Lua, setting a table key to nil completely removes it, so there's no way to store the concept of a missing value unless you define a special value (like DBNull). A slot being nil literally signifies its absence.


There is no difference between "no value" and "the fact that there is no value". "The fact that" is just rhetorical verbiage.

There is an ambiguity in a polymorphic container between a present entry indicating a null value, and a null return indicating there is no entry.

E.g. hash table being used to represent global variables. We'd like to diagnose it when a nonexistent variable is accessed, while allowing access to variables that exist, but have a null value.

This is because the variables are polymorphic and can hold anything.

When we are dealing with a typed situation (all entries are expected to be of a certain type, and null is not a member of that type's domain) then there is no ambiguity: if a null appears in the search for a value that must be a string, it doesn't matter whether "not found" is being represented by an explicit entry with a null value, or absence of an entry.


This rhetorical verbiage matters a lot in a language like Lua where you can pack an array full of dbnull sentinel types but filling it with nil results in an empty array.


I don't know lua. But the behavior you describe seems like a quirk of lua tables. Instead of having `table.get(key)` return a sentinel value, you can replace it with `table.has_key(key)` and `table.get(key)`.

I'm not sure about the ergonomics of the trade in lua. But in C# the ergonomics of `DBNull` are terrible. If it were just replaced with `null` everywhere, everything would just be better.

IMHO.


> Yes, there are features you can defer implementing until later, but their absence infects everything until you do. I still see cases where people have to drop down into ADO.Net code for C#, and the fact that you still see DBNull.Value instead of just a simple null value, much less a proper Option type is infuriating.

This DBNull.Value may be a problem for C#, but idiomatic Go has largely been untouched by the introduction of generics.


Correct, a new language feature doesn't magically inject itself into all existing code. This simple fact is sadly commonly missed by many.


Comparison with Emacs: an awesome operating system that still needs a great editor.

Go: amazing tooling/libs, only needs a great language :-)

But a few tweaks, like proper enums and an Option/Result type (to avoid excessive err != nil), maybe a compiler flag to force dealing with errors (instead of _-ing them), and it would be much better. If I can wish for something, then some native map/filter/reduce to avoid excessive for loops…? :-)
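
With generics you can at least sketch these helpers yourself today; they're just not native. Something like this (names made up, nothing like it is in the stdlib):

    package main

    import "fmt"

    func Map[T, U any](s []T, f func(T) U) []U {
        out := make([]U, 0, len(s))
        for _, v := range s {
            out = append(out, f(v))
        }
        return out
    }

    func Filter[T any](s []T, keep func(T) bool) []T {
        out := make([]T, 0, len(s))
        for _, v := range s {
            if keep(v) {
                out = append(out, v)
            }
        }
        return out
    }

    func Reduce[T, A any](s []T, acc A, f func(A, T) A) A {
        for _, v := range s {
            acc = f(acc, v)
        }
        return acc
    }

    func main() {
        xs := []int{1, 2, 3, 4, 5}
        evens := Filter(xs, func(x int) bool { return x%2 == 0 })
        doubled := Map(evens, func(x int) int { return x * 2 })
        fmt.Println(Reduce(doubled, 0, func(a, x int) int { return a + x })) // 12
    }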


Working with Python and Java over the years, I have this feeling that Go isn't batteries-included. Maybe I just miss the 3rd-party libraries I'm very familiar with? For instance, Python has the parse library to help parse inputs without the need for regex. Java has lots of "common" collection libraries. I just feel exhausted coding the same things from scratch in Go, either because no such library exists or because I'm told to just copy and paste!


It's a shame that Go as a language is so lacking, because the tooling is amazing.

I wish there was something like Go but with an actually good programming language attached.


This is pretty much my take, too. Go as a compilation target?


I’ve noodled around with generating review-ready Go from CL (because macros), but it’s hard to flatten subexpressions (like multiple-value-bind around a call that returns a value and an error) into blocks of statements that assign to new temp vars, and I couldn’t expect arbitrary CL to just work, it’d be more of a CL-flavored DSL for generating loops and error handling.


What do you think Go is missing?


Anything that made programming easier in the last 30 years.

I miss Hindley Milner type inference, ADTs, default immutability, sane error handling, pattern matching, and functional collection manipulation.

I'm not even mentioning the time it took to add generics to the language, which should've come from the beginning, and we got a bad implementation of it.

Not saying that it HAS to have all of this, but at least 1 or 2 things of that list would already make Go have much better ergonomics.

Go has no excuse since it's a relatively new programming language, and it could've got some of those from the start.


TBH, it sounds like you want a functional language (type inference, pattern matching, collection manip), of which there are many. GO is not such a language. But then you want ADTs, which aren't really functional. I'm not sure you can have both in a clean way. The closest you might get is a multi-paradigm language that tries to allow both, like C++.

Default immutability is great in a language like Rust where it's designed from the ground up to warn you as much as possible at compilation when something is wrong. You can rely on the compiler a lot. But adding default immutability to a language that isn't designed around the same concepts seems odd. You don't gain the same kind of benefits, but you do have to deal with the tedium. Worst of both worlds, in my view.


Not exactly. I don't think Go should have ALL those, but some would be extremely helpful. That's just a wishlist.

There are some examples of languages that have a nice blend of functional features built in that are not fully functional languages, and the developer experience is fantastic (e.g. C#, Typescript, Kotlin, Swift, Java 11+)

They could've improved the developer experience, but they made some kind of C+- with easier concurrency.

And I find it all sad because Go ticks many of my boxes for a "perfect" general-purpose programming language.


> But then you want ADTs, which aren't really functional.

Pretty sure by ADTs, GP means "algebraic data types". These are very much functional, at least in the sense that they're a standard feature of statically typed FP (at least I know of no such language that doesn't have them).

ADT also sometimes stands for "abstract data type", which is something different. Although I wouldn't call them incompatible with FP either, Haskell typeclasses can express abstract data types, for example.


Ahah, whoops. I thought they meant "abstract data types", ie, classes - which AFAIK Go does not have.

Yeah, if they want algebraic data types then a functional language is definitely the answer here.


A bit off topic but...

I need some resources for evangelizing GO.

I do not use it, but...

I have a colleague who needs/wants to replace a PHP/Laravel mess. They are talking up Node.js

I think GO would be a better choice

This article is close to what I need, but is there anything better?

The server-side code they are looking to replace handles sensitive financial information, and I am very queasy about using Node.js in that domain.


Go is loved for three reasons:

- Simple

- Highly concurrent

- Impressive networking stdlibs



